Re: [Python-Dev] How far to go with user-friendliness

2015-07-20 Thread Ron Adam



On 07/20/2015 03:32 AM, Florian Bruhin wrote:

* Ron Adam <ron3...@gmail.com> [2015-07-19 18:06:22 -0400]:



On 07/19/2015 02:33 PM, Florian Bruhin wrote:

 * Ron Adam <ron3...@gmail.com> [2015-07-19 11:17:10 -0400]:

 I had to look at the source to figure out what this thread was really all
 about.


And it seems I don't quite get it still, but I am trying.

No worries - I'll try to clarify until things are clear :)


Thanks,  :-)


 Basically it looks to me the purpose of adding "assret" is to add an alias
 check for "unsafe" methods.  It doesn't actually add an alias.  It allows
 a developer to use a valid alias to avoid conflicting with methods starting
 with "assert" that will still work with the mock module.

 The mock module says that when the "unsafe" flag is set to True, it will not
 raise AttributeError for methods beginning with "assert" and "assret".  It
 doesn't specify what "unsafe" means, or why you would want to do that.

 So first do this...

  * Document "unsafe" in the mock module.


I still think documenting the purpose of "unsafe", rather than just the
effect it has, is important.

 From the confusion in this thread, (including my own), it's clear the
documentation does need some attention.


There are only two places where "unsafe" is mentioned in the docs...

The class signature...


class unittest.mock.Mock(spec=None, side_effect=None, return_value=DEFAULT,
wraps=None, name=None, spec_set=None, unsafe=False, **kwargs)



And in the section below that.


unsafe: By default if any attribute starts with assert or assret will raise
an AttributeError. Passing unsafe=True will allow access to these
attributes.


But that doesn't explain the purpose, or why these are "unsafe", or why it
should throw an AttributeError.  Or what to do with that AttributeError.



It's unsafe because tests which:

1) are using the assert_* methods of a mock, and
2) where the programmer did a typo (assert_called() instead of
assert_called_with() for example)

do silently pass.


And further down, you say...


Compare it with the behavior of a normal object - if you call a method
which doesn't exist, it raises AttributeError.

This isn't possible with Mock objects, as they are designed to support
*any* call, and you can check the calls have happened after the fact.



And the docs say...


spec: This can be either a list of strings or an existing object (a class 
or instance) that acts as the specification for the mock object. If you 
pass in an object then a list of strings is formed by calling dir on the 
object (excluding unsupported magic attributes and methods). Accessing any 
attribute not in this list will raise an AttributeError.



So calling a method that doesn't exist on a mocked object will raise an 
AttributeError if it is given a spec.


But if you don't give it a spec, then a misspelling of *any* method will
pass silently.  So it's not a problem limited to assert methods.


It seems the obvious and best solution is to always use a spec.
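For illustration, a minimal sketch of the two behaviours described above
(the method names here are made up for the example):

```python
from unittest.mock import Mock

m = Mock()                      # no spec
m.call_me()
m.cal_me()                      # typo: silently returns a new child mock

m2 = Mock(spec=["call_me"])     # with a spec
m2.call_me()
try:
    m2.cal_me()                 # the same typo now raises
except AttributeError as e:
    print(e)
```

So with a spec, any misspelled method name fails loudly, not just the
assert-prefixed ones.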




It's not clear why getting an AttributeError for methods beginning with
"assert" is needed, and how that exception is to be used.  (Should it bubble
up, or should it be caught and handled?)



Note the discussion *isn't* about the fact that assert-methods should
raise AttributeError! The patch also does the same with "assret".

At least if I understand things correctly, the discussion is whether
*only* assert* should be handled as a typo, or assret* too.


Both of these are new in 3.5.  And they are related to each other.  So yes, 
they do need to be looked at together in order to understand the problem 
being discussed.



The exception should bubble up, as the whole point of it is to tell
you you did a typo and your test is broken.


I think this is too simple an explanation.  That could be true for any
method or attribute call.


>>> m = Mock(spec=["assert_me", "call_me"])
>>> m.call_me()
<Mock name='mock.call_me()' id='140590283081488'>

>>> m.all_me()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/media/hda2/home/ra/Dev/python-dev/python3.5/cpython-master/Lib/unittest/mock.py",
    line 578, in __getattr__
    raise AttributeError("Mock object has no attribute %r" % name)
AttributeError: Mock object has no attribute 'all_me'

It does raise AttributeErrors on missing methods if a spec is given.  So
catching misspelled methods in tests is only a problem if you don't use a
spec.  (And that's not limited to assert methods.)



>>> m.assert_me()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/media/hda2/home/ra/Dev/python-dev/python3.5/cpython-master/Lib/unittest/mock.py",
    line 583, in __getattr__
    raise AttributeError(name)
AttributeError: assert_me


Why is AttributeError raised here?  Why are methods beginning with "assert"
special?  (or "unsafe"?)


Cheers,
   Ron

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 

Re: [Python-Dev] How far to go with user-friendliness

2015-07-20 Thread Ron Adam



On 07/20/2015 01:35 PM, Florian Bruhin wrote:

>>> m.assert_me()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/media/hda2/home/ra/Dev/python-dev/python3.5/cpython-master/Lib/unittest/mock.py",
    line 583, in __getattr__
    raise AttributeError(name)
AttributeError: assert_me


Why is AttributeError raised here?  Why are methods beginning with "assert"
special?  (or "unsafe"?)

Some things I can think of:

- It's more likely that you use assert_called() instead of
   assert_called_with() accidentally than that you do a typo in your
   code under test.

- If you do a typo in your code under test, a linter is more likely to
   find it than with mocks, because of their nature.

- Other tests (even if they're manual ones) should probably discover
   the typo in your real code. The always-passing assert won't.


But it doesn't always pass in the above.  It would fail if there is a typo. 
 You would need to make the same typo in the test and the mock the test uses.


With this, the only way to use methods beginning with "assert" is to set
unsafe=True.  Then none of this applies.  And after a few times, it's
likely a developer will get into the habit of doing that.  I wonder if it's
going to do what was intended?
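A minimal sketch of what opting out looks like (assuming the 3.5 behaviour
under discussion, where unsafe defaults to False):

```python
from unittest.mock import Mock

m = Mock()
try:
    m.assert_me()              # blocked: the name starts with "assert"
except AttributeError:
    print("AttributeError raised")

m2 = Mock(unsafe=True)         # 3.5's escape hatch: disables the name check
m2.assert_me()                 # now just creates and calls a child mock
```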


Another reason mentioned by Nick is to avoid shadowing mock's own methods.
But I would think a different exception with an explicit error message
about the exact reason would be better: actually detecting that an
attribute shadows specific mock methods when the mock object is created,
rather than raising an AttributeError when it's called.
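Something along these lines, as a purely hypothetical sketch (the helper
name, the exception type, and the hard-coded method list are all made up
for illustration; none of this is in the mock module):

```python
from unittest.mock import Mock

# A few of Mock's own assert methods; listed by hand for this sketch only.
_MOCK_ASSERT_METHODS = {
    "assert_called_with", "assert_called_once_with",
    "assert_any_call", "assert_has_calls",
}

def checked_mock(spec_names):
    """Fail fast at creation time, with an explicit message, when a spec
    attribute would shadow one of Mock's own assert methods."""
    shadowed = sorted(set(spec_names) & _MOCK_ASSERT_METHODS)
    if shadowed:
        raise TypeError(
            "spec attributes %r would shadow Mock's own assert methods"
            % shadowed)
    return Mock(spec=list(spec_names))

checked_mock(["call_me"])                  # fine
try:
    checked_mock(["assert_called_with"])   # caught at creation time
except TypeError as e:
    print(e)
```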


I'm thinking that would have been enough to find most of the errors,
including the use of "assret*" in the motivating case.  Which would have
been a matter of just correcting the spelling.


"assret*" would have raised an AttributeError if it was spelled that way
in the test, or "assert*" would have raised an AttributeError if it was
spelled "assret*" in the object used in the test.


Do these seem accurate to you?


The problems seem to occur only when no spec is given.

Blocking use of assert* when a spec is given is more than needed, but
detecting shadowing of needed mock methods is needed.  That can be done
sooner, with a better error message, I think.


Using "assret*" in the objects being tested will never shadow a mock
method.  So it seems it's a case of catching a misspelling in a test calling
a non-existing method.  (But any other spelling would not be caught.)



I think I'm starting to get it now,
   Ron



Re: [Python-Dev] How far to go with user-friendliness

2015-07-19 Thread Ron Adam



On 07/19/2015 02:33 PM, Florian Bruhin wrote:

* Ron Adam <ron3...@gmail.com> [2015-07-19 11:17:10 -0400]:

I had to look at the source to figure out what this thread was really all
about.


And it seems I don't quite get it still, but I am trying.



Basically it looks to me the purpose of adding "assret" is to add an alias
check for "unsafe" methods.  It doesn't actually add an alias.  It allows
a developer to use a valid alias to avoid conflicting with methods starting
with "assert" that will still work with the mock module.

The mock module says that when the "unsafe" flag is set to True, it will not
raise AttributeError for methods beginning with "assert" and "assret".  It
doesn't specify what "unsafe" means, or why you would want to do that.

So first do this...

 * Document "unsafe" in the mock module.


I still think documenting the purpose of "unsafe", rather than just the
effect it has, is important.


From the confusion in this thread, (including my own), it's clear the 
documentation does need some attention.



There are only two places where "unsafe" is mentioned in the docs...

The class signature...


class unittest.mock.Mock(spec=None, side_effect=None, return_value=DEFAULT, 
wraps=None, name=None, spec_set=None, unsafe=False, **kwargs)




And in the section below that.


unsafe: By default if any attribute starts with assert or assret will raise 
an AttributeError. Passing unsafe=True will allow access to these attributes.



But that doesn't explain the purpose, or why these are "unsafe", or why it
should throw an AttributeError.  Or what to do with that AttributeError.




But if you do a typo, the test silently doesn't fail (because it's returning a
new mock instead of checking if it has been called):


Do you mean "silently passes"?



>>> m.assert_called()
<Mock name='mock.assert_called()' id='140277467876152'>

With the patch, everything starting with assert or assret (e.g. my
example above) raises an AttributeError so these kind of issues can't
happen.


I get that part.  It's checking for a naming convention for some purpose.

What purpose?

   To raise an AttributeError  ... but why?



The thing people are discussing is whether this should also
happen if you do m.assret_called_with() (i.e. if this is a common
enough typo to warrant a special exception), or if *only* things
starting with assert_* are checked.

The merged patch also treats assret_* as a typo, and some people
think this is a wart/unnecessary/harmful and only assert_* should
raise an AttributeError, i.e. the patch should be amended/reverted.


I think it would be easy enough to revert.  It's just a few lines in the
source and docs.  No big deal there.


It's not clear why getting an AttributeError for methods beginning with
"assert" is needed, and how that exception is to be used.  (Should it bubble
up, or should it be caught and handled?)


Is it just a way to prevent mocks of mocks?  Maybe there is a better way of 
doing that?





If I've still not quite got the gist of this issue, then please correct me.

This has nothing to do with the assert *keyword* or optimization -
only with the behaviour of mock and its assert_* methods.

I hope this clears things up!


It clears up that it's not associated with the assert keyword.


Cheers,
   Ron



Re: [Python-Dev] How far to go with user-friendliness

2015-07-19 Thread Ron Adam



On 07/16/2015 07:48 PM, Antoine Pitrou wrote:

On Fri, 17 Jul 2015 11:35:53 +1200
Alexander <xr.li...@gmail.com> wrote:


I do not want to read mistyped code from other developers and try to
guess whether it will work properly or not.



You don't have to guess anything. If it's mistyped, either it raises
AttributeError (because it starts with assert_), or it doesn't do
anything. So, in both cases, it *doesn't* work properly.


I had to look at the source to figure out what this thread was really all 
about.


Basically it looks to me the purpose of adding "assret" is to add an alias
check for "unsafe" methods.  It doesn't actually add an alias.  It allows
a developer to use a valid alias to avoid conflicting with methods
starting with "assert" that will still work with the mock module.


The mock module says that when the "unsafe" flag is set to True, it will not
raise AttributeError for methods beginning with "assert" and "assret".  It
doesn't specify what "unsafe" means, or why you would want to do that.


So first do this...

* Document "unsafe" in the mock module.

I presume "unsafe" in this case means the method will not fail when an
optimize flag is used, because an assert inside an assert method will not fail.


The problem I see is that checking for "assert" by name convention is
dubious to start with.  A method that begins with "assert" may not actually
use assert, and ones that don't may possibly use it.


A better way is to have a function in the inspect module that will return 
True if a function uses the assert keyword.


That is trickier than it sounds, because the optimize flag causes the
asserts to be removed.  So it may require setting a flag on a function if
its source contained assert.
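A best-effort version of such an inspect function can be sketched today by
scanning the compiled bytecode (the function name here is hypothetical, and
this relies on CPython compilation details: it returns False under -O, since
the asserts are stripped, and an explicit `raise AssertionError` can give a
false positive):

```python
import dis

def uses_assert(func):
    """Best-effort check: does the compiled body of *func* contain an
    assert statement?  CPython compiles `assert` to LOAD_ASSERTION_ERROR
    (3.9+) or a LOAD_GLOBAL of AssertionError on older versions."""
    for ins in dis.get_instructions(func):
        if ins.opname == "LOAD_ASSERTION_ERROR":
            return True
        if ins.opname == "LOAD_GLOBAL" and ins.argval == "AssertionError":
            return True
    return False

def has_it(x):
    assert x > 0
    return x

def lacks_it(x):
    return x

print(uses_assert(has_it), uses_assert(lacks_it))
```

Note this only inspects the function's own code object, not nested
functions, which is part of why a compiler-set flag would be more reliable.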


With a reliable test for assert, the check for a naming-convention alias
is not needed.



If I've still not quite got the gist of this issue, then please correct me.

Cheers,
Ron



Re: [Python-Dev] How far to go with user-friendliness

2015-07-19 Thread Ron Adam



On 07/19/2015 11:52 AM, Ethan Furman wrote:

Seems to me a lot of fuss could have been avoided by just acknowledging
that a mistake may have been made, and asking for patches if anybody cared
enough about it.


I'm not sure it's a mistake, but it may not be the best way to do what the
alias check does.  That is, check for "unsafe" methods that may use
assert in methods that start with "assert" or "assret".  It's a naming
convention check only.


The use of "assret" may be because a developer used it in place of "assert"
on a large project to avoid overwriting inherited methods accidentally, and
asked for it.  (That was suggested in this thread at one point.)  But I'm
not able to confirm that.  It does sound reasonable though.  The check for
it doesn't auto-correct anything or alter anything outside of how the mock
responds to existing methods.  So it's not as bad as it sounds.  (But not as
good either.)


A possibly better alternative is to have a different way to check if 
functions and methods use assert.  Then the check by name convention 
(which is not dependable anyway) isn't needed.


Possibly adding a function, uses_assert(...), to the inspect module would 
be good.   To allow that to work, may need a flag set on function objects 
if they contain assert even if the module is compiled in optimise mode. 
(Is it doable?  Or maybe there is another way?)


Cheers,
   Ron



Re: [Python-Dev] How far to go with user-friendliness

2015-07-14 Thread Ron Adam



On 07/14/2015 09:41 AM, Steven D'Aprano wrote:

On Tue, Jul 14, 2015 at 02:06:14PM +0200, Dima Tisnek wrote:

https://bugs.python.org/issue21238  introduces detection of
missing/misspelt mock.assert_xxx() calls on getattr level in Python
3.5

Michael and Kushal are of the opinion that assret is a common typo
of assert and should be supported in a sense that it also triggers
AttributeError and is not silently ignored like a mocked user
attribute.

I disagree

I must admit I don't use mock so don't quite understand what is going on
in this bug report. But I don't imagine that anything good will come out
of treating *one* typo differently from all the other possible typos.
Why should "assret" be treated differently from other easy-to-make typos
like "asert", "assrt", "asset"? Or "assort", which is not only a
standard and common English word, but "e" and "o" are right next to each
other on Dvorak keyboards, making it an easy typo to make.

Surely this is an obvious case where the Zen should apply. "Special
cases aren't special enough..." -- either all such typos raise
AttributeError, or they are all silent.


I agree with Steven that it doesn't seem correct to not raise 
AttributeError here.


For what it's worth, I have a life-long sleep disorder and am a tarrable
(<-- like this) speller because of it.  I still don't want spell, or
grammar, checkers to not report my mistakes.  And I don't recall ever
making the particular error of using "assret" in place of "assert".  I'd be
more likely to misspell it as "assirt" if I wasn't already so familiar with
assert.


If people do misspell it, I think they do learn not to after it happens
a few times.


Regards,
   Ron



Re: [Python-Dev] How far to go with user-friendliness

2015-07-14 Thread Ron Adam



On 07/14/2015 12:36 PM, Christie Wilson wrote:

If people do misspell it, I think they do learn not to after it
happens a few times.


Unless the line silently executes and they don't notice the mistake for
years :'(


Yes, and I'm concerned that allowing it in one location may bring about a
self-fulfilling prophecy.  I.e. it will become a common error if it works in
some places, but not others, while it's really not that common at the
present time.


On the plus side.. having it in only one module, and only one place, 
probably won't be that bad or even that noticeable.  But maybe it can be 
a motivating factor for not doing similar things in other places.


Cheers,
   Ron



Re: [Python-Dev] Importance of async keyword

2015-06-26 Thread Ron Adam



On 06/26/2015 10:31 AM, Chris Angelico wrote:

Apologies if this is a really REALLY dumb question, but... How hard
would it be to then dispense with the await keyword, and simply
_always_ behave that way? Something like:

def data_from_socket():
 # Other tasks may run while we wait for data
 # The socket.read() function has yield points in it
 data = socket.read(1024, 1)
 return transmogrify(data)

def respond_to_socket():
 while True:
 data = data_from_socket()
 # We can pretend that socket writes happen instantly,
 # but if ever we can't write, it'll let other tasks wait while
 # we're blocked on it.
         socket.write("Got it, next please!")

Do these functions really need to be aware that there are yield points
in what they're calling?


I think "yield points" is a concept that needs to be spelled out a bit
clearer in PEP 492.


It seems that those points are defined by other means outside of a function 
defined with async def.  From the PEP...


   * It is a SyntaxError to have yield or yield from expressions
 in an async function.

So somewhere in an async function, it needs to await something with a 
yield in it that isn't an async function.


This seems to be a bit counter intuitive to me.  Or am I missing something?

Regards,
   Ron



Re: [Python-Dev] Unbound locals in class scopes

2015-06-20 Thread Ron Adam



On 06/20/2015 12:12 PM, Ron Adam wrote:



On 06/20/2015 02:51 AM, Ivan Levkivskyi wrote:



Guido said 13 years ago that this behavior should not be changed:
https://mail.python.org/pipermail/python-dev/2002-April/023428.html,
however, things changed a bit in Python 3.4 with the introduction of the
LOAD_CLASSDEREF opcode. I just wanted to double-check whether it is still a
desired/expected behavior.



Guido's comment still stands as far as references inside methods work in
regards to the class body.  (They must use a "self" name to access the class
namespace.)  But the execution of the class body does use lexical scope,
otherwise it would print "xtop" instead of "xlocal" here.


Minor corrections:

Methods can access but not write to the class scope without using self.  So 
that is also equivalent to the function version using type().  The methods 
capture the closure they were defined in, which is interesting.


And the "self" name refers to the object's namespace, not the class namespace.



Re: [Python-Dev] Unbound locals in class scopes

2015-06-20 Thread Ron Adam



On 06/20/2015 02:51 AM, Ivan Levkivskyi wrote:

Hello,

There appeared a question in the discussion on
http://bugs.python.org/issue24129 about documenting the behavior that
unbound local variables in a class definition do not follow the normal rules.

Guido said 13 years ago that this behavior should not be changed:
https://mail.python.org/pipermail/python-dev/2002-April/023428.html,
however, things changed a bit in Python 3.4 with the introduction of the
LOAD_CLASSDEREF opcode. I just wanted to double-check whether it is still a
desired/expected behavior.



Guido's comment still stands as far as references inside methods work in
regards to the class body.  (They must use a "self" name to access the class
namespace.)  But the execution of the class body does use lexical scope,
otherwise it would print "xtop" instead of "xlocal" here.


x = "xtop"
y = "ytop"
def func():
    x = "xlocal"
    y = "ylocal"
    class C:
        print(x)
        print(y)
        y = 1
func()

prints

xlocal
ytop


Maybe a better way to put this is, should the above be the same as this?

>>> x = "xtop"
>>> y = "ytop"
>>> def func():
...     x = "xlocal"
...     y = "ylocal"
...     def _C():
...         print(x)
...         print(y)
...         y = 1
...         return locals()
...     C = type("C", (), _C())
...
>>> func()
xlocal
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 9, in func
  File "<stdin>", line 6, in _C
UnboundLocalError: local variable 'y' referenced before assignment

I think yes, but I'm not sure how it may be different in other ways.


Cheers,
   Ron




Re: [Python-Dev] PEP 492 quibble and request

2015-05-07 Thread Ron Adam



On 05/03/2015 03:03 PM, Arnaud Delobelle wrote:

On 3 May 2015 at 02:22, Greg Ewing <greg.ew...@canterbury.ac.nz> wrote:

Guido van Rossum wrote:


On Sat, May 2, 2015 at 1:18 PM, Arnaud Delobelle <arno...@gmail.com> wrote:

 Does this mean that
 somehow await x guarantees that the coroutine will suspend at least
 once?



No. First, it's possible for x to finish without yielding.
But even if x yields, there is no guarantee that the
scheduler will run something else -- it might just
resume the same task, even if there is another one that
could run. It's up to the scheduler whether it
implements any kind of fair scheduling policy.

That's what I understood but the example ('yielding()') provided by
Ron Adam seemed to imply otherwise, so I wanted to clarify.



Guido is correct of course.  In examples I've used before with trampolines,
a co-routine would yield back to the event loop, and if there were any
other co-routines in the event loop they would execute first.  I'm not sure
if async and await can be used with a trampoline-type scheduler.


A scheduler might use a timer- or priority-based system to schedule
events.  So yes, it's up to the scheduler, and PEP 492 is intended to be
flexible as to what scheduler is used.


Cheers,
   Ron



Re: [Python-Dev] PEP 492 quibble and request

2015-05-02 Thread Ron Adam

On 05/02/2015 04:18 PM, Arnaud Delobelle wrote:

On 1 May 2015 at 20:59, Guido van Rossum <gu...@python.org> wrote:

On Fri, May 1, 2015 at 12:49 PM, Ron Adam <ron3...@gmail.com> wrote:



Another useful async function might be...

async def yielding():
pass

If a routine is taking a very long time, just inserting "await yielding()"
in the long calculation would let other awaitables run.


That's really up to the scheduler, and a function like this should be
provided by the event loop or scheduler framework you're using.

Really?  I was under the impression that 'await yielding()' as defined
above would actually not suspend the coroutine at all, therefore not
giving any opportunity for the scheduler to resume another coroutine,
and I thought I understood the PEP well enough.  Does this mean that
somehow await x guarantees that the coroutine will suspend at least
once?

To me the async def above was the equivalent of the following in the
'yield from' world:

def yielding():
 return
 yield # Just to make it a generator

Then yield from yielding() will not yield at all - which makes its
name rather misleading!


It was my understanding that yield from also suspends the current thread, 
allowing others to run.  Of course if it's the only thread, it would not.  
But maybe I'm misremembering earlier discussions.  If it doesn't suspend 
the current thread, then you need to put scheduler.sleep() calls throughout 
your co-routines.
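In today's asyncio terms, the explicit "let others run" helper being
discussed would look roughly like this sketch (asyncio.run is the modern
3.7+ entry point, not part of the original PEP 492 discussion):

```python
import asyncio

async def yielding():
    # asyncio.sleep(0) suspends the coroutine exactly once, giving the
    # event loop a chance to run any other ready tasks before resuming.
    await asyncio.sleep(0)

async def long_computation(n):
    total = 0
    for i in range(n):
        if i % 1000 == 0:
            await yielding()   # explicit yield point inside a long loop
        total += i
    return total

print(asyncio.run(long_computation(10_000)))
```

Whether the suspension actually lets another task run first is still up to
the event loop, as Guido notes above.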


I think Guido is saying it could be either.

Cheers,
   Ron



Re: [Python-Dev] PEP 488: elimination of PYO files

2015-03-07 Thread Ron Adam



On 03/07/2015 04:58 AM, Steven D'Aprano wrote:

On Fri, Mar 06, 2015 at 08:00:20PM -0500, Ron Adam wrote:


Have you considered doing this by having different magic numbers in the
.pyc file for standard, -O, and -O0 compiled bytecode files?  Python
already checks that number and recompiles the files if it's not what it's
expected to be.  And it wouldn't require any naming conventions or new
cache directories.  It seems to me it would be much easier to do as well.

And it would fail to solve the problem. The problem isn't just that the
.pyo file can contain the wrong byte-code for the optimization level,
that's only part of the problem. Another issue is that you cannot have
pre-compiled byte-code for multiple different optimization levels. You
can have a no optimization byte-code file, the .pyc file, but only one
optimized byte-code file at the same time.

Brett's proposal will allow -O optimized and -OO optimized byte-code
files to co-exist, as well as setting up a clear naming convention for
future optimizers in either the Python compiler or third-party
optimizers.


So all the different versions can be generated ahead of time. I think that 
is the main difference.


My suggestion would cause a recompile of all dependent python files when 
different optimisation levels are used in different projects. Which may be 
worse than not generating bytecode files at all.  OK



A few questions...

Can a submodule use an optimisation level that is different from the file
that imports it?   (Other than the case this is trying to solve.)


Is there way to specify that an imported module not use any optimisation 
level, or to always use a specific optimisation level?


Is there a way to run tests with all the different optimisation levels?


Cheers,
   Ron



Re: [Python-Dev] PEP 488: elimination of PYO files

2015-03-06 Thread Ron Adam



On 03/06/2015 11:34 AM, Brett Cannon wrote:


There are currently two open issues, although one is purely a bikeshed
topic on formatting of file names so I don't really consider it open for
change from what is proposed in the PEP without Guido saying he hates my
preference or someone having a really good argument for some alternative.
The second open issue on the common case file name is something to
reasonably debate and come to consensus on.

===

PEP: 488
Title: Elimination of PYO files


+1

And...

Have you considered doing this by having different magic numbers in the 
.pyc file for standard, -O, and -O0 compiled bytecode files?  Python 
already checks that number and recompiles the files if it's not what it's 
expected to be.  And it wouldn't require any naming conventions or new 
cache directories.  It seems to me it would be much easier to do as well.
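For context, the magic-number check mentioned here can be observed directly:
the first four bytes of a .pyc file are the interpreter's magic number, and
importlib exposes the current value (a small sketch using a throwaway
temporary module):

```python
import importlib.util
import os
import py_compile
import tempfile

# Compile a tiny throwaway module and read back its .pyc header.
src = os.path.join(tempfile.mkdtemp(), "m.py")
with open(src, "w") as f:
    f.write("x = 1\n")
pyc = py_compile.compile(src)   # writes m.cpython-XY.pyc under __pycache__
with open(pyc, "rb") as f:
    magic = f.read(4)           # a mismatch here triggers recompilation
print(magic == importlib.util.MAGIC_NUMBER)
```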


Cheers,
   Ron



Re: [Python-Dev] PEP 479: Change StopIteration handling inside generators

2014-11-23 Thread Ron Adam



On 11/23/2014 04:08 AM, Terry Reedy wrote:

On 11/22/2014 5:23 PM, Chris Angelico wrote:

On Sun, Nov 23, 2014 at 8:03 AM, Ron Adam ron3...@gmail.com wrote:

Making comprehensions work more like generator expressions
would, IMO, imply making the same change to all for loops: having a
StopIteration raised by the body of the loop quietly terminate the
loop.



I'm not suggesting making any changes to generator expressions or for loops
at all.  They would continue to work like they currently do.


But if you're suggesting making list comps react to StopIteration
raised by their primary expressions, then to maintain the
correspondence between a list comp and its expanded form, for loops
would have to change too. Or should that correspondence be broken, in
that single-expression loop constructs become semantically different
from statement loop constructs?


The 2.x correspondence between list comp and for loops was *already broken*
in 3.0 by moving execution to a new function to prevent name binding in
comprehension from leaking the way they did in 2.x.  We did not change for
loops to match.

The following, which is an example of equivalence in 2.7, raises in 3.4
because 'itertools' is not defined.

def binder(ns, name, ob):
    ns[name] = ob

for mod in ['sys']:
    binder(locals(), mod, __import__(mod))
print(sys)

[binder(locals(), mod, __import__(mod)) for mod in ['itertools']]
print(itertools)

Ceasing to leak *any* local bindings in 3.0 broke equivalence and code
depending on such bindings.  Ceasing to leak StopIteration would break
equivalence a bit more, but only a bit, and code dependent on such leakage,
which I expect is extremely rare.


I think they would be rare too.

With the passage of the PEP, it will change what is different about them
once it's in full effect.  The "stop hack" won't work in both, and you may
get a RuntimeError in generator expressions where you would get
StopIteration in list comps.  (Is this correct?)
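For reference, the PEP 479 behaviour being discussed looks like this (the
__future__ import opts in on 3.5/3.6 and is the default from 3.7 onward):

```python
from __future__ import generator_stop

def stop_hack():
    # The old "stop hack": raising StopIteration to end a generator early.
    yield 1
    raise StopIteration   # under PEP 479 this surfaces as RuntimeError
    yield 2

try:
    list(stop_hack())
except RuntimeError as e:
    print("PEP 479 turned it into:", type(e).__name__)
```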



Cheers,
Ron



Re: [Python-Dev] PEP 479: Change StopIteration handling inside generators

2014-11-23 Thread Ron Adam



On 11/23/2014 04:15 PM, Chris Angelico wrote:

On Mon, Nov 24, 2014 at 5:28 AM, Ron Adam <ron3...@gmail.com> wrote:

With the passage of the PEP, it will change what is different about them
once it's in full effect.  The stop hack won't work in both, and you may get
a RuntimeError in generator expressions where you would get StopIteration in
list-comps.  (Is this correct?)

them being list comps and generator expressions?


Yes


The stop hack won't work in either (currently it does work in
genexps), but you'd get a different exception type if you attempt it.
This is correct. It's broadly similar to this distinction:


>>> {1:2,3:4}[50]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
KeyError: 50

>>> [1,2,3,4][50]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
IndexError: list index out of range


Comprehensions insert/append items, so you wouldn't get those exceptions 
unless you are reading from another object, and they would bubble out of 
generators too.  Which is good, because it's most likely an error that 
needs fixing.



In both lists and dicts, you can't look up something that isn't there.
But you get a slightly different exception type (granted, these two do
have a common superclass) depending on the exact context. But the
behaviour will be effectively the same.
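The "common superclass" mentioned above can be checked directly; a small illustration, not part of the original message:

```python
# KeyError and IndexError both derive from LookupError, so the "slightly
# different exception type" in each context can be caught uniformly.
lookups = [
    (KeyError, lambda: {1: 2, 3: 4}[50]),
    (IndexError, lambda: [1, 2, 3, 4][50]),
]

caught = []
for expected, op in lookups:
    try:
        op()
    except LookupError as e:
        caught.append(type(e) is expected)

print(caught)  # [True, True]
```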


I think the difference is very specific to StopIteration.

And I think we need to wait and see what happens in real code.

Cheers,
   Ron



Re: [Python-Dev] PEP 479: Change StopIteration handling inside generators

2014-11-22 Thread Ron Adam



On 11/22/2014 08:31 AM, Nick Coghlan wrote:


On 22 Nov 2014 02:51, Antoine Pitrou solip...@pitrou.net wrote:
 
  On Fri, 21 Nov 2014 05:47:58 -0800
  Raymond Hettinger raymond.hettin...@gmail.com wrote:
  
   Another issue is that it breaks the way I and others have taught for
years that generators are a kind of iterator (an object implementing the
iterator protocol) and that a primary motivation for generators is to
provide a simpler and more direct way of creating iterators.  However,
Chris explained that, This proposal causes a separation of generators and
iterators, so it's no longer possible to pretend that they're the same
thing.  That is a major and worrisome conceptual shift.
 
  I agree with Raymond on this point.

A particularly relevant variant of the idiom is the approach of writing
__iter__ directly as a generator, rather than creating a separate custom
iterator class. In that context, the similarities between the __iter__
implementation and the corresponding explicit __next__ implementation is a
beneficial feature.
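A minimal illustration of the idiom Nick describes (writing __iter__ directly as a generator), added here for context:

```python
class Pair:
    """Container whose __iter__ is a generator function rather than a
    separate custom iterator class."""

    def __init__(self, first, second):
        self.first = first
        self.second = second

    def __iter__(self):
        # Calling this returns a generator object, which satisfies the
        # iterator protocol (__iter__/__next__) automatically.
        yield self.first
        yield self.second

print(list(Pair(1, 2)))  # [1, 2]
```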

I'm definitely coming around to the point of view that, even if we wouldn't
design it the way it currently works given a blank slate, the alternative
design doesn't provide sufficient benefit to justify the cost of changing
the behaviour  getting people to retrain their brains.


This all seems more complex than it should be to me.  The way I tend to 
think about it is simply for loops in one form or another, catch 
StopIteration.  So if you iterate an iterator manually rather than using it 
as a for iterator, you need to catch StopIteration.


If we write a function to act as an iterator, like __next__, we need to 
raise StopIteration from somewhere in it, or let one bubble out from a 
generator if we are manually iterating it on each call... next(g). It's 
possible we may need to do either or both conditionally.  That could mean 
we need to think about refactoring some part of a program, but it doesn't 
mean Python needs to be fixed.


So the line that splits them isn't quite as clear cut as it may seem it 
should be.  That may just be a misplaced ideal.


Any time a StopIteration is raised.. either manually with raise, or at 
the end of a generator, it should bubble up until a for loop iterating 
over that bit of code catches it, or a try-except catches it, or it fails 
loudly.  I think it does do this in normal generators, so I don't see an 
issue with how StopIteration works.



Which gets us back to generator expressions and comprehensions.

Let me know if I got some part of this wrong...   :-)

Comprehensions are used as a convenient way to create an object.  The 
expression parts of the comprehension define the *body* of a loop, so a 
StopIteration raised in it will bubble out.  As it would in any other case 
where it is raised in the body of a loop.


Generator expressions on the other hand define the *iterator* to be used in 
a for loop. A StopIteration raised in it is caught by the for loop.


So they both work as they are designed, but they look so similar, it looks 
like one is broken.




It looks to me that there are three options...


OPTION 1:

Make comprehensions act more like generator expressions.

It would mean a while loop in the object creation point is converted to a 
for loop. (or something equivalent.)


Then both a comprehension and a generator expressions can be viewed as 
defining iterators, with the same behaviour, rather than comprehensions 
defining the body of the loop, which has the different but valid behaviour 
of StopIteration escaping.


This would make explaining them a *lot* easier as they become the same 
thing used in a different context, rather than two different things used in 
what appears to be similar contexts.



I think this fits with what Guido wants, but does so in a narrower scope, 
only affecting comprehensions.  StopIteration is less likely to leak out.


But it also allows the use of the stop() hack to raise StopIteration in 
comprehensions and terminate them early.  Currently it doesn't work as it 
does in generator expressions.


If the stop() hack works in both comprehensions and generator expressions 
the same way, then maybe we can view it as less of a hack.
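As an aside (not from the thread): with the stop() hack unavailable, the usual way to terminate a comprehension early is to truncate the *iterator* instead, for example with itertools.takewhile:

```python
from itertools import takewhile

data = [1, 2, 3, -1, 4]

# Stop consuming as soon as the predicate fails, instead of raising
# StopIteration from the comprehension body.
result = [x * 10 for x in takewhile(lambda x: x >= 0, data)]
print(result)  # [10, 20, 30]
```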



OPTION 2:

Make generator expressions more like comprehensions.

This would mean StopIteration would bubble out of generator expressions as 
the person who posted the original topic on python-ideas wanted.  And the 
stop hack would no longer work.


Both generator expressions and comprehensions could be viewed as supplying 
the body in a loop.  This is inconsistent with defining generators that act 
as iterators.  So I'm definitely -1 on this option.



OPTION 3:

Document the differences better.




Cheers,
   Ron

Re: [Python-Dev] PEP 479: Change StopIteration handling inside generators

2014-11-22 Thread Ron Adam



On 11/22/2014 02:16 PM, Chris Angelico wrote:

On Sun, Nov 23, 2014 at 6:49 AM, Ron Adam ron3...@gmail.com  wrote:


OPTION 1:

Make comprehensions act more like generator expressions.

It would mean a while loop in the object creation point is converted to a
for loop. (or something equivalent.)

Then both a comprehension and a generator expressions can be viewed as
defining iterators, with the same behaviour, rather than comprehensions
defining the body of the loop, which has the different but valid behaviour
of StopIteration escaping.

This would make explaining them a *lot* easier as they become the same thing
used in a different context, rather than two different things used in what
appears to be similar contexts.


I think this fits with what Guido wants, but does so in a narrower scope,
only affecting comprehensions.  StopIteration is less likely to leak out.

A list comp is usually compared to a statement-form loop.

def stop(): raise StopIteration
lst = [stop() for x in (1,2,3)]



lst = []
for x in (1,2,3):
 lst.append(stop())

At the moment, these are identical (virtually; the pedantic will point
out that 'x' doesn't leak out of the comprehension) - in each case,
the exception raised by the body will bubble up and be printed to the
console.


This is what I meant by leaking out.  A StopIteration bubbles up.  And your 
examples match my understanding. :-)






But a generator expression is different:


Yes, but they work as I expect them to.




lst = list(stop() for x in (1,2,3))

In this form, lst is an empty list, and nothing is printed to the
console.


I think that is what it should do.  I think of it this way...

 >>> def stop(): raise StopIteration
 ...
 >>> def ge():
 ...     for value in (stop() for x in (1,2,3)):
 ...         yield value
 ...
 >>> list(ge())
 []

Notice the entire body of the comprehension is in the for loop header, and 
no parts of it are in the body except the reference to the already assigned 
value.


The StopIteration is caught by the outer for loop.  Not the for loop in the 
generator expression, or iterator part.




Making comprehensions work more like generator expressions
would, IMO, imply making the same change to all for loops: having a
StopIteration raised by the body of the loop quietly terminate the
loop.


I'm not suggesting making any changes to generator expressions or for loops 
at all.  They would continue to work like they currently do.



Cheers,
   Ron



This is backwards. Most of Python is about catching exceptions as
narrowly as possible: you make your try blocks small, you use
specific exception types rather than bare except: clauses, etc, etc,
etc. You do your best to catch ONLY those exceptions that you truly
understand, unless you're writing a catch, log, and reraise or
catch, log, and go back for more work generic handler. A 'for' loop
currently is written more-or-less like this:

for var in expr:
 body

it = iter(expr)
while True:
 try: var = next(it)
 except StopIteration: break
 body

But this suggestion of yours would make it become:
it = iter(expr)
while True:
 try:
 var = next(it)
 body
 except StopIteration: break



I believe this to be the wrong direction to make the change. Instead,
generator expressions should be the ones to change, because that's a
narrowing of exception-catching scope. Currently, every generator
function is implicitly guarded by try: .. except StopIteration:
pass. This is allowing errors to pass silently, to allow a hack
involving non-local control flow (letting a chained function call
terminate a generator) - violating the Zen of Python.

ChrisA




Re: [Python-Dev] PEP 479: Change StopIteration handling inside generators

2014-11-22 Thread Ron Adam



On 11/22/2014 03:01 PM, Terry Reedy wrote:



Then both a comprehension and a generator expressions can be viewed as
defining iterators,


A comprehension is not an iterator.  The above would make a list or set
comprehension the same as feeding a genexp to list() or set().


Correct, but we could think and reason about them in the same way.

Cheers,
   Ron




Re: [Python-Dev] PEP 479: Change StopIteration handling inside generators

2014-11-22 Thread Ron Adam



On 11/22/2014 04:23 PM, Chris Angelico wrote:

On Sun, Nov 23, 2014 at 8:03 AM, Ron Adam ron3...@gmail.com  wrote:

Making comprehensions work more like generator expressions
would, IMO, imply making the same change to all for loops: having a
StopIteration raised by the body of the loop quietly terminate the
loop.



I'm not suggesting making any changes to generator expressions or for loops
at all.  They would continue to work like they currently do.



But if you're suggesting making list comps react to StopIteration
raised by their primary expressions, then to maintain the
correspondence between a list comp and its expanded form, for loops
would have to change too.
Or should that correspondence be broken, in
that single-expression loop constructs become semantically different
from statement loop constructs?



So we have these...

 Tuple Comprehension  (...)
 List Comprehension  [...]
 Dict Comprehension  {...}  Colon makes it different from sets.
 Set Comprehension  {...}

I don't think there is any other way to create them. And they don't 
actually expand to any Python code that is visible to a programmer.  They 
are self-contained literal expressions for creating these few objects.


The expanded form you are referring to is just how we currently explain 
them.  And yes, we will need to alter how we currently explain 
Comprehensions a bit if this is done.



Here is what I think(?) list comps do currently.

 lst = [expr for items in itr if cond]

Is the same as.

 lst = []
 for x in itr:  # StopIteration  caught here.
 if cond:   # StopIteration  bubbles here.
 lst.append(expr)   # StopIteration  bubbles here.



And it would be changed to this...

def comp_expression():
for x in itr:  # StopIteration caught here.
if cond:
   list.append(expr)

# StopIteration from cond and expr caught here.
lst = list(x for x in comp_expression())



So

   [expr for x in itr if cond]

Becomes a literal form of:

   list(expr for x in itr if cond)


And I believe that is how they are explained quite often. :-)


(Although It's not currently quite true, is it?)
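The equivalence being described does hold for the ordinary (non-StopIteration) case, and can be checked; a quick sketch added for illustration:

```python
itr = range(10)

# List comprehension form...
a = [x * x for x in itr if x % 2 == 0]

# ...and the "literal form of list(genexp)" reading of it.
b = list(x * x for x in itr if x % 2 == 0)

print(a == b, a)  # True [0, 4, 16, 36, 64]
```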



Cheers,
   Ron



Re: [Python-Dev] PEP 479: Change StopIteration handling inside generators

2014-11-22 Thread Ron Adam



On 11/22/2014 06:20 PM, Chris Angelico wrote:

On Sun, Nov 23, 2014 at 11:05 AM, Ron Adam ron3...@gmail.com  wrote:

So we have these...

  Tuple Comprehension  (...)
  List Comprehension  [...]
  Dict Comprehension  {...}  Colon makes it different from sets.
  Set Comprehension  {...}

I don't think there is any other way to create them. And they don't actually
expand to any python code that is visible to a programmer.  They are self
contained literal expressions for creating these few objects.

Hmmm, there's no such thing as tuple comprehensions.


Just didn't think it through quite well enough.  But you are correct, that 
would be a generator expression.


One less case to worry about. :-)



lst = [1,2,3,4] # not a comprehension
even = [n*2 for n in lst] # comprehension


Here is what I think(?) list comps do currently.

  lst = [expr for items in itr if cond]

Is the same as.

  lst = []
  for x in itr:  # StopIteration  caught here.
  if cond:   # StopIteration  bubbles here.
  lst.append(expr)   # StopIteration  bubbles here.



Pretty much. It's done in a nested function (so x doesn't leak), but
otherwise, yes.



And it would be changed to this...

 def comp_expression():
 for x in itr:  # StopIteration caught here.
 if cond:
list.append(expr)

 # StopIteration from cond and expr caught here.
 lst = list(x for x in comp_expression())



That wouldn't quite work, but this would:


Right, the list.append() should be a yield(expr).



def listcomp():
 ret = []
 try:
 for x in itr:
 if cond:
 ret.append(x)
 except StopIteration:
 pass
 return ret
lst = listcomp()

(And yes, the name's syntactically illegal, but if you look at a
traceback, that's what is used.)


That's fine too.

The real question is how much breakage would such a change make?  That will 
probably need a patch in order to test it out.


Cheers,
   Ron


Re: [Python-Dev] PEP 479: Change StopIteration handling inside generators

2014-11-22 Thread Ron Adam



On 11/22/2014 07:06 PM, Chris Angelico wrote:

On Sun, Nov 23, 2014 at 11:51 AM, Ron Adam ron3...@gmail.com  wrote:



On 11/22/2014 06:20 PM, Chris Angelico wrote:


Hmmm, there's no such thing as tuple comprehensions.


Just didn't think it through quite well enough.  But you are correct, that
would be a generator expression.

One less case to worry about.:-)

Ah right, no probs.



 And it would be changed to this...
 
  def comp_expression():
  for x in itr:  # StopIteration caught here.
  if cond:
 list.append(expr)
 
  # StopIteration from cond and expr caught here.
  lst = list(x for x in comp_expression())


Right, the list.append() should be a yield(expr).



In that case, your second generator expression is entirely redundant;
all you want is list(comp_expression()).

Yes, and that is good.  Simplifies it even more.


But the example doesn't say
*why*  this version should terminate on a StopIteration raised by expr,
when the statement form would print an exception traceback.


I presume you are asking why do this?  And not why the example does that?

There has been a desire expressed, more than a few times, to make 
comprehensions more like generator expressions in the past.  It looks to me 
that that desire is still there.  I also think there has been quite a bit of 
confusion in these discussions that could be reduced substantially by 
making comprehensions work a bit more like generator expressions.


As to why the example does that.. a list constructor iterates over a 
generator expression in a way that follows the iterator protocol.  If you 
make comprehensions work as if they are a generator expression fed to a 
constructor... then it too should follow the iterator protocol.  Do you agree?
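For reference, the iterator protocol that list() follows when fed a generator expression — call __next__ until StopIteration — looks like this when spelled out (illustrative only):

```python
def to_list(iterable):
    """Hand-rolled equivalent of list(iterable), following the
    iterator protocol explicitly."""
    it = iter(iterable)
    out = []
    while True:
        try:
            out.append(next(it))
        except StopIteration:
            break  # the consumer, not the genexp itself, absorbs it
    return out

print(to_list(x * 2 for x in (1, 2, 3)))  # [2, 4, 6]
```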




The real question is how much breakage would such a change make?  That will
probably need a patch in order to test it out.

There's one attached here:

http://bugs.python.org/issue22906


Doesn't that patch affect generators and not comprehensions?  If so, it 
wouldn't do what we are talking about here.


Ron




Re: [Python-Dev] Fwd: Python 2.x and 3.x usage survey

2013-12-31 Thread Ron Adam



On 12/31/2013 04:34 AM, Martin v. Löwis wrote:

So for the Python 4 survey, I propose to have just a single question:

* Have you heard of Python 4?

That will prove that Python 4 is even faster than Python 3:-)
Of course, that is also because it has a JIT compiler, and runs
on 16 cores with no GIL.


If the question is...

   * What is Python 4?

I think you get much more interesting answers.  ;-)

Cheers,
   Ron




Re: [Python-Dev] eval and triple quoted strings

2013-06-17 Thread Ron Adam



On 06/17/2013 12:04 PM, Walter Dörwald wrote:

Making the string raw, of course turns it into:

U+0027: APOSTROPHE
U+0027: APOSTROPHE
U+0027: APOSTROPHE
U+005C: REVERSE SOLIDUS
U+0072: LATIN SMALL LETTER R
U+005C: REVERSE SOLIDUS
U+006E: LATIN SMALL LETTER N
U+0027: APOSTROPHE
U+0027: APOSTROPHE
U+0027: APOSTROPHE

and eval()ing that does indeed give \r\n as expected.


You can also escape the reverse slashes in a regular string to get the same 
result.



 >>> s1 = "'''\\r\\n'''"
 >>> list(s1)
 ["'", "'", "'", '\\', 'r', '\\', 'n', "'", "'", "'"]

 >>> s2 = eval(s1)
 >>> list(s2)
 ['\r', '\n']

 >>> s3 = "'''%s'''" % s2
 >>> list(s3)
 ["'", "'", "'", '\r', '\n', "'", "'", "'"]

 >>> s4 = eval(s3)
 >>> list(s4)
 ['\n']


When a standard string literal is used with eval, it's evaluated first to a 
string object in the same scope as the eval function is called from, then 
the eval function is called with that string object and it's evaluated 
again.  So it's really being parsed twice.  (That's the part that got me.)


The transformation between s1 and s2 is what Phillip is referring to, and
Guido is referring to the transformation from s2 to s4.   (s3 is needed to 
avoid the end of line error of evaluating a single quoted string with \n in 
it.)


When a string literal is used directly with eval, it looks like it is 
evaluated from s1 to s4 in one step, but that isn't what is happening.


Cheers,  Ron


(ps: Was informed my posts were showing up twice.. hopefully I got that 
fixed now.)


Re: [Python-Dev] eval and triple quoted strings

2013-06-17 Thread Ron Adam



On 06/17/2013 05:18 PM, Greg Ewing wrote:

I'm still not convinced that this is necessary or desirable
behaviour. I can understand the parser doing this as a
workaround before we had universal newlines, but now that
we do, I'd expect any Python string to already have newlines
converted to their canonical representation, and that any CRs
it contains are meant to be there. The parser shouldn't need
to do newline translation a second time.



It's the other way around.

Eval and exec should generate the same results as Python's compiler with the 
same input, including errors and exceptions.  The only way we can have that 
is if eval and exec parse everything the same way.


It's the first parsing that needs to be avoided or compensated for in these 
cases.  Raw strings (my preference) work for string literals, or you can 
escape the escape codes so they are still individual characters after the 
first translation.  Or read the code directly from a file rather than 
importing it.


For example, if you wrote your own python console program, you would want 
all the errors and exceptions to come from eval, including those for bad 
strings.  You would still need to feed the bad strings to eval.  If you 
don't then you won't get the same output from eval as the compiler does.


Cheers,
   Ron



Re: [Python-Dev] eval and triple quoted strings

2013-06-15 Thread Ron Adam



On 06/14/2013 04:03 PM, PJ Eby wrote:

Should this be the same?


 $ python3 -c 'print(bytes("""\r\n""", "utf8"))'
 b'\r\n'

 >>> eval('print(bytes("""\r\n""", "utf8"))')
 b'\n'

No, but:

 >>> eval(r'print(bytes("""\r\n""", "utf8"))')

should be.  (And is.)

What I believe you and Walter are missing is that the \r\n in the eval
strings are converted early if you don't make the enclosing string
raw.  So what you're eval-ing is not what you think you are eval-ing,
hence the confusion.


Yes thanks, seems like an easy mistake to make.

To be clear...

The string to eval is parsed when the eval line is tokenized in the scope 
containing the eval() function.  The eval function then parses the 
resulting string object it receives as its input.


There is no mention of using raw strings in the docs on eval and exec.  I 
think there should be, because the intention (in most cases) is for eval to 
parse the string, and not for it to be parsed or changed before it's 
evaluated by eval or exec.


An example using a string with escape characters might make it clearer.
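Something along these lines could serve as that example (a sketch, not from the original mail):

```python
# With a raw string, eval receives the two characters backslash + n and
# parses the escape sequence itself, as intended:
assert eval(r'"\n"') == '\n'

# Without the raw prefix, the outer literal is parsed first: \n becomes a
# real newline before eval ever runs, leaving an unterminated string:
try:
    eval('"\n"')
    outcome = "no error"
except SyntaxError:
    outcome = "SyntaxError"

print(outcome)  # SyntaxError
```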

Cheers,
   Ron


Re: [Python-Dev] eval and triple quoted strings

2013-06-15 Thread Ron Adam



On 06/15/2013 03:23 PM, Guido van Rossum wrote:

The semantics of raw strings are clear. I don't see that they should be
called out especially in any context. (Except for regexps.) Usually exec()
is not used with a literal anyway (what would be the point).



There are about a hundred instances of eval/exec(some_string_literal) in 
Python's library.  Most of them in the tests, and maybe about half of those 
testing the compiler, eval, and exec.


egrep -owr --include="*.py" "(eval|exec)\(('.*'|\".*\")\)" * | wc -l
114


I have no idea in how many places a string literal is assigned to a name 
first and then used later in eval or exec.  It's harder to grep for but 
would be less than...


egrep -owr --include="*.py" "(eval|exec)\(.*\)" * | wc -l
438

That's overstated because some of those are comments, and some may be 
functions with the name ending with eval or exec.



I do think that eval and exec is a similar case to regexps.  And possibly 
often enough, the string may contain a raw string, regular expression, or a 
file/path name.


Only a short note needed in the docs for eval, nothing more.  And not even 
that if no one thinks it's an issue.


cheers,
   Ron


Re: [Python-Dev] eval and triple quoted strings

2013-06-14 Thread Ron Adam



On 06/14/2013 10:36 AM, Guido van Rossum wrote:

Not a bug. The same is done for file input -- CRLF is changed to LF before
tokenizing.



Should this be the same?


 $ python3 -c 'print(bytes("""\r\n""", "utf8"))'
 b'\r\n'

 >>> eval('print(bytes("""\r\n""", "utf8"))')
 b'\n'



Ron




On Jun 14, 2013 8:27 AM, Walter Dörwald wal...@livinglogic.de wrote:

Hello all!

This surprised me:

 >>> eval("'''\r\n'''")
 '\n'

Where did the \r go? ast.literal_eval() has the same problem:

 >>> ast.literal_eval("'''\r\n'''")
 '\n'

Is this a bug/worth fixing?

Servus,
Walter


Re: [Python-Dev] Ordering keyword dicts

2013-06-10 Thread Ron Adam



On 06/09/2013 10:13 PM, Alexander Belopolsky wrote:


On Sun, May 19, 2013 at 1:47 AM, Guido van Rossum gu...@python.org wrote:

I'm slow at warming up to the idea. My main concern is speed -- since
most code doesn't need it and function calls are already slow (and
obviously very common :-) it would be a shame if this slowed down
function calls that don't need it noticeably.



And we could remove this bit from the docs...


The OrderedDict constructor and update() method both accept keyword
arguments, but their order is lost because Python’s function call semantics
pass-in keyword arguments using a regular unordered dictionary.




Here is an idea that will not affect functions that don't need to know the
order of keywords: a special __kworder__ local variable.  The use of this
variable inside the function will signal compiler to generate additional
bytecode to copy keyword names from the stack to a tuple and save it in
__kworder__.


I like the idea of an ordered dict acting the same as a regular dict, with 
an optional way to access the order.  It would also catch uses of unordered 
dicts in situations where an ordered dict is expected, by having the ordered 
keywords accessed in a way that isn't available in unordered dicts.



I don't care for a magic local variable inside of a function.



With that feature, an OrderedDict constructor, for example
can be written as



def odict(**kwargs):
   return OrderedDict([(key, kwargs[key]) for key in __kworder__])



There are two situations where this would matter...  The packing of 
arguments when **kwargs is used in the function definition, and the 
unpacking of arguments when **kwargs is used in the function call.


By having the unpacking side also work with ordered dicts, maybe we could 
then replace the many *very common* occurrences of (*args, **kwargs) with 
just (**all_args) or (***all_args).  :)


It's the unordered nature of dictionaries that requires *args to be used 
along with **kwargs when forwarding function arguments to other functions.
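(Historical note, added for context: this concern was later addressed by PEP 468, accepted for Python 3.6, which made **kwargs preserve the order in which keyword arguments were passed, with no special variable needed:)

```python
def f(**kwargs):
    # Since Python 3.6 (PEP 468), kwargs preserves call-site order.
    return list(kwargs)

print(f(b=1, a=2))  # ['b', 'a']
```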



Another variation of this is to have a way to forward a function's 
signature namespace to another function directly and bypass the 
signature parsing in those cases.


Cheers,
   Ron


Re: [Python-Dev] cpython: Introduce importlib.util.ModuleManager which is a context manager to

2013-05-30 Thread Ron Adam



On 05/30/2013 03:34 AM, Mark Shannon wrote:



On 29/05/13 01:14, Brett Cannon wrote:

On Tue, May 28, 2013 at 5:40 PM, Antoine Pitrou solip...@pitrou.net wrote:

On Tue, 28 May 2013 23:29:46 +0200 (CEST)
brett.cannon python-check...@python.org wrote:


+.. class:: ModuleManager(name)
+
+A :term:`context manager` which provides the module to load. The
module will
+either come from :attr:`sys.modules` in the case of reloading or a
fresh
+module if loading a new module. Proper cleanup of
:attr:`sys.modules` occurs
+if the module was new and an exception was raised.




(FWIW, I think ModuleManager is a rather bad name :-)



+1. XxxManager is what Java programmers call their classes when they are
forced to have an
unnecessary class because they don't have 1st class functions or modules.

(I don't like 'Context Manager' either, but it's too late to change it :( )




I'm open to suggestions, but the thing does manage the module so it at
least makes sense.


But what do you mean by managing? 'Manage' has many meanings.
Once you've answered that question you should have your name.


It manages the context, as in the above reference to context manager.


In this case the context is the loading and unloading of a module.  Having 
context manager names end with Manager helps indicate how they're used.


But other verb+er combinations also work.  Taking a hint from the first few 
words of the __doc__ string gives us an obvious alternative.


   ModuleProvider


Cheers,
Ron




Re: [Python-Dev] PEP 7 clarification request: braces

2012-01-01 Thread Ron Adam
On Mon, 2012-01-02 at 14:44 +1000, Nick Coghlan wrote:
 I've been having an occasional argument with Benjamin regarding braces
 in 4-line if statements:
 
   if (cond)
 statement;
   else
 statement;
 
 vs.
 
   if (cond) {
 statement;
   } else {
 statement;
   }
 
 He keeps leaving them out, I occasionally tell him they should always
 be included (most recently this came up when we gave conflicting
 advice to a patch contributor). He says what he's doing is OK, because
 he doesn't consider the example in PEP 7 as explicitly disallowing it,

 I think it's a recipe for future maintenance hassles when someone adds
 a second statement to one of the clauses but doesn't add the braces.

I've had to correct myself on this one a few times so I will have to
agree it's a good practice.

 (The only time I consider it reasonable to leave out the braces is for
 one liner if statements, where there's no else clause at all)

The problem is only when an additional statement is added to the last
block, not the preceding ones, as the compiler will complain about
those.  So I don't know how the 4 line example without braces is any
worse than a 2 line if without braces.

I think my preference is, if any block in a multi-block expression needs
braces, then the other blocks should have it.  (Including the last block
even if it's a single line.)

The next level up would be to require them on all blocks, even two line
if expressions, but I'm not sure that is really needed.  At some point
the extra noise of the braces makes things harder to read rather than
easier, and what you gain in preventing one type of error may increase
chances of another type of error not being noticed.

Cheers,
   Ron



 Since Benjamin doesn't accept the current brace example in PEP 7 as
 normative for the case above, I'm bringing it up here to seek
 clarification.




[Python-Dev] generators and ceval

2011-12-15 Thread Ron Adam

Hi.  I just added issue 13607 with a patch that moves the generator-specific
checks and code out of the ceval PyEval_EvalFrameEx() function.

Those parts were moved up into the generator gen_send_ex() function.

Doing that removed the generator flag checks from the eval loop and made
it a bit cleaner.  In order to do that, I needed to give generators a
way to look at the 'why' value.  Doing that also cleaned up the code in
gen_send_ex(), as it can use the 'why' value in a switch instead of several
indirect if tests.

http://bugs.python.org/issue13607

Altogether it made yields about 10% faster, and everything else about
2%-3% faster (on average).   But it does need to be checked.
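For anyone wanting to sanity-check numbers like these, a micro-benchmark along the following lines (just a sketch, not the benchmark used for the patch) measures raw yield throughput, run before and after applying the change:

```python
import timeit

# Time how long it takes to resume a generator (one yield per next() call).
setup = """
def gen():
    while True:
        yield None
g = gen()
"""
n = 100000
per_call = timeit.timeit("next(g)", setup=setup, number=n) / n
print("about %.0f ns per yield" % (per_call * 1e9))
```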

Cheers,
   Ron




Re: [Python-Dev] Define Tracker workflow

2011-10-19 Thread Ron Adam
On Wed, 2011-10-19 at 22:17 +0300, anatoly techtonik wrote:
 Does everybody feel comfortable with the 'stage' and 'resolution' fields in 
 the tracker?
 
 I understand that 'stage' defines workflow and 'resolution' is status
 indicator, but the question is - do we really need to separate them?
 For example, right now when a ticket's 'status' is closed (all right -
 there is also a 'status' field), we mark 'stage' as
 'committed/rejected'. I see the 'stage' as a workflow state and
 'committed/rejected' value is confusing because further steps are
 actually depend on if the state is actually 'committed' or 'rejected'.
 
 stage: patch review - committed/rejected
 
 When I see a patch was rejected, I need to analyse why and propose a
 better one. To analyse I need to look at 'resolution' field:
 
   duplicate
   fixed
   invalid
   later
   out of date
   postponed
   rejected
   remind
   wont fix
   works for me
 
 The resolution will likely be 'fixed' which doesn't give any info
 about if the patch was actually committed or not. You need to know
 that there is 'rejected' status, so if the status 'is not rejected'
 then the patch was likely committed. Note that resolution is also a
 state, but for a closed issue.

It's somewhat confusing to me also.


 For me the only things in `status` that matter are - open and closed.
 Everything else is more descriptive 'state' of the issue. So I'd merge
 all our descriptive fields into single 'state' field that will accept
 the following values depending on master 'status':
 open:
   languishing
   pending
   needs test
   needs patch
   patch review
   commit review
 
 closed:
   committed
   duplicate
   fixed
   invalid
   out of date
   rejected
   wont fix
   works for me

I like the idea. But it's not clear what should be set at what times.

While trying not to change too much, how about the following?

Status:
Open
Closed

Stage:
In progress:
needs fix   (More specific than the term 'patch'.)
needs test
needs docs
needs patch (Needs a combined fix/test/docs .diff file.)
needs patch review   (To Accepted if OK.)
languishing (To Rejected:out_of_date if no action soon.)
pending

Accepted:
commit review
committed  (And Close)

Rejected:   (Pick one and Close.)
duplicate
invalid
out of date
won't fix
cannot reproduce  (instead of 'works for me')

This combines the stage and resolution fields together.

Currently the stage is up in the upper left of a tracker page, while the
status and resolution is further down.  They should at least be moved
near each other.

  +----------+----------+---------------+
  | status:  | stage:   | resolution:   |
  +----------+----------+---------------+

But it would be better if it was just...

  +--------+--------+
  | status:| stage: |
  +--------+--------+

And just list the stages like...

status: Open stage: In progress - needs docs

status: Open stage: In progress - needs patch review

status: Open stage: Accepted - commit review

status: Closed   stage: Accepted - committed

status: Closed   stage: Rejected - invalid

It's not entirely consistent because while it's open, the stage refers
to what is needed, but once it's closed, it refers to the last item
done.  But I think it would be fine that way.
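The status/stage combinations above can be sketched as a small model; this is purely illustrative (the STAGES mapping and the display() helper are made up here, not part of the tracker):

```python
# Hypothetical model of the proposed tracker states (names from the
# proposal above).
STAGES = {
    "In progress": ["needs fix", "needs test", "needs docs", "needs patch",
                    "needs patch review", "languishing", "pending"],
    "Accepted": ["commit review", "committed"],
    "Rejected": ["duplicate", "invalid", "out of date", "won't fix",
                 "cannot reproduce"],
}

def display(status, stage, substage):
    # Reject combinations the proposal doesn't allow.
    if substage not in STAGES[stage]:
        raise ValueError("%r is not a %s substage" % (substage, stage))
    return "status: %-8s stage: %s - %s" % (status, stage, substage)

print(display("Open", "In progress", "needs docs"))
print(display("Closed", "Rejected", "invalid"))
```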

As for more detailed info, you pretty much have to read the discussion.

Cheers,
   Ron








Re: [Python-Dev] PEP 395: Module Aliasing

2011-03-05 Thread Ron Adam



On 03/05/2011 01:14 AM, Nick Coghlan wrote:

On Sat, Mar 5, 2011 at 2:40 PM, Ron Adam <r...@ronadam.com> wrote:

Fixing direct execution inside packages


+1 on both of these.

I don't see any major problems with these.  Any minor issues can be worked
out.

I suggest separating these two items out into their own PEP, and doing them
first.


This first draft includes everything I *want* to do. If I have to
scale the proposal back to get it accepted because I can't convince
enough other people it's a good idea... well, even Guido isn't immune
to that part of the PEP process (cf. PEP 340). It wouldn't be the
first PEP to have a Rejected Ideas section, and I'm sure it wouldn't
be the last.


Right, All of the reasons are good and some solution should be implemented 
for each of the issues you pointed out. The PEP process will help us figure 
out the best way to do that.




I definitely intend to implement this in a few different pieces,
though - the order of the 4 different fixes in the PEP also summarises
what I consider to be a reasonable development strategy as well.


Sounds good.

I haven't formed a good opinion on the last 2 items yet which was why I 
didn't comment on them.





This morning I did have some interesting thoughts.*

*It seems I'm starting to wake up with Python code in my head.  (Is that 
good?)  In some cases, they're outside-the-box solutions, but not always 
good ones. *shrug* ;-)



Anyway... this morning's fuzzy Python thoughts were along the lines of...


On one hand, Python tries to make it so each object's source/parent info is 
reachable from the object when possible.  The nice thing about that is, it 
can be used to create a tree of where everything is.  That doesn't work 
*easily* at the module level due to modules being external OS entities that 
can be reused in different contexts.


On the other hand, a module's import order is somewhat more dynamic compared 
to class inheritance order, so would it be better to have that info in a 
separate place rather than with each module?



Adding a second reference to the '__main__' module begins to blur the 
purpose of sys.modules from being a place to keep imported modules to a 
place that also has some ordering information.  (Practicality over purity? 
Or an indication of what direction to go in?)


And, if you ask for keys, items, or values, you will need to filter out 
'__main__' to avoid the double reference.



So I was thinking, what if we make sys.modules a little smarter.  The 
negative of that is, it would no longer be a simple dictionary object.



First idea ...

Add a __main__ attribute to sys.modules to hold the name of the main module.

Override modules.__setitem__, so it will catch '__main__' and set 
modules.__main__ to the name of the module, and put the module in under its 
real name instead of '__main__'.


Override modules.__getitem__, so it will catch '__main__' and return 
self[self.__main__].


Then keys(), values(), and items(), will not have the doubled main module 
references in them.


The name of the main module is easy to get.  ie... sys.modules.__main__

sys.modules[__name__] will still return the correct module if __name__ == 
__main__.
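A minimal sketch of this first idea (the `__real_name__` attribute is a made-up stand-in for however the real name of the main module would actually be recovered, which is the hard part the proposal would have to supply):

```python
import types

class MainAliasingDict(dict):
    """Hypothetical sketch: store the main module under its real name and
    remember that name in a __main__ attribute."""
    __main__ = None  # real name of the module executing as __main__

    def __setitem__(self, name, module):
        if name == '__main__':
            # Record the real name and file the module under it instead.
            name = self.__main__ = getattr(module, '__real_name__', '__main__')
        dict.__setitem__(self, name, module)

    def __getitem__(self, name):
        if name == '__main__' and self.__main__ is not None:
            name = self.__main__
        return dict.__getitem__(self, name)

# The main module then appears only once, under its real name.
mods = MainAliasingDict()
mod = types.ModuleType('__main__')
mod.__real_name__ = 'myscript'   # made-up marker, just for the sketch
mods['__main__'] = mod
assert list(mods) == ['myscript']
assert mods['__main__'] is mods['myscript']
```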




Second (more ambitious and less thought out) idea ...

Extend sys.modules so it actually is a tree structure, with the '__main__' 
module at the top of the tree.



Cheers,
   Ron



Re: [Python-Dev] PEP 395: Module Aliasing

2011-03-05 Thread Ron Adam



On 03/05/2011 06:33 PM, Nick Coghlan wrote:

On Sun, Mar 6, 2011 at 4:11 AM, Ron Adam <r...@ronadam.com> wrote:

Adding a second reference to the '__main__' module begins to blur the
purpose of sys.modules from being a place to keep imported modules to a
place that also has some ordering information.  (Practicality over purity?
Or an indication of what direction to go in?)

And, if you ask for keys, items, or values, you will need to filter out
'__main__' to avoid the double reference.

So I was thinking, what if we make sys.modules a little smarter.  The
negative of that is, it would no longer be a simple dictionary object.

First idea ...

Add a __main__ attribute to sys.modules to hold the name of the main module.

Override modules.__setitem__, so it will catch '__main__' and set
modules.__main__ to the name of the module, and put the module in under its
real name instead of '__main__'.

Override modules.__getitem__, so it will catch '__main__' and return
self[self.__main__].

Then keys(), values(), and items(), will not have the doubled main module
references in them.

The name of the main module is easy to get.  ie... sys.modules.__main__

sys.modules[__name__] will still return the correct module if __name__ ==
__main__.


That's quite an interesting idea - I hadn't even considered the
implications of double counting the affected module when iterating
over sys.modules in some fashion. That said, dropping `__main__` from
the iteration might confuse existing code, so it may be better to have
the lookup go the other way (i.e. define __missing__ on the dict
subclass and return sys.modules['__main__'] if the key matches
sys.modules.__main__).


... if the key matches sys.modules.__missing__

Works for me. ;-)

We can find a better name than __missing__ later.  (minor detail)
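A minimal sketch of the __missing__ variant Nick describes, where '__main__' stays a normal key and a lookup under the main module's *real* name falls back to it (the `__main__` attribute holding the real name is part of the proposal, not existing dict behaviour):

```python
import types

class ModulesDict(dict):
    """Hypothetical sketch of the __missing__ approach."""
    __main__ = None  # real name of the module running as __main__

    def __missing__(self, key):
        # Looking up the main module by its real name falls back to
        # the '__main__' entry; anything else is a normal KeyError.
        if key == self.__main__:
            return self['__main__']
        raise KeyError(key)

mods = ModulesDict()
mods.__main__ = 'myscript'
mods['__main__'] = types.ModuleType('__main__')
assert mods['myscript'] is mods['__main__']
assert list(mods) == ['__main__']   # iteration is unchanged
```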

Cheers,
   Ron



Re: [Python-Dev] PEP 395: Module Aliasing

2011-03-04 Thread Ron Adam



On 03/04/2011 09:30 AM, Nick Coghlan wrote:


Fixing dual imports of the main module
--

Two simple changes are proposed to fix this problem:

1. In ``runpy``, modify the implementation of the ``-m`` switch handling to
   install the specified module in ``sys.modules`` under both its real name
   and the name ``__main__``. (Currently it is only installed as the latter)
2. When directly executing a module, install it in ``sys.modules`` under
   ``os.path.splitext(os.path.basename(__file__))[0]`` as well as under
   ``__main__``.

With the main module also stored under its real name, imports will pick it
up from the ``sys.modules`` cache rather than reimporting it under a new name.



 Fixing direct execution inside packages

+1 on both of these.

I don't see any major problems with these.  Any minor issues can be worked out.

I suggest separating these two items out into their own PEP, and doing them 
first.
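A rough sketch of the second change quoted above (the `register_main_alias` helper name is made up for illustration; the PEP would do this inside the interpreter's own startup code, not in user code):

```python
import os
import sys
import types

def register_main_alias(main_module):
    """Also register the main module in sys.modules under the name an
    import statement would use, so a later import finds it in the cache
    instead of reimporting it under a new name."""
    real_name = os.path.splitext(os.path.basename(main_module.__file__))[0]
    sys.modules[real_name] = main_module
    return real_name

# Usage with a stand-in module object:
fake_main = types.ModuleType('__main__')
fake_main.__file__ = '/tmp/myscript.py'
assert register_main_alias(fake_main) == 'myscript'
assert sys.modules['myscript'] is fake_main
```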



Cheers,
   Ron




Re: [Python-Dev] MSI: Remove dependency from win32com.client module (issue4080047)

2011-02-01 Thread Ron Adam



On 02/01/2011 09:51 AM, anatoly techtonik wrote:


So far only Georg has explained that patches sent to the mailing list will not
be reviewed, because there is too much volume. But the bug tracker is not a
patch tracker. It doesn't allow monitoring incoming patches by module, and
its search is very poor. Of course mailing lists are even worse in
this regard, but there is nothing the Python community can't deal with.
The problem is keeping non-core people outside motivated, and the
biggest problem with the current documented process is that nobody even
thinks about it.


I've seen quite a few changes over the years.  Yes, it happens over 
years because the release schedule is fairly long.  They try not to 
interrupt the current schedule too much, so bigger changes to the 
development process are usually made after a major release is done, rather 
than during the middle.


Lately (the last two years) things have been quite a bit busier with the 
addition of Python 3.x.  Once we get to where we are (mostly) only 
concentrating on one major version again, it will be easier to make 
process changes.  (Fewer things to mess up if it goes wrong.)


I think after this next release is completed you will see more efforts 
turning to improving the process.  Some of the very things you have been 
trying to point out, I think.


As far as patches getting attention, it's getting better there too.  Every 
time you make a comment or update an issue with a patch change, it gets 
reported to the bugs list.  Many of the core developers watch that and will 
add themselves to the nosy list on that issue if it has something to do 
with the parts of Python they know.  If you have a patch that you feel is 
complete and ready to go into the next release, or a bug fix for the 
current one, post a comment on the issue asking for a review.  Chances are 
you will get a reply in a few days.


I've found searching for other patches related to my patches helps. I can 
search the tracker or the bug list for the module or problem I'm working 
on.  It's really not that hard to find related issues.  Then I can post a 
comment on those issues when I can be of help, and also post on that issue 
a link to the related issue I'm working on.


Python is a large project with a *huge* user base, so changes are 
considered very carefully.  Probably the hardest part is making changes in a 
way that is very unlikely to break someone's program.  Mess up someone's 
payroll process somewhere (even by the smallest change) and people will 
get very unhappy really quickly.  It's also not good to crash space shuttles 
or Google. ;-)


Cheers,
  Ron









Re: [Python-Dev] Exception __name__ missing?

2011-01-18 Thread Ron Adam



On 01/18/2011 01:14 AM, Georg Brandl wrote:


For these cases, you can use traceback.format_exception_only().


Thanks Georg, that works nicely.

Ron ;-)




[Python-Dev] Exception __name__ missing?

2011-01-17 Thread Ron Adam


Is this on purpose?


Python 3.2rc1 (py3k:88040, Jan 15 2011, 18:11:39)
[GCC 4.4.5] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> Exception.__name__
'Exception'
>>> e = Exception('has no name')
>>> e.__name__
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'Exception' object has no attribute '__name__'


Ron Adam




Re: [Python-Dev] Exception __name__ missing?

2011-01-17 Thread Ron Adam



On 01/17/2011 02:27 PM, Georg Brandl wrote:

Am 17.01.2011 21:22, schrieb Ron Adam:


Is this on purpose?


Python 3.2rc1 (py3k:88040, Jan 15 2011, 18:11:39)
[GCC 4.4.5] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> Exception.__name__
'Exception'
>>> e = Exception('has no name')
>>> e.__name__
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'Exception' object has no attribute '__name__'


It's not on purpose in the sense that it's not something special
to exceptions.  The class __name__ attribute is not accessible
from instances of any class.


Yes, I realised this on the way to an appointment.  Oh well. ;-)


What I needed was e.__class__.__name__ instead of e.__name__.

I should have thought about this a little more before posting.



The particular reason I wanted it was to format a nice message for 
displaying in pydoc browser mode.   The server errors, like a missing .css 
file and any other server-related errors, go to the server console, while the 
content errors get displayed in a web page.  i.e. object not found, or 
some other content-related reason for not giving what was asked for.


Doing repr(e) was giving me too much.

UnicodeDecodeError('utf8', b'\x7fELF\x02\x01\x01\x00\x00\x00\x 

With pages of bytes, and I'd rather not truncate it, although that would be ok.

str(e) was more useful, but didn't include the exception name.

'utf8' codec can't decode byte 0xe0 in position 24: invalid 
continuation byte



So doing e.__name__ was the obvious next thing...  For some reason I 
expected the __name__ attribute on exception instances to be inherited from 
the class.  Beats me why. *shrug*
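For the record, both approaches that came up in this thread give a short "ExceptionName: message" line without repr()'s full payload; a sketch:

```python
import traceback

# Trigger a UnicodeDecodeError like the one described above.
try:
    b'\xe0'.decode('utf8')
except UnicodeDecodeError as e:
    # Approach 1: go through the class to get the name.
    by_class = "%s: %s" % (e.__class__.__name__, e)
    # Approach 2: Georg's suggestion from the earlier message.
    by_tb = traceback.format_exception_only(type(e), e)[-1].strip()

print(by_class)
print(by_tb)
```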


Thanks,
   Ron






Re: [Python-Dev] [RELEASED] Python 3.2 rc 1

2011-01-16 Thread Ron Adam


:-D


Great job Georg!

Ron Adam




On 01/16/2011 01:33 AM, Georg Brandl wrote:


On behalf of the Python development team, I'm very happy to announce the
first release candidate of Python 3.2.

Python 3.2 is a continuation of the efforts to improve and stabilize the
Python 3.x line.  Since the final release of Python 2.7, the 2.x line
will only receive bugfixes, and new features are developed for 3.x only.

Since PEP 3003, the Moratorium on Language Changes, is in effect, there
are no changes in Python's syntax and built-in types in Python 3.2.
Development efforts concentrated on the standard library and support for
porting code to Python 3.  Highlights are:

* numerous improvements to the unittest module
* PEP 3147, support for .pyc repository directories
* PEP 3149, support for version tagged dynamic libraries
* PEP 3148, a new futures library for concurrent programming
* PEP 384, a stable ABI for extension modules
* PEP 391, dictionary-based logging configuration
* an overhauled GIL implementation that reduces contention
* an extended email package that handles bytes messages
* a much improved ssl module with support for SSL contexts and certificate
   hostname matching
* a sysconfig module to access configuration information
* additions to the shutil module, among them archive file support
* many enhancements to configparser, among them mapping protocol support
* improvements to pdb, the Python debugger
* countless fixes regarding bytes/string issues; among them full support
   for a bytes environment (filenames, environment variables)
* many consistency and behavior fixes for numeric operations

For a more extensive list of changes in 3.2, see

 http://docs.python.org/3.2/whatsnew/3.2.html

To download Python 3.2 visit:

 http://www.python.org/download/releases/3.2/

Please consider trying Python 3.2 with your code and reporting any bugs
you may notice to:

 http://bugs.python.org/


Enjoy!

--
Georg Brandl, Release Manager
georg at python.org
(on behalf of the entire python-dev team and 3.2's contributors)




Re: [Python-Dev] unit test needed

2011-01-11 Thread Ron Adam



On 01/10/2011 12:01 PM, Antoine Pitrou wrote:


Hello,

I would like to advocate again for the removal of the unit test
needed stage on the tracker, which regularly confuses our triagers
into thinking it's an actual requirement or expectation from
contributors and bug reporters.



This keeps coming up because the logic of the different things in the 
tracker is not as clearly defined as it could be.


There are differences between a sequential stage, and a non-sequential 
requirement or status.  Here's an example of separating those well.


Status: (Set as required)
  __  Bug   - Set in New stage.
  __  Feature-request   - Set in New stage.
  __  Commit-approved   - Set in Patch-ready stage.
  __  Closed-committed  - set in final stage.
  __  Closed-rejected   - Set in any stage. (Add message for reason.)

Stage:  (Set next stage as each stage is completed)
  __  New   - Check Validity, set Bug or Feature request
  status, and set Requirements as needed.
  __  Patch-development - Until requirements are satisfied.
  __  Patch-ready   - Set Commit-approved if/when accepted.
  __  Final - Set Closed-committed status after commit.

Requirements:  (Set all that is needed, preferable in New stage.)
  __  Code patch
  __  Test patch
  __  Docs patch
  __  PEP Needed.

User input:
  __ request-review - Set by tracker user.
  (Add message for reason.)


Notes:

+ Patch-ready would be a nicer description of the Commit-review stage.
+ Remove unittest needed from stage, as it's a requirement, not a stage.
+ Languishing should be a keyword.
+ Pending is too vague! (please remove!)
+ Move feature-request from type to status.
+ Add bug to status.

Bug and Feature-request are an *issue status* as far as the tracker is 
concerned.  This allows both bugs and features to set *Type*.


Type refers to something in or about Python itself, rather than something 
in the tracker.  (Something the issue *addresses* in python.) That 
description fits well with the items already there.


An open status is the same as (not (closed-committed or closed-rejected)).

The placement of some items could be better.  Status and priority would 
fit better in the classification section.  Stage would fit better in the 
process section.


Cheers,
   Ron



Re: [Python-Dev] r87389 - in python/branches/py3k: Doc/library/unittest.rst Lib/unittest/case.py Misc/NEWS

2010-12-26 Thread Ron Adam



On 12/24/2010 02:03 PM, Raymond Hettinger wrote:


On Dec 24, 2010, at 10:56 AM, Terry Reedy wrote:


On 12/24/2010 11:09 AM, Michael Foord wrote:

On 22/12/2010 02:26, Terry Reedy wrote:

On 12/21/2010 7:17 AM, Michael Foord wrote:

My first priority is that doc and code match.
Close second is consistency (hence, ease of learning and use) between
various AssertXs.


Symmetrical diffs (element in first not in second, element in second not
in first) solves the problem without imposing an order on the arguments.


Where applicable, I prefer this as unambiguous output headings.


Could you explain what you mean?





I was referring back to an output example symmetric diff that was
clipped somewhere along the way:

In x not in y: ... In y not in x: ...

rather than just using -,+ prefixes which are not necessarily
self-explanatory. 'Not applicable' would refer to output from difflib
which necessarily is ordered.



FWIW, I think + and - prefixes are much better for diffs than some
made-up verbiage.  People are used to seeing diffs with + and -.
Anything else will be so contrived that its net effect will be to make
the output confusing and hard to interpret.


Agree.



If you want, add two lines of explanation before the diff: + means in
x, not in y; - means in y, not in x.

The notion of making things symmetric can easily get carried too far, with
a corresponding loss of usability.


I agree with this also.

I don't understand the effort to make the tests be symmetric when many of 
the tests are non-symmetric.  (see list below)


I think the terms expected and actual are fine and help more than they 
hurt.  I think of these as actual result and expected result. A clearer 
terminology might be expr and expected_result.


Where a test can be used *as if* it were symmetric, but the diff context 
is reversed, I think that is OK.  It just needs an entry in the docs 
saying that will happen if you do it.  That won't break tests already 
written.


Also notice (in the list below) that the use of 'a' and 'b' does not indicate 
a test is symmetric; instead they are used where the tests are *not* symmetric. 
First and second could be used for those, but I think 'a' and 'b' carry less 
mental luggage when it comes to visually seeing the meaning of the method 
signature in those cases.


Tests where the order is not important usually use numbered but otherwise 
identical argument names, such as expr1 and expr2 or list1 and list2.  This 
makes sense to me.  obj1 and obj2 are just two objects.


The terms x in y and x not in y look like what you should get from 
containment or regex asserts.


I guess what I'm trying to say is: think of the whole picture when trying to 
make improvements like these; an idea that works for one or two things may 
not scale well.


Cheers,
   Ron


Non-symmetric assert methods.

assertDictContainsSubset(self, expected, actual, msg=None)
assertFalse(self, expr, msg=None)
assertGreater(self, a, b, msg=None)
assertGreaterEqual(self, a, b, msg=None)
assertIn(self, member, container, msg=None)
assertIsInstance(self, obj, cls, msg=None)
assertIsNone(self, obj, msg=None)
assertIsNotNone(self, obj, msg=None)
assertLess(self, a, b, msg=None)
assertLessEqual(self, a, b, msg=None)
assertNotIn(self, member, container, msg=None)
assertIsInstance(self, obj, cls, msg=None)
assertIsNone(self, obj, msg=None)
assertIsNotNone(self, obj, msg=None)
assertNotIn(self, member, container, msg=None)
assertNotIsInstance(self, obj, cls, msg=None)
assertRegex(self, text, expected_regex, msg=None)
assertNotRegexMatches(self, text, unexpected_regex, msg=None)
assertRaises(self, excClass, callableObj=None, *args, **kwargs)
assertRaisesRegex(self, expected_exception, expected_regex,
   callable_obj=None, *args, **kwargs)
assertRegex(self, text, expected_regex, msg=None)
assertTrue(self, expr, msg=None)
assertWarns(self, expected_warning, callable_obj=None, *args, **kwargs)
assertWarnsRegex(self, expected_warning, expected_regex,
  callable_obj=None, *args, **kwargs)



Re: [Python-Dev] r87389 - in python/branches/py3k: Doc/library/unittest.rst Lib/unittest/case.py Misc/NEWS

2010-12-20 Thread Ron Adam



On 12/18/2010 04:46 PM, Terry Reedy wrote:

On 12/18/2010 3:48 PM, Antoine Pitrou wrote:

On Sat, 18 Dec 2010 21:00:04 +0100 (CET)
ezio.melotti <python-check...@python.org> wrote:

Author: ezio.melotti
Date: Sat Dec 18 21:00:04 2010
New Revision: 87389

Log:
#10573: use actual/expected consistently in unittest methods.


Change was requested by M. Foord and R. Hettinger (and G.Brandl for b2).


IMHO, this should be reverted. The API currently doesn't treat these
arguments differently, so they should really be labeled first and
second. Otherwise, the user will wrongly assume that the signature is
asymmetric and that they should be careful about which order they pass
the arguments in.


I've always presumed it would make a difference in the error displayed anyway.



The error report on assert failure *is* often asymmetrical ;=).
 From Michael's post:
This is particularly relevant for the methods that produce 'diffed' output
on failure - as the order determines whether mismatched items are missing
from the expected or additional to the expected.

This change struck me as a nice bit of polishing.


I like (actual, expected) in the asserts.  It matches my expected 
ordering of input/output and how I use comparisons in 'if' statements.


I feel it is more important that the diffs are consistent with other diffs 
in python.


So (for me), changing the asymmetrical output to be symmetrical would be in 
the category of foolish consistency, because changing that introduces other 
inconsistencies I'd rather not have.


It doesn't bother me that the function arguments aren't in the same order as 
the diffs, as long as the labels and wording are obvious enough in the 
messages.  So maybe the diff output can be improved a bit instead of 
changing the terms and ordering.


Ron



Re: [Python-Dev] python3k : imp.find_module raises SyntaxError

2010-12-01 Thread Ron Adam



On 12/01/2010 04:39 AM, Nick Coghlan wrote:

On Wed, Dec 1, 2010 at 8:22 PM, Greg Ewing <greg.ew...@canterbury.ac.nz> wrote:

Nick Coghlan wrote:


For the directory-as-module-not-package idea ...
you would need to be very careful with it,
since all the files would be sharing a common globals() namespace.


One of the things I like about Python's module system
is that once I know which module a name was imported
from, I also know which file to look in for its
definition. If a module can be spread over several
files, that feature would be lost.



There are many potential problems with the idea, I just chose to
mention one of the ones that could easily make the affected code
*break* :)


Right.  It would require additional pieces as well.

Ron :-)



Re: [Python-Dev] python3k : imp.find_module raises SyntaxError

2010-11-30 Thread Ron Adam



On 11/30/2010 01:41 PM, Brett Cannon wrote:

On Mon, Nov 29, 2010 at 12:21, Ron Adam <r...@ronadam.com> wrote:



On 11/29/2010 01:22 PM, Brett Cannon wrote:


On Mon, Nov 29, 2010 at 03:53, Sylvain Thénault
<sylvain.thena...@logilab.fr> wrote:


On 25 November 11:22, Ron Adam wrote:


On 11/25/2010 08:30 AM, Emile Anclin wrote:


hello,

working on Pylint, we have a lot of voluntary corrupted files to test
Pylint behavior; for instance

$ cat /home/emile/var/pylint/test/input/func_unknown_encoding.py
# -*- coding: IBO-8859-1 -*-
"""check correct unknown encoding declaration"""


__revision__ = ''


and we try to find that module :
find_module('func_unknown_encoding', None). But python3 raises
SyntaxError
in that case ; it didn't raise SyntaxError on python2 nor does so on
our
func_nonascii_noencoding and func_wrong_encoding modules (with obvious
names)

Python 3.2a2 (r32a2:84522, Sep 14 2010, 15:22:36)
[GCC 4.3.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from imp import find_module
>>> find_module('func_unknown_encoding', None)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
SyntaxError: encoding problem: with BOM


I don't think there is a clear reason by design.  Also try importing
the same modules directly and noting the differences in the errors
you get.


IMO the point is that we can consider it a bug that find_module
tries to somewhat read the content of the file, no? Though it seems to
only do this for encoding detection or the like, since find_module doesn't
choke on
a module containing another kind of syntax error.

So the question is, should we deal with this in pylint/astng, or can we
expect
this to be fixed at some point?


Considering these semantics changed between Python 2 and 3 w/o a
discernible benefit (I would consider it a negative as finding a
module should not be impacted by syntactic correctness; the full act
of importing should be the only thing that cares about that), I would
consider it a bug that should be filed.


imp.find_module() returns an open file IO object, and its output feeds
directly into imp.load_module().


>>> imp.find_module('pydoc')

(<_io.TextIOWrapper name=4 encoding='utf-8'>,
'/usr/local/lib/python3.2/pydoc.py', ('.py', 'U', 1))

So I think imp.find_module() is supposed to be used when you *do* want to
do the full act of importing, and not just for finding out if or where
module xyz exists.


Going with your line of argument, why can't imp.load_module be the
call that figures out there is a syntax error? If you look at this
from the perspective of PEP 302, finding a module has absolutely
nothing to do with the validity of the found source, just that
something was found somewhere which (hopefully) contains code that
represents the module.


The part that I'm looking at is: what would find_module return if the 
encoding is bad or can't be found?


   <_io.TextIOWrapper name=4 encoding='bad_encoding'>


Maybe we could have some library introspection function in the inspect 
module for just looking in the library rather than loading modules.  But I 
think those would have the same issues, as packages need to be loaded in 
order to find sub-modules.*


* It almost seems like the concept of a sub-module (in a package) is 
flawed.  I'm not sure I can explain what causes me to feel that way at the 
moment though.
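For what it's worth, here's a rough sketch of the kind of introspection helper I mean. It uses pkgutil (my choice here, not anything discussed above), which only inspects filenames on a package's __path__, so a submodule with a bad encoding declaration can't raise SyntaxError the way imp.find_module() does:

```python
import json
import pkgutil

def list_submodules(package):
    """List a package's submodules without importing (executing) them.

    pkgutil.iter_modules() walks the package's __path__ and looks at
    filenames only, so broken source files can't blow up here.
    """
    return sorted(name for _finder, name, _ispkg
                  in pkgutil.iter_modules(package.__path__))

print(list_submodules(json))
```

The catch, which echoes the sub-module wrinkle above: the package itself still has to be imported to have a __path__ at all.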


Ron



Re: [Python-Dev] python3k : imp.find_module raises SyntaxError

2010-11-29 Thread Ron Adam



On 11/29/2010 01:22 PM, Brett Cannon wrote:

On Mon, Nov 29, 2010 at 03:53, Sylvain Thénault
sylvain.thena...@logilab.fr  wrote:

On 25 novembre 11:22, Ron Adam wrote:

On 11/25/2010 08:30 AM, Emile Anclin wrote:


hello,

working on Pylint, we have a lot of voluntary corrupted files to test
Pylint behavior; for instance

$ cat /home/emile/var/pylint/test/input/func_unknown_encoding.py
# -*- coding: IBO-8859-1 -*-
"""check correct unknown encoding declaration"""


__revision__ = ''


and we try to find that module :
find_module('func_unknown_encoding', None). But python3 raises SyntaxError
in that case ; it didn't raise SyntaxError on python2 nor does so on our
func_nonascii_noencoding and func_wrong_encoding modules (with obvious
names)

Python 3.2a2 (r32a2:84522, Sep 14 2010, 15:22:36)
[GCC 4.3.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.

from imp import find_module

find_module('func_unknown_encoding', None)

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
SyntaxError: encoding problem: with BOM


I don't think there is a clear reason by design.  Also try importing
the same modules directly and noting the differences in the errors
you get.


IMO the point is that we can consider as a bug the fact that find_module
tries to somewhat read the content of the file, no? Though it seems to only
be doing this for encoding detection or the like, since find_module doesn't
choke on a module containing another kind of syntax error.

So the question is, should we deal with this in pylint/astng, or can we expect
this to be fixed at some point?


Considering these semantics changed between Python 2 and 3 w/o a
discernible benefit (I would consider it a negative as finding a
module should not be impacted by syntactic correctness; the full act
of importing should be the only thing that cares about that), I would
consider it a bug that should be filed.


imp.find_module() returns an open file IO object, and its output feeds 
directly into imp.load_module().


>>> imp.find_module('pydoc')
(<_io.TextIOWrapper name=4 encoding='utf-8'>, 
'/usr/local/lib/python3.2/pydoc.py', ('.py', 'U', 1))


So I think imp.find_module() is supposed to be used when you *do* want 
to do the full act of importing, and not just for finding out if or where 
module xyz exists.



Ron





Re: [Python-Dev] constant/enum type in stdlib

2010-11-29 Thread Ron Adam



On 11/28/2010 09:03 PM, Ron Adam wrote:


It does associate additional info to names and creates a nice dictionary to
reference.


>>> def name_values( FOO: 1,
...                  BAR: "Hello World!",
...                  BAZ: dict(a=1, b=2, c=3) ):
...     return FOO, BAR, BAZ
...
>>> foo(1,2,3)
(1, 2, 3)
>>> foo.__annotations__
{'BAR': 'Hello World!', 'FOO': 1, 'BAZ': {'a': 1, 'c': 3, 'b': 2}}


sigh... I haven't been very focused lately. That should have been:

>>> def named_values(FOO:1, BAR:"Hello World!", BAZ:dict(a=1, b=2, c=3)):
...     return FOO, BAR, BAZ
...
>>> named_values.__annotations__
{'BAR': 'Hello World!', 'FOO': 1, 'BAZ': {'a': 1, 'c': 3, 'b': 2}}
>>> named_values(1, 2, 3)
(1, 2, 3)
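Taking that a step further (just a sketch of the idea, not a proposal), the annotations dict can serve as the single source of the constant values, since it is keyed by parameter name:

```python
def named_values(FOO: 1, BAR: "Hello World!", BAZ: dict(a=1, b=2, c=3)):
    return FOO, BAR, BAZ

# The annotations dict maps each name to its annotated value...
defaults = named_values.__annotations__

# ...so calling with it as keyword arguments hands back the values
# themselves, bound to the matching names.
FOO, BAR, BAZ = named_values(**defaults)
print(FOO, BAR, BAZ)
```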

Cheers,
   Ron



Re: [Python-Dev] constant/enum type in stdlib

2010-11-28 Thread Ron Adam



On 11/27/2010 04:51 AM, Nick Coghlan wrote:

x = named_value("FOO", 1)
y = named_value("BAR", "Hello World!")
z = named_value("BAZ", dict(a=1, b=2, c=3))

print(x, y, z, sep="\n")
print("\n".join(map(repr, (x, y, z))))
print("\n".join(map(str, map(type, (x, y, z)))))

set_named_values(globals(), foo=x._raw(), bar=y._raw(), baz=z._raw())
print("\n".join(map(repr, (foo, bar, baz))))
print(type(x) is type(foo), type(y) is type(bar), type(z) is type(baz))

==

# Session output for the last 6 lines

>>> print(x, y, z, sep="\n")

1
Hello World!
{'a': 1, 'c': 3, 'b': 2}


>>> print("\n".join(map(repr, (x, y, z))))

FOO=1
BAR='Hello World!'
BAZ={'a': 1, 'c': 3, 'b': 2}


This reminds me of python annotations.  Which seem like an already 
forgotten new feature.  Maybe they can help with this?



It does associate additional info to names and creates a nice dictionary to 
reference.



>>> def name_values( FOO: 1,
...                  BAR: "Hello World!",
...                  BAZ: dict(a=1, b=2, c=3) ):
...     return FOO, BAR, BAZ
...
>>> foo(1,2,3)
(1, 2, 3)
>>> foo.__annotations__
{'BAR': 'Hello World!', 'FOO': 1, 'BAZ': {'a': 1, 'c': 3, 'b': 2}}


Cheers,
  Ron












Re: [Python-Dev] python3k : imp.find_module raises SyntaxError

2010-11-25 Thread Ron Adam



On 11/25/2010 08:30 AM, Emile Anclin wrote:


hello,

working on Pylint, we have a lot of voluntary corrupted files to test
Pylint behavior; for instance

$ cat /home/emile/var/pylint/test/input/func_unknown_encoding.py
# -*- coding: IBO-8859-1 -*-
"""check correct unknown encoding declaration"""


__revision__ = ''


and we try to find that module :
find_module('func_unknown_encoding', None). But python3 raises SyntaxError
in that case ; it didn't raise SyntaxError on python2 nor does so on our
func_nonascii_noencoding and func_wrong_encoding modules (with obvious
names)

Python 3.2a2 (r32a2:84522, Sep 14 2010, 15:22:36)
[GCC 4.3.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.

>>> from imp import find_module
>>> find_module('func_unknown_encoding', None)

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
SyntaxError: encoding problem: with BOM

>>> find_module('func_wrong_encoding', None)

(<_io.TextIOWrapper name=5 encoding='utf-8'>, 'func_wrong_encoding.py',
('.py', 'U', 1))

>>> find_module('func_nonascii_noencoding', None)

(<_io.TextIOWrapper name=6 encoding='utf-8'>,
'func_nonascii_noencoding.py', ('.py', 'U', 1))


So what is the reason of this selective behavior?
Furthermore, there is BOM in our func_unknown_encoding.py module.


I don't think there is a clear reason by design.  Also try importing the 
same modules directly and noting the differences in the errors you get.


For example, the problem that brought this to my attention in python3.2.

>>> find_module('test/badsyntax_pep3120')
Segmentation fault

>>> from test import badsyntax_pep3120
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.2/test/badsyntax_pep3120.py", line 1
SyntaxError: Non-UTF-8 code starting with '\xf6' in file 
/usr/local/lib/python3.2/test/badsyntax_pep3120.py on line 1, but no 
encoding declared; see http://python.org/dev/peps/pep-0263/ for details



The import statement uses parser.c, and tokenizer.c indirectly, to import a 
file, but the imp module uses tokenizer.c directly.  They aren't consistent 
in how they handle errors because the different error messages are 
generated in different places depending on what the error is, *and* what 
the code path to get to that point was, *and* whether or not a filename was 
set.  For the example above with imp.find_module(), the filename isn't set, 
so you get a different error than if you used import, which uses the parser 
module and that does set the filename.


From what I've seen, it would help if the imp module was rewritten to use 
parser.c like the import statement does, rather than tokenizer.c directly. 
The error handling in parser.c is much better than tokenizer.c.  Possibly 
tokenizer.c could be cleaned up after that and be made much simpler.


Ron Adam
















Re: [Python-Dev] constant/enum type in stdlib

2010-11-23 Thread Ron Adam



On 11/23/2010 12:07 PM, Antoine Pitrou wrote:

Le mardi 23 novembre 2010 à 12:50 -0500, Isaac Morland a écrit :

Each enumeration is a type (well, OK, not in every language, presumably,
but certainly in many languages).  The word basic is more important than
types in my sentence - the point is that an enumeration capability is a
very common one in a type system, and is very general, not specific to any
particular application.


Python already has an enumeration capability. It's called range().
There's nothing else that C enums have. AFAICT, neither do enums in
other mainstream languages (assuming they even exist; I don't remember
Perl, PHP or Javascript having anything like that, but perhaps I'm
mistaken).



Aren't we forgetting enumerate?

>>> colors = 'BLACK BROWN RED ORANGE YELLOW GREEN BLUE VIOLET GREY WHITE'

>>> dict(e for e in enumerate(colors.split()))
{0: 'BLACK', 1: 'BROWN', 2: 'RED', 3: 'ORANGE', 4: 'YELLOW', 5: 'GREEN', 6: 
'BLUE', 7: 'VIOLET', 8: 'GREY', 9: 'WHITE'}

>>> dict((f, n) for (n, f) in enumerate(colors.split()))
{'BLUE': 6, 'BROWN': 1, 'GREY': 8, 'YELLOW': 4, 'GREEN': 5, 'VIOLET': 7, 
'ORANGE': 3, 'BLACK': 0, 'WHITE': 9, 'RED': 2}



Most other languages that use numbered constants number them by base n^2.

>>> [x**2 for x in range(10)]
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]


Binary flags have the advantage of saving memory because you can assign 
more than one to a single integer.  Another advantage is that other 
languages use them, so it can make it easier to interface with them.  There 
may also be some performance advantages, since you can test for multiple 
flags with a single comparison.
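For example (the flag names here are made up purely for illustration), packing several flags into one integer and testing them in one comparison:

```python
# Hypothetical permission flags, each a distinct power of two.
READ, WRITE, EXEC = (2 ** n for n in range(3))   # 1, 2, 4

mode = READ | WRITE                  # several flags stored in a single int

assert mode & READ                   # test one flag
assert mode & (READ | WRITE) == (READ | WRITE)   # test two at once
assert not mode & EXEC               # EXEC was never set
print(bin(mode))                     # → 0b11
```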


Sets of strings can also work when you don't need to associate a numeric 
value with the constant, i.e. the constant is the value.  In this case the 
set supplies the API.
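A sketch of that (constant names invented for the example):

```python
# The constants *are* the values; the set supplies the API.
COLORS = frozenset({'RED', 'GREEN', 'BLUE'})

choice = 'RED'
assert choice in COLORS              # membership test replaces equality checks
assert {'RED', 'BLUE'} <= COLORS     # subset test covers several at once
assert 'PURPLE' not in COLORS        # typos fail loudly at the check site
```

Using a frozenset makes the collection of constants immutable, which is usually what you want for an enumeration-like value.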


Cheers,
  Ron



Re: [Python-Dev] constant/enum type in stdlib

2010-11-23 Thread Ron Adam

Oops..  x**2 should have been 2**x below.


On 11/23/2010 03:03 PM, Ron Adam wrote:



On 11/23/2010 12:07 PM, Antoine Pitrou wrote:

Le mardi 23 novembre 2010 à 12:50 -0500, Isaac Morland a écrit :

Each enumeration is a type (well, OK, not in every language, presumably,
but certainly in many languages). The word basic is more important than
types in my sentence - the point is that an enumeration capability is a
very common one in a type system, and is very general, not specific to any
particular application.


Python already has an enumeration capability. It's called range().
There's nothing else that C enums have. AFAICT, neither do enums in
other mainstream languages (assuming they even exist; I don't remember
Perl, PHP or Javascript having anything like that, but perhaps I'm
mistaken).



Aren't we forgetting enumerate?

>>> colors = 'BLACK BROWN RED ORANGE YELLOW GREEN BLUE VIOLET GREY WHITE'

>>> dict(e for e in enumerate(colors.split()))
{0: 'BLACK', 1: 'BROWN', 2: 'RED', 3: 'ORANGE', 4: 'YELLOW', 5: 'GREEN', 6:
'BLUE', 7: 'VIOLET', 8: 'GREY', 9: 'WHITE'}

>>> dict((f, n) for (n, f) in enumerate(colors.split()))
{'BLUE': 6, 'BROWN': 1, 'GREY': 8, 'YELLOW': 4, 'GREEN': 5, 'VIOLET': 7,
'ORANGE': 3, 'BLACK': 0, 'WHITE': 9, 'RED': 2}


Most other languages that use numbered constants number them by base n^2.

>>> [x**2 for x in range(10)]
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]



>>> [2**x for x in range(10)]
[1, 2, 4, 8, 16, 32, 64, 128, 256, 512]



Binary flags have the advantage of saving memory because you can assign
more than one to a single integer. Another advantage is that other languages
use them, so it can make it easier to interface with them. There may also be
some performance advantages, since you can test for multiple flags with a
single comparison.

Sets of strings can also work when you don't need to associate a numeric
value with the constant, i.e. the constant is the value. In this case the
set supplies the API.

Cheers,
Ron






Re: [Python-Dev] Breaking undocumented API

2010-11-10 Thread Ron Adam



On 11/10/2010 01:33 PM, Raymond Hettinger wrote:


On Nov 10, 2010, at 5:47 AM, Michael Foord wrote:



So it is obvious that we don't have a clearly stated policy for what defines 
the public API of standard library modules.

How about making this explicit (either pep 8 or our developer docs):


I believe the point of Guido's email was that it is a situation dependent 
judgment call and not readily boiled down to a set of rules for PEP 8.


The way I read Guido's email is that it is a situation dependent judgment 
call for those cases that aren't clear.


I think what Michael is trying to say is for us to agree on some things so 
we can go forward with a little more clarity.


Cheers,
   Ron




Re: [Python-Dev] Backward incompatible API changes in the pydoc module

2010-11-08 Thread Ron Adam



On 11/08/2010 09:12 AM, exar...@twistedmatrix.com wrote:

On 11:44 am, ncogh...@gmail.com wrote:

All,

I was about to commit the patch for issue 2001 (the improvements to
the pydoc web server and the removal of the Tk GUI) when I realised
that pydoc.serve() and pydoc.gui() are technically public standard
library APIs (albeit undocumented ones).

Currently the patch switches serve() to start the new server
implementation and gui() to start the server and open a browser window
for it.

It occurred to me that, despite the it's an application feel to the
pydoc web server APIs, it may be a better idea to leave the two
existing functions alone (aside from adding DeprecationWarning), and
using new private function names to start the new server and the web
browser.

Is following the standard deprecation procedure the better course
here, or am I being overly paranoid?


Following the deprecation procedure here sounds awesome to me. Thanks
for considering it, I hope you'll choose to go that way.


I want to be clear on what isn't changing.

All of the help() function features that python depends on and any of the 
code that is required for that is staying the same.


All of the static html document generating features and code that depend on 
that, is staying the same.  These static pages do not depend on any parts 
of pydoc after they are generated.


Those are the parts that are most likely to be used in other applications 
as well.



The new changes only affect the interactive browsing mode.  The tk search 
box was removed.  Doing that enabled the browser interface to be used on 
systems that don't have tk installed.


The html web server was rewritten and a search feature was added so that 
you can do the same searches in the web browser that you did in the tk 
search box.


Do you (or anyone) know of any programs that access pydoc's tk search 
window, or its server parts directly?  The server was so specific and 
included very pydoc-specific html code, so it would have been very 
difficult to use it for anything else.  Any thoughts?


I think the main issues Nick is concerned with is the functions and options 
used to start pydoc in the interactive mode.


Cheers,
   Ron



Re: [Python-Dev] Breaking undocumented API

2010-11-08 Thread Ron Adam



On 11/08/2010 01:58 PM, Brett Cannon wrote:

On Mon, Nov 8, 2010 at 09:20, Alexander Belopolsky
belopol...@users.sourceforge.net  wrote:

Was: [issue2001] Pydoc interactive browsing enhancement

On Sun, Nov 7, 2010 at 9:17 AM, Nick Coghlanrep...@bugs.python.org  wrote:
..


I'd actually started typing out the command to commit this before it finally 
clicked that the patch changes public
APIs of the pydoc module in incompatible ways. Sure, they aren't documented, 
but the fact they aren't protected
by an underscore means I'm not comfortable with the idea of removing them or 
radically change their functionality
without going through a deprecation period first.



I have a similar issue with the trace module and would appreciate some
guidance on this as well.  The trace module documented API includes
just the Trace class, but the module defines several helper functions
and classes  that do not start with a leading underscore and are not
excluded from * imports by __all__.  (There is no trace.__all__.)


I think we need to, as a group, decide how to handle undocumented APIs
that don't have a leading underscore: they get treated just the same
as the documented APIs, or are they private regardless and thus we can
change them at our whim?


My understanding is that anything with an actual docstring is part of the 
public API.  Any thing with a leading underscore is private.


And to a lesser extent, objects without docstrings, but with comments 
instead or nothing at all, may change, so don't depend on them.  Thankfully 
most things do have docstrings.
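As a sketch of how that heuristic reads in practice (textwrap is just an arbitrary stdlib module chosen for the demonstration):

```python
import textwrap

# Public names by the underscore convention: no leading underscore.
public = [name for name in dir(textwrap) if not name.startswith('_')]

# Of those, the ones with an actual docstring -- the "public API"
# under the rule of thumb above.
documented = [name for name in public
              if getattr(getattr(textwrap, name), '__doc__', None)]

print(sorted(documented))
```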




I freely admit that I have more questions than answers, so I would
like to hear from a wider audience.


The main reason I have said that non-underscore names should be
properly deprecated (assuming they are not contained in an
underscored-named module) is that dir() and help() do not distinguish.
If you are perusing a module from the interpreter prompt you have no
way to know whether something is public or private if it lacks an
underscore. Is it reasonable to assume that any API found through
dir() or help() must be checked with the official docs before you can
consider using it, even if you have no explicit need to read the
official docs?

I (unfortunately) say no, which is why I have argued that
non-underscored names need to be properly deprecated. This obviously
places a nasty burden on us, though, so I don't like taking this
position. Unless we can make it clearly known through help() or
something that the official docs must be checked to know what can and
cannot be reliably used I don't think it is reasonable to force users
to not be able to rely on help() (we should probably change help() to
print a big disclaimer for anything with a leading underscore,
though).


+1 on the help disclaimer for objects with leading underscores.

Currently help() does not see comments when they are used in place of a 
docstring.  I think it would be easy to have help notate things with no 
docstrings as "Warning: Undocumented object. Use at your own risk."


At first, it would probably have a nice side effect of getting any public 
API's documented with doc strings. (if they aren't already.)




But that doesn't mean we can't go through, fix up our names, and
deprecate the old public names; that's fair game in my book.


I agree.


It may also be useful to clarify that importing some utility modules is 
not recommended because they may be changed more often and may not follow 
the standard process.  Would something like the following work, but still 
allow for importing if the exception is caught with a try except?


if __name__ == "__main__":
    main()
else:
    raise ImportWarning("This is a utility module and may be changed.")
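A gentler variant of the same idea (a sketch, not part of the proposal above) issues the warning instead of raising it, so importing still succeeds:

```python
import warnings

def warn_if_imported(module_name):
    """Warn (rather than raise) when the module is imported, not run.

    Note that ImportWarning is ignored by default; run python with
    -W default::ImportWarning to actually see it.
    """
    if module_name != "__main__":
        warnings.warn("This is a utility module and may be changed.",
                      ImportWarning, stacklevel=2)

warn_if_imported(__name__)

def main():
    print("running as a script")

if __name__ == "__main__":
    main()
```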

Cheers,
  Ron


Re: [Python-Dev] Breaking undocumented API

2010-11-08 Thread Ron Adam



On 11/08/2010 04:01 PM, Brett Cannon wrote:


My understanding is that anything with an actual docstring is part of the
public API.  Any thing with a leading underscore is private.


That's a bad rule. Why shouldn't I be able to document something that
is not meant for the public so that fellow developers know what the
heck should be going on in the code?


You can use comments instead of a docstring.

Here are the possible cases concerned with the subject.  I'm using 
functions here for these examples, but this also applies to other objects.



def public_api():
    """ Should always have a nice docstring. """
    ...


def _private_api():
#
# Isn't it a good practice to use comments here?
#
...


def _publicly_documented_private_api():
    """ Not sure why you would want to do this
    instead of using comments.
    """
    ...


def undocumented_public_api():
...


def _undocumented_private_api():
...


Out of these, the two that are problematic are the 
_publicly_documented_private_api() and the undocumented_public_api().


The _publicly_documented_private_api() is a problem because people *will* 
use it even though it has a leading underscore.  Especially those who are 
new to python.


The undocumented_public_api() wouldn't be a problem if all private APIs 
used a leading underscore, but for older modules it isn't always clear what 
the intention was.  Was it undocumented because the programmer simply 
forgot, or was it intended to be a private API?





It may also be useful to clarify that importing some utility modules is
not recommended because they may be changed more often and may not follow
the standard process.  Would something like the following work, but still
allow for importing if the exception is caught with a try except?

if __name__ == "__main__":
    main()
else:
    raise ImportWarning("This is a utility module and may be changed.")


Sure it would work, but that doesn't make it pleasant to use. It
already breaks how warnings are typically handled by raising it
instead of calling warnings.warn(). Plus I'm now supposed to
try/except certain imports? That's messy. At that point we are coding
in visibility rules instead of following convention and that doesn't
sit well with me.


No, you're not suppose to try/except imports.  That's the point.

You can do that, only if you really want to abuse the intended purpose of a 
module that isn't meant to be imported in the first place.  If someone 
wants to do that, it isn't a problem.  They are well aware of the risks if 
they do it.  (This is just one option and probably one that isn't thought 
out very well.)


Brett, I'm sure you can come up with a better alternative.   ;-)

Cheers,
  Ron







Re: [Python-Dev] Backward incompatible API changes in the pydoc module

2010-11-08 Thread Ron Adam



On 11/08/2010 05:44 AM, Nick Coghlan wrote:

All,

I was about to commit the patch for issue 2001 (the improvements to
the pydoc web server and the removal of the Tk GUI) when I realised
that pydoc.serve() and pydoc.gui() are technically public standard
library APIs (albeit undocumented ones).

Currently the patch switches serve() to start the new server
implementation and gui() to start the server and open a browser window
for it.

It occurred to me that, despite the it's an application feel to the
pydoc web server APIs, it may be a better idea to leave the two
existing functions alone (aside from adding DeprecationWarning), and
using new private function names to start the new server and the web
browser.

Is following the standard deprecation procedure the better course
here, or am I being overly paranoid?


What do you think about adding a new _pydoc3.py module along with a 
pydoc3.py loader module with a basic user api?  The number 3, so that it 
matches python3.x.


We can then keep the old pydoc.py unchanged and be free to make a lot more 
changes to the _pydoc3.py file without having to be even a little paranoid.


Cheers,
   Ron



Re: [Python-Dev] Breaking undocumented API

2010-11-08 Thread Ron Adam



On 11/08/2010 07:18 PM, Brett Cannon wrote:

On Mon, Nov 8, 2010 at 16:10, Ron Adam r...@ronadam.com  wrote:



def _private_api():
#
# Isn't it a good practice to use comments here?
#
...


That is ugly. I already hate doing that for unittest, I'm not about to
champion that for anything else.


Ugly?  I suppose it's a matter of what you are used to.



It would also lead to essentially requiring a docstrings for
everything that is public whether someone wants to bother to writing a
docstring or not. I don't think we should be suggesting that a
docstring be required either.


I can see where that would be overly strict in an application or script 
made with python.


But it seems odd to me, to have undocumented api's in a programming 
language.  If it's being replaced with something else, the doc string can 
say that.  A null string is also a valid doc string if you just need a 
place holder until someone gets to it.


shrug



Brett, I'm sure you can come up with a better alternative.   ;-)


But I don't want to have to do that in the stdlib by remembering what
modules I should or should not import. This is just as much about
developer burden on core devs as it is making sure we don't yank the
rug out from underneath users.


Yes, I agree.  But how to best do that?





Re: [Python-Dev] Backward incompatible API changes in the pydoc module

2010-11-08 Thread Ron Adam



On 11/08/2010 10:26 PM, Nick Coghlan wrote:

On Tue, Nov 9, 2010 at 11:18 AM, Ron Adam r...@ronadam.com  wrote:

What do you think about adding a new _pydoc3.py module along with a
pydoc3.py loader module with a basic user api?  The number 3, so that it
matches python3.x.

We can then keep the old pydoc.py unchanged and be free to make a lot more
changes to the _pydoc3.py file without having to be even a little paranoid.


I think changing the behaviour of the pydoc command line app is a fine
idea - it's only the pydoc.serve and pydoc.gui functions that are
worrying me. As I noted on the tracker issue, there's a reasonably
clean way to do this, even given the coupling between the 3.1 GUI app
and server: leave the existing serve() and gui() functions alone
(aside from adding DeprecationWarning), and add your new
implementation as a parallel private API.


Ok, I guess that's what needs to be done then.  I can try to do it over the 
next few days, and will probably need a bit more advice on how to add in 
the deprecation warnings.  Or if you want to go ahead and do it, I'm more 
than OK with that.


Thanks for the help on this.  I do appreciate it.

Cheers,
   Ron



Re: [Python-Dev] Summary of Python tracker Issues

2010-11-06 Thread Ron Adam



On 11/06/2010 12:01 PM, Terry Reedy wrote:

On 11/6/2010 11:42 AM, R. David Murray wrote:

On Sat, 06 Nov 2010 15:38:22 +0100, Georg Brandlg.bra...@gmx.net wrote:

Am 06.11.2010 05:44, schrieb Ezio Melotti:

Hi,

On 05/11/2010 19.08, Python tracker wrote:

ACTIVITY SUMMARY (2010-10-29 - 2010-11-05)
Python tracker at http://bugs.python.org/

To view or respond to any of the issues listed below, click on the
issue.
Do NOT respond to this message.

Issues counts and deltas:
open 2514 (+17)


This seems wrong. A default search for open issues returns 2452 and it
was about the same yesterday just a few hours after the report.


closed 19597 (+78)
total 22111 (+95)


as suggested in recent mails[0][1] I changed these values to represent
the deltas with the previous week.
Now let's try to keep the open delta negative ;)


Since there were more issues closed than opened I think it really was.
Anyway, we are down 300 from the 2750 peak.


Current status from the tracker...

   don't care: 22134
   not closed:  2491
   not selected:   1

   open:2451
   languishing:   25
   pending:   39
   closed: 19604


That gives us...

  2451 open
     1 not selected
    39 pending
    25 languishing
 -----
  2516 Total open


  2451 open
    39 languishing
     1 not selected
 -----
  2491 total not closed


 19604 closed
  2491 not closed
    39 pending
 -----
 22134 Total issues



My guess as to how this got this way, is that different fields were merged 
at some time where the meanings didn't quite match up. shrug



It would be nicer if...

   closed + not_closed = total issues

   closed + open + not_selected = total issues


Pending and languishing should be keywords or sub categories of open.

Cheers,
   Ron







Re: [Python-Dev] Summary of Python tracker Issues

2010-11-06 Thread Ron Adam




Current status from the tracker...

don't care: 22134
not closed: 2491
not selected: 1

open: 2451
languishing: 25
pending: 39
closed: 19604


That gives us...

2451 open
1 not selected
39 pending
25 languishing

2516 Total open


2451 open
39 languishing


Should be 39 pending here, not languishing.

--Ron


1 not selected

2491 total not closed


19604 closed
2491 not closed
39 pending
-
22134 Total issues




Re: [Python-Dev] On breaking modules into packages Was: [issue10199] Move Demo/turtle under Lib/

2010-10-27 Thread Ron Adam



On 10/27/2010 08:51 AM, Nick Coghlan wrote:

On Wed, Oct 27, 2010 at 2:33 PM, Ron Adamr...@ronadam.com  wrote:

I still would like to know what your thoughts are concerning where to put,
and/or how to package, the simple threaded text server?


Given the time frame until beta 1, I'd suggest leaving it in pydoc for
now. We can look at possibly documenting it and moving it to
http.servers for 3.3.

(The patch is attached to issue 2001 for anyone else that wants to
take a look at it)

Cheers,
Nick.


Fantastic!

Thanks Nick




Re: [Python-Dev] On breaking modules into packages Was: [issue10199] Move Demo/turtle under Lib/

2010-10-26 Thread Ron Adam


On 10/26/2010 02:34 PM, Raymond Hettinger wrote:


FWIW, it wasn't that big (approx 2500 lines).
The argparse, difflib, doctest, pickletools, pydoc, tarfile modules
are about the same size and the decimal module is even larger.
Please don't split those.


Since you mention this...

I've worked on pydoc to make it much nicer to use in a browser.  While 
doing that I needed to rework the server part.  That resulted in a clean 
server thread object (and supporting parts) with no pydoc-specific code in 
those parts.  It can work as a stand-alone module quite nicely.  It's about 
170 lines, with around a third of that being documented examples that can 
also run as doctests.


More to the point, it's a simple text/html server wrapped in a thread 
object.  It can work as a starting point to using a browser as a user 
interface like pydoc does.


There is a patch in the bug tracker.  I just need to make some minor 
updates to it before it can go in, but I really need some code 
organization/placement review help.


I wonder what you think.  Keep it in pydoc or move it to the HTTP 
package?  Document it or not?


Ron



Re: [Python-Dev] On breaking modules into packages Was: [issue10199] Move Demo/turtle under Lib/

2010-10-26 Thread Ron Adam



On 10/26/2010 05:35 PM, Raymond Hettinger wrote:


On Oct 26, 2010, at 2:54 PM, Ron Adam wrote:


I've worked on pydoc to make it much nicer to use in a browser.


While you're at it.  Can you please modernize the html
and create a style sheet?  Right now, all of formatting
is deeply intertwined with content generation.



Fixing that would be a *huge* improvement.


Halfway there!  The server will read one if it exists.   ;-)

I'd really like to get this part in before 3.2 beta, and then I'll add a 
basic style sheet and update the html code to use it for 3.3.


The present patch fixes and updates all the functional parts and allows you 
to do everything that you can do on the command line, but a LOT easier.


I think you, Nick, or one of the other core developers could probably have 
this finished up in an afternoon if you really wanted.  All the parts work; 
it's more about checking and adjusting the packaging at this point.


Cheers,
   Ron







Re: [Python-Dev] On breaking modules into packages Was: [issue10199] Move Demo/turtle under Lib/

2010-10-26 Thread Ron Adam



On 10/26/2010 05:35 PM, Raymond Hettinger wrote:


On Oct 26, 2010, at 2:54 PM, Ron Adam wrote:



I wonder what you may think.  Keep it in pydoc or move it to the
HTTP package?  Document it or not?



I still would like to know what your thoughts are concerning where to put, 
and/or how to package, the simple threaded text server?


Cheers,
   Ron


Re: [Python-Dev] Distutils2 scripts

2010-10-21 Thread Ron Adam



On 10/21/2010 07:13 PM, Greg Ewing wrote:

Eric Smith wrote:

Or for that matter a plain pysetup. It would be the one that a plain
python would get you.


If 'pysetup' is simply a shell script that invokes 'python -m setup'
using the current search path, I guess that's true.

On Windows, however, it seems to me that the current 'python setup.py'
scheme has advantages, since it lets you simply invoke 'setup.py' and
rely on file associations to get you the current python. Supporting
either 'python -m setup' or 'pysetup' out of the box would require
install-time path hacking of the sort that some people are uncomfortable
about.



Tarek said this in the first post of this thread...

I just wanted to make sure that once distutils2 is back in the stdlib,
it's OK for us to add that script in the distribution.


When it's in the stdlib, the -m option should work just like any other 
script run from the stdlib.


What path hacking are you thinking of?

Ron



Re: [Python-Dev] Distutils2 scripts

2010-10-20 Thread Ron Adam



On 10/12/2010 09:59 AM, Barry Warsaw wrote:

On Oct 12, 2010, at 12:24 PM, Greg Ewing wrote:


Giampaolo Rodolà wrote:


If that's the case what would I type in the command prompt in order to
install a module?
C:\PythonXX\pysetup.exe?
If so I would strongly miss old setup.py install.


Another thing bothers me about this. With the current scheme,
if you have multiple Pythons available, it's easy to be sure
that you're installing into the right one, because it's the
one that you use to run setup.py. Whereas if installation is
performed by a different executable, there's a possibility
of them being out of sync.

So I think I'd prefer some scheme involving 'python -m ...'
or some other option to Python itself, rather than a separate
executable.


This is why I suggested that 'setup.sh' (or whatever) take a --python-version
option to select the python executable to use.

Whatever solution is implemented definitely needs to take the
multiple-installed pythons into account.


On Ubuntu, I use python, python2.7, python3.1, python3.2 and that is what I 
type to use that particular version.  The -m option seems to me to be the 
easiest to do and works with all of these.


python2.7 -m setup
python3.2 -m setup

I don't see why that isn't an acceptable solution to this. *shrug*

It's not any different than doing ...

python3.2 -m test.regrtest
python3.2 -m pydoc -g
python3.2 -m idlelib.idle
python3.2 -m this
python3.2 -m turtle
python3.2 -m timeit -h
python3.2 -m trace --help
python3.2 -m dis filename.py
python3.2 -m zipfile

There are probably others I don't remember or know about.

The point is, without the handy '-m', you have to know where the file is, 
or set environment variables, or create .bat and/or .sh files, and that 
takes a lot more work.  So why not just embrace it and move on?


Ron








Re: [Python-Dev] About resolution “accepted” on the tracker

2010-10-18 Thread Ron Adam



On 10/18/2010 07:07 PM, R. David Murray wrote:


Seriously, though, what it indicates is that we need a unit
test for the patch to be complete.  We have a number of issues with
patches but no tests, I believe.  Which order 'unit test' and 'fix'
occur in is arbitrary in practice.  I certainly prefer to have the unit
tests first myself, though.

The problem is that the stage field really isn't all that useful. I'd
prefer a set of check boxes, as I've suggested in the wiki.

I was the one who advocated labeling it 'unit test needed', but if
people would rather change it back to just 'test needed', I will raise
no objection, since in practice trying to squeeze the meaning I wanted
into the stage field doesn't really work.


This is about communicating both content and quality, with several
decisions at certain points.  It's not a straight-line, step-by-step 
process, which is why it gets confusing if you try to treat it as such. 
Check boxes work well for things like this.


Could something like (or parts of) the following work?  It would have 
assignment and module keywords items as well.



[] boxes can be set or unset by both the bug/feature author and those with
tracker/commit privileges.

{} boxes settable only by those with tracker and commit privileges?


Title []

Description [___...]


TYPE:
[] Bug
Versions:
   [] 2.5 [] 2.6[] 2.7
   [] 3.0 [] 3.1[] 3.2
...
   {} Verified

[] Feature Request
For Version [_]*Drop down list.

{} Initial approval*May be rejected later.
Requires PEP:
{} yes   PEP Number [__]
{} no

STATUS:
Components:
[] Code Patch {} approved
[] Test patch {} approved
[] Document Patch {} approved
[] News Entry Patch   {} approved
[] Final Commit Approval  {} approved

{} Committed
Revision #(s) {}
Notes:{}

{} Rejected
Reason:{___}  *Drop down list here.

{} Closed


ATTENTION REQUESTS:
   [] Request bug verification  *Auto set when new bug.
   [] Request feature initial approval  *Auto set when new.

   [] Request author reassignment
  *Current author unable to work on this.

   Request test patch  [] review  [] approval
   Request code patch  [] review  [] approval
   Request document patch  [] review  [] approval
   Request news patch  [] review  [] approval
   Request final commit[] review  [] approval

   *More than one attention request can be set at a time.
   *Reviewer clears request if revision is needed or item is ok.

Most changes to any of these should also be documented in the discussion 
area as well.


Ron


Re: [Python-Dev] My work on Python3 and non-ascii paths is done

2010-10-18 Thread Ron Adam


On 10/18/2010 08:53 PM, Victor Stinner wrote:

Hi,

Seven months after my first commit related to this issue, the full test suite
of Python 3.2 pass with ASCII, ISO-8859-1 and UTF-8 locale encodings in a non-
ascii source directory. It means that Python 3.2 now process correctly
filenames in all modules, build scripts and other utilities, with any locale
encoding.


[...]

Congratulations, Victor.  From what I saw, it looked like a lot of work.



I don't suppose you could take a look at this issue also?

   http://bugs.python.org/issue9319

(If not, maybe someone else can.)


When pydoc uses imp to search for modules, it runs across a test file with 
a bad BOM, which then causes a segfault.


r...@gutsy:~/svn/py3k$ ./python
Python 3.2a3+ (py3k:85719, Oct 18 2010, 22:32:47)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> help('modules ')

Here is a list of matching modules.  Enter any module name to get more help.

Segmentation fault


Or more directly...

>>> import imp
>>> imp.find_module('test/badsyntax_pep3120')
Segmentation fault


I believe it should issue a SyntaxError instead.

Thanks,
   Ron Adam










Re: [Python-Dev] Relative imports in Py3k

2010-10-11 Thread Ron Adam



On 10/11/2010 07:27 AM, Nick Coghlan wrote:

On Mon, Oct 11, 2010 at 1:54 AM, anatoly techtoniktechto...@gmail.com  wrote:

On Sun, Sep 26, 2010 at 2:32 PM, Nick Coghlanncogh...@gmail.com  wrote:

This is almost certainly failing because the directory containing the
spyderlib package isn't on sys.path anywhere (instead, whichever
directory contains the script you executed directly will be in there,
which will be somewhere inside the package instead of outside it). Put
the appropriate directory in PYTHONPATH and these tests should start
working.


This is a hack. I use relative imports, because I don't want to care
about PYTHONPATH issues. I work with two clones of spyderlib
(reference point and feature branch). You propose to switch PYTHONPATH
every time I want to execute debug code in the __main__ section from
either of them.


Anatoly, unconstructive responses like this are why people often react
negatively to your attempts to be helpful.

I specifically mentioned 2 things you could do:
- modify PYTHONPATH
- use -m to execute your modules and just switch your current working
directory depending on which version of spyderlib you want to execute


I don't recall Anatoly saying which py3k version and revision he was using. 
Relative imports were broken for a while in 3.2.  They're fixed now, and I 
presume he is using a fairly current revision of 3.2.


When you do a make install for 3.2 on Ubuntu, the current directory path 
isn't prepended to sys.path.  I don't know if that is an oversight or not, 
but it could be a factor.



A few more suggestions ...

Make a test-runner script which modifies sys.path.  It could also be 
considered a hack, but it doesn't require modifying PYTHONPATH, so it has 
no potential for side effects on other modules/programs.


One of my personal choices when writing large applications (rather than 
scripts), is to make a local lib directory and prepend that to sys.path 
in the main application file before any local imports.


# Add a local lib to the search path.
lib = os.path.abspath(os.path.join(__file__, '..', 'lib'))
sys.path.insert(0, lib)

[Application dir, not on Python path]
main_app_file.py
test.py
[lib]
[test package]
...   #test modules
...   #other local modules and packages

I then add a -test option to main_app_file.py, or create a test.py file 
at the same level as the main app file.  The test runner also needs to add 
lib to sys.path, but after that it can import and find any/all tests you 
want to run.  The test modules can use relative imports as long as they 
aren't circular.
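The local-lib idea above can be sketched as a small helper (the function 
name and layout are my own, not from any patch):

```python
import os
import sys


def add_local_lib(app_file):
    """Prepend the 'lib' directory that sits next to app_file to sys.path."""
    # app_file/../lib resolves to the 'lib' sibling of the application file.
    lib = os.path.abspath(os.path.join(app_file, '..', 'lib'))
    if lib not in sys.path:
        sys.path.insert(0, lib)
    return lib


# In main_app_file.py (hypothetical layout), before any local imports:
#   add_local_lib(__file__)
#   from testpackage import some_test_module
```

A test.py runner at the same level would call the same helper before 
importing and running the test modules.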


* The error message in the case of circular imports could be much better!

Cheers,
   Ron



Re: [Python-Dev] Another relative imports question

2010-10-09 Thread Ron Adam



On 10/09/2010 12:39 PM, Martin v. Löwis wrote:

Am 09.10.2010 01:35, schrieb Greg Ewing:

Georg Brandl wrote:

The explanation is that everything that comes after import is
thereafter
usable as an identifier (or expression, in the case of dotted names) in
code.  .mymodule is not a valid expression, so the question would be
how
to refer to it.


I think a reasonable answer is that you should be able
to refer to it simply as 'mymodule'.


I don't think that's reasonable:

import xml.dom

doesn't give you dom, but xml.

So

import .dom

shouldn't give you dom, but . (which is nonsensical, of course).



I don't think it would be  import .dom, but...

from . import dom

It would be another module in xml doing the importing, so xml will have 
already been imported.


Ron




Re: [Python-Dev] Return from generators in Python 3.2

2010-08-26 Thread Ron Adam


On 08/26/2010 07:25 PM, Guido van Rossum wrote:


That's not my experience. I wrote a trampoline myself (not released
yet), and found that I had to write a lot more code to deal with the
absence of yield-from than to deal with returns. In my framework,
users write 'raise Return(value)' where Return is a subclass of
StopIteration. The trampoline code that must be written to deal with
StopIteration can be extended trivially to deal with this. The only
reason I chose to use a subclass is so that I can diagnose when the
return value is not used, but I could have chosen to ignore this or
just diagnose whenever the argument to StopIteration is not None.


I'm currently playing around with a trampoline version based on the example 
in PEP 342.  Some of the things I found are ...



* In my version I separated the trampoline from the scheduler.  Having it 
as a separate class made the code cleaner and easier to read.  A trampoline 
instance can be run without the scheduler.  (An example of this is below.)


The separate Scheduler only needs a few methods in a coroutine wrapper to 
run it. It really doesn't matter what's inside it as long as it
has a resume method that the scheduler can understand, and it returns in a 
timely manner so it doesn't starve other coroutines.



* I've found that having a Coroutine class, that has the generator methods 
on it, is very useful for writing more complex coroutines and generators.



* In a true trampoline, all sub-coroutines are yielded out to the 
trampoline resume loop before their send method is called, so yield from 
isn't needed with a well-behaved Trampoline runner.


I think yield from's value is that it emulates a trampoline's performance 
without needing a stack to keep track of caller coroutines.  It also saves 
a lot of looping if you want to write coroutines with sub-coroutines 
without a trampoline runner to run them on.



* Raising StopIteration(value) worked just fine for setting the last value.

Getting the value from the exception just before returning it is still a 
bit clumsy... I currently use:


 return exc.args[0] if exc.args else None

Maybe I've overlooked something?
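For what it's worth, PEP 380 (Python 3.3+) later gave StopIteration a 
.value attribute, which makes this less clumsy.  A small sketch (the 
driver function is my own illustration):

```python
def run_to_return(gen):
    """Advance gen to exhaustion and return its final value."""
    try:
        while True:
            gen.send(None)
    except StopIteration as exc:
        # Equivalent to: exc.args[0] if exc.args else None
        return exc.value


def g():
    yield 'working'
    return 42  # surfaces as StopIteration(42) to the caller of send()
```

Note that under PEP 479 (Python 3.7+) a generator should use `return value` 
rather than raising StopIteration(value) itself.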

My version of the Trampoline class handles the return value since it 
already has it handy when it gets a StopIteration exception, so the user 
doesn't need to do this; they just need to yield the last value out the 
same as they do anywhere else.




I wonder if yield from may run into Python's stack limit?  For example...


Factorial Function.

def f(n, k=1):
    if n != 0:
        return f(n-1, k*n)
    else:
        return k

def factoral(n):
    return f(n)

if __name__ == "__main__":
    print(factoral(1000))

This aborts with:
RuntimeError: maximum recursion depth exceeded in comparison



This one works just fine.


Factorial Trampoline.

from coroutine.scheduler import Trampoline

def tramp(func):
    def wrap(*args, **kwds):
        t = Trampoline(func(*args, **kwds))
        return t.run()
    return wrap


def f(n, k=1):
    if n != 0:
        yield f(n-1, k*n)
    else:
        yield k

@tramp
def factoral(n):
    yield f(n)

if __name__ == "__main__":
    print(factoral(1))   #  extra zero too!


But if I add another zero, it begins to slow to a crawl as it uses swap 
space. ;-)


How would a yield from version compare?
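For readers without the unreleased coroutine.scheduler module, a minimal 
stand-alone trampoline in the same spirit (my own sketch, not the author's 
class) might look like this:

```python
class Trampoline:
    """Run nested generators iteratively, avoiding Python's recursion limit."""

    def __init__(self, gen):
        self.stack = [gen]  # explicit call stack of suspended generators

    def run(self):
        value = None
        while self.stack:
            try:
                result = self.stack[-1].send(value)
            except StopIteration:
                self.stack.pop()  # generator finished; keep the last value
                continue
            if hasattr(result, 'send'):   # yielded a sub-generator: descend
                self.stack.append(result)
                value = None
            else:                         # yielded a plain value: pass it up
                self.stack.pop()
                value = result
        return value


def f(n, k=1):
    if n != 0:
        yield f(n - 1, k * n)  # hand the sub-generator to the trampoline
    else:
        yield k


def factorial(n):
    return Trampoline(f(n)).run()
```

With this, factorial(1000) runs fine because the call depth lives in a 
plain Python list rather than the interpreter's C stack.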


I'm basically learning this stuff by trying to break this thing, and then 
trying to fix what breaks as I go.  That seems to be working. ;-)


Cheers,
   Ron Adam




Re: [Python-Dev] No response to posts

2010-08-02 Thread Ron Adam



On 08/02/2010 03:57 AM, Stephen J. Turnbull wrote:

Ron Adam writes:

Something that may be more useful, is a no activity search field
with choices of day, week, month, year, etc... and make the output
sortable on time without activity.

That's exactly what a sort on date of activity gives you, though, and
it can be from longest down.


Yes, but when I do it, I either get a single specific day, or 2700 issues.



Also, I think that most of the date fields actually allow ranges
(iirc something like today - 1 year, today works, I've never
actually used them), but I don't think this can be easily negated in
the stock Roundup.  What could be done though is something like
created Jan 1 1970, today - 2 years AND open to find bugs that have
been hanging fire for more than two years.


Have you tried it?  I tried various spellings and could only enter one 
specific day; 'today' works, but '1 month' or '2 years' doesn't.


What does work is entering a partial date, i.e. 2010-07 for all issues 
with activity this July, or 2010 for all issues with activity this year.


Ron



Re: [Python-Dev] No response to posts

2010-08-01 Thread Ron Adam



On 08/01/2010 06:14 PM, Terry Reedy wrote:

On 8/1/2010 7:44 AM, Éric Araujo wrote:

+1 On a prebuilt search

This is not as easy as it seems.
A nosy count of 1 misses posts where someone added themself as nosy
without saying anything, waiting for someone else to answer (and maybe
no one ever did). A message count of 1 misses posts where a person
follows up with a correction (because he cannot edit!) or addition.
nosy = 1 or message = 1 would be better, and one cannot do that from
search form, which, ANDS things together. Can a custom sql query do an OR?


There is an activity field which gets any issues with activity on a 
specific day.  I'm not sure how useful that is. shrug


Something that may be more useful is a 'no activity' search field with 
choices of day, week, month, year, etc., and making the output sortable on 
time without activity.


That not only would cover the short-term cases of no response, but also the 
longer-term items that slip through the cracks.


Ron




Re: [Python-Dev] [isssue 2001] Pydoc enhancement patch questions

2010-07-29 Thread Ron Adam




FWIW, I am +1 on dropping tkinter interface.  Tkinter window looks
foreign next to browser and server-side GUI that opens a new client
window with each search topic does not strike me as most usable
design.  Furthermore, I just tried to use it on my OSX laptop and it
crashed after I searched for pydoc and clicked on the first entry.
(Another issue is that search window pops under the terminal window.)
I think Tkinter interface to pydoc may make sense in IDLE, but not in
the main pydoc GUI.  If the equivalent functionality is available in
the browser (preferably in the style familiar to docs.python.org
users), I don't see why we need to keep the old GUI and hide the new
one behind a new option.


I agree.

What do you think of having a -i command line option to enter an 
interactive help session directly from the command line?


This is easy to do.  The instructions shown when entering and leaving need 
to change a bit, but that isn't hard to do.


Cheers,
   Ron



Re: [Python-Dev] [isssue 2001] Pydoc enhancement patch questions

2010-07-26 Thread Ron Adam



On 07/25/2010 12:01 PM, Alexander Belopolsky wrote:

On Sun, Jul 25, 2010 at 12:32 PM, Ron Adamr...@ronadam.com  wrote: ..

I'd be completely fine with dropping the Search For box from the
GUI interface, but the persistent window listing the served port
and providing Open Browser and Quit Serving buttons still seems
quite useful even without the search box (When running python -m
pydoc -p 8080, it took me a moment to figure out how to kill the
server I had started).


Why not simply have  Quit Serving next to the search button in the
served pages?  The server can even serve a friendly page explaining how
it can be restarted before quitting. ..

Another way to communicate to the server would be to add a link in
the browser to open a server status page.  For example my router has a
configure page where I can check its status and do other things.
That might be something worth exploring at some later date.


This would work as well, but for starters, I think Search and Quit
buttons next to each other will be most familiar to the users of the
current Tk-based control window.


Nick, what do you think about the Quit link in the browser along with 
improving the server startup message on the console window?


Ron



Re: [Python-Dev] [isssue 2001] Pydoc enhancement patch questions

2010-07-26 Thread Ron Adam



On 07/24/2010 10:44 PM, Nick Coghlan wrote:


To request automatic assignment of a local port number, -p 0 could
be made to work correctly (i.e. print out the actual port the OS
assigned rather than the 0 that was passed in on the command line as
it does currently).


I was able to implement this without too much trouble.  I also changed it 
so the -b and -p switches can be used together to override the automatic 
port selection. The default is port 0.
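The trick behind '-p 0' is that binding a socket to port 0 asks the OS for 
any free ephemeral port, which the server can then report back.  A sketch 
(the helper name is mine, not pydoc's):

```python
import socket


def bind_to_free_port(host='localhost'):
    """Bind a socket to an OS-assigned free port and report the port."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind((host, 0))          # port 0 -> the OS picks an unused port
    port = sock.getsockname()[1]  # the actual port the OS assigned
    return sock, port
```

The server would then print the real port (rather than echoing the 0 from 
the command line) in its "Server ready at:" message.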


I think IDLE uses automatic port selection now also.  Does it have a way to 
select a specific port?  Is the -p option needed with automatic port selection?


I added a command prompt like the help prompt.  Here's what starting the 
server, opening a browser, and then stopping the server looks like.


r...@gutsy:/media/hda2/home/ra/svn/py3k$ ./python -m pydoc -p 0
Server ready at: http://localhost:50279/
Server commands: [b]rowser, [q]uit
serverb
serverq
Server stopped
[83680 refs]
r...@gutsy:/media/hda2/home/ra/svn/py3k$

I can add a [h]elp to get a longer help, but I'm not sure it's needed.

I want to have a [r]eport command to list the server requests since the 
last report command was given. Sort of like a log. It may be useful to find 
problems with what the server is sending and what the browser is requesting.


Should the server have a timeout in case the command window is not 
reachable?  If so, how long?


cheers,
   Ron





Re: [Python-Dev] [isssue 2001] Pydoc enhancement patch questions

2010-07-25 Thread Ron Adam



On 07/24/2010 10:38 PM, Nick Coghlan wrote:

On Sun, Jul 25, 2010 at 5:34 AM, Ron Adamr...@ronadam.com  wrote:



On 07/24/2010 05:37 AM, Nick Coghlan wrote:


On Sat, Jul 24, 2010 at 10:05 AM, Ron Adamr...@ronadam.comwrote:


I am not sure I like the fact that the browser is started automatically.
Please bring this up on python-dev.  This may be an opportunity to
rethink pydoc command line switches.  For example, -p and -g are
currently exclusive, but it would make sense for -g to start server on
the port specified by -p.


So are any thoughts on starting the web browser automatically, and on how
the -g and -p command line switches work?


My suggestion:

- leave the -g option alone (including the tk gui), but make sure
other options still work when tk is unavailable


I was hoping it would be ok to drop the tk gui in pydoc.  Keeping it
requires rewriting the tk gui interface to use the new server because the
server code and the gui code are not cleanly separate.  I can do this if
it's really wanted. (Nothing against Tkinter; I use it for my own gui apps.)


I'd be completely fine with dropping the Search For box from the GUI
interface, but the persistent window listing the served port and
providing Open Browser and Quit Serving buttons still seems quite
useful even without the search box (When running python -m pydoc -p
8080, it took me a moment to figure out how to kill the server I
had started). You could even tidy it up a bit and include things like
the Python version in the GUI window.


Good ideas.



Specifying both -b and -g should open both the (limited) GUI and the
web browser.


I see what you mean.  The point is to always enable a way to communicate 
to the server directly.  I've fairly often had to kill the server with the 
task manager, or on Ubuntu, use the system monitor to stop the server. 
That's definitely not novice-friendly.


Currently the port is printed on the console when started from the console, 
but there isn't a clean way to stop the server and no message to say how.


The gui is needed for when pydoc is started without a console, i.e. by an 
icon in Windows.



Another way to communicate to the server would be to add a link in the 
browser to open a server status page.  For example my router has a 
configure page where I can check its status and do other things.  That 
might be something worth exploring at some later date.




Here are some thoughts...

The gui window can have a panel to display the server output.  Currently 
it goes to the console window, which may not be visible.  This would 
replace the search results panel.


If the gui window is not opened, i.e. tk isn't installed or the user 
prefers the console, have the console window accept commands that 
correspond to the tk window.


Both can have a way to turn on and off verbosity which will display the 
server activity.



Cheers,
  Ron

I'll be away from my computer most of today; I will start making some of 
the changes we have been discussing later tonight.




Re: [Python-Dev] [isssue 2001] Pydoc enhancement patch questions

2010-07-24 Thread Ron Adam



On 07/24/2010 10:16 AM, Alexander Belopolsky wrote:

On Sat, Jul 24, 2010 at 6:37 AM, Nick Coghlanncogh...@gmail.com  wrote:
..

For the -b option, if the server is already running (and hence the
port is in use), catch the exception, print a message and start the
webbrowser anyway.


I was going to make a similar suggestion, but then realized that there
it may not be easy or desirable for pydoc to figure out whether the
service running on the used port is in fact pydoc.  Any query that
pydoc would send may be disruptive depending on what program is
listening on the port.  It may also get easily confused by a pydoc
service from a different version of python.   It may be better to
search for an unused port in the error case and pass it to the
browser.


I'll try to look into improving how pydoc handles these types of errors. 
In the meantime, if others have experience with browser apps and these 
kinds of situations, I'd like to hear about it.


Does this have to be in this particular patch?  I don't see any problem 
adding better error recovery later.  This isn't something new; both the 
-p and -g modes have this issue.


Ron




Re: [Python-Dev] [isssue 2001] Pydoc enhancement patch questions

2010-07-24 Thread Ron Adam



On 07/24/2010 05:37 AM, Nick Coghlan wrote:

On Sat, Jul 24, 2010 at 10:05 AM, Ron Adamr...@ronadam.com  wrote:

I am not sure I like the fact that the browser is started automatically.
Please bring this up on python-dev.  This may be an opportunity to
rethink pydoc command line switches.  For example, -p and -g are
currently exclusive, but it would make sense for -g to start server on
the port specified by -p.


So are any thoughts on starting the web browser automatically, and on how
the -g and -p command line switches work?


My suggestion:

- leave the -g option alone (including the tk gui), but make sure
other options still work when tk is unavailable


I was hoping it would be ok to drop the tk gui in pydoc.  Keeping it 
requires rewriting the tk gui interface to use the new server because the 
server code and the gui code are not cleanly separated.  I can do this if 
it's really wanted. (Nothing against Tkinter, I use it for my own gui apps.)


Or are you suggesting having pydoc work either with the tk gui behavior 
without any of the new features, or with the new features without the tk 
gui, depending on how it's started?  I'd prefer not to do this because it 
would duplicate the server code and possibly other functions that produce 
some of the web page outputs.  That would make pydoc.py both larger and 
harder to maintain.  It may also make further enhancements to pydoc more 
difficult.


The current patch without the tk gui definitely makes things easier to 
maintain IMHO.


Are there any compelling reasons for keeping the tk gui?



BTW, the synopsis search feature is currently broken in python 3.2.  See 
issue:  http://bugs.python.org/issue9319


Once that is fixed, you can then play around with the search features with 
and without this patch and see how they compare.




- add a -b option to start the server and open the webbrowser automatically
- allow -p to be combined with either -b or -g to specify where
the server should run (or is running)


I also agree the -p option should work with the -b and/or -g.

Using -b instead of reusing -g for browser-only mode makes sense to me.

Depending on whether or not the tk gui is kept, the -g option can either 
open the tk gui or give a message to use the -b option instead.



Ron



Re: [Python-Dev] [isssue 2001] Pydoc enhancement patch questions

2010-07-24 Thread Ron Adam



On 07/24/2010 04:29 PM, Alexander Belopolsky wrote:

On Sat, Jul 24, 2010 at 3:34 PM, Ron Adamr...@ronadam.com  wrote:



On 07/24/2010 05:37 AM, Nick Coghlan wrote:

..

- leave the -g option alone (including the tk gui), but make sure
other options still work when tk is unavailable


I was hoping it would be ok to drop the tk gui in pydoc.  Keeping it
requires rewriting the tk gui interface to use the new server because the
server code and the gui code are not cleanly separate.  I can do this if
it's really wanted. (Nothing against tKinter, I use it for my own gui apps.)


FWIW, I am +1 on dropping tkinter interface.  Tkinter window looks
foreign next to browser and server-side GUI that opens a new client
window with each search topic does not strike me as most usable
design.  Furthermore, I just tried to use it on my OSX laptop and it
crashed after I searched for pydoc and clicked on the first entry.
(Another issue is that search window pops under the terminal window.)
I think Tkinter interface to pydoc may make sense in IDLE, but not in
the main pydoc GUI.  If the equivalent functionality is available in
the browser (preferably in the style familiar to docs.python.org
users, I don't see why we need to keep old GUI and hide new behind a
new option.


The information returned by the new find field in the browser navigation 
bar is the same as that returned to the tk gui window. Each item is a link 
followed by the synopsis.  The style is similar to the other pydoc pages 
with the navigation bar at the top that makes it easy to do other searches 
or return to an index page.


There should be a link on each pydoc module page to the online docs.  I 
have to look at how pydoc decides to include that or not.  The patched 
version does not include it.  I don't think I did anything to remove that, 
but will check and add it back if I did.


Both the python 2.6 and 3.1 versions of pydoc currently add a Module Docs 
link on module pages pointing to the 2.7 online docs.  (A bug? I'm using Ubuntu.)


Having the search results in the browser instead of the tk gui allows you 
to print the results and you can also right click on the links and choose 
open in a new tab or window.   (Firefox Browser)


Ron



[Python-Dev] [isssue 2001] Pydoc enhancement patch questions

2010-07-23 Thread Ron Adam


This regards the following feature request.

http://bugs.python.org/issue2001

Summary:

The pydoc gui enhancement patch adds a navigation bar on each page with 
features that correspond to abilities currently available through help(), 
i.e. a 'get' field to get help on an item, a 'find' field to search for 
modules by keyword, plus 'modules', 'topics', and 'keywords' index links.


The file links read the Python file in as text and insert it into a web 
page instead of relying on the browser to do the right thing. (Often 
enough it doesn't.)


To achieve this I reworked the existing pydoc server into one that can 
respond to a navigation bar at the top of the served pages.  The new 
local_text_server will exist in the http package where the other server 
related modules are.  The callback function passed to the server does all 
the pydoc specific work so the server itself is a simple general purpose 
text server useful for making browser interface front ends.
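The message doesn't show local_text_server itself, so the following is only a guess at the general shape of such a callback-driven text server, built on the stdlib http.server:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def make_text_server(callback, port=0):
    """Return an HTTPServer whose pages come from callback(path) -> (type, text)."""
    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            content_type, text = callback(self.path)
            data = text.encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", content_type)
            self.send_header("Content-Length", str(len(data)))
            self.end_headers()
            self.wfile.write(data)

        def log_message(self, *args):  # keep the demo quiet
            pass

    # port=0 lets the OS choose a free port for the server
    return HTTPServer(("localhost", port), Handler)
```

The callback carries all the pydoc-specific logic; the server only moves text, which is what makes it reusable for other browser front ends.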


This also removed the need for tkinter in pydoc and allows pydoc -g to work 
on computers where tk isn't installed.



As per discussion on the tracker:


Alexander Belopolsky belopol...@users.sourceforge.net added the comment:

+:program:`pydoc` :option:`-g` will start the server and additionally open a web
+browser to a module index page.  Each served page has a navigation bar at the
+top where you can 'get' help on an individual item, 'find' all modules with a
+keyword in their synopsis line, and go to indexes for 'modules', 'topics' and
+'keywords'.

I am not sure I like the fact that the browser is started automatically.
Please bring this up on python-dev.  This may be an opportunity to
rethink pydoc command line switches.  For example, -p and -g are
currently exclusive, but it would make sense for -g to start server on
the port specified by -p.


So, are there any thoughts on starting the web browser automatically, and on 
how the -g and -p command line switches should work?


I'm also not sure about the name for the server.  I've used 
local_text_server, but it may be useful in non-local cases as well.


The newest patch is...

 http://bugs.python.org/file18165/pydoc_server3.diff

Any feedback will be welcome.

Ron










Re: [Python-Dev] mkdir -p in python

2010-07-20 Thread Ron Adam



On 07/20/2010 10:43 AM, Fred Drake wrote:

On Tue, Jul 20, 2010 at 9:09 AM, Steven D'Apranost...@pearwood.info  wrote:

It refers to the guideline that you shouldn't have a single function
with two (or more) different behaviour and an argument that does
nothing but select between them.


In particular, when that argument is almost never given a variable
value, but is specified using a constant at the call site.


I was thinking it should have been two functions, but I realized there are 
more subtleties involved than simply reusing a directory that already 
exists.



One possibility might be...

mkdir(path [, allow=None, mode=0777])

Where None can be replaced with one or more of the following.

'exists' dir can already exist  (with same permissions as mode)
'case'   dir case can be different. (Windows)
'files'  dir can have files in it.

... or a string containing the initials.


It doesn't fall under the single constant rule if done this way.
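As a rough sketch of how the 'exists' case of that proposal could behave (this is my own illustration, not a real os API; the other flags are omitted):

```python
import errno
import os

def mkdir(path, allow=(), mode=0o777):
    """Create a directory; 'exists' in allow tolerates an existing one."""
    try:
        os.mkdir(path, mode)
    except OSError as exc:
        # Only swallow the "already exists" error when explicitly allowed.
        if exc.errno == errno.EEXIST and 'exists' in allow:
            return
        raise
```

Called as mkdir(path, allow=('exists',)), it behaves like today's "reuse the directory" case; without the flag it raises as plain os.mkdir does.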


Ron









Re: [Python-Dev] mkdir -p in python

2010-07-20 Thread Ron Adam



On 07/20/2010 11:47 AM, Ron Adam wrote:



On 07/20/2010 10:43 AM, Fred Drake wrote:

On Tue, Jul 20, 2010 at 9:09 AM, Steven D'Apranost...@pearwood.info
wrote:

It refers to the guideline that you shouldn't have a single function
with two (or more) different behaviour and an argument that does
nothing but select between them.


In particular, when that argument is almost never given a variable
value, but is specified using a constant at the call site.


I was thinking it should have been two functions, but I realized there
are more subtleties involved than simply reusing a directory that
already exists.


One possibility might be...

mkdir(path [, allow=None, mode=0777])

Where None can be replaced with one or more of the following.

'exists' dir can already exist (with same permissions as mode)
'case' dir case can be different. (Windows)
'files' dir can have files in it.



Add

  'path', Complete path as -p option does.



... or a string containing the initials.


It doesn't fall under the single constant rule if done this way.


Ron




Re: [Python-Dev] mkdir -p in python

2010-07-20 Thread Ron Adam



On 07/20/2010 12:00 PM, Fred Drake wrote:

On Tue, Jul 20, 2010 at 12:47 PM, Ron Adamr...@ronadam.com  wrote:

It doesn't fall under the single constant rule if done this way.


If the value for 'allow' were almost always given as a constant, this
would be an argument for three functions instead of one.

The guideline has little to do with the type of the value, but the
number of possible values (small) and whether they're normally given
as constants in the code.

If there's more than one, and combinations tend to vary, then keeping
them as args makes sense.

Also, if we don't know what we want the functionality to be, as you
suggest, then worrying about that is premature.  :-)  Let's decide on the
required functionality first.


That makes sense. :-)

Another thing that comes to mind: it may make sense either to choose strict, 
with args to *allow* different cases, or to choose lenient, with args to 
*restrict* different cases.


That keeps it somewhat less confusing, and doesn't require memorizing what 
the mixed-mode default might be.  (Unless we use a single combination 99% 
of the time, then that probably *should* be the default.)


Ron



Re: [Python-Dev] Removing IDLE from the standard library

2010-07-15 Thread Ron Adam



On 07/15/2010 07:13 AM, Nick Coghlan wrote:

On Thu, Jul 15, 2010 at 7:23 PM, Oleg Broytmanp...@phd.pp.ru  wrote:

   Sorry for being a wet blanket but vim implements tabbed windows even in
console (text) mode. (-:


Oh, I know vim and emacs are actually incredibly powerful once you
learn how to use them. I'm just a child of the GUI generation and
believe UIs should be readily discoverable in accordance with
http://xkcd.com/627/. I've tried to apply that technique to both
applications in the past and failed miserably (although I will admit I
haven't had the inclination to even try to use either of them for
years now). Neither really strikes me as just a text editor, but more
a way of life ;)

Anyway, to bring this back on topic...

Neither Kate nor Notepad++ allow you to easily move documents between
windows (you have to open them explicitly in the second window). I had
never noticed the lack until I explicitly checked for it after
Stephen's last email. I suspect whether you consider this a must have
feature or not depends greatly on your personal workflow when coding.
If IDLE were to adopt a tabbed view without easy migration between
separate windows, it would have plenty of company.


My preference would be to have just two tabs and/or panes per window: one 
for the source listing and one for the command interface.


It's easy enough to have multiple IDLE windows open and cut and paste 
between them.  The file window could have an 'open in new window' option 
which would start a new IDLE instance.


That keeps things simple and would be easy enough for the majority of new 
or novice programming users to understand.


It also avoids having the command window shared by multiple programs, which 
could cause all sorts of messy output or errors if multiple scripts are 
being run at the same time.


Ron




Re: [Python-Dev] Removing IDLE from the standard library

2010-07-12 Thread Ron Adam



On 07/12/2010 01:21 PM, Ian Bicking wrote:

On Sun, Jul 11, 2010 at 3:38 PM, Ron Adam r...@ronadam.com wrote:

There might be another alternative.

Both idle and pydoc are applications (are there others?) that are in
the standard library.  As such, they or parts of them, are possibly
importable to other projects.  That restricts changes because a
committer needs to consider the chances that a change may break
something else.

I suggest they be moved out of the lib directory, but still be
included with python.  (Possibly in the tools directory.)  That
removes some of the backward compatibility restrictions or at least
makes it clear there isn't a need for backward compatibility.


I also like this idea.  This means Python comes with an IDE out of the
box but without the overhead of a management and release process that
is built for something very different than a GUI program (the standard
library).  This would mean that IDLE would be in site-packages, could
easily be upgraded using normal tools, and maybe most importantly it
could have its own community tools and development process that is more
casual (and can more easily integrate new contributors) and higher
velocity of changes and releases.  Python releases would then ship the
most recent stable release of IDLE.


Yes, if you follow the guidelines for the rest of the library, anything 
that is removed needs to be deprecated first, and anything that's added 
needs to be carefully looked at to be sure it doesn't break anything that 
may depend on it.  That is good for the rest of the standard library but 
really slows things down for an application like IDLE.  Just removing those 
restrictions would make things a lot simpler and speed things up 
considerably, I think.


The site-packages directory is still in the lib path and so things there 
are still importable.  That is why I suggested the tools directory. 
Another place would be in the same directory the python executable lives.


But the exact location isn't really the important thing; what is needed is 
a clear policy on how the upgrade process differs from that of the Python 
library.


Ron







Re: [Python-Dev] Removing IDLE from the standard library

2010-07-11 Thread Ron Adam



On 07/10/2010 06:05 PM, Tal Einat wrote:

Hello,

I would like to propose removing IDLE from the standard library.

I have been using IDLE since 2002 and have been doing my best to help
maintain and further develop IDLE since 2005.

In recent years IDLE has received negligible interest and attention from
the Python community. During this time IDLE has slowly gone downhill.
The documentation and tutorials grow increasingly out of date.
Cross-platform support has degraded with the increasing popularity of
OSX and 64-bit platforms. Bugs take months, and sometimes more than a
year, to be solved. Features that have since become common-place, such
as having a non-intrusive search box instead of a dialog, are obviously
and painfully lacking, making IDLE feel clumsy and out-dated.

For these reasons, I think it would be fitting to remove IDLE from the
standard library. IDLE is no longer recommended to beginners, IMO
rightfully so, and this was the main reason for its inclusion in the
standard library. Furthermore, if there is little or no interest in
developing and maintaining IDLE, it should be removed to avoid having
buggy and badly supported software in the standard library.



There might be another alternative.

Both idle and pydoc are applications (are there others?) that are in the 
standard library.  As such, they, or parts of them, are possibly importable 
by other projects.  That restricts changes because a committer needs to 
consider the chances that a change may break something else.


I suggest they be moved out of the lib directory, but still be included 
with python.  (Possibly in the tools directory.)  That removes some of the 
backward compatibility restrictions or at least makes it clear there isn't 
a need for backward compatibility.


Would a change of this sort help make things any easier?

(Note: idle isn't in the lib directory on Ubuntu.)

Ron



Re: [Python-Dev] query: docstring formatting in python distutils code

2010-07-10 Thread Ron Adam



On 07/07/2010 12:30 PM, Georg Brandl wrote:

Am 07.07.2010 18:09, schrieb Michael Foord:


I would say that the major use of docstrings is for interactive help - so
interactive readability should be *the most important* (but perhaps not only)
factor when considering how to format standard library docstrings.


Agreed.  However, reST doesn't need to be less readable if the specific
inline markup is not used.  For example, using `identifier` to refer to a
function or *var* to refer to a variable (which is already done at quite a
few places) is very readable IMO.  Using ``code`` also isn't bad, considering
that double quotes are not much different and potentially ambiguous.

Overall, I think that we can make stdlib docstrings valid reST -- even if it's
reST without much markup -- but valid, so that people pulling in stdlib doc-
strings into Sphinx docs won't get ugly warnings.

What I would *not* like to see is heavy markup and Sphinx specifics -- that
would only make sense if we included the docstrings in the docs, and I don't
see that coming.


I also agree that interactive help should be the most important factor when 
writing doc strings for the standard library.


Are there any plans to change how pydoc's GUI mode works?  Could it use a 
minimal set of reST to improve its display?


The patch I submitted (*which is waiting to be reviewed) extends the GUI 
mode so it can show the same info that is available from the help() function.


http://bugs.python.org/issue2001

I think the only issues with this patch are what new functions and classes 
should be part of the public api.



* BTW... The bug tracker's main links to items with patches and needing 
review don't pick up feature requests.


Ron









Re: [Python-Dev] Python 2.7b1 and argparse's version action

2010-04-18 Thread Ron Adam



On 04/18/2010 05:57 PM, Nick Coghlan wrote:

Steven Bethard wrote:

By the way, we could simplify the typical add_argument usage by adding
show program's version number and exit as the default help for the
'version' action. Then you should just write:

 parser.add_argument('--version', action='version', version='the version')


With that change, I would have no problem with the current argparse
behaviour (since doing it this way makes it very easy for people to add
a -V shortcut if they want one).



+1   This sounds good to me also.


Note that the python interpreter uses -V and --version.

r...@gutsy:~$ python3.1 -V
Python 3.1.2
r...@gutsy:~$ python3.1 --version
Python 3.1.2

And -v is used as follows:

-v : verbose (trace import statements); also PYTHONVERBOSE=x
 can be supplied multiple times to increase verbosity
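Putting the two together, a script can mirror the interpreter's spellings with one add_argument call. This is a minimal sketch ('myprog' and its version string are made up); omitting help= relies on the default version help text proposed above:

```python
import argparse

parser = argparse.ArgumentParser(prog='myprog')
# One call provides both the short -V and long --version spellings,
# matching the interpreter's own options.
parser.add_argument('-V', '--version', action='version',
                    version='myprog 1.0')
```

Running `myprog -V` or `myprog --version` then prints the version string and exits with status 0.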

Ron







Re: [Python-Dev] Python 2.7b1 and argparse's version action

2010-04-18 Thread Ron Adam



On 04/18/2010 06:35 PM, Nick Coghlan wrote:

Steven Bethard wrote:

On Sun, Apr 18, 2010 at 3:57 PM, Nick Coghlanncogh...@gmail.com  wrote:

Steven Bethard wrote:

By the way, we could simplify the typical add_argument usage by adding
show program's version number and exit as the default help for the
'version' action. Then you should just write:

 parser.add_argument('--version', action='version', version='the version')

With that change, I would have no problem with the current argparse
behaviour (since doing it this way makes it very easy for people to add
a -V shortcut if they want one).


Probably this should happen regardless of the outcome of the
constructor argument. The only reason it wasn't already there is that
I hadn't thought of it. ;-)


Crazy thought... would it make sense to have the following implicitly
use --version as the option flag:

   parser.add_argument(action='version', version='details')

There are two things about the explicit '--version' that bother me:
1. It reduces the automatic provision of standard option spellings


I think any non-standard spellings will either be on purpose or be caught 
fairly quickly.  And in either case I can't imagine it having an impact on 
the rest of the program, so I wouldn't worry about this too much.




2. The repetition in reading/writing 'version' 3 times is kind of annoying


Ahh, but it isn't the same exact 'version' each time.  One is an input 
specifier string, one is an action key, and the last is a value name.




(Probably a bad idea, since adding -V would mean having to add
--version as well, but figured it was worth mentioning).


I agree. Even though it may seem redundant when writing it, it's both clear 
and explicit when reading it even if you aren't very familiar with how 
argparse works, or have just returned from a really long and fun vacation. ;-)


Ron







Re: [Python-Dev] Bootstrap script for package management tool in Python 2.7 (Was: Re: At least one package management tool for 2.7)

2010-03-29 Thread Ron Adam



anatoly techtonik wrote:

So, there won't be any package management tool shipped with Python 2.7
and users will have to download and install `setuptools` manually as
before:

  search - download - unzip - cmd - cd - python
setup.py install


Therefore I still propose shipping a bootstrap package that instructs the
user how to download and install an actual package management tool when
the user tries to use it. So far I know only one stable tool -
`easy_install` - part of the `setuptools` package.

The required behavior for very basic user friendliness:
1. user installs Python 2.7
2. user issues `python -m easy_install something`
3. user gets message
'easy_install' tool is not installed on this system. To make it
available, download and install `setuptools` package from
http://pypi.python.org/pypi/setuptools/

4. the screen is paused before exit (for windows systems)

Other design notes:
1. if a package tries to import the `easy_install` module used for
bootstrap, it gets the same ImportError as if there were no
`easy_install` at all
2. the bootstrap module is overwritten by the actual package when the user installs it


So, do we need a PEP for that? How else can I know if consensus is
reached? Is anybody willing to elaborate on the implementation?


P.S. Please be careful to reply to relevant lists


An even lighter option would be to add an item to Python's 'help' feature.

Currently help(PACKAGES) == help(import)

It may be enough at this time to add a PACKAGES help entry that gives an 
overview of packages and hints on installing them.  Then import can be a 
related help topic for PACKAGES.


Ron









Re: [Python-Dev] __pycache__ creation

2010-03-24 Thread Ron Adam



Nick Coghlan wrote:

Ron Adam wrote:

I think I misunderstood this at first.

It looks like, while developing a python 3.2+ program, if you don't
create an empty __pycache__ directory, everything will still work, you
just won't get the .pyc files.  That can be a good thing during
development because you also will not have any problems with old .pyc
files hanging around if you move or rename files.


The behaviour you described (not creating __pycache__ automatically) was
just a suggestion in this thread.

The behaviour in the actual PEP (and what will be implemented for 3.2+)
is to create __pycache__ if it is missing.

Cheers,
Nick.


OK  :-)

h... unless there is a __pycache__ *file* located there first. ;-)

Not that I can think of any good reason to do that at this moment.

Ron



Re: [Python-Dev] __pycache__ creation

2010-03-23 Thread Ron Adam



Terry Reedy wrote:

On 3/22/2010 2:15 PM, Antoine Pitrou wrote:

What I am proposing is that the creation of __pycache__ /directories/ be
put outside of the core. It can be part of distutils, or of a separate
module, or delegated to third-party tools. It could even be as simple as
python -m compileall --pycache, if someone implements it.

Creation of the __pycache__ /contents/ (files inside the directory) would
still be part of core Python, but only if the directory exists and is
writable by the current process.


-1

If, as I have done several times recently, I create a directory and 
insert an empty __init__.py and several real module.py files, I want the 
.pycs to go into __pycache__ *automatically*, by default, without me also 
having to remember to create an empty __pycache__ *directory*, *each 
time*.  Ugh.


I think I misunderstood this at first.

It looks like, while developing a python 3.2+ program, if you don't create 
an empty __pycache__ directory, everything will still work, you just won't 
get the .pyc files.  That can be a good thing during development because 
you also will not have any problems with old .pyc files hanging around if 
you move or rename files.


The startup time may just be a tad longer, but probably not enough to be 
much of a problem.  If it is a problem you can just create the __pycache__ 
directory, but nothing bad will happen if you don't.


Ron







Re: [Python-Dev] __pycache__ creation

2010-03-22 Thread Ron Adam



Antoine Pitrou wrote:

Isaac Morland ijmorlan at uwaterloo.ca writes:

IMO, all these issues militate for putting __pycache__ creation out of
the interpreter core, and in the hands of third-party package-time/
install-time tools (or distutils).
Speaking only for myself, but really for anybody who likes tidy source 
directories, I hope some version of the __pycache__ proposal becomes part 
of standard Python, by which I ideally mean it's enabled by default but if 
that is just not a good idea then at most it should be required to set a 
command-line option to get this feature.


This doesn't contradict my proposal.

What I am proposing is that the creation of __pycache__ /directories/ be put
outside of the core. It can be part of distutils, or of a separate module, or
delegated to third-party tools. It could even be as simple as
python -m compileall --pycache, if someone implements it.

Creation of the __pycache__ /contents/ (files inside the directory) would still
be part of core Python, but only if the directory exists and is writable by the
current process.


+1

If I understand correctly, we would have the current mode as the default, 
and can trigger __pycache__ behavior simply by manually creating a 
__pycache__ directory and deleting any byte-code files in the 
module/program directory.


I like this, it is easy to understand and can be used without messing with 
flags or environment variables.
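A minimal sketch of that opt-in rule (a hypothetical helper of my own, not interpreter code): bytecode goes to the PEP 3147 cache location only when __pycache__ is already present.

```python
import importlib.util
import os
import py_compile

def compile_if_cache_exists(source_path):
    """Write a .pyc only if the source's __pycache__ directory exists."""
    source_path = os.path.abspath(source_path)
    cache_dir = os.path.join(os.path.dirname(source_path), '__pycache__')
    if not os.path.isdir(cache_dir):
        return None  # no cache directory: silently skip writing bytecode
    # cache_from_source() computes the __pycache__/name.<tag>.pyc path
    cfile = importlib.util.cache_from_source(source_path)
    py_compile.compile(source_path, cfile=cfile)
    return cfile
```

With this shape, simply creating (or deleting) the __pycache__ directory is the whole switch, which matches the "no flags or environment variables" appeal above.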


Ron











Re: [Python-Dev] __pycache__ creation

2010-03-22 Thread Ron Adam



Barry Warsaw wrote:

On Mar 22, 2010, at 02:02 PM, Ron Adam wrote:


If I understand correctly, we would have the current mode as the default, and
can trigger __pycache__ behavior simply by manually creating a __pycache__
directory and deleting any byte-code files in the module/program directory.

I like this, it is easy to understand and can be used without messing with 
flags or environment variables.


Well, for a package with subpackages, it gets more complicated.  Definitely
not something you're likely to do manually.  Antoine's suggestion of 'python
-m compileall --pycache' would work, but I think it's also obscure enough that
most Python users won't get the benefit.



May be a bit more complicated, but it should be easy to write tools to 
handle the repetitive stuff.


When I'm writing python projects I usually only work on one or two packages 
at a time at most.  Creating a couple of directories when I first get 
started on a project is nothing compared with the other 10 or more files of 
 1,000 lines of code each.  This has a very low mental hurdle too.


All the other packages I'm importing would probably already be preinstalled 
complete with __pycache__ directories and bytecode files.  As Antoine was 
suggesting, it would be up to the installer scripts to create the 
__pycache__ directories at install time, either directly or possibly by 
issuing a 'python -m compileall --pycache' command.


Ron



Re: [Python-Dev] __pycache__ creation

2010-03-22 Thread Ron Adam


Guido van Rossum wrote:

On Mon, Mar 22, 2010 at 12:20 PM, Barry Warsaw ba...@python.org wrote:

On Mar 22, 2010, at 02:02 PM, Ron Adam wrote:


If I understand correctly, we would have the current mode as the default, and
can trigger __pycache__ behavior simply by manually creating a __pycache__
directory and deleting any byte-code files in the module/program directory.



Huh? Last time I looked weren't we going to make __pycache__ the
default (and eventually only) behavior?


I expect that the __pycache__ directories would quickly become the 
recommended de facto default for writing and preinitializing modules and 
packages, with the current behavior of having bytecode in the same 
directory as the .py files only as the fall-back (what I meant by default) 
behavior when the __pycache__ directories do not exist.




I see only two reasonable solutions for __pycache__ creation -- either
we change all setup/install scripts (both for core Python and for 3rd
party packages) to always create a __pycache__ subdirectory for every
directory (including package directories) installed; or we somehow
create it the first time it's needed.



But creating it as needed runs into at least similar problems with
ownership as creating .pyc files when first needed (if the parent
directory is root-owned a mere mortal can't create it at all).



So even apart from the security issue (which I haven't thought about deeply) I
think precreation should at least be an easily accessible option both
for the core (where it can be done by compileall) and for 3rd party
packages (where I guess it's up to distutils or whatever install
mechanism is used).


Yes, I think that is what Antoine was also getting at.


Is there a need for Python to use __pycache__ directories 100% of the time? 
  For 2.x it seems like being flexible would be best, and if 3.x is going 
to be strict about it, it should be strict sooner rather than later, rather 
than have a lot of 3rd-party packages break at some point down the road.


Ron



Re: [Python-Dev] argparse ugliness

2010-03-08 Thread Ron Adam



Steven Bethard wrote:

On Mon, Mar 8, 2010 at 7:40 AM, Steven Bethard steven.beth...@gmail.com wrote:

On Sun, Mar 7, 2010 at 11:49 AM, Guido van Rossum gu...@python.org wrote:

On Sun, Mar 7, 2010 at 4:29 AM, Neal Becker ndbeck...@gmail.com wrote:

Brian Curtin wrote:


On Fri, Mar 5, 2010 at 12:51, Neal Becker ndbeck...@gmail.com wrote:


I generally enjoy argparse, but one thing I find rather
ugly and unpythonic.

   parser.add_argument ('--plot', action='store_true')

Specifying the argument 'action' as a string is IMO ugly.


What else would you propose?
FWIW, this is the same in optparse.

I would have thought use the object itself, instead of a string that spells
the object's name.

What object? How would you write the example instead then?

In argparse, unlike optparse, actions are actually defined by objects
with a particular API, and the string is just a shorthand for
referring to that. So:

 parser.add_argument ('--plot', action='store_true')

is equivalent to:

 parser.add_argument('--plot', argparse._StoreTrueAction)


Sorry, that should have been:

  parser.add_argument('--plot', action=argparse._StoreTrueAction)


Because the names are so long and you'd have to import them, I've left
them as private attributes of the module, but if there's really
demand, we could rename them to argparse.StoreTrueAction, etc.
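
A sketch of the object-based spelling next to the string shorthand, using 
only the public argparse API (the class name StoreTrueAction here is 
illustrative, not part of argparse):

```python
import argparse

# The string 'store_true' is shorthand for an Action class.  A public
# equivalent can be written by subclassing argparse.Action.
class StoreTrueAction(argparse.Action):
    def __init__(self, option_strings, dest, **kwargs):
        kwargs.setdefault('default', False)
        super().__init__(option_strings, dest, nargs=0, **kwargs)

    def __call__(self, parser, namespace, values, option_string=None):
        setattr(namespace, self.dest, True)

parser = argparse.ArgumentParser()
parser.add_argument('--plot', action=StoreTrueAction)   # object spelling
parser.add_argument('--grid', action='store_true')      # string shorthand

args = parser.parse_args(['--plot'])
print(args.plot, args.grid)   # True False
```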


I like the strings.  They are simple and easy to use/read and they don't 
have to be created or imported before the parser is defined.  That allows 
me to put the parser setup right after the 'if __name__ == '__main__':' so 
it's easy to find and read.  It also allows me to not import or create 
objects that may be expensive in either time or resources if I'm not going 
to use them.


Also the strings help separate the parser from the rest of your program in 
a much cleaner way. This can make your programs more modular and easier to 
modify and maintain.


Ron







Re: [Python-Dev] __file__

2010-03-01 Thread Ron Adam



Nick Coghlan wrote:

Michael Foord wrote:

Can't it look for a .py file in the source directory first (1st stat)?
When it's there check for the .pyc in the cache directory (2nd stat,
magic number encoded in filename), if it's not check for .pyc in the
source directory (2nd stat + read for magic number check).  Or am I
missing a subtlety?

The problem is doing this little dance for every path on sys.path.


To unpack this a little bit for those not quite as familiar with the
import system (and to make it clear for my own benefit!): for a
top-level module/package, each path on sys.path needs to be eliminated
as a possible location before the interpreter can move on to check the
next path in the list.

So the important number is the number of stat calls on a miss (i.e.
when the requested module/package is not present in a directory).
Currently, with builtin support for bytecode only files, there are 3
checks (package directory, py source file, pyc/pyo bytecode file) to be
made for each path entry.

The PEP proposes to reduce that to only two in the case of a miss, by
checking for the cached pyc only if the source file is present (there
would still be three checks for a hit, but that only happens at most
once per module lookup).

While the PEP is right in saying that a bytecode-only import hook could
be added, I believe it would actually be a little tricky to write one
that didn't severely degrade the performance of either normal imports or
bytecode-only imports. Keeping it in the core import, but turning it off
by default seems much less likely to have unintended performance
consequences when it is switched back on.

Another option is to remove bytecode-only support from the default
filesystem importer, but keep it for zipimport (since the stat call
savings don't apply in the latter case).
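
For what it's worth, today's importlib already ships the piece such an 
opt-in hook would build on: SourcelessFileLoader loads a module straight 
from a .pyc with no .py present.  A sketch (the module name 'mod' and the 
paths are illustrative):

```python
import importlib.machinery
import importlib.util
import os
import py_compile
import tempfile

# Build a bytecode-only module: compile a source file, then delete it.
d = tempfile.mkdtemp()
src = os.path.join(d, 'mod.py')
with open(src, 'w') as f:
    f.write('ANSWER = 42\n')
pyc = py_compile.compile(src, cfile=os.path.join(d, 'mod.pyc'))
os.remove(src)                         # leave only the bytecode behind

# Load the module directly from the .pyc.
loader = importlib.machinery.SourcelessFileLoader('mod', pyc)
spec = importlib.util.spec_from_loader('mod', loader)
mod = importlib.util.module_from_spec(spec)
loader.exec_module(mod)
print(mod.ANSWER)                      # 42
```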


What if ... a bytecode-only mode is triggered by __main__ loading from a 
bytecode file, otherwise the .py files are needed and are checked to make 
sure the bytecode files are current.


Ron







Re: [Python-Dev] __file__

2010-02-26 Thread Ron Adam


Barry Warsaw wrote:

On Feb 26, 2010, at 02:09 PM, Brett Cannon wrote:


But a benefit of no longer supporting bytecode-only modules by default is it
cuts back on possible stat calls which slows down Python's startup time (a
complaint I hear a lot). Performance issues become even more acute if you try
to come up with even a remotely proper way to have backwards-compatible
support in importlib for its ABCs w/o forcing caching on all implementors of
the ABCs.



And personally, I don't see what bytecode-only modules buy you. The
obfuscation argument is bunk as we all know.


Brett really hits the nail on the head, and yes, I'm sorry for not being clear
about what "we discussed this at Pycon" meant.  The "we" being Brett and I of
course (and Chris Withers IIRC).

Bytecode-only deployments are a bit of a sham, and definitely a minority use
case, so why should all of Python pay for the extra stat calls to support this
by default?  How many people would actually be hurt if this wasn't available
out of the box, especially since you can still support it if you really want
it and can't convince your manager that it provides essentially zero useful
obfuscation of your code?

I say this having been down that road myself with a previous employer.
Management was pretty adamant about wanting this until I explained how easy it
was to defeat and convinced them that the engineering resources to do it were
better spent elsewhere.

Having said that, I'd be all for including a reference implementation of a
bytecode-only loader in the PEP for demonstration purposes.  Greg, would you
like to contribute that?

-Barry



Michael Foord's viewpoint on this strikes me as the most realistic. Some 
people do find it to be of value for their particular needs and circumstances.


Michael Foord wrote:
For many use-cases some protection is enough. After all *any* DRM or 
source-code obfuscation is breakable in the medium / long term - so just 
enough to discourage the casual looker is probably sufficient. The fact 
that bytecode only distributions exist speaks to that.

Whether you believe that allowing companies who ship bytecode is a 
disservice to them or not is fundamentally irrelevant. If they believe 
it is a service to them then it is... :-)



To possibly qualify it a bit more:

It does not make sense (to me) to have bytecode-only modules and packages 
in Python's lib directory.  The whole purpose (as far as I know) is for 
modules and packages located there to be shared.  And as such, the source 
file becomes a source of documentation.  Not supporting bytecode-only 
Python modules and packages in Python's lib directory may be good.


For Python programs located and installed elsewhere, I think Michael's 
viewpoint is applicable. For some files that are not meant to be shared, 
some form of discouragement can be a feature.


Ron Adam



