Re: [Python-Dev] (no subject)

2019-04-10 Thread MRAB

On 2019-04-10 22:00, Terry Reedy wrote:

On 4/10/2019 7:24 AM, Robert Okadar wrote:

Hi community,

I have developed a tkinter GUI component in Python v3.7. It runs very well on
Linux, but I am seeing a huge performance impact on Windows 10. While on Linux
almost real-time performance is achieved, on Windows it is slow to an
unusable level.

The code is somewhat stripped down from the original, but the performance
difference is the same anyway. The columns can be resized by clicking on
the column border and dragging it. Resizing works only for the top row (but
it resizes the entire column).
In this demo, all bindings are omitted to rule out any influence on the
component's performance. If you resize the window (e.g., if you maximize it),
you must call the function table.fit() from the IDLE shell.

Does anyone know where this huge difference in performance is coming from?
Can anything be done about it?


For reasons explained by Steve, please send this instead to python-list
https://mail.python.org/mailman/listinfo/python-list
To access python-list as a newsgroup, skip comp.lang.python and use
newsgroup gmane.comp.python.general at news.gmane.org.

I will respond there after testing/verifying and perhaps searching
bugs.python.org for a similar issue.


ttk has Treeview, which can be configured as a table.
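
For reference, a minimal sketch of that approach (the column names and sample
rows below are invented for illustration, not taken from the original
component):

# Minimal sketch: a ttk.Treeview configured as a 3-column table.
import tkinter as tk
from tkinter import ttk

root = tk.Tk()
columns = ("name", "size", "modified")
table = ttk.Treeview(root, columns=columns, show="headings")
for col in columns:
    table.heading(col, text=col.title())
    table.column(col, width=120, stretch=True)   # columns can be resized by dragging the heading border
for row in [("a.txt", "1 KB", "2019-04-10"), ("b.txt", "2 KB", "2019-04-09")]:
    table.insert("", tk.END, values=row)
table.pack(fill=tk.BOTH, expand=True)
root.mainloop()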


Re: [Python-Dev] (no subject)

2019-04-10 Thread Terry Reedy

On 4/10/2019 7:24 AM, Robert Okadar wrote:

Hi community,

I have developed a tkinter GUI component in Python v3.7. It runs very well on
Linux, but I am seeing a huge performance impact on Windows 10. While on Linux
almost real-time performance is achieved, on Windows it is slow to an
unusable level.

The code is somewhat stripped down from the original, but the performance
difference is the same anyway. The columns can be resized by clicking on
the column border and dragging it. Resizing works only for the top row (but
it resizes the entire column).
In this demo, all bindings are omitted to rule out any influence on the
component's performance. If you resize the window (e.g., if you maximize it),
you must call the function table.fit() from the IDLE shell.

Does anyone know where this huge difference in performance is coming from?
Can anything be done about it?


For reasons explained by Steve, please send this instead to python-list
https://mail.python.org/mailman/listinfo/python-list
To access python-list as a newsgroup, skip comp.lang.python and use 
newsgroup gmane.comp.python.general at news.gmane.org.


I will respond there after testing/verifying and perhaps searching 
bugs.python.org for a similar issue.


--
Terry Jan Reedy



Re: [Python-Dev] (no subject)

2019-04-10 Thread Robert Okadar
Hi Steven,

Thank you for pointing me in the right direction. I will search for help in
the places you mentioned.

I am not sure how we can help you with developing the Python interpreter, as I
doubt we have any knowledge that this project might use. When I say
'we', I mean my colleague and me.

All the best,
--
Robert Okadar
IT Consultant

Schedule an online meeting with me!

Visit aranea-mreze.hr or call +385 91 300 8887


On Wed, 10 Apr 2019 at 17:36, Steven D'Aprano  wrote:

> Hi Robert,
>
> This mailing list is for the development of the Python interpreter, not
> a general help desk. There are many other forums where you can ask for
> help, such as the comp.lang.python newsgroup, Stackoverflow, /r/python
> on Reddit, the IRC channel, and more.
>
> Perhaps you can help us, though. I presume you signed up to this mailing
> list via the web interface at
>
> https://mail.python.org/mailman/listinfo/python-dev
>
> Is there something we could do to make it more clear that this is not
> the right place to ask for help?
>
>
> --
> Steven
>


Re: [Python-Dev] (no subject)

2019-04-10 Thread Steven D'Aprano
Hi Robert,

This mailing list is for the development of the Python interpreter, not 
a general help desk. There are many other forums where you can ask for 
help, such as the comp.lang.python newsgroup, Stackoverflow, /r/python 
on Reddit, the IRC channel, and more.

Perhaps you can help us, though. I presume you signed up to this mailing 
list via the web interface at

https://mail.python.org/mailman/listinfo/python-dev

Is there something we could do to make it more clear that this is not 
the right place to ask for help?


-- 
Steven


Re: [Python-Dev] (no subject)

2017-12-26 Thread Franklin? Lee
On Tue, Dec 26, 2017 at 2:01 AM, Yogev Hendel  wrote:
>
> I don't know if this is the right place to put this,
> but I've found the following lines of code results in an incredibly long
> processing time.
> Perhaps it might be of use to someone.
>
> import re
> pat = re.compile('^/?(?:\\w+)/(?:[%\\w-]+/?)+/?$')
> pat.match('/t/a-aa-aa-a-aa-aa-aa-aa-aa-aa./')

(I think the correct place is python-list. python-dev is primarily for
the developers of Python itself. python-ideas is for proposing new
features and changes to the language. python-list is for general
discussion. Bug reports and feature requests belong in
https://bugs.python.org/ (where your post could also have gone).)

The textbook regular expression algorithm (which I believe grep uses)
runs in linear time with respect to the text length. The algorithm
used by Perl, Java, Python, JavaScript, Ruby, and many other languages
is instead a backtracking algorithm, which can take up to exponential
time with respect to text length. This worst case is in fact necessary
(assuming P != NP): Perl allows (introduced?) backreferences, which
are NP-hard[1]. Perl also added some other features which complicate
things, but backreferences are enough.

The user-level solution is to understand how regexes are executed, and
to work around it.
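
As one hedged illustration of such a workaround for the pattern above
(assuming the intent is '/'-separated segments of word characters, '%' and
'-'), the inner and outer repetition can be made unambiguous so a failing
match finishes in linear time:

import re
import time

slow = re.compile(r'^/?(?:\w+)/(?:[%\w-]+/?)+/?$')    # pattern from the original post
fast = re.compile(r'^/?\w+/[%\w-]+(?:/[%\w-]+)*/?$')  # assumed intent: '/'-separated segments

text = '/t/' + 'a-aa-' * 4 + '.'   # failing input; each extra char roughly doubles the slow time

for name, pat in [('slow', slow), ('fast', fast)]:
    start = time.perf_counter()
    result = pat.match(text)       # both fail because of the trailing '.'
    print(name, result, f'{time.perf_counter() - start:.4f}s')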

Here are library-level solutions for your example:
1. Perl now has a regex optimizer, which will eliminate some
redundancies. Something similar can be added to Python, at first as a
third-party library.
2. In theory, we can use the textbook algorithm when possible, and the
backtracking algorithm when necessary. However, the textbook version
won't necessarily be faster, and may take more time to create, so
there's a tradeoff here.
3. To go even further, I believe it's possible to use the textbook
algorithm for subexpressions, while the overall expression uses
backtracking, internally iterating through the matches of the textbook
algorithm.

There's a series of articles by Russ Cox that try to get us back to
the textbook (see [2]). He and others implemented the ideas in the C++
library RE2[3], which has Python bindings[4]. RE2 was made for and
used on Google Code Search[5] (described in his articles), a (now
discontinued) search engine for open-source repos which allowed
regular expressions in the queries.

You can get a whiff of the limitations of the textbook algorithm by
checking out RE2's syntax[6] and seeing what features aren't
supported, though some features may be unsupported for different
reasons (such as being redundant syntax).
- Backreferences and lookaround assertions don't have a known solution.[7]
- Bounded repetition is only supported up to a limit (1000), because
each possible repetition needs its own set of states.
- Possessive quantifiers aren't supported. Greedy and reluctant quantifiers are.
- Groups and named groups _are_ supported. See the second and third
Russ Cox articles, with the term "submatch".[2]

(Apologies: I am making up reference syntax on-the-fly.)
[1] "Perl Regular Expression Matching is NP-Hard"
https://perl.plover.com/NPC/
[2] "Regular Expression Matching Can Be Simple And Fast"
https://swtch.com/~rsc/regexp/regexp1.html
"Regular Expression Matching: the Virtual Machine Approach"
https://swtch.com/~rsc/regexp/regexp2.html
"Regular Expression Matching in the Wild"
https://swtch.com/~rsc/regexp/regexp3.html
"Regular Expression Matching with a Trigram Index"
https://swtch.com/~rsc/regexp/regexp4.html
[3] RE2: https://github.com/google/re2
[4] pyre2: https://github.com/facebook/pyre2/
Also see re2 and re3 on PyPI, which intend to be drop-in
replacements. re3 is a Py3-compatible fork of re2, last updated
in 2015.
[5] https://en.wikipedia.org/wiki/Google_Code_Search
[6] https://github.com/google/re2/wiki/Syntax
[7] Quote: "As a matter of principle, RE2 does not support constructs
for which only backtracking solutions are known to exist. Thus,
backreferences and look-around assertions are not supported."
https://github.com/google/re2/wiki/WhyRE2


Re: [Python-Dev] (no subject)

2017-12-26 Thread MRAB

On 2017-12-26 07:01, Yogev Hendel wrote:


I don't know if this is the right place to put this,
but I've found the following lines of code results in an incredibly long 
processing time.

Perhaps it might be of use to someone.

import re
pat = re.compile('^/?(?:\\w+)/(?:[%\\w-]+/?)+/?$')
pat.match('/t/a-aa-aa-a-aa-aa-aa-aa-aa-aa./')

The pattern has a repeated repeat, which results in catastrophic 
backtracking.


As an example, think about how the pattern (?:a+)+b would try to match 
the string 'aaac'.


Match 'aaa', but not 'c'.

Match 'aa' and 'a', but not 'c'.

Match 'a' and 'aa', but not 'c'.

Match 'a' and 'a' and 'a', but not 'c'.

That's 4 failed attempts.

Now try to match the string 'aaaac'.

Match 'aaaa', but not 'c'.

Match 'aaa' and 'a', but not 'c'.

Match 'aa' and 'aa', but not 'c'.

Match 'aa' and 'a' and 'a', but not 'c'.

Match 'a' and 'aaa', but not 'c'.

Match 'a' and 'aa' and 'a', but not 'c'.

Match 'a' and 'a' and 'aa', but not 'c'.

Match 'a' and 'a' and 'a' and 'a', but not 'c'.

That's 8 failed attempts.

Each additional 'a' in the string to match will double the number of 
attempts.


Your pattern has (?:[%\w-]+/?)+, and the '/' is optional. The string has 
a '.', which the pattern can't match, but it'll keep trying until it 
finally fails.


If you add just 1 more 'a' or '-' to the string, it'll take twice as 
long as it does now.


You need to think more carefully about how the pattern matches and what 
it'll do when it doesn't match.
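
A small, hedged demonstration of that doubling, using the (?:a+)+b example
from above (exact timings vary by machine and Python version; shrink the
range if it runs too long):

import re
import time

pat = re.compile(r'(?:a+)+b')   # repeated repeat, as described above

for n in range(16, 22):
    text = 'a' * n + 'c'        # the 'c' can never match, so every split of the a's is tried
    start = time.perf_counter()
    assert pat.match(text) is None
    print(n, f'{time.perf_counter() - start:.3f}s')   # time roughly doubles as n grows by 1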



Re: [Python-Dev] (no subject)

2017-06-30 Thread Rob Boehne

Victor,

Thanks - I will comment on the issue WRT backporting the fix.
If you have particular issues you’d like me to look into, just point me in
the right direction.

Thanks,

Rob



On 6/30/17, 2:29 AM, "Victor Stinner"  wrote:

>Hi Rob,
>
>2017-06-29 23:34 GMT+02:00 Rob Boehne :
>> I'm new to the list, and contributing to Python specifically, and I'm
>> interested in getting master and 3.6 branches building and working
>> "better" on UNIX.
>> I've been looking at a problem building 3.6 on HP-UX and see a PR was
>> merged into master, https://github.com/python/cpython/pull/1351 and I'd
>> like to see it applied to 3.6.  I'm happy to create a PR with a
>> cherry-picked commit, and/or test.
>
>Sure, this change can be backported to 3.6, maybe also to 3.5. But,
>hmm, I would suggest first focusing on the master branch and fixing most
>HP-UX issues before spending time on backports. I would prefer to see
>most tests of the important modules pass on HP-UX (e.g. test_os).
>
>For a backport, you can directly comment http://bugs.python.org/issue30183
>
>Victor



Re: [Python-Dev] (no subject)

2015-02-13 Thread Guido van Rossum
I am still in favor of this PEP but have run out of time to review it and
the feedback. I'm going on vacation for a week or so; maybe I'll find time,
and if not I'll start reviewing this around Feb 23.

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] (no subject)

2015-02-11 Thread Ian Lee
I split off a separate thread on python-ideas [1] specific to the idea of
introducing + and += operators on a dict.

[1] https://mail.python.org/pipermail/python-ideas/2015-February/031748.html


~ Ian Lee

On Tue, Feb 10, 2015 at 10:35 PM, John Wong gokoproj...@gmail.com wrote:



 On Wed, Feb 11, 2015 at 12:35 AM, Ian Lee ianlee1...@gmail.com wrote:

 +1 for adding + or | operator for merging dicts. To me this operation:

  >>> {'x': 1, 'y': 2} + {'z': 3}
 {'x': 1, 'y': 2, 'z': 3}

 Is very clear.  The only potentially non obvious case I can see then is
 when there are duplicate keys, in which case the syntax could just be
 defined that last setter wins, e.g.:

  >>> {'x': 1, 'y': 2} + {'x': 3}
 {'x': 3, 'y': 2}

 Which is analogous to the example:

 new_dict = dict1.copy()
 new_dict.update(dict2)


 Well, looking at just list:
 a + b yields a new list
 a += b yields a modified a
 then there is also .extend on list, etc.

 so do we want to follow list's footsteps? I like + because + is more
 natural to read. Maybe this needs to be a separate thread. I am actually
 amazed to remember dict + dict is not possible... there must be a reason
 (performance??) for this...



Re: [Python-Dev] (no subject)

2015-02-11 Thread Greg Ewing

John Wong wrote:


I am actually 
amazed to remember dict + dict is not possible... there must be a reason 
(performance??) for this...


I think it's mainly because there is no obviously
correct answer to the question of what to do about
duplicate keys.

--
Greg


Re: [Python-Dev] (no subject)

2015-02-11 Thread Georg Brandl
On 02/10/2015 10:33 AM, Paul Moore wrote:
 On 10 February 2015 at 00:29, Neil Girdhar mistersh...@gmail.com wrote:
  function(**kw_arguments, **more_arguments)
 If the key key1 is in both dictionaries, more_arguments wins, right?


 There was some debate and it was decided that duplicate keyword arguments
 would remain an error (for now at least).  If you want to merge the
 dictionaries with overriding, then you can still do:

 function(**{**kw_arguments, **more_arguments})

 because **-unpacking in dicts overrides as you guessed.
 
 Eww. Seriously, function(**{**kw_arguments, **more_arguments}) feels
 more like a Perl executable line noise construct than anything I'd
 ever want to see in Python. And taking something that doesn't work and
 saying you can make it work by wrapping **{...} round it just seems
 wrong.

I don't think people would want to write the above.

I like the sequence and dict flattening part of the PEP, mostly because it
is consistent and should be easy to understand, but the comprehension syntax
enhancements seem to be bad for readability and comprehending what the code
does.

The call syntax part is a mixed bag: on the one hand it is nice to be
consistent with the extended possibilities in literals (flattening),
but on the other hand there would be small but annoying inconsistencies
anyways (e.g. the duplicate kwarg case above).

Georg



Re: [Python-Dev] (no subject)

2015-02-11 Thread John Wong
On Wed, Feb 11, 2015 at 12:35 AM, Ian Lee ianlee1...@gmail.com wrote:

 +1 for adding + or | operator for merging dicts. To me this operation:

  >>> {'x': 1, 'y': 2} + {'z': 3}
 {'x': 1, 'y': 2, 'z': 3}

 Is very clear.  The only potentially non obvious case I can see then is
 when there are duplicate keys, in which case the syntax could just be
 defined that last setter wins, e.g.:

  >>> {'x': 1, 'y': 2} + {'x': 3}
 {'x': 3, 'y': 2}

 Which is analogous to the example:

 new_dict = dict1.copy()
 new_dict.update(dict2)


Well, looking at just list:
a + b yields a new list
a += b yields a modified a
then there is also .extend on list, etc.

so do we want to follow list's footsteps? I like + because + is more natural
to read. Maybe this needs to be a separate thread. I am actually amazed to
remember dict + dict is not possible... there must be a reason
(performance??) for this...


Re: [Python-Dev] (no subject)

2015-02-11 Thread Greg Ewing

Georg Brandl wrote:

The call syntax part is a mixed bag: on the one hand it is nice to be
consistent with the extended possibilities in literals (flattening),
but on the other hand there would be small but annoying inconsistencies
anyways (e.g. the duplicate kwarg case above).


That inconsistency already exists -- duplicate keys are
allowed in dict literals but not calls:

>>> {'a': 1, 'a': 2}
{'a': 2}

--
Greg


Re: [Python-Dev] (no subject)

2015-02-11 Thread Antoine Pitrou
On Wed, 11 Feb 2015 18:45:40 +1300
Greg Ewing greg.ew...@canterbury.ac.nz wrote:
 Antoine Pitrou wrote:
 >>> bytearray(b"a") + b"bc"
 bytearray(b'abc')
 >>> b"a" + bytearray(b"bc")
 b'abc'

  It's quite convenient.
 
 It's a bit disconcerting that the left operand wins,
 rather than one of them being designated as the
 wider type, as occurs with many other operations on
 mixed types, e.g. int + float.

There is no wider type here. This behaviour is perfectly logical.

 In any case, these seem to be special-case combinations.

No:

>>> b"abc" + array.array("b", b"def")
b'abcdef'
>>> bytearray(b"abc") + array.array("b", b"def")
bytearray(b'abcdef')

Regards

Antoine.




Re: [Python-Dev] (no subject)

2015-02-10 Thread Antoine Pitrou
On Mon, 09 Feb 2015 18:06:02 -0800
Ethan Furman et...@stoneleaf.us wrote:
 On 02/09/2015 05:14 PM, Victor Stinner wrote:
  
  def partial(func, *args, **keywords):
      def newfunc(*fargs, **fkeywords):
          return func(*(args + fargs), **keywords, **fkeywords)
      ...
      return newfunc
  
  The new code behaves differently since Neil said that an error is
  raised if fkeywords and keywords have keys in common. By the way, this
  must be written in the PEP.
 
 
 That line should read
 
 return func(*(args + fargs), **{**keywords, **fkeywords})
 
 to avoid the duplicate key error and keep the original functionality.

While losing readability. What's the point exactly?
One line over 112055 (as shown by Victor) can be collapsed away?
Wow, that's sure gonna change Python programming in a massively
beneficial way...

Regards

Antoine.




Re: [Python-Dev] (no subject)

2015-02-10 Thread Terry Reedy

On 2/9/2015 7:29 PM, Neil Girdhar wrote:

For some reason I can't seem to reply using Google groups, which is
telling me this is a read-only mirror (anyone know why?)


I presume spam prevention.  Most spam on python-list comes from the 
read-write GG mirror.


--
Terry Jan Reedy



Re: [Python-Dev] (no subject)

2015-02-10 Thread Victor Stinner
On 10 Feb 2015 at 06:48, Greg Ewing greg.ew...@canterbury.ac.nz wrote:
 It could potentially be a little more efficient by
 eliminating the construction of an intermediate list.

Is it the case in the implementation? If it has to create a temporary
list/tuple, I would prefer not to use it.

After long years of development, I chose to limit myself to one instruction
per line. This is for different reasons:
- I spend more time reading code than writing code; readability matters
- it's easier to debug: most debuggers work line by line, or at least that is a
convenient way to use them. If you step instruction by instruction, you usually
have to read assembler/bytecode to find the current instruction
- profilers compute statistics per line, not per instruction (it's also the
case for tracemalloc)
- tracebacks only give the line number, not the column
- etc.

So I now prefer more verbose code, even if it is longer to write and may look
less efficient.

 Same again, multiple ** avoids construction of an
 intermediate dict.

Again, is it the case in the implementation? It may be possible to modify
CPython to really avoid a temporary dict (at least for some kinds of Python
functions), but it would be a large refactoring.

Usually, if an operation is not efficient, it's not implemented, so users
don't try to use it and may even write their code differently (to
avoid the performance issue).

(But slow operations exist, like list.remove.)

Victor


Re: [Python-Dev] (no subject)

2015-02-10 Thread Serhiy Storchaka

On 10.02.15 04:06, Ethan Furman wrote:

 return func(*(args + fargs), **{**keywords, **fkeywords})


We don't use [*args, *fargs] for concatenating lists, but args + fargs. 
Why not use + or | operators for merging dicts?





Re: [Python-Dev] (no subject)

2015-02-10 Thread Antoine Pitrou
On Tue, 10 Feb 2015 19:04:03 +1300
Greg Ewing greg.ew...@canterbury.ac.nz wrote:
 Donald Stufft wrote:
  
  perhaps a better 
  solution is to simply make it so that something like ``a_list + 
  an_iterable`` is valid and the iterable would just be consumed and +’d 
  onto the list.
 
 I don't think I like the asymmetry that this would
 introduce into + on lists. Currently
 
 [1, 2, 3] + (4, 5, 6)
 
 is an error because it's not clear whether the
 programmer intended the result to be a list or
 a tuple.

 >>> bytearray(b"a") + b"bc"
 bytearray(b'abc')
 >>> b"a" + bytearray(b"bc")
 b'abc'

It's quite convenient. In many contexts lists and tuples are quite
interchangeable (for example when unpacking).

Regards

Antoine.




Re: [Python-Dev] (no subject)

2015-02-10 Thread Paul Moore
On 10 February 2015 at 01:48, Donald Stufft don...@stufft.io wrote:
 I am really really -1 on the comprehension syntax.

[... omitting because gmail seems to have messed up the quoting ...]

 I don’t think * means “loop” anywhere else in Python and I would never
 “guess” that [*item for item in iterable] meant that. It’s completely non
 intuitive. Anywhere else you see *foo it’s unpacking a tuple not making an
 inner loop. That means that anywhere else in Python *item is the same thing
 as item[0], item[1], item[2], …, but this PEP makes it so just inside of a
 comprehension it actually means “make a second, inner loop” instead of what
 I think anyone who has learned that syntax would expect, which is it should
 be equivalent to [(item[0], item[1], item[2], …) for item in iterable].

I agree completely with Donald here. The comprehension syntax has
consistently been the part of the proposal that has resulted in
confused questions from reviewers, and I don't think it's at all
intuitive.

Is it allowable to vote on parts of the PEP separately? If not, then
the comprehension syntax is enough for me to reject the whole
proposal. If we can look at parts in isolation, I'm OK with saying -1
to the comprehension syntax and then we can look at whether the other
parts of the PEP add enough to be worth it (the comprehension side is
enough of a distraction that I haven't really considered the other
bits yet).

Paul


Re: [Python-Dev] (no subject)

2015-02-10 Thread Paul Moore
On 10 February 2015 at 00:29, Neil Girdhar mistersh...@gmail.com wrote:
  function(**kw_arguments, **more_arguments)
 If the key key1 is in both dictionaries, more_arguments wins, right?


 There was some debate and it was decided that duplicate keyword arguments
 would remain an error (for now at least).  If you want to merge the
 dictionaries with overriding, then you can still do:

 function(**{**kw_arguments, **more_arguments})

 because **-unpacking in dicts overrides as you guessed.

Eww. Seriously, function(**{**kw_arguments, **more_arguments}) feels
more like a Perl executable line noise construct than anything I'd
ever want to see in Python. And taking something that doesn't work and
saying you can make it work by wrapping **{...} round it just seems
wrong.

Paul


Re: [Python-Dev] (no subject)

2015-02-10 Thread Mark Lawrence

On 10/02/2015 13:23, Antoine Pitrou wrote:

On Tue, 10 Feb 2015 23:16:38 +1000
Nick Coghlan ncogh...@gmail.com wrote:

On 10 Feb 2015 19:24, Terry Reedy tjre...@udel.edu wrote:


On 2/9/2015 7:29 PM, Neil Girdhar wrote:


For some reason I can't seem to reply using Google groups, which is
telling me this is a read-only mirror (anyone know why?)



I presume spam prevention.  Most spam on python-list comes from the

read-write GG mirror.

There were also problems with Google Groups getting the reply-to headers
wrong (so if someone flipped the mirror to read-only: thank you!)

With any luck, we'll have a native web gateway later this year after
Mailman 3 is released, so posting through Google Groups will be less
desirable.


There is already a Web and NNTP gateway with Gmane:
http://news.gmane.org/gmane.comp.python.devel

No need to rely on Google's mediocre services.

Regards

Antoine.



Highly recommended, as there is effectively zero spam.

--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence



Re: [Python-Dev] (no subject)

2015-02-10 Thread Ian Lee
+1 for adding + or | operator for merging dicts. To me this operation:

>>> {'x': 1, 'y': 2} + {'z': 3}
{'x': 1, 'y': 2, 'z': 3}

Is very clear.  The only potentially non obvious case I can see then is
when there are duplicate keys, in which case the syntax could just be
defined that last setter wins, e.g.:

>>> {'x': 1, 'y': 2} + {'x': 3}
{'x': 3, 'y': 2}

Which is analogous to the example:

new_dict = dict1.copy()
new_dict.update(dict2)
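
For reference, a small sketch of that "last setter wins" behaviour using
constructs that do exist: copy()/update() as shown above, ** unpacking in
dict displays (Python 3.5+), and the | operator that later arrived with
PEP 584 (Python 3.9+):

d1 = {'x': 1, 'y': 2}
d2 = {'x': 3}

merged = d1.copy()
merged.update(d2)                  # the copy()/update() spelling from above
assert merged == {'x': 3, 'y': 2}  # last setter wins

assert {**d1, **d2} == merged      # PEP 448 display unpacking (Python 3.5+)
assert d1 | d2 == merged           # PEP 584 union operator (Python 3.9+)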


~ Ian Lee

On Tue, Feb 10, 2015 at 12:11 AM, Serhiy Storchaka storch...@gmail.com
wrote:

 On 10.02.15 04:06, Ethan Furman wrote:

  return func(*(args + fargs), **{**keywords, **fkeywords})


 We don't use [*args, *fargs] for concatenating lists, but args + fargs.
 Why not use + or | operators for merging dicts?





Re: [Python-Dev] (no subject)

2015-02-10 Thread Greg Ewing

Antoine Pitrou wrote:

>>> bytearray(b"a") + b"bc"
bytearray(b'abc')
>>> b"a" + bytearray(b"bc")
b'abc'

It's quite convenient.


It's a bit disconcerting that the left operand wins,
rather than one of them being designated as the
wider type, as occurs with many other operations on
mixed types, e.g. int + float.

In any case, these seem to be special-case combinations.
It's not so promiscuous as to accept any old iterable
on the right:

>>> b"a" + [1,2,3]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: can't concat bytes to list
>>> [1,2,3] + b"a"
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: can only concatenate list (not "bytes") to list

--
Greg


Re: [Python-Dev] (no subject)

2015-02-10 Thread Greg Ewing

Donald Stufft wrote:


1. The statement *item is roughly the same thing as (item[0], item[1], item[n])


No, it's not -- that would make it equivalent to tuple(item),
which is not what it means in any of its existing usages.

What it *is* roughly equivalent to is

   item[0], item[1], item[n]

i.e. *without* the parens, whatever that means in the
context concerned. In the context of a function call,
it has the effect of splicing the sequence in as if
you had written each item out as a separate expression.

You do have a valid objection insofar as this currently
has no meaning at all in a comprehension, i.e. this
is a syntax error:

   [item[0], item[1], item[n] for item in items]

So we would be giving a meaning to something that
doesn't currently have a meaning, rather than changing
an existing meaning, if you see what I mean.

--
Greg


Re: [Python-Dev] (no subject)

2015-02-10 Thread Greg Ewing

Victor Stinner wrote:
On 10 Feb 2015 at 06:48, Greg Ewing greg.ew...@canterbury.ac.nz wrote:

  It could potentially be a little more efficient by
  eliminating the construction of an intermediate list.

Is it the case in the implementation? If it has to create a temporary 
list/tuple, I will prefer to not use it.


The function call machinery will create a new tuple for
the positional args in any case. But if you manually
combine your * args into a tuple before calling, there
are *two* tuple allocations being done. Passing all the
* args directly into the call would allow one of them
to be avoided.

Similarly for dicts and ** args.

--
Greg


Re: [Python-Dev] (no subject)

2015-02-10 Thread Nick Coghlan
On 10 Feb 2015 19:24, Terry Reedy tjre...@udel.edu wrote:

 On 2/9/2015 7:29 PM, Neil Girdhar wrote:

 For some reason I can't seem to reply using Google groups, which is
 telling me this is a read-only mirror (anyone know why?)


 I presume spam prevention.  Most spam on python-list comes from the
read-write GG mirror.

There were also problems with Google Groups getting the reply-to headers
wrong (so if someone flipped the mirror to read-only: thank you!)

With any luck, we'll have a native web gateway later this year after
Mailman 3 is released, so posting through Google Groups will be less
desirable.

Cheers,
Nick.


 --
 Terry Jan Reedy




Re: [Python-Dev] (no subject)

2015-02-10 Thread Eli Bendersky
On Tue, Feb 10, 2015 at 1:33 AM, Paul Moore p.f.mo...@gmail.com wrote:

 On 10 February 2015 at 00:29, Neil Girdhar mistersh...@gmail.com wrote:
   function(**kw_arguments, **more_arguments)
  If the key key1 is in both dictionaries, more_arguments wins, right?
 
 
  There was some debate and it was decided that duplicate keyword arguments
  would remain an error (for now at least).  If you want to merge the
  dictionaries with overriding, then you can still do:
 
  function(**{**kw_arguments, **more_arguments})
 
  because **-unpacking in dicts overrides as you guessed.

 Eww. Seriously, function(**{**kw_arguments, **more_arguments}) feels
 more like a Perl executable line noise construct than anything I'd
 ever want to see in Python. And taking something that doesn't work and
 saying you can make it work by wrapping **{...} round it just seems
 wrong.


+1 to this and similar reasoning

I find the syntax proposed in PEP 448 incredibly obtuse, and I don't think
it's worth it. Python has never placed terseness of expression as its
primary goal, but this is mainly what the PEP is aiming at. -1 on the PEP
for me, at least in its current form.

Eli


Re: [Python-Dev] (no subject)

2015-02-10 Thread Antoine Pitrou
On Tue, 10 Feb 2015 23:16:38 +1000
Nick Coghlan ncogh...@gmail.com wrote:
 On 10 Feb 2015 19:24, Terry Reedy tjre...@udel.edu wrote:
 
  On 2/9/2015 7:29 PM, Neil Girdhar wrote:
 
  For some reason I can't seem to reply using Google groups, which is
  telling me this is a read-only mirror (anyone know why?)
 
 
  I presume spam prevention.  Most spam on python-list comes from the
 read-write GG mirror.
 
 There were also problems with Google Groups getting the reply-to headers
 wrong (so if someone flipped the mirror to read-only: thank you!)
 
 With any luck, we'll have a native web gateway later this year after
 Mailman 3 is released, so posting through Google Groups will be less
 desirable.

There is already a Web and NNTP gateway with Gmane:
http://news.gmane.org/gmane.comp.python.devel

No need to rely on Google's mediocre services.

Regards

Antoine.




Re: [Python-Dev] (no subject)

2015-02-10 Thread Nick Coghlan
On 10 Feb 2015 19:41, Paul Moore p.f.mo...@gmail.com wrote:

 I agree completely with Donald here. The comprehension syntax has
 consistently been the part of the proposal that has resulted in
 confused questions from reviewers, and I don't think it's at all
 intuitive.

 Is it allowable to vote on parts of the PEP separately? If not, then
 the comprehension syntax is enough for me to reject the whole
 proposal. If we can look at parts in isolation, I'm OK with saying -1
 to the comprehension syntax and then we can look at whether the other
 parts of the PEP add enough to be worth it (the comprehension side is
 enough of a distraction that I haven't really considered the other
 bits yet).

It occurs to me that the PEP effectively changes the core of a generator
expression from yield x to yield from x if the tuple expansion syntax
is used. If we rejected the yield *x syntax for standalone yield
expressions, I don't think it makes sense to now add it for generator
expressions.
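
Spelled out as explicit generator functions, the analogy looks roughly like
this (a sketch for illustration, not text from the PEP):

# What a genexp does today: an implicit "yield x" per item.
def today(items):
    for x in items:
        yield x

# What "(*x for x in items)" would do under the PEP: an implicit "yield from x".
def proposed(items):
    for x in items:
        yield from x

assert list(today([[1, 2], [3]])) == [[1, 2], [3]]
assert list(proposed([[1, 2], [3]])) == [1, 2, 3]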

So I guess that adds me to the -1 camp on the comprehension/generator
expression part of the story - it doesn't make things all that much easier
to write than the relevant nested loop, and it makes them notably harder to
read.

I haven't formed an opinion on the rest of the PEP yet, as it's been a
while since I read the full text. I'll read through the latest version
tomorrow.

Regards,
Nick.


 Paul


Re: [Python-Dev] (no subject)

2015-02-09 Thread Donald Stufft

 On Feb 9, 2015, at 8:34 PM, Neil Girdhar mistersh...@gmail.com wrote:
 
 
 
 On Mon, Feb 9, 2015 at 7:53 PM, Donald Stufft don...@stufft.io wrote:
 
 On Feb 9, 2015, at 7:29 PM, Neil Girdhar mistersh...@gmail.com wrote:
 
 For some reason I can't seem to reply using Google groups, which is 
 telling me this is a read-only mirror (anyone know why?)  Anyway, I'm going 
 to answer as best I can the concerns.
 
 Antoine said:
 
 To be clear, the PEP will probably be useful for one single line of 
 Python code every 1. This is a very weak case for such an intrusive 
 syntax addition. I would support the PEP if it only added the simple 
 cases of tuple unpacking, left alone function call conventions, and 
 didn't introduce **-unpacking. 
 
 To me this is more of a syntax simplification than a syntax addition.  For 
 me the **-unpacking is the most useful part. Regarding utility, it seems 
 that many of the people on r/python were pretty excited about this PEP: 
 http://www.reddit.com/r/Python/comments/2synry/so_8_peps_are_currently_being_proposed_for_python/
 
 —
 
 Victor noticed that there's a mistake with the code:
 
  ranges = [range(i) for i in range(5)] 
  [*item for item in ranges] 
 [0, 0, 1, 0, 1, 2, 0, 1, 2, 3] 
 
 It should be a range(4) in the code.  The * applies to only item.  It is 
 the same as writing:
 
 [*range(0), *range(1), *range(2), *range(3), *range(4)]
 
 which is the same as unpacking all of those ranges into a list.
 
  function(**kw_arguments, **more_arguments) 
 If the key key1 is in both dictionaries, more_arguments wins, right? 
 
 There was some debate and it was decided that duplicate keyword arguments 
 would remain an error (for now at least).  If you want to merge the 
 dictionaries with overriding, then you can still do:
 
 function(**{**kw_arguments, **more_arguments})
 
 because **-unpacking in dicts overrides as you guessed.
 
 —
 
 
 
 On Mon, Feb 9, 2015 at 7:12 PM, Donald Stufft don...@stufft.io wrote:
 
 On Feb 9, 2015, at 4:06 PM, Neil Girdhar mistersh...@gmail.com wrote:
 
 Hello all,
 
 The updated PEP 448 (https://www.python.org/dev/peps/pep-0448/) is implemented now based on 
 some early work by Thomas Wouters (in 2008) and Florian Hahn (2013) and 
 recently completed by Joshua Landau and me.
 
 The issue tracker http://bugs.python.org/issue2292 has a working patch.  Would someone be 
 able to review it?
 
 
 I just skimmed over the PEP and it seems like it’s trying to solve a few 
 different things:
 
 * Making it easy to combine multiple lists and additional positional args 
 into a function call
 * Making it easy to combine multiple dicts and additional keyword args into 
 a functional call
 * Making it easy to do a single level of nested iterable flatten.
 
 I would say it's:
 * making it easy to unpack iterables and mappings in function calls
 * making it easy to unpack iterables  into list and set displays and 
 comprehensions, and
 * making it easy to unpack mappings into dict displays and comprehensions.
 
  
 
 Looking at the syntax in the PEP I had a hard time detangling what exactly 
 it was doing even with reading the PEP itself. I wonder if there isn’t a way 
 to combine simpler more generic things to get the same outcome.
 
 Looking at the Making it easy to combine multiple lists and additional 
 positional args into a  function call aspect of this, why is:
 
 print(*[1], *[2], 3) better than print(*[1] + [2] + [3])?
 
 That's already doable in Python right now and doesn't require anything new 
 to handle it.
 
 Admittedly, this wasn't a great example.  But, if [1] and [2] had been 
 iterables, you would have to cast each to list, e.g.,
 
 accumulator = []
 accumulator.extend(a) 
 accumulator.append(b)
 accumulator.extend(c)
 print(*accumulator)
 
 replaces
 
 print(*a, b, *c)
 
 where a and c are iterable.  The latter version is also more efficient 
 because it unpacks only a onto the stack, allocating no auxiliary list.
 
 Honestly that doesn’t seem like the way I’d write it at all, if they might 
 not be lists I’d just cast them to lists:
 
 print(*list(a) + [b] + list(c))
 
 Sure, that works too as long as you put in the missing parentheses.

There are no missing parentheses; the * and ** come last in the order of 
operations (though the parens would likely make that more clear).

  
 
 But if casting to list really is that big a deal, then perhaps a better 
 solution is to simply make it so that something like ``a_list + an_iterable`` 
 is valid and the iterable would just be consumed and +’d onto the list. That 
 still feels like a more general solution and a far less surprising and easier 
 to read one.
 
 I understand.  However I just want to 

Re: [Python-Dev] (no subject)

2015-02-09 Thread Greg Ewing

Donald Stufft wrote:


perhaps a better 
solution is to simply make it so that something like ``a_list + 
an_iterable`` is valid and the iterable would just be consumed and +’d 
onto the list.


I don't think I like the asymmetry that this would
introduce into + on lists. Currently

   [1, 2, 3] + (4, 5, 6)

is an error because it's not clear whether the
programmer intended the result to be a list or
a tuple. I think that's a good thing.

Also, it would mean that

   [1, 2, 3] + "foo" == [1, 2, 3, "f", "o", "o"]

which would be surprising and probably not what
was intended.

--
Greg


Re: [Python-Dev] (no subject)

2015-02-09 Thread Neil Girdhar
On Tue, Feb 10, 2015 at 1:31 AM, Donald Stufft don...@stufft.io wrote:


  On Feb 10, 2015, at 12:55 AM, Greg Ewing greg.ew...@canterbury.ac.nz
 wrote:
 
  Donald Stufft wrote:
  However [*item for item in ranges] is mapped more to something like
 this:
  result = []
  for item in iterable:
 result.extend(*item)
 
  Actually it would be
 
result.extend(item)
 
  But if that bothers you, you could consider the expansion
  to be
 
  result = []
  for item in iterable:
      for item1 in item:
          result.append(item1)
 
  In other words, the * is shorthand for an extra level
  of looping.
 
  and it acts differently than if you just did *item outside of a list
 comprehension.
 
  Not sure what you mean by that. It seems closely
  analogous to the use of * in a function call to
  me.
 

 Putting aside the proposed syntax the current two statements are currently
 true:

 1. The statement *item is roughly the same thing as (item[0], item[1],
 item[n])
 2. The statement [something for thing in iterable] is roughly the same as:
result = []
for thing in iterable:
result.append(something)
 This is a single loop where an expression is run for each iteration of
 the
loop, and the return result of that expression is appended to the
 result.

 If you combine these two things, the something in #2 becomes *item, and
 since
 *item is roughly the same thing as (item[0], item[1], item[n]) what you end
 up with is something that *should* behave like:

 result = []
 for thing in iterable:
 result.append((thing[0], thing[1], thing[n]))


That is what [list(something) for thing in iterable] does.

The iterable unpacking rule might have been better explained as follows:


On the left of assignment * is packing, e.g.
a, b, *cs = iterable

On the right of an assignment, * is an unpacking, e.g.

xs = a, b, *cs


In either case, the  elements of cs are treated the same as a and b.

Do you agree that [*x for x in [as, bs, cs]] === [*as, *bs, *cs] ?

Then the elements of *as are unpacked into the list, the same way that
those elements are currently unpacked in a regular function call

f(*as) === f(as[0], as[1], ...)

Similarly,

[*as, *bs, *cs] === [as[0], as[1], …, bs[0], bs[1], …, cs[0], cs[1], …]

The rule for function calls is analogous:


In a function definition, * is a packing, collecting extra positional
arguments in a tuple.  E.g.,

def f(*args):

In a function call, * is an unpacking, expanding an iterable to populate
positional arguments.  E.g.,

f(*args)
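
A small runnable illustration of the symmetry described above (starred
assignment targets exist in all Python 3 versions; the starred tuple display
on the right needs the PEP 448 support that landed in 3.5):

a, b, *cs = [1, 2, 3, 4]         # packing on the left of '=': cs == [3, 4]
xs = (a, b, *cs)                 # unpacking on the right (Python 3.5+)
assert xs == (1, 2, 3, 4)

def f(*args):                    # packing: extra positional arguments land in the tuple args
    return args

assert f(*xs) == (1, 2, 3, 4)    # unpacking an iterable into positional arguments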

—

PEP 448 proposes having arbitrary numbers of unpackings in arbitrary
positions.

I will be updating the PEP this week if I can find the time.



 Or to put it another way:

  [*item for item in [[1, 2, 3], [4, 5, 6]]]
 [(1, 2, 3), (4, 5, 6)]


 Is a lot more consistent with what *thing and list comprehensions already
 mean
 in Python than for the answer to be [1, 2, 3, 4, 5, 6].
 ---
 Donald Stufft
 PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA



Re: [Python-Dev] (no subject)

2015-02-09 Thread Greg Ewing

Donald Stufft wrote:

However [*item for item in ranges] is mapped more to something like this:

result = []
for item in iterable:
result.extend(*item)


Actually it would be

   result.extend(item)

But if that bothers you, you could consider the expansion
to be

result = []
for item in iterable:
    for item1 in item:
        result.append(item1)

In other words, the * is shorthand for an extra level
of looping.

and it acts differently than if you 
just did *item outside of a list comprehension.


Not sure what you mean by that. It seems closely
analogous to the use of * in a function call to
me.

--
Greg


Re: [Python-Dev] (no subject)

2015-02-09 Thread Victor Stinner
On 10 Feb 2015 at 03:07, Ethan Furman et...@stoneleaf.us wrote:
 That line should read

 return func(*(args + fargs), **{**keywords, **fkeywords})

 to avoid the duplicate key error and keep the original functionality.

To me, this is just ugly. I prefer the original code which uses .update().

Maybe the PEP should be changed to behave like .update()?

Victor


Re: [Python-Dev] (no subject)

2015-02-09 Thread Ethan Furman
On 02/09/2015 05:14 PM, Victor Stinner wrote:
 
  def partial(func, *args, **keywords):
      def newfunc(*fargs, **fkeywords):
          return func(*(args + fargs), **keywords, **fkeywords)
      ...
      return newfunc
 
 The new code behaves differently since Neil said that an error is
 raised if fkeywords and keywords have keys in common. By the way, this
 must be written in the PEP.


That line should read

return func(*(args + fargs), **{**keywords, **fkeywords})

to avoid the duplicate key error and keep the original functionality.
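
A runnable sketch of that corrected version, assuming Python 3.5+ so the **
merge is available; call-time keywords override the baked-in ones, matching
functools.partial's update() behaviour:

def partial(func, *args, **keywords):
    def newfunc(*fargs, **fkeywords):
        # Call-time keyword arguments override the ones captured by partial().
        return func(*(args + fargs), **{**keywords, **fkeywords})
    return newfunc

greet = partial(print, 'hello', sep=', ', end='!\n')
greet('world')               # -> hello, world!
greet('world', end='?\n')    # -> hello, world?   (no duplicate-key error)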

--
~Ethan~





Re: [Python-Dev] (no subject)

2015-02-09 Thread Ethan Furman
On 02/09/2015 05:48 PM, Donald Stufft wrote:
 
 I don’t think * means “loop” anywhere else in Python and I would never 
 “guess” that
 
  [*item for item in iterable]
 
 meant that. It’s completely non intuitive. Anywhere else you see *foo it’s 
 unpacking a tuple not making an inner loop. That
 means that anywhere else in Python *item is the same thing as item[0], 
 item[1], item[2], …, but this PEP makes it so
 just inside of a comprehension it actually means “make a second, inner loop” 
 instead of what I think anyone who has
 learned that syntax would expect, which is it should be equivalent to 
 [(item[0], item[1], item[2], …) for item in iterable].

I agree with Donald.  I would expect a list of lists from that syntax... or 
maybe a list of tuples?

--
~Ethan~





Re: [Python-Dev] (no subject)

2015-02-09 Thread Donald Stufft

 On Feb 10, 2015, at 12:55 AM, Greg Ewing greg.ew...@canterbury.ac.nz wrote:
 
 Donald Stufft wrote:
 However [*item for item in ranges] is mapped more to something like this:
 result = []
 for item in iterable:
result.extend(*item)
 
 Actually it would be
 
   result.extend(item)
 
 But if that bothers you, you could consider the expansion
 to be
 
 result = []
 for item in iterable:
     for item1 in item:
         result.append(item1)
 
 In other words, the * is shorthand for an extra level
 of looping.
 
 and it acts differently than if you just did *item outside of a list 
 comprehension.
 
 Not sure what you mean by that. It seems closely
 analogous to the use of * in a function call to
 me.
 

Putting aside the proposed syntax the current two statements are currently
true:

1. The statement *item is roughly the same thing as (item[0], item[1], item[n])
2. The statement [something for thing in iterable] is roughly the same as:
   result = []
   for thing in iterable:
   result.append(something)
   This is a single loop where an expression is run for each iteration of the
   loop, and the return result of that expression is appended to the result.

If you combine these two things, the something in #2 becomes *item, and since
*item is roughly the same thing as (item[0], item[1], item[n]) what you end
up with is something that *should* behave like:

result = []
for thing in iterable:
result.append((thing[0], thing[1], thing[n]))


Or to put it another way:

 [*item for item in [[1, 2, 3], [4, 5, 6]]]
[(1, 2, 3), (4, 5, 6)]


Is a lot more consistent with what *thing and list comprehensions already mean
in Python than for the answer to be [1, 2, 3, 4, 5, 6].
---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA



Re: [Python-Dev] (no subject)

2015-02-09 Thread Neil Girdhar
On Tue, Feb 10, 2015 at 2:20 AM, Victor Stinner victor.stin...@gmail.com
wrote:

 To be logical, I expect [(*item) for item in mylist] to simply return mylist.


If you want simply mylist as a list, that is [*mylist]

 [*(item) for item in mylist] with mylist=[(1, 2), (3,)] could return [1,
 2, 3],

right

 as just [*mylist], so unpack mylist.


[*mylist] remains equivalent to list(mylist), just as it is now.   In one
case, you're unpacking the elements of the list, in the other you're
unpacking the list itself.

Victor



Re: [Python-Dev] (no subject)

2015-02-09 Thread Neil Girdhar
On Tue, Feb 10, 2015 at 2:08 AM, Victor Stinner victor.stin...@gmail.com
wrote:


  On 10 Feb 2015 at 03:07, Ethan Furman et...@stoneleaf.us wrote:
  That line should read
 
  return func(*(args + fargs), **{**keywords, **fkeywords})
 
  to avoid the duplicate key error and keep the original functionality.

 To me, this is just ugly. It prefers the original code which use .update().

 Maybe the PEP should be changed to behave as .update()?

 Victor


Just for clarity, Ethan is right, but it could also be written:

return func(*args, *fargs, **{**keywords, **fkeywords})

Best,

Neil




Re: [Python-Dev] (no subject)

2015-02-09 Thread Victor Stinner
To be logical, I expect [(*item) for item in mylist] to simply return mylist.

[*(item) for item in mylist] with mylist=[(1, 2), (3,)] could return [1, 2,
3], as just [*mylist], so unpack mylist.

Victor


Re: [Python-Dev] (no subject)

2015-02-09 Thread Neil Girdhar
ah, sorry… forget that I said just as it is now — I am losing track of
what's allowed in Python now!

On Tue, Feb 10, 2015 at 2:29 AM, Neil Girdhar mistersh...@gmail.com wrote:



 On Tue, Feb 10, 2015 at 2:20 AM, Victor Stinner victor.stin...@gmail.com
 wrote:

  To be logical, I expect [(*item) for item in mylist] to simply return
 mylist.


 If you want simply mylist as a list, that is [*mylist]

 [*(item) for item in mylist] with mylist=[(1, 2), (3,)] could return [1,
 2, 3],

 right

 as just [*mylist], so unpack mylist.


  [*mylist] remains equivalent to list(mylist), just as it is now.   In one
 case, you're unpacking the elements of the list, in the other you're
 unpacking the list itself.

 Victor



Re: [Python-Dev] (no subject)

2015-02-09 Thread Neil Girdhar
For some reason I can't seem to reply using Google groups, which is
telling me this is a read-only mirror (anyone know why?)  Anyway, I'm going
to answer as best I can the concerns.

Antoine said:

To be clear, the PEP will probably be useful for one single line of
 Python code every 1. This is a very weak case for such an intrusive
 syntax addition. I would support the PEP if it only added the simple
 cases of tuple unpacking, left alone function call conventions, and
 didn't introduce **-unpacking.


To me this is more of a syntax simplification than a syntax addition.  For
me the **-unpacking is the most useful part. Regarding utility, it seems
that many of the people on r/python were pretty excited about this PEP:
http://www.reddit.com/r/Python/comments/2synry/so_8_peps_are_currently_being_proposed_for_python/

—

Victor noticed that there's a mistake with the code:

 ranges = [range(i) for i in range(5)]
  [*item for item in ranges]
 [0, 0, 1, 0, 1, 2, 0, 1, 2, 3]


It should be a range(4) in the code.  The * applies to only item.  It is
the same as writing:

[*range(0), *range(1), *range(2), *range(3), *range(4)]

which is the same as unpacking all of those ranges into a list.
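
(For comparison, the same one-level flatten can already be spelled with
itertools; a sketch, not taken from the PEP:)

import itertools

ranges = [range(i) for i in range(5)]
print(list(itertools.chain.from_iterable(ranges)))
# [0, 0, 1, 0, 1, 2, 0, 1, 2, 3]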

 function(**kw_arguments, **more_arguments)
 If the key key1 is in both dictionaries, more_arguments wins, right?


There was some debate and it was decided that duplicate keyword arguments
would remain an error (for now at least).  If you want to merge the
dictionaries with overriding, then you can still do:

function(**{**kw_arguments, **more_arguments})

because **-unpacking in dicts overrides as you guessed.

—



On Mon, Feb 9, 2015 at 7:12 PM, Donald Stufft don...@stufft.io wrote:


 On Feb 9, 2015, at 4:06 PM, Neil Girdhar mistersh...@gmail.com wrote:

 Hello all,

 The updated PEP 448 (https://www.python.org/dev/peps/pep-0448/) is
 implemented now based on some early work by Thomas Wouters (in 2008) and
 Florian Hahn (2013) and recently completed by Joshua Landau and me.

 The issue tracker http://bugs.python.org/issue2292  has  a working
 patch.  Would someone be able to review it?


 I just skimmed over the PEP and it seems like it’s trying to solve a few
 different things:

 * Making it easy to combine multiple lists and additional positional args
 into a function call
 * Making it easy to combine multiple dicts and additional keyword args
 into a function call
 * Making it easy to do a single level of nested iterable flatten.


I would say it's:
* making it easy to unpack iterables and mappings in function calls
* making it easy to unpack iterables  into list and set displays and
comprehensions, and
* making it easy to unpack mappings into dict displays and comprehensions.




 Looking at the syntax in the PEP I had a hard time detangling what exactly
 it was doing even with reading the PEP itself. I wonder if there isn’t a
 way to combine simpler more generic things to get the same outcome.

 Looking at the Making it easy to combine multiple lists and additional
 positional args into a  function call aspect of this, why is:

 print(*[1], *[2], 3) better than print(*[1] + [2] + [3])?

 That's already doable in Python right now and doesn't require anything new
 to handle it.


Admittedly, this wasn't a great example.  But, if [1] and [2] had been
iterables, you would have to cast each to list, e.g.,

accumulator = []
accumulator.extend(a)
accumulator.append(b)
accumulator.extend(c)
print(*accumulator)

replaces

print(*a, b, *c)

where a and c are iterable.  The latter version is also more efficient
 because it unpacks onto the stack only, allocating no auxiliary list.


 Looking at the making it easy to do a single level of nested iterable
 'flatten' aspect of this, the example of:

  ranges = [range(i) for i in range(5)]
  [*item for item in ranges]
 [0, 0, 1, 0, 1, 2, 0, 1, 2, 3]

 Conceptually a list comprehension like [thing for item in iterable] can be
 mapped to a for loop like this:

 result = []
 for item in iterable:
 result.append(thing)

 However [*item for item in ranges] is mapped more to something like this:

 result = []
 for item in iterable:
 result.extend(*item)

 I feel like switching list comprehensions from append to extend just
 because of a * is really confusing and it acts differently than if you just
 did *item outside of a list comprehension. I feel like the
 itertools.chain() way of doing this is *much* clearer.

 Finally there's the make it easy to combine multiple dicts into a
 function call aspect of this. This I think is the biggest thing that this
 PEP actually adds, however I think it goes around it the wrong way. Sadly
 there is nothing like [1] + [2] for dictionaries. The closest thing is:

 kwargs = dict1.copy()
 kwargs.update(dict2)
 func(**kwargs)

 So what I wonder is if this PEP wouldn't be better off just using the
 existing methods for doing the kinds of things that I pointed out above,
 and instead defining + or | or some other symbol for something similar 

Re: [Python-Dev] (no subject)

2015-02-09 Thread Larry Hastings



What's an example of a way inspect.signature must change?  I thought PEP 
448 added new unpacking shortcuts which (for example) change the 
*caller* side of a function call.  I didn't realize it impacted the 
*callee* side too.



//arry/

On 02/09/2015 03:14 PM, Antoine Pitrou wrote:

On Tue, 10 Feb 2015 08:43:53 +1000
Nick Coghlan ncogh...@gmail.com wrote:

For example, the potential for arcane call arguments suggests the need for
a PEP 8 addition saying first standalone args, then iterable expansions,
then mapping expansions, even though syntactically any order would now be
permitted at call time.

There are other concerns:

- inspect.signature() must be updated to cover the new call
   possibilities

- function call performance must not be crippled by the new
   possibilities

Regards

Antoine.


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/larry%40hastings.org


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2015-02-09 Thread Neil Girdhar
Yes, that's exactly right.  It does not affect the callee.

Regarding function call performance, nothing has changed for the originally
accepted argument lists: the opcodes generated are the same and they are
processed in the same way.

Also, regarding calling argument order, not any order is allowed.  Regular
arguments must precede other kinds of arguments.  Keyword arguments must
precede **-args.  *-args must precede **-args.   However, I agree with
Antoine that PEP 8 should be updated to suggest that *-args should precede
any keyword arguments.  It is currently allowed to write f(x=2, *args),
which is equivalent to f(*args, x=2).
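
(A minimal check of that last equivalence, using an illustrative function:)

def f(a, b, x=None):
    return (a, b, x)

args = (1, 2)
# Both orderings are legal today; PEP 8 would prefer the second spelling.
assert f(x=3, *args) == f(*args, x=3) == (1, 2, 3)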

Best,

Neil

On Mon, Feb 9, 2015 at 7:30 PM, Larry Hastings la...@hastings.org wrote:



 What's an example of a way inspect.signature must change?  I thought PEP
 448 added new unpacking shortcuts which (for example) change the *caller*
 side of a function call.  I didn't realize it impacted the *callee* side
 too.


 */arry*

 On 02/09/2015 03:14 PM, Antoine Pitrou wrote:

 On Tue, 10 Feb 2015 08:43:53 +1000
 Nick Coghlan ncogh...@gmail.com ncogh...@gmail.com wrote:

  For example, the potential for arcane call arguments suggests the need for
 a PEP 8 addition saying first standalone args, then iterable expansions,
 then mapping expansions, even though syntactically any order would now be
 permitted at call time.

  There are other concerns:

 - inspect.signature() must be updated to cover the new call
   possibilities

 - function call performance must not be crippled by the new
   possibilities

 Regards

 Antoine.


 ___
 Python-Dev mailing 
 listPython-Dev@python.orghttps://mail.python.org/mailman/listinfo/python-dev
 Unsubscribe: 
 https://mail.python.org/mailman/options/python-dev/larry%40hastings.org



 ___
 Python-Dev mailing list
 Python-Dev@python.org
 https://mail.python.org/mailman/listinfo/python-dev
 Unsubscribe:
 https://mail.python.org/mailman/options/python-dev/mistersheik%40gmail.com


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2015-02-09 Thread Donald Stufft

 On Feb 9, 2015, at 7:29 PM, Neil Girdhar mistersh...@gmail.com wrote:
 
 For some reason I can't seem to reply using Google groups, which is 
 telling me this is a read-only mirror (anyone know why?)  Anyway, I'm going to 
 answer the concerns as best I can.
 
 Antoine said:
 
 To be clear, the PEP will probably be useful for one single line of 
 Python code every 1. This is a very weak case for such an intrusive 
 syntax addition. I would support the PEP if it only added the simple 
 cases of tuple unpacking, left alone function call conventions, and 
 didn't introduce **-unpacking. 
 
 To me this is more of a syntax simplification than a syntax addition.  For me 
 the **-unpacking is the most useful part. Regarding utility, it seems that 
 many of the people on r/python were pretty excited about this PEP: 
 http://www.reddit.com/r/Python/comments/2synry/so_8_peps_are_currently_being_proposed_for_python/
  
 http://www.reddit.com/r/Python/comments/2synry/so_8_peps_are_currently_being_proposed_for_python/
 
 —
 
 Victor noticed that there's a mistake with the code:
 
  ranges = [range(i) for i in range(5)] 
  [*item for item in ranges] 
 [0, 0, 1, 0, 1, 2, 0, 1, 2, 3] 
 
 It should be a range(4) in the code.  The * applies to only item.  It is 
 the same as writing:
 
 [*range(0), *range(1), *range(2), *range(3), *range(4)]
 
 which is the same as unpacking all of those ranges into a list.
 
  function(**kw_arguments, **more_arguments) 
 If the key key1 is in both dictionaries, more_arguments wins, right? 
 
 There was some debate and it was decided that duplicate keyword arguments 
 would remain an error (for now at least).  If you want to merge the 
 dictionaries with overriding, then you can still do:
 
 function(**{**kw_arguments, **more_arguments})
 
 because **-unpacking in dicts overrides as you guessed.
 
 —
 
 
 
 On Mon, Feb 9, 2015 at 7:12 PM, Donald Stufft don...@stufft.io 
 mailto:don...@stufft.io wrote:
 
 On Feb 9, 2015, at 4:06 PM, Neil Girdhar mistersh...@gmail.com 
 mailto:mistersh...@gmail.com wrote:
 
 Hello all,
 
 The updated PEP 448 (https://www.python.org/dev/peps/pep-0448/ 
 https://www.python.org/dev/peps/pep-0448/) is implemented now based on 
 some early work by Thomas Wouters (in 2008) and Florian Hahn (2013) and 
 recently completed by Joshua Landau and me.
 
 The issue tracker http://bugs.python.org/issue2292 
 http://bugs.python.org/issue2292  has  a working patch.  Would someone be 
 able to review it?
 
 
 I just skimmed over the PEP and it seems like it’s trying to solve a few 
 different things:
 
 * Making it easy to combine multiple lists and additional positional args 
 into a function call
 * Making it easy to combine multiple dicts and additional keyword args into a 
 function call
 * Making it easy to do a single level of nested iterable flatten.
 
 I would say it's:
 * making it easy to unpack iterables and mappings in function calls
 * making it easy to unpack iterables  into list and set displays and 
 comprehensions, and
 * making it easy to unpack mappings into dict displays and comprehensions.
 
  
 
 Looking at the syntax in the PEP I had a hard time detangling what exactly it 
 was doing even with reading the PEP itself. I wonder if there isn’t a way to 
 combine simpler more generic things to get the same outcome.
 
 Looking at the Making it easy to combine multiple lists and additional 
 positional args into a  function call aspect of this, why is:
 
 print(*[1], *[2], 3) better than print(*[1] + [2] + [3])?
 
 That's already doable in Python right now and doesn't require anything new to 
 handle it.
 
 Admittedly, this wasn't a great example.  But, if [1] and [2] had been 
 iterables, you would have to cast each to list, e.g.,
 
 accumulator = []
 accumulator.extend(a) 
 accumulator.append(b)
 accumulator.extend(c)
 print(*accumulator)
 
 replaces
 
 print(*a, b, *c)
 
 where a and c are iterable.  The latter version is also more efficient 
 because it unpacks onto the stack only, allocating no auxiliary list.

Honestly that doesn’t seem like the way I’d write it at all, if they might not 
be lists I’d just cast them to lists:

print(*list(a) + [b] + list(c))

But if casting to list really is that big a deal, then perhaps a better 
solution is to simply make it so that something like ``a_list + an_iterable`` 
is valid and the iterable would just be consumed and +’d onto the list. That 
still feels like a more general solution and a far less surprising and easier 
to read one.
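
(For reference, this is what the two spellings do today; the values are only
illustrative:)

a_list, an_iterable = [1, 2], range(3)
# a_list + an_iterable   # TypeError today: + wants another list
a_list += an_iterable    # += already accepts any iterable (it extends)
print(a_list)            # [1, 2, 0, 1, 2]
# Under PEP 448, [*[1, 2], *range(3)] spells the same result in one expression.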


 
 
 Looking at the making it easy to do a single level of nested iterable 
 'flatten' aspect of this, the example of:
 
  ranges = [range(i) for i in range(5)]
  [*item for item in ranges]
 [0, 0, 1, 0, 1, 2, 0, 1, 2, 3]
 
 Conceptually a list comprehension like [thing for item in iterable] can be 
 mapped to a for loop like this:
 
 result = []
 for item in iterable:
 result.append(thing)
 
 However [*item for item in ranges] is mapped more to something like 

Re: [Python-Dev] (no subject)

2015-02-09 Thread Barry Warsaw
On Feb 09, 2015, at 07:46 PM, Neil Girdhar wrote:

Also, regarding calling argument order, not any order is allowed.  Regular
arguments must precede other kinds of arguments.  Keyword arguments must
precede **-args.  *-args must precede **-args.   However, I agree with
Antoine that PEP 8 should be updated to suggest that *-args should precede
any keyword arguments.  It is currently allowed to write f(x=2, *args),
which is equivalent to f(*args, x=2).

But if we have to add a PEP 8 admonition against some syntax that's being
newly added, why is this an improvement?

I had some more snarky/funny comments to make, but I'll just say -1.  The
Rationale in the PEP doesn't sell me on it being an improvement to Python.

Cheers,
-Barry
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2015-02-09 Thread Neil Girdhar
The admonition is against syntax that currently exists.

On Mon, Feb 9, 2015 at 7:53 PM, Barry Warsaw ba...@python.org wrote:

 On Feb 09, 2015, at 07:46 PM, Neil Girdhar wrote:

 Also, regarding calling argument order, not any order is allowed.  Regular
 arguments must precede other kinds of arguments.  Keyword arguments must
 precede **-args.  *-args must precede **-args.   However, I agree with
 Antoine that PEP 8 should be updated to suggest that *-args should precede
 any keyword arguments.  It is currently allowed to write f(x=2, *args),
 which is equivalent to f(*args, x=2).

 But if we have to add a PEP 8 admonition against some syntax that's being
 newly added, why is this an improvement?

 I had some more snarky/funny comments to make, but I'll just say -1.  The
 Rationale in the PEP doesn't sell me on it being an improvement to Python.

 Cheers,
 -Barry
 ___
 Python-Dev mailing list
 Python-Dev@python.org
 https://mail.python.org/mailman/listinfo/python-dev
 Unsubscribe:
 https://mail.python.org/mailman/options/python-dev/mistersheik%40gmail.com

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2015-02-09 Thread Neil Girdhar
Just an FYI:
http://www.reddit.com/r/Python/comments/2v8g26/python_350_alpha_1_has_been_released/

448 was mentioned here (by Python lay people — not developers).

On Mon, Feb 9, 2015 at 7:56 PM, Neil Girdhar mistersh...@gmail.com wrote:

 The admonition is against syntax that currently exists.

 On Mon, Feb 9, 2015 at 7:53 PM, Barry Warsaw ba...@python.org wrote:

 On Feb 09, 2015, at 07:46 PM, Neil Girdhar wrote:

 Also, regarding calling argument order, not any order is allowed.
 Regular
 arguments must precede other kinds of arguments.  Keyword arguments must
 precede **-args.  *-args must precede **-args.   However, I agree with
 Antoine that PEP 8 should be updated to suggest that *-args should
 precede
 any keyword arguments.  It is currently allowed to write f(x=2, *args),
 which is equivalent to f(*args, x=2).

 But if we have to add a PEP 8 admonition against some syntax that's being
 newly added, why is this an improvement?

 I had some more snarky/funny comments to make, but I'll just say -1.  The
 Rationale in the PEP doesn't sell me on it being an improvement to Python.

 Cheers,
 -Barry
 ___
 Python-Dev mailing list
 Python-Dev@python.org
 https://mail.python.org/mailman/listinfo/python-dev
 Unsubscribe:
 https://mail.python.org/mailman/options/python-dev/mistersheik%40gmail.com



___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2015-02-09 Thread Victor Stinner
2015-02-10 1:29 GMT+01:00 Neil Girdhar mistersh...@gmail.com:
 For some reason I can't seem to reply using Google groups, which is
 telling me this is a read-only mirror (anyone know why?)  Anyway, I'm going
 to answer the concerns as best I can.

 Antoine said:

 To be clear, the PEP will probably be useful for one single line of
 Python code every 1. This is a very weak case for such an intrusive
 syntax addition. I would support the PEP if it only added the simple
 cases of tuple unpacking, left alone function call conventions, and
 didn't introduce **-unpacking.


 To me this is more of a syntax simplification than a syntax addition.  For
 me the **-unpacking is the most useful part. Regarding utility, it seems
 that many of the people on r/python were pretty excited about this PEP:
 http://www.reddit.com/r/Python/comments/2synry/so_8_peps_are_currently_being_proposed_for_python/

I used grep to find how many times dict.update() is used.

haypo@selma$ wc -l Lib/*.py
()
 112055 total
haypo@selma$ grep '\.update(.\+)' Lib/*.py|wc -l
63

So there are 63 or less (it's a regex, I didn't check each line) calls
to dict.update() on a total of 112,055 lines.

I found a few pieces of code using the pattern:
dict1.update(dict2); func(**dict1). Examples:


functools.py:

def partial(func, *args, **keywords):
def newfunc(*fargs, **fkeywords):
newkeywords = keywords.copy()
newkeywords.update(fkeywords)
return func(*(args + fargs), **newkeywords)
...
return newfunc

=

def partial(func, *args, **keywords):
def newfunc(*fargs, **fkeywords):
return func(*(args + fargs), **keywords, **fkeywords)
...
return newfunc

The new code behaves differently since Neil said that an error is
raised if fkeywords and keywords have keys in common. By the way, this
must be written in the PEP.
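
(A small sketch of that behavioural difference, using an illustrative partial:)

from functools import partial

f = partial(print, sep='-')
f(1, 2, sep='+')   # prints 1+2; with .update(), the call-time sep wins
# With func(*(args + fargs), **keywords, **fkeywords), the same call would
# raise TypeError because 'sep' appears in both keyword dicts.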


pdb.py:

ns = self.curframe.f_globals.copy()
ns.update(self.curframe_locals)
code.interact("*interactive*", local=ns)

Hum no sorry, ns is not used with ** here.

Victor
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2015-02-09 Thread Benjamin Peterson


On Mon, Feb 9, 2015, at 16:34, Ethan Furman wrote:
 On 02/09/2015 01:28 PM, Benjamin Peterson wrote:
  On Mon, Feb 9, 2015, at 16:06, Neil Girdhar wrote:
 
  The updated PEP 448 (https://www.python.org/dev/peps/pep-0448/) is
  implemented now based on some early work by Thomas Wouters (in 2008) and
  Florian Hahn (2013) and recently completed by Joshua Landau and me.
 
  The issue tracker http://bugs.python.org/issue2292  has  a working patch.
  Would someone be able to review it?
  
  The PEP is not even accepted.
 
 I believe somebody (Guido?) commented Why worry about accepting the PEP
 when there's no working patch?  -- or
 something to that effect.

On the other hand, I'd rather not do detailed reviews of patches that
won't be accepted. :)
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2015-02-09 Thread Guido van Rossum
FWIW, I've encouraged Neil and others to complete this code as a
prerequisite for a code review (but I can't review it myself). I am mildly
in favor of the PEP -- if the code works and looks maintainable I would
accept it. (A few things got changed in the PEP as a result of the work.)

On Mon, Feb 9, 2015 at 1:28 PM, Benjamin Peterson benja...@python.org
wrote:



 On Mon, Feb 9, 2015, at 16:06, Neil Girdhar wrote:
  Hello all,
 
  The updated PEP 448 (https://www.python.org/dev/peps/pep-0448/) is
  implemented now based on some early work by Thomas Wouters (in 2008) and
  Florian Hahn (2013) and recently completed by Joshua Landau and me.
 
  The issue tracker http://bugs.python.org/issue2292  has  a working
 patch.
  Would someone be able to review it?

 The PEP is not even accepted.
 ___
 Python-Dev mailing list
 Python-Dev@python.org
 https://mail.python.org/mailman/listinfo/python-dev
 Unsubscribe:
 https://mail.python.org/mailman/options/python-dev/guido%40python.org




-- 
--Guido van Rossum (python.org/~guido)
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2015-02-09 Thread Ethan Furman
On 02/09/2015 01:28 PM, Benjamin Peterson wrote:
 On Mon, Feb 9, 2015, at 16:06, Neil Girdhar wrote:

 The updated PEP 448 (https://www.python.org/dev/peps/pep-0448/) is
 implemented now based on some early work by Thomas Wouters (in 2008) and
 Florian Hahn (2013) and recently completed by Joshua Landau and me.

 The issue tracker http://bugs.python.org/issue2292  has  a working patch.
 Would someone be able to review it?
 
 The PEP is not even accepted.

I believe somebody (Guido?) commented Why worry about accepting the PEP when 
there's no working patch?  -- or
something to that effect.

--
~Ethan~



signature.asc
Description: OpenPGP digital signature
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2015-02-09 Thread Benjamin Peterson


On Mon, Feb 9, 2015, at 16:32, Guido van Rossum wrote:
 FWIW, I've encouraged Neil and others to complete this code as a
 prerequisite for a code review (but I can't review it myself). I am
 mildly
 in favor of the PEP -- if the code works and looks maintainable I would
 accept it. (A few things got changed in the PEP as a result of the work.)

In a way, it's a simplification, since functions are now simply called
with a sequence of generalized arguments; there's no privileged kwarg
or vararg. Of course, I wonder how much of f(**w, x, y, *k, *b, **d, c)
we would see...
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2015-02-09 Thread Benjamin Peterson


On Mon, Feb 9, 2015, at 16:06, Neil Girdhar wrote:
 Hello all,
 
 The updated PEP 448 (https://www.python.org/dev/peps/pep-0448/) is
 implemented now based on some early work by Thomas Wouters (in 2008) and
 Florian Hahn (2013) and recently completed by Joshua Landau and me.
 
 The issue tracker http://bugs.python.org/issue2292  has  a working patch.
 Would someone be able to review it?

The PEP is not even accepted.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject) PEP 448

2015-02-09 Thread Antoine Pitrou
On Mon, 9 Feb 2015 16:06:20 -0500
Neil Girdhar mistersh...@gmail.com wrote:
 Hello all,
 
 The updated PEP 448 (https://www.python.org/dev/peps/pep-0448/) is
 implemented now based on some early work by Thomas Wouters (in 2008) and
 Florian Hahn (2013) and recently completed by Joshua Landau and me.

To be clear, the PEP will probably be useful for one single line of
Python code every 1. This is a very weak case for such an intrusive
syntax addition. I would support the PEP if it only added the simple
cases of tuple unpacking, left alone function call conventions, and
didn't introduce **-unpacking.

Barring that, I really don't want to review the patch and I'm a rather
decided -1 on the current PEP.

Regards

Antoine.


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject) PEP 448

2015-02-09 Thread Victor Stinner
2015-02-10 0:51 GMT+01:00 Antoine Pitrou solip...@pitrou.net:
 The updated PEP 448 (https://www.python.org/dev/peps/pep-0448/) is
 implemented now based on some early work by Thomas Wouters (in 2008) and
 Florian Hahn (2013) and recently completed by Joshua Landau and me.

 To be clear, the PEP will probably be useful for one single line of
 Python code every 1.

@Neil: Can you maybe show us some examples of usage of the PEP 448 in
the Python stdlib? I mean find some functions where using the PEP
would be useful and show the code before/after (maybe in a code
review). It would help to get a better opinion on the PEP. I'm not
sure that examples in the PEP are the most relevant.

Victor
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2015-02-09 Thread Donald Stufft

 On Feb 9, 2015, at 4:06 PM, Neil Girdhar mistersh...@gmail.com wrote:
 
 Hello all,
 
 The updated PEP 448 (https://www.python.org/dev/peps/pep-0448/ 
 https://www.python.org/dev/peps/pep-0448/) is implemented now based on some 
 early work by Thomas Wouters (in 2008) and Florian Hahn (2013) and recently 
 completed by Joshua Landau and me.
 
 The issue tracker http://bugs.python.org/issue2292 
 http://bugs.python.org/issue2292  has  a working patch.  Would someone be 
 able to review it?
 

I just skimmed over the PEP and it seems like it’s trying to solve a few 
different things:

* Making it easy to combine multiple lists and additional positional args into 
a function call
* Making it easy to combine multiple dicts and additional keyword args into a 
function call
* Making it easy to do a single level of nested iterable flatten.

Looking at the syntax in the PEP I had a hard time detangling what exactly it 
was doing even with reading the PEP itself. I wonder if there isn’t a way to 
combine simpler more generic things to get the same outcome.

Looking at the Making it easy to combine multiple lists and additional 
positional args into a  function call aspect of this, why is:

print(*[1], *[2], 3) better than print(*[1] + [2] + [3])?

That's already doable in Python right now and doesn't require anything new to 
handle it.

Looking at the making it easy to do a single level of nested iterable 
'flatten' aspect of this, the example of:

 ranges = [range(i) for i in range(5)]
 [*item for item in ranges]
[0, 0, 1, 0, 1, 2, 0, 1, 2, 3]

Conceptually a list comprehension like [thing for item in iterable] can be 
mapped to a for loop like this:

result = []
for item in iterable:
result.append(thing)

However [*item for item in ranges] is mapped more to something like this:

result = []
for item in iterable:
result.extend(*item)

I feel like switching list comprehensions from append to extend just because of 
a * is really confusing and it acts differently than if you just did *item 
outside of a list comprehension. I feel like the itertools.chain() way of doing 
this is *much* clearer.

Finally there's the make it easy to combine multiple dicts into a function 
call aspect of this. This I think is the biggest thing that this PEP actually 
adds, however I think it goes around it the wrong way. Sadly there is nothing 
like [1] + [2] for dictionaries. The closest thing is:

kwargs = dict1.copy()
kwargs.update(dict2)
func(**kwargs)

So what I wonder is if this PEP wouldn't be better off just using the existing 
methods for doing the kinds of things that I pointed out above, and instead 
defining + or | or some other symbol for something similar to [1] + [2] but for 
dictionaries. This would mean that you could simply do:

func(**dict1 | dict(y=1) | dict2)

instead of

dict(**{'x': 1}, y=2, **{'z': 3})

I feel like not only does this genericize way better but it limits the impact 
and new syntax being added to Python and is a ton more readable.
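
(The two spellings can be compared side by side; note that a ``|`` union for
dicts did not exist at the time, but was later added by PEP 584 in Python 3.9:)

dict1, dict2 = {'x': 1}, {'z': 3}
print(dict(**{'x': 1}, y=2, **{'z': 3}))   # {'x': 1, 'y': 2, 'z': 3}  (3.5+)
print(dict1 | {'y': 2} | dict2)            # {'x': 1, 'y': 2, 'z': 3}  (3.9+)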


---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2015-02-09 Thread Nick Coghlan
On 10 Feb 2015 08:13, Benjamin Peterson benja...@python.org wrote:



 On Mon, Feb 9, 2015, at 16:34, Ethan Furman wrote:
  On 02/09/2015 01:28 PM, Benjamin Peterson wrote:
   On Mon, Feb 9, 2015, at 16:06, Neil Girdhar wrote:
  
   The updated PEP 448 (https://www.python.org/dev/peps/pep-0448/) is
   implemented now based on some early work by Thomas Wouters (in 2008)
and
   Florian Hahn (2013) and recently completed by Joshua Landau and me.
  
   The issue tracker http://bugs.python.org/issue2292  has  a working
patch.
   Would someone be able to review it?
  
   The PEP is not even accepted.
 
  I believe somebody (Guido?) commented Why worry about accepting the PEP
  when there's no working patch?  -- or
  something to that effect.

 On the other hand, I'd rather not do detailed reviews of patches that
 won't be accepted. :)

It's more a matter of the PEP being acceptable in principle, but a
reference implementation being needed to confirm feasibility and to iron
out corner cases.

For example, the potential for arcane call arguments suggests the need for
a PEP 8 addition saying first standalone args, then iterable expansions,
then mapping expansions, even though syntactically any order would now be
permitted at call time.

Cheers,
Nick.

 ___
 Python-Dev mailing list
 Python-Dev@python.org
 https://mail.python.org/mailman/listinfo/python-dev
 Unsubscribe:
https://mail.python.org/mailman/options/python-dev/ncoghlan%40gmail.com
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2015-02-09 Thread Neil Girdhar
Right,

Just to be clear though:  **-args must follow any *-args and positional
arguments.  So at worst, your example is:

f(x, y, *k, *b, c,  **w, **d)

Best,

Neil

On Mon, Feb 9, 2015 at 5:10 PM, Benjamin Peterson benja...@python.org
wrote:



 On Mon, Feb 9, 2015, at 16:32, Guido van Rossum wrote:
  FWIW, I've encouraged Neil and others to complete this code as a
  prerequisite for a code review (but I can't review it myself). I am
  mildly
  in favor of the PEP -- if the code works and looks maintainable I would
  accept it. (A few things got changed in the PEP as a result of the work.)

 In a way, it's a simplification, since functions are now simply called
 with a sequence of generalized arguments; there's no privileged kwarg
 or vararg. Of course, I wonder how much of f(**w, x, y, *k, *b, **d, c)
 we would see...

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2015-02-09 Thread Antoine Pitrou
On Tue, 10 Feb 2015 08:43:53 +1000
Nick Coghlan ncogh...@gmail.com wrote:
 
 For example, the potential for arcane call arguments suggests the need for
 a PEP 8 addition saying first standalone args, then iterable expansions,
 then mapping expansions, even though syntactically any order would now be
 permitted at call time.

There are other concerns:

- inspect.signature() must be updated to cover the new call
  possibilities

- function call performance must not be crippled by the new
  possibilities

Regards

Antoine.


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2015-02-09 Thread Benjamin Peterson


On Mon, Feb 9, 2015, at 17:12, Neil Girdhar wrote:
 Right,
 
  Just to be clear though:  **-args must follow any *-args and positional
 arguments.  So at worst, your example is:
 
 f(x, y, *k, *b, c,  **w, **d)
 
 Best,

Ah, I guess I was confused by this sentence in the PEP:  Function calls
currently have the restriction that keyword arguments must follow
positional arguments and ** unpackings must additionally follow *
unpackings.

That suggests that that rule is going to change.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2015-02-09 Thread Neil Girdhar
That wording is my fault.  I'll update the PEP to remove the word
currently after waiting a bit to see if there are any other problems.

Best,

Neil

On Mon, Feb 9, 2015 at 6:16 PM, Benjamin Peterson benja...@python.org
wrote:



 On Mon, Feb 9, 2015, at 17:12, Neil Girdhar wrote:
  Right,
 
  Just to be clear though:  **-args must follow any *-args and positional
  arguments.  So at worst, your example is:
 
  f(x, y, *k, *b, c,  **w, **d)
 
  Best,

 Ah, I guess I was confused by this sentence in the PEP:  Function calls
 currently have the restriction that keyword arguments must follow
 positional arguments and ** unpackings must additionally follow *
 unpackings.

 That suggests that that rule is going to change.

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2015-02-09 Thread Victor Stinner
Hi,

2015-02-09 22:06 GMT+01:00 Neil Girdhar mistersh...@gmail.com:
 The updated PEP 448 (https://www.python.org/dev/peps/pep-0448/) is
 implemented now based on some early work by Thomas Wouters (in 2008) and
 Florian Hahn (2013) and recently completed by Joshua Landau and me.

I don't like this PEP. IMO it makes the Python syntax more complex and
more difficult to read.

Extract of the PEP:
 Current usage of the * iterable unpacking operator features unnecessary 
 restrictions that can harm readability.

Yes, the current syntax is more verbose, but it's simpler to
understand and simpler to debug.

--

Example:

 ranges = [range(i) for i in range(5)]
 [*item for item in ranges]
[0, 0, 1, 0, 1, 2, 0, 1, 2, 3]

I don't understand this code.

It looks like you forgot something before *item, I would expect 2*item
for example.

If it's really to unpack something, I still don't understand the
syntax. Does * apply to item or to the whole item for item in
ranges? It's not clear to me. If it applies to the whole generator,
the syntax is really strange and I would expect parenthesis: [*(item
for item in ranges)].

--

 function(**kw_arguments, **more_arguments)

If the key key1 is in both dictionaries, more_arguments wins, right?


I never suffered from the lack of PEP 448. But I remember that a
friend learning Python asked me why * and ** are limited to
functions. I had no answer. The answer is maybe to keep the language
simple? :-)

I should maybe read the PEP one more time and think about it.

Victor
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2015-02-09 Thread Neil Girdhar
On Mon, Feb 9, 2015 at 7:53 PM, Donald Stufft don...@stufft.io wrote:


 On Feb 9, 2015, at 7:29 PM, Neil Girdhar mistersh...@gmail.com wrote:

 For some reason I can't seem to reply using Google groups, which is
 telling me this is a read-only mirror (anyone know why?)  Anyway, I'm going
 to answer the concerns as best I can.

 Antoine said:

 To be clear, the PEP will probably be useful for one single line of
 Python code every 1. This is a very weak case for such an intrusive
 syntax addition. I would support the PEP if it only added the simple
 cases of tuple unpacking, left alone function call conventions, and
 didn't introduce **-unpacking.


 To me this is more of a syntax simplification than a syntax addition.  For
 me the **-unpacking is the most useful part. Regarding utility, it seems
 that many of the people on r/python were pretty excited about this PEP:
 http://www.reddit.com/r/Python/comments/2synry/so_8_peps_are_currently_being_proposed_for_python/

 —

 Victor noticed that there's a mistake with the code:

  ranges = [range(i) for i in range(5)]
  [*item for item in ranges]
 [0, 0, 1, 0, 1, 2, 0, 1, 2, 3]


 It should be a range(4) in the code.  The * applies to only item.  It is
 the same as writing:

 [*range(0), *range(1), *range(2), *range(3), *range(4)]

 which is the same as unpacking all of those ranges into a list.

  function(**kw_arguments, **more_arguments)
 If the key key1 is in both dictionaries, more_arguments wins, right?


 There was some debate and it was decided that duplicate keyword arguments
 would remain an error (for now at least).  If you want to merge the
 dictionaries with overriding, then you can still do:

 function(**{**kw_arguments, **more_arguments})

 because **-unpacking in dicts overrides as you guessed.

 —



 On Mon, Feb 9, 2015 at 7:12 PM, Donald Stufft don...@stufft.io wrote:


 On Feb 9, 2015, at 4:06 PM, Neil Girdhar mistersh...@gmail.com wrote:

 Hello all,

 The updated PEP 448 (https://www.python.org/dev/peps/pep-0448/) is
 implemented now based on some early work by Thomas Wouters (in 2008) and
 Florian Hahn (2013) and recently completed by Joshua Landau and me.

 The issue tracker http://bugs.python.org/issue2292  has  a working
 patch.  Would someone be able to review it?


 I just skimmed over the PEP and it seems like it’s trying to solve a few
 different things:

 * Making it easy to combine multiple lists and additional positional args
 into a function call
 * Making it easy to combine multiple dicts and additional keyword args
 into a function call
 * Making it easy to do a single level of nested iterable flatten.


 I would say it's:
 * making it easy to unpack iterables and mappings in function calls
 * making it easy to unpack iterables  into list and set displays and
 comprehensions, and
 * making it easy to unpack mappings into dict displays and comprehensions.




 Looking at the syntax in the PEP I had a hard time detangling what
 exactly it was doing even with reading the PEP itself. I wonder if there
 isn’t a way to combine simpler more generic things to get the same outcome.

 Looking at the Making it easy to combine multiple lists and additional
 positional args into a  function call aspect of this, why is:

 print(*[1], *[2], 3) better than print(*[1] + [2] + [3])?

 That's already doable in Python right now and doesn't require anything
 new to handle it.


 Admittedly, this wasn't a great example.  But, if [1] and [2] had been
 iterables, you would have to cast each to list, e.g.,

 accumulator = []
 accumulator.extend(a)
 accumulator.append(b)
 accumulator.extend(c)
 print(*accumulator)

 replaces

 print(*a, b, *c)

 where a and c are iterable.  The latter version is also more efficient
 because it unpacks onto the stack only, allocating no auxiliary list.


 Honestly that doesn’t seem like the way I’d write it at all, if they might
 not be lists I’d just cast them to lists:

 print(*list(a) + [b] + list(c))


Sure, that works too as long as you put in the missing parentheses.



 But if casting to list really is that big a deal, then perhaps a better
 solution is to simply make it so that something like ``a_list +
 an_iterable`` is valid and the iterable would just be consumed and +’d onto
 the list. That still feels like a more general solution and a far less
 surprising and easier to read one.


I understand.  However I just want to point out that 448 is more general.
There is no binary operator for generators.  How do you write (*a, *b,
*c)?  You need to use itertools.chain(a, b, c).
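
(A short sketch with arbitrary iterators a, b and c:)

import itertools

a, b, c = iter('ab'), iter('cd'), iter('ef')
print(tuple(itertools.chain(a, b, c)))   # ('a', 'b', 'c', 'd', 'e', 'f')
# With PEP 448 the same tuple is simply (*a, *b, *c).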






 Looking at the making it easy to do a single level of nested iterable
 'flatten' aspect of this, the example of:

  ranges = [range(i) for i in range(5)]
  [*item for item in ranges]
 [0, 0, 1, 0, 1, 2, 0, 1, 2, 3]

 Conceptually a list comprehension like [thing for item in iterable] can
 be mapped to a for loop like this:

 result = []
 for item in iterable:
 result.append(thing)

 However [*item for item in ranges] is 

Re: [Python-Dev] (no subject)

2015-02-09 Thread Greg Ewing

Donald Stufft wrote:


why is:

print(*[1], *[2], 3) better than print(*[1] + [2] + [3])?


It could potentially be a little more efficient by
eliminating the construction of an intermediate list.

defining + or | or some other symbol for something similar 
to [1] + [2] but for dictionaries. This would mean that you could simply do:


func(**dict1 | dict(y=1) | dict2)


Same again, multiple ** avoids construction of an
itermediate dict.

--
Greg
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2012-04-19 Thread Martin v. Löwis
Am 19.04.2012 10:00, schrieb Eric Snow:
 How closely is tokenize.detect_encoding() supposed to match
 PyTokenizer_FindEncoding()?  From what I can tell, there is a subtle
 difference in their behavior that has bearing on PEP 263 handling
 during import. [1]  Should any difference be considered a bug, or
 should I work around it?  Thanks.

If there is such a difference, it's a bug. The authority should be the
PEP.
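
(For reference, the pure-Python side can be exercised like this; the byte
string is only illustrative:)

import io
import tokenize

source = b"# -*- coding: latin-1 -*-\nx = 1\n"
encoding, lines_read = tokenize.detect_encoding(io.BytesIO(source).readline)
print(encoding)   # 'iso-8859-1', the PEP 263 declaration in normalised form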

Regards,
Martin
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2011-12-12 Thread Guido van Rossum
The authors are definitely interested in feedback! Best probably to
post it to my G+ thread.

On Mon, Dec 12, 2011 at 1:44 PM, Eric Snow ericsnowcurren...@gmail.com wrote:
 Guido posted this on Google+:

 IEEE/ISO are working on a draft document about Python vulnerabilities: 
 http://grouper.ieee.org/groups/plv/DocLog/300-399/360-thru-379/22-WG23-N-0372/n0372.pdf
  (in the context of a larger effort to classify vulnerabilities in all 
 languages: ISO/IEC TR 24772:2010, available from ISO at no cost at: 
 http://standards.iso.org/ittf/PubliclyAvailableStandards/index.html (its 
 link is near the bottom of the web page).

 Will this document have a broad use, such that we should make sure it
 is accurate (to avoid any future confusion)?  I skimmed through and
 found that it covers a lot of ground, not necessarily about
 vulnerabilities, with some inaccuracies but not a ton that I noticed.
 If it doesn't matter then no big deal.  Just thought I'd bring it up.

 -eric
 ___
 Python-Dev mailing list
 Python-Dev@python.org
 http://mail.python.org/mailman/listinfo/python-dev
 Unsubscribe: 
 http://mail.python.org/mailman/options/python-dev/guido%40python.org



-- 
--Guido van Rossum (python.org/~guido)
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2011-01-24 Thread Oleg Broytman
On Mon, Jan 24, 2011 at 04:39:54PM +, Stefan Spoettl wrote:
 So it may be that the Python interpreter isn't working correctly only on 
 Ubuntu 10.10

   Then you should report the problem to the Ubuntu developers, right?
And it would be nice if you investigate deeper and send a proper mail -
with a subject, with a properly formatted text, not html.

http://www.catb.org/~esr/faqs/smart-questions.html

Oleg.
-- 
 Oleg Broytman    http://phdru.name/    p...@phdru.name
   Programmers don't die, they just GOSUB without RETURN.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2011-01-24 Thread Brett Cannon
Bug reports should be filed at bugs.python.org

On Mon, Jan 24, 2011 at 08:39, Stefan Spoettl spoe...@hotmail.com wrote:
 Using:
 Python 2.7.0+ (r27:82500, Sep 15 2010, 18:14:55)
 [GCC 4.4.5] on linux2
 (Ubuntu 10.10)
 Method to reproduce error:
 1. Defining a module which is later imported by another:
 -
 class SomeThing:
     def __init__(self):
         self.variable = 'Where is my bytecode?'
     def deliver(self):
         return self.variable

 if __name__ == '__main__':
     obj = SomeThing()
     print obj.deliver()
 -
 2. Run this module:
 Output of the Python Shell: Where is my bytecode?
                                                    
 3. Defining the importing module:
 -
 import SomeThing

 class UseSomeThing:
     def __init__(self, something):
         self.anything = something
     def giveanything(self):
         return self.anything

 if __name__ == '__main__':
     anything = UseSomeThing(SomeThing.SomeThing().deliver()).giveanything()
     print anything
 -
 4. Run this module:
 Output of the Python Shell: Where is my bytecode
                                                     
 (One can find SomeThing.pyc on the disc.)
 5. Changing the imported module:
 -
 class SomeThing:
     def __init__(self):
         self.variable = 'What the hell is this? It could not be Python!'
     def deliver(self):
         return self.variable

 if __name__ == '__main__':
     obj = SomeThing()
     print obj.deliver()
 -
 6. Run the changed module:
 Output of the Python Shell: What the hell is this? It could not be Python!
                                                    
 7. Run the importing module again:
 Output of the Python Shell: Where is my bytecode?
                                                    
 8. Deleting the bytecode of the imported module makes no effect!
 Remark: I think that I have observed yesterday late night a similar effect
 on Windows XP
 with Python 2.7.1 and Python 3.1.3. But when I have tried it out today in
 the morning the
 error hasn't appeared. So it may be that the Python interpreter isn't
 working correctly only
 on Ubuntu 10.10.
 ___
 Python-Dev mailing list
 Python-Dev@python.org
 http://mail.python.org/mailman/listinfo/python-dev
 Unsubscribe:
 http://mail.python.org/mailman/options/python-dev/brett%40python.org


___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2010-09-23 Thread Laurens Van Houtven
Hi!


This mailing list is about developing Python itself, not about developing
*in* Python or debugging Python installations.

Try seeking help elsewhere, such as on the comp.lang.python newsgroup,
#python IRC channel on freenode, or other sources (check the wiki if you
need any others).

Thanks,
lvh
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2010-09-23 Thread Oleg Broytman
Hello.

   We are sorry but we cannot help you. This mailing list is to work on
developing Python (adding new features to Python itself and fixing bugs);
if you're having problems learning, understanding or using Python, please
find another forum. Probably python-list/comp.lang.python mailing list/news
group is the best place; there are Python developers who participate in it;
you may get a faster, and probably more complete, answer there. See
http://www.python.org/community/ for other lists/news groups/fora. Thank
you for understanding.

On Thu, Sep 23, 2010 at 05:26:21PM +0530, Ketan Vijayvargiya wrote:
 Hi.
 I have an issue which has been annoying me for quite some time and any help
 would be greatly appreciated:
 
 I just installed Python 2.6 on my centOS 5 system and then I installed nltk.
 Now I am running a certain python script and it gives me this error-
 ImportError: No module named _sqlite3
 Searching the internet tells me that sqlite should be installed on my system
 which is confirmed when I try to install it using yum. Yum tells me that all
 of the following are installed on my system-
 sqlite-devel.i386
 sqlite.i386
 python-sqlite.i386
 Can anyone tell me where I am going wrong.

Oleg.
-- 
 Oleg Broytman    http://phd.pp.ru/    p...@phd.pp.ru
   Programmers don't die, they just GOSUB without RETURN.
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2010-03-15 Thread David Beazley


On Mon 15/03/10  4:34 AM , Martin v. Löwis mar...@v.loewis.de sent:
  So, just to be clear about the my bug report, it
 is directly related to the problem of overlapping I/O requests with
 CPU-bound processing. This kind of scenario comes up in the context of
 many applications--especially those based on
 cooperating processes, multiprocessing, and message passing.
 
 How so? if you have cooperating processes, multiprocessing, and message
 passing, you *don't* have CPU bound threads along with IO
 bound threads in the same process - you may not have threads at all!!!
 

You're right in that the end user will probably not be using threads in this
situation.  However, threads are still often used inside the associated
programming libraries and frameworks that provide communication.  For example,
threads might be used to implement queuing, background monitoring, event
notification, routing, and other similar low-level features.  Just as an
example, the multiprocessing module currently uses background threads as part
of its implementation of queues.  In a similar vein, threads are sometimes
used in asynchronous I/O frameworks (e.g., Twisted) to handle certain kinds of
deferred operations.  Bottom line: just because a user isn't directly
programming with threads doesn't mean that threads aren't there.

  In any case, if the GIL can be improved in a way
 that is simple and which either improves or doesn't negatively
 impact the performance of existing applications, why wouldn't you want to
 do it?  Seems like a no-brainer.
 
 Unfortunately, a simple improvement doesn't really seem to exist.
 

Well, I think this problem can be fixed in a manner that is pretty well
isolated to just one specific part of Python (the GIL)--especially since
several prototypes, including my own, have already demonstrated that it's
possible.  In any case, as it stands now, the current new GIL ignores about
40 years of work concerning operating system thread/process scheduling and
the efficient handling of I/O requests.  I suspect that I'm not the only one
who would be disappointed if the Python community simply blew it off and
said it's not an issue.  Trying to fix it is a worthwhile goal.

Cheers,
Dave



___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2009-10-29 Thread Antoine Pitrou
Martin v. Löwis martin at v.loewis.de writes:
 
  What do you think of creating a buildbot category in the tracker? There 
  are
  often problems on specific buildbots which would be nice to track, but
there's
  nowhere to do so.
 
 Do you have any specific reports that you would want to classify with
 this category?

I was thinking of http://bugs.python.org/issue4970

Regards

Antoine.


___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2007-04-30 Thread Greg Ewing
JOSHUA ABRAHAM wrote:
 I was hoping you guys would consider creating a function in os.path or 
 otherwise that would find the full path of a file when given only its base 
 name and nothing else.  I have been made to understand that this is not 
 currently possible.

Does os.path.abspath() do what you want?

If not, what exactly *do* you want?

--
Greg
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2007-04-30 Thread Mike Klaas
On 4/30/07, Greg Ewing [EMAIL PROTECTED] wrote:
 JOSHUA ABRAHAM wrote:
  I was hoping you guys would consider creating a function in os.path or
  otherwise that would find the full path of a file when given only its base
  name and nothing else.  I have been made to understand that this is not
  currently possible.

 Does os.path.abspath() do what you want?

 If not, what exactly *do* you want?

probably:

import os

def find_in_path(filename):
    for path in os.environ['PATH'].split(os.pathsep):
        candidate = os.path.join(path, filename)
        if os.path.exists(candidate):  # test the joined path, not the bare name
            return os.path.abspath(candidate)
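
(Later Python versions grew shutil.which() for the common case of locating an
executable on PATH, e.g.:)

import shutil
print(shutil.which('python3'))   # full path if found on PATH, else None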

-Mike
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2005-11-28 Thread Guido van Rossum
On 11/24/05, Duncan Grisby [EMAIL PROTECTED] wrote:
 Hi,

 I posted this to comp.lang.python, but got no response, so I thought I
 would consult the wise people here...

 I have encountered a problem with the re module. I have a
 multi-threaded program that does lots of regular expression searching,
 with some relatively complex regular expressions. Occasionally, events
 can conspire to mean that the re search takes minutes. That's bad
 enough in and of itself, but the real problem is that the re engine
 does not release the interpreter lock while it is running. All the
 other threads are therefore blocked for the entire time it takes to do
 the regular expression search.

Rather than trying to fight the GIL, I suggest that you let a regex
expert look at your regex(es) and the input that causes the long
running times. As Fredrik suggested, certain patterns are just
inefficient but can be rewritten more efficiently. There are plenty of
regex experts on c.l.py.

Unless you have a multi-CPU box, the performance of your app isn't
going to improve by releasing the GIL -- it only affects the
responsiveness of other threads.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2005-11-24 Thread Donovan Baarda
On Thu, 2005-11-24 at 14:11 +, Duncan Grisby wrote:
 Hi,
 
 I posted this to comp.lang.python, but got no response, so I thought I
 would consult the wise people here...
 
 I have encountered a problem with the re module. I have a
 multi-threaded program that does lots of regular expression searching,
 with some relatively complex regular expressions. Occasionally, events
 can conspire to mean that the re search takes minutes. That's bad
 enough in and of itself, but the real problem is that the re engine
 does not release the interpreter lock while it is running. All the
 other threads are therefore blocked for the entire time it takes to do
 the regular expression search.

I don't know if this will help, but in my experience compiling re's
often takes longer than matching them... are you sure that it's the
match and not a compile that is taking a long time? Are you using
pre-compiled re's or are you dynamically generating strings and using
them?
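
(The usual way to take compilation out of the picture; the pattern here is
only illustrative:)

import re

# Compile once, outside any loop.  Passing a pattern string to re.search()
# on every call means at least a cache lookup, and dynamically built pattern
# strings defeat the cache entirely.
needle = re.compile(r'ERROR: (\w+)')
for line in ['ok', 'ERROR: disk_full', 'ok']:
    m = needle.search(line)
    if m:
        print(m.group(1))   # disk_full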

 Is there any fundamental reason why the re module cannot release the
 interpreter lock, for at least some of the time it is running?  The
 ideal situation for me would be if it could do most of its work with
 the lock released, since the software is running on a multi processor
 machine that could productively do other work while the re is being
 processed. Failing that, could it at least periodically release the
 lock to give other threads a chance to run?
 
 A quick look at the code in _sre.c suggests that for most of the time,
 no Python objects are being manipulated, so the interpreter lock could
 be released. Has anyone tried to do that?

probably not... not many people would have several-minutes-to-match
re's.

I suspect it would be do-able... I suggest you put together a patch and
submit it on SF...


-- 
Donovan Baarda [EMAIL PROTECTED]
http://minkirri.apana.org.au/~abo/

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2005-11-24 Thread Fredrik Lundh
Donovan Baarda wrote:

 I don't know if this will help, but in my experience compiling re's
 often takes longer than matching them... are you sure that it's the
 match and not a compile that is taking a long time? Are you using
 pre-compiled re's or are you dynamically generating strings and using
 them?

patterns with nested repeats can behave badly on certain types of non-
matching input. (each repeat is basically a loop, and if you nest enough
loops things can quickly get out of hand, even if the inner loop doesn't
do much...)
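
A classic illustration of this (hypothetical pattern, not from the thread): a
nested repeat that explodes on almost-matching input, next to an equivalent
pattern that fails immediately.

import re

# '(a+)+$' nests one repeat inside another.  Against a run of 'a's ending in
# 'b', the engine tries exponentially many ways of splitting the 'a's
# between the inner and outer repeats before it can report failure.
slow = re.compile(r'(a+)+$')
# slow.search('a' * 30 + 'b')     # takes seconds, roughly doubling with
#                                 # each additional 'a'

# The equivalent pattern without the nested repeat fails immediately:
fast = re.compile(r'a+$')
print(fast.search('a' * 30 + 'b'))  # prints None straight away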

/F 



___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2005-03-16 Thread Nick Coghlan
Phillip J. Eby wrote:
At 10:36 PM 3/15/05 +1000, Nick Coghlan wrote:
Does deciding whether or not to supply the function really need to be 
dependent on whether or not a format for __signature__ has been chosen?
Um, no.  Why would you think that?
Pronoun confusion. I interpreted an 'it' in your message as referring to 
update_meta, but I now realise you only meant __signature__ :)

Cheers,
Nick
--
Nick Coghlan   |   [EMAIL PROTECTED]   |   Brisbane, Australia
---
http://boredomandlaziness.skystorm.net
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2005-03-15 Thread Phillip J. Eby
At 10:36 PM 3/15/05 +1000, Nick Coghlan wrote:
Phillip J. Eby wrote:
I discussed this approach with Guido in private e-mail a few months back 
during discussion about an article I was writing for DDJ about 
decorators.  We also discussed something very similar to 'update_meta()', 
but never settled on a name.  Originally he wanted me to PEP the whole 
thing, but he wanted it to include optional type declaration info, so you 
can probably see why I haven't done anything on that yet.  :)
However, if we can define a __signature__ format that allows for type 
declaration, I imagine there'd be little problem with moving forward on it.
But one of the reasons for providing 'update_meta' is so that additional 
metadata (like __signature__) can later be added transparently.
Yes, exactly.

Does deciding whether or not to supply the function really need to be 
dependent on whether or not a format for __signature__ has been chosen?
Um, no.  Why would you think that?
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2005-03-14 Thread Phillip J. Eby
At 05:34 AM 3/14/05 -0800, Michael Chermside wrote:
Nice... thanks. But I have to ask: is this really the right set of metadata to
be updating? Here are a few things that perhaps ought be copied by 
update_meta:

f.__name__ (already included)
f.__doc__  (already included)
f.__dict__ (already included)
f.__module__   (probably should include)
f.func_code.co_filename (to match f.__name__, but I'd leave it alone)
Leave __module__ alone, too, unless you want to play havoc with any 
inspection tools looking for the source code.


there's also the annoying fact that in IDLE (and in some other python-aware
IDEs) one can see the argument signature for a function as a tool tip
or other hint. Very handy that, but if a decorator is applied then all
you will see is func(*args, **kwargs) which is less than helpful. I'm
not sure whether this CAN be duplicated... I believe it is generated by
examining the following:
f.func_code.co_argcount
f.func_code.co_varnames
f.func_code.co_flags & 0x4
f.func_code.co_flags & 0x8
...and I suspect (experimentation seems to confirm this) that if you mangle
these then the code object won't work correctly. If anyone's got a
suggestion for fixing this, I'd love to hear it.
One solution is to have a __signature__ attribute that's purely 
documentary.  That is, modifying it wouldn't change the function's actual 
behavior, so it could be copied with update_meta() but then modified by the 
decorator if need be.  __signature__ would basically be a structure like 
the one returned by inspect.getargspec().  Also, 'instancemethod' would 
have a __signature__ equal to its im_func.__signature__ with the first 
argument dropped off, thus making it easy to introspect wrapper chains.

I discussed this approach with Guido in private e-mail a few months back 
during discussion about an article I was writing for DDJ about 
decorators.  We also discussed something very similar to 'update_meta()', 
but never settled on a name.  Originally he wanted me to PEP the whole 
thing, but he wanted it to include optional type declaration info, so you 
can probably see why I haven't done anything on that yet.  :)

However, if we can define a __signature__ format that allows for type 
declaration, I imagine there'd be little problem with moving forward on it.
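
For readers following along, a minimal sketch of the kind of update_meta()
helper being discussed (the name is the thread's placeholder; the logged
decorator below is a hypothetical usage example):

def update_meta(wrapper, wrapped):
    # Copy only the documentary attributes discussed above; per Phillip's
    # advice, __module__ is deliberately left untouched.
    wrapper.__name__ = wrapped.__name__
    wrapper.__doc__ = wrapped.__doc__
    wrapper.__dict__.update(wrapped.__dict__)
    return wrapper

def logged(func):
    # Hypothetical decorator used only to show update_meta() in context.
    def wrapper(*args, **kwargs):
        print('calling %s' % func.__name__)
        return func(*args, **kwargs)
    return update_meta(wrapper, func)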

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2005-03-14 Thread Thomas Heller
Phillip J. Eby [EMAIL PROTECTED] writes:

 At 05:34 AM 3/14/05 -0800, Michael Chermside wrote:
Nice... thanks. But I have to ask: is this really the right set of metadata to
 be updating? Here are a few things that perhaps ought be copied by
 update_meta:

 f.__name__ (already included)
 f.__doc__  (already included)
 f.__dict__ (already included)
 f.__module__   (probably should include)
 f.func_code.co_filename (to match f.__name__, but I'd leave it alone)

 Leave __module__ alone, too, unless you want to play havoc with any
 inspection tools looking for the source code.


there's also the annoying fact that in IDLE (and in some other python-aware
IDEs) one can see the argument signature for a function as a tool tip
or other hint. Very handy that, but if a decorator is applied then all
you will see is func(*args, **kwargs) which is less than helpful. I'm
not sure whether this CAN be duplicated... I believe it is generated by
examining the following:

 f.func_code.co_argcount
 f.func_code.co_varnames
 f.func_code.co_flags & 0x4
 f.func_code.co_flags & 0x8

...and I suspect (experimentation seems to confirm this) that if you mangle
these then the code object won't work correctly. If anyone's got a
suggestion for fixing this, I'd love to hear it.

 One solution is to have a __signature__ attribute that's purely
 documentary.  That is, modifying it wouldn't change the function's
 actual behavior, so it could be copied with update_meta() but then
 modified by the decorator if need be.  __signature__ would basically
 be a structure like the one returned by inspect.getargspec().  Also,
 'instancemethod' would have a __signature__ equal to its
 im_func.__signature__ with the first argument dropped off, thus making
 it easy to introspect wrapper chains.

Another possibility (ugly, maybe) would be to create sourcecode with the
function signature that you need, and compile it. inspect.getargspec() and
inspect.formatargspec can do most of the work.
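
Roughly, a sketch of that approach (a hypothetical helper built on the
getargspec/formatargspec functions Thomas mentions; the generated wrapper
simply forwards to the original):

import inspect

def signature_preserving(func):
    # Build source for a wrapper whose def line repeats func's real
    # signature, so tools reading co_argcount/co_varnames see the original
    # argument names.  (Assumes the default values have evaluable reprs.)
    args, varargs, varkw, defaults = inspect.getargspec(func)
    sig = inspect.formatargspec(args, varargs, varkw, defaults)
    call = inspect.formatargspec(args, varargs, varkw, defaults,
                                 formatvalue=lambda value: '')
    src = 'def wrapper%s:\n    return _original%s\n' % (sig, call)
    namespace = {'_original': func}
    exec(src, namespace)
    return namespace['wrapper']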

Thomas

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2005-03-14 Thread Phillip J. Eby
At 04:35 PM 3/14/05 +0100, Thomas Heller wrote:
Another possibility (ugly, maybe) would be to create sourcecode with the
function signature that you need, and compile it. inspect.getargspec() and
inspect.formatargspec can do most of the work.
I've done exactly that, for generic functions in PyProtocols.  It's *very* 
ugly, and not something I'd wish on anyone needing to write a 
decorator.  IMO, inspect.getargspec() shouldn't need to be so complicated; 
it should just return an object's __signature__ in future Pythons.

Also, the 'object' type should have a __signature__ descriptor that returns 
the __signature__ of __call__, if present.  And types should have a 
__signature__ that returns the __init__ or __new__ signature of the 
type.  Finally, C methods should have a way to define a __signature__ as well.

At that point, any callable object has an introspectable __signature__, 
which would avoid the need for every introspection framework or 
documentation tool having to rewrite the same old type dispatching code to 
check if it's an instancemethod, an instance with a __call__, a type, etc. 
etc. in order to find the real function and how to modify what getargspec() 
returns.
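
For a concrete sense of the dispatching being lamented here, a rough sketch
(hypothetical helper, Python 2 era idioms) of what such tools do today:

import inspect

def find_argspec(obj):
    # The kind of dispatch every introspection tool currently re-implements:
    # peel back the common callable kinds until a plain function is found.
    if inspect.isfunction(obj) or inspect.ismethod(obj):
        return inspect.getargspec(obj)
    if inspect.isclass(obj):
        return find_argspec(obj.__init__)
    call = getattr(obj, '__call__', None)
    if inspect.ismethod(call):          # instance of a class with __call__
        return inspect.getargspec(call)
    raise TypeError('cannot introspect %r' % (obj,))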

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] (no subject)

2005-03-14 Thread Brett C.
Phillip J. Eby wrote:
[SNIP]
One solution is to have a __signature__ attribute that's purely 
documentary.  That is, modifying it wouldn't change the function's 
actual behavior, so it could be copied with update_meta() but then 
modified by the decorator if need be.  __signature__ would basically be 
a structure like the one returned by inspect.getargspec().  Also, 
'instancemethod' would have a __signature__ equal to its 
im_func.__signature__ with the first argument dropped off, thus making 
it easy to introspect wrapper chains.

I discussed this approach with Guido in private e-mail a few months back 
during discussion about an article I was writing for DDJ about 
decorators.  We also discussed something very similar to 
'update_meta()', but never settled on a name.  Originally he wanted me 
to PEP the whole thing, but he wanted it to include optional type 
declaration info, so you can probably see why I haven't done anything on 
that yet.  :)

However, if we can define a __signature__ format that allows for type 
declaration, I imagine there'd be little problem with moving forward on it.

It could be as simple as just a bunch of tuples.  The following (assuming *args 
and **kwargs are not typed; don't remember if they can be)::

  def foo(pos1, pos2:int, key1="hi", key2=42:int, *args, **kwargs): pass
could be::
  ((('pos1', None), ('pos2', int)), (('key1', "hi", None), ('key2', 42, int)),
   'args', 'kwargs')
In case the format is not obvious, just a bunch of tuples grouped based on 
whether they are positional, keyword, or one of the collecting arguments.  For 
positional arguments, have a two-item tuple consisting of the argument name and 
the possible type.  For keyword, 3-item tuple with name, default value, and 
possible type.  Lacking *args and/or **kwargs could just be set to None for 
those tuple positions.

Since this is mainly for introspection tools the format can stand to be verbose 
and not the easiest thing to read by eye, but it must contain all possible info 
on the arguments.  And if actual execution does not use this slot, as Phillip 
is suggesting, but it is only for informative purposes we could make it 
optional.  It could also be set to a descriptor that dynamically creates the 
tuple when called based on the function passed into the descriptor at creation 
time.  This could be rolled into the update_meta (or whatever it ends up being 
called) function.
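
A rough sketch of building that tuple for a plain (untyped) function, using
inspect.getargspec; the names below are hypothetical, and the type slots come
out as None because ordinary functions carry no type information:

import inspect

def make_signature(func):
    # Positional entries are (name, type); keyword entries are
    # (name, default, type).  Type slots are None for plain functions.
    args, varargs, varkw, defaults = inspect.getargspec(func)
    defaults = defaults or ()
    n_positional = len(args) - len(defaults)
    positional = tuple((name, None) for name in args[:n_positional])
    keyword = tuple((name, default, None)
                    for name, default in zip(args[n_positional:], defaults))
    return (positional, keyword, varargs, varkw)

def foo(pos1, pos2, key1="hi", key2=42, *args, **kwargs): pass

print(make_signature(foo))
# ((('pos1', None), ('pos2', None)), (('key1', 'hi', None), ('key2', 42, None)),
#  'args', 'kwargs')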

-Brett
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com