Re: [Python-Dev] PEP 526 ready for review: Syntax for Variable and Attribute Annotations

2016-08-30 Thread Jack Diederich
On Tue, Aug 30, 2016 at 11:03 PM, Guido van Rossum <gu...@python.org> wrote:

> On Tue, Aug 30, 2016 at 7:44 PM, Jack Diederich <jackd...@gmail.com>
> wrote:
> > +0. We should try and be consistent even if this is a thing I don't want.
> > And trust me, I don't!
>
> No problem. You won't have to!
>
>
Yes! I don't have to want it, it is here!


> > That said, as long as pro-mypy people are willing to make everyone else
> pay
> > a mypy reading tax for code let's try and reduce the cognitive burden.
> >
> > * Duplicate type annotations should be a syntax error.
> >   Duplicate annotations aren't possible in functions so that wasn't an
> issue
> > in 484. 526 makes some things syntax errors and some things runtime
> errors
> > (for good reason -- function bodies aren't evaluated right away).
> > Double-annotating a variable is something we can figure out at compile
> time
> > and doing the double annotating is non-sensical so we should error on it
> > because we can.
>
> Actually I'm not so sure that double-annotating is always nonsensical.
> In the mypy tracker we're seeing some requests for type *inference*
> that allows a variable to be given another type later, e.g.
>
> x = 'abc'
> test_func(x)
> x = 42
> another_test_func(x)
>
> Maybe there's a use for explicit annotations too. I would rather not
> get in the way of letting type checkers decide such semantics.
>
>
Other languages (including RPython) don't allow rebinding types (or
sometimes even re-assignment to the same type). We are going for clarity
[and bondage, and discipline]. If we are doing types, let's do types like
other people do. I think *disallowing* redefinition of the type goes hand
in hand with enforcing types. +1 on being consistent with other langs. If
plain redoubling of types is allowed, I'm OK with that; "i: int = 0"
doesn't summon horrors when said three times into a mirror. But we can't
always know what "int" evaluates to, so I'd just disallow it.
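
For concreteness, a sketch of what plain re-annotation does at module level
under the PEP as drafted (my reading, not something settled in this
thread): the later annotation simply overwrites the earlier one in
__annotations__, with no error at compile time or run time.

    x: int = 0
    x: str = 'abc'               # not a SyntaxError under the PEP as drafted
    print(__annotations__['x'])  # <class 'str'> -- the later annotation wins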


> > *  Disallowing annotations on global and nonlocal
> >   Agreed, allowing it would be confusing because it would either be a
> > re-definition or a hard-to-read annotation-at-a-distance.
> >
> > * Where __annotations__ live
> >   It is strange to allow module.__annotations__ and
> > MyClass.__annotations__ but not myfunc.__annotations__ (or, more in line
> > with the existing function implementation, a myfunc.__code__.co_annotations).
> > If we know enough from the syntax parse for func.__code__.co_varnames to
> > be known then we should try to do that with annotations.  Let's raise a
> > SyntaxError for function-body annotations that conflict with same-named
> > variables that are annotated in the function signature as well.
>
> But myfunc.__annotations__ already exists -- PEP 3107 puts the
> signature annotations there. The problem with co_annotations is that
> annotations are evaluated (they can be quite complex expressions, e.g.
> Optional[Tuple[int, int, some_mod.SomeClass]]), while co_varnames is
> just a list of strings. And code objects must be immutable. The issue
> with rejecting duplicate annotations so sternly is the same as for the
> previous bullet.
>
>
If we make re-annotation a syntax error then the conflict with
myfunc.__annotations__ goes away for vars that share a name with the
function arguments. The fact that a variable's annotation can't be known
until the function body executes a particular line is ... I'm not sure how
to deal with that. For modules and classes you can assert that the body at
the top indent level has been executed. For functions you can only assert
that it has been parsed. So myfunc.__annotations__ could say that the name
has an annotation but only later know what that annotation is.

> I did C++ for years before I did Python and wrote C++ in many languages
> > (including Python). So ideally I'm -1000 on all this stuff for cultural
> > reasons -- if you let a C++ person add types they will, for false comfort.
> > But again, I'm +0 on this specific proposal because we have already gone
> > down the garden path.
>
> As long as you run mypy the comfort shouldn't be false. (But your
> starting with C++ before Python explains a lot. :-)
>

We've talked about this and we have different relationships with tools. I'm
a monk who thinks using a debugger is an admission of failure; you think
linters are a fine method of dissuading others from sin.


Re: [Python-Dev] PEP 526 ready for review: Syntax for Variable and Attribute Annotations

2016-08-30 Thread Jack Diederich
+0. We should try and be consistent even if this is a thing I don't
want. And trust me, I don't!

That said, as long as pro-mypy people are willing to make everyone else pay
a mypy reading tax for code let's try and reduce the cognitive burden.

* Duplicate type annotations should be a syntax error.
  Duplicate annotations aren't possible in functions so that wasn't an
issue in 484. 526 makes some things syntax errors and some things runtime
errors (for good reason -- function bodies aren't evaluated right away).
Double-annotating a variable is something we can figure out at compile time
and doing the double annotating is non-sensical so we should error on it
because we can.

*  Disallowing annotations on global and nonlocal
  Agreed, allowing it would be confusing because it would either be a
re-definition or a hard-to-read annotation-at-a-distance.

* Where __annotations__ live
  It is strange to allow module.__annotations__ and
MyClass.__annotations__ but not myfunc.__annotations__ (or, more in line
with the existing function implementation, a myfunc.__code__.co_annotations).
If we know enough from the syntax parse for func.__code__.co_varnames to be
known then we should try to do that with annotations.  Let's raise a
SyntaxError for function-body annotations that conflict with same-named
variables that are annotated in the function signature as well.
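
As a sketch of the split being proposed (my reading of the draft, not
wording from the PEP): module- and class-level annotations are evaluated
and stored, while function-body annotations are neither.

    x: int = 0              # lands in the module's __annotations__

    class C:
        y: str = 'a'        # lands in C.__annotations__

    def f():
        z: float            # not evaluated and not stored anywhere at run time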

I did C++ for years before I did Python and wrote C++ in many languages
(including Python). So ideally I'm -1000 on all this stuff for cultural
reasons -- if you let a C++ person add types they will, for false comfort.
But again, I'm +0 on this specific proposal because we have already gone
down the garden path.

-Jack


On Tue, Aug 30, 2016 at 9:00 PM, Steven D'Aprano 
wrote:

> On Tue, Aug 30, 2016 at 02:20:26PM -0700, Guido van Rossum wrote:
> > I'm happy to present PEP 526 for your collective review:
>
> Are you hoping to get this in before 3.6 beta? Because I'm not sure I
> can give this much attention before then, but I really want to.
>
>
> --
> Steve


Re: [Python-Dev] Type hints -- a mediocre programmer's reaction

2015-04-20 Thread Jack Diederich
Twelve years ago a wise man said to me "I suggest that you also propose a
new name for the resulting language".

I talked with many of you at PyCon about the costs of PEP 484. There are
plenty of people who have done a fine job promoting the benefits.

* It is not optional. Please stop saying that. The people promoting it
would prefer that everyone use it. If it is approved it will be optional in
the way that PEP8 is optional. If I'm reading your annotated code it is
certainly /not/ optional that I understand the annotations.

* Uploading stubs for other people's code is a terrible idea. Who do I
contact when I update the interface to my library? The random Joe who
helped by uploading annotations three months ago and then quit the
internet? I don't even want to think about people maliciously adding stubs
to PyPI.

* The cognitive load is very high. The average function signature will
double in length. This is not a small cost and telling me it is optional
to pretend that every other word on the line doesn't exist is a farce.

* Every company's style guide is about to get much longer. That in itself
is an indicator that this is a MAJOR language change and not just some
optional add-on.

* People will screw it up. The same people who can't be trusted to program
without type annotations are also going to be *writing* those type
annotations.

* Teaching python is about to get much less attractive. It will not be
optional for teachers to say "just pretend all this stuff over here doesn't
exist".

* "No new syntax" is a lie. Or rather a red herring. There are lots of new
things people will be required to know, and just because the compiler doesn't
have to change doesn't mean the language isn't undergoing a major change.

If this wasn't in a PEP and it wasn't going to ship in the stdlib very few
people would use it. If you told everyone they had to install a different
python implementation they wouldn't. This is much worse than that - it is
Python4 hidden away inside a PEP.

There are many fine languages that have sophisticated type systems. And
many bondage & discipline languages that make you type things three times
to make really really sure you meant to type that. If you find those other
languages appealing I invite you to go use them instead.

-Jack

https://mail.python.org/pipermail/python-dev/2003-February/033291.html


Re: [Python-Dev] making python's c iterators picklable (http://bugs.python.org/issue14288)

2012-03-13 Thread Jack Diederich
2012/3/13 Kristján Valur Jónsson krist...@ccpgames.com:
 http://bugs.python.org/issue14288

 In my opinion, any objects that have simple and obvious pickle semantics
 should be picklable.  Iterators are just regular objects with some state.
 They are not file pointers or sockets or database cursors.  And again, I
 argue that if these objects were implemented in .py, they would already be
 automatically picklable (indeed, itertools.py was).  The detail that some
 iterators in standard python are implemented in C should not automatically
 restrict their usage for no particular reason.

+1, things that can be pickled should be pickleable.
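
Kristján's point above, that an iterator written in Python pickles for
free, can be shown with a minimal sketch (Python 3 spelling; CountUp is a
made-up example, not code from the issue):

    import pickle

    class CountUp:
        """State is a plain attribute, so default pickling just works."""
        def __init__(self, start=0):
            self.i = start
        def __iter__(self):
            return self
        def __next__(self):
            self.i += 1
            return self.i

    it = CountUp()
    next(it); next(it)                      # advance the live iterator to 2
    clone = pickle.loads(pickle.dumps(it))  # snapshot of the iterator's state
    assert next(clone) == next(it) == 3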

-Jack


Re: [Python-Dev] Status of the fix for the hash collision vulnerability

2012-01-13 Thread Jack Diederich
On Thu, Jan 12, 2012 at 9:57 PM, Guido van Rossum gu...@python.org wrote:
 Hm... I started out as a big fan of the randomized hash, but thinking more
 about it, I actually believe that the chances of some legitimate app having
1000 collisions are way smaller than the chances that somebody's code will
 break due to the variable hashing.

Python's dicts are designed to avoid hash conflicts by resizing and
keeping the available slots bountiful.  1000 conflicts sounds like a
number that couldn't be hit accidentally unless you had a single dict
using a terabyte of RAM (i.e. if Titus Brown doesn't object, we're
good).  The collision probing also tries to exploit cache locality, but
even that is very unlikely to produce one thousand conflicts by chance.
If you get that many there is an attack.

 This is depending on how the counting is done (I didn't look at MAL's
 patch), and assuming that increasing the hash table size will generally
 reduce collisions if items collide but their hashes are different.

The patch counts conflicts on an individual insert and not lifetime
conflicts.  Looks sane to me.

 That said, even with collision counting I'd like a way to disable it without
 changing the code, e.g. a flag or environment variable.

Agreed.  Paranoid people can turn the behavior off and if it ever were
to become a problem in practice we could point people to a solution.
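
For a feel of the pathology being discussed, a rough sketch (a deliberately
colliding key type, not the actual attack payload): when every key lands in
the same slot, each insert has to probe past all the earlier keys and dict
operations degrade from O(1) toward O(n).

    class Collider:
        """Every instance hashes alike; equality is identity only."""
        def __hash__(self):
            return 42
        def __eq__(self, other):
            return self is other

    d = {}
    for _ in range(1000):        # 1000 colliding inserts, the count cited above
        d[Collider()] = None     # each insert probes past every earlier key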

-Jack


Re: [Python-Dev] [RELEASE] Python 2.7.2 release candidate 1

2011-05-29 Thread Jack Diederich
On Sun, May 29, 2011 at 6:47 PM, Benjamin Peterson benja...@python.org wrote:
 2.7.2 is the second bugfix release for the Python 2.7 series. 2.7 is the
 last major version of the 2.x line and will be receiving bug fixes while
 new feature development focuses on 3.x.

 2.7 includes many features that were first released in Python 3.1.

It might not be clear to a casual reader that the features were
released in 2.7.0 and not 2.7.2.  We don't, but many projects do
release new features with bugfix version numbers - I'm looking at you,
Django.

-Jack


Re: [Python-Dev] Vagaries of their in English (was Re: Support the /usr/bin/python2 symlink upstream)

2011-03-04 Thread Jack Diederich
On Sat, Mar 5, 2011 at 1:00 AM, Nick Coghlan ncogh...@gmail.com wrote:
 On Sat, Mar 5, 2011 at 10:40 AM, Guido van Rossum gu...@python.org wrote:
 That's how I felt 20 years ago. But since then I've come to appreciate
 "they" as a much better alternative to either "he or she" or "he". Just
 get used to it.

 If anyone wants to further explore this question, the Stack Exchange
 on English usage is a decent place to start:
 http://english.stackexchange.com/questions/192/is-it-correct-to-use-their-instead-of-his-or-her

 What it boils down to is that "their" is the least bad of all of the
 available options.

No wonder that thread went to 100+ replies.

Can we cut English some slack here?  Unlike most other languages we
don't assign nouns, like rocks and sofas, a gender.  Personally I'd be
happier if everyone switched to using "she", instead of revolting
against the old default of using "he" by using the ear-jarring "he or
she", or the dissonant "their".  For an amusing take on how a well
intentioned attempt to make the written law more pleasingly correct
can go wrong, see this post[1] (the ambiguous pronouns resulted in an
ambiguous statute).

English is highly mutable so I expect this will all shake out in a
generation or two.  Or maybe not - we still don't have a common word
that means a positive answer to a negative question despite centuries
of want.

-Jack

[1] 
http://volokh.com/2011/03/04/the-strange-glitch-in-the-rhode-island-rules-of-evidence/


Re: [Python-Dev] [Python-checkins] r88691 - python/branches/py3k/Lib/test/test_telnetlib.py

2011-02-28 Thread Jack Diederich
Much thanks.

On Mon, Feb 28, 2011 at 7:41 PM, antoine.pitrou
python-check...@python.org wrote:
 Author: antoine.pitrou
 Date: Tue Mar  1 01:41:10 2011
 New Revision: 88691

 Log:
 Endly, fix UnboundLocalError in telnetlib



 Modified:
   python/branches/py3k/Lib/test/test_telnetlib.py

 Modified: python/branches/py3k/Lib/test/test_telnetlib.py
 ==
 --- python/branches/py3k/Lib/test/test_telnetlib.py     (original)
 +++ python/branches/py3k/Lib/test/test_telnetlib.py     Tue Mar  1 01:41:10 
 2011
 @@ -17,9 +17,10 @@
         conn, addr = serv.accept()
     except socket.timeout:
         pass
 +    else:
 +        conn.close()
     finally:
         serv.close()
 -        conn.close()
         evt.set()

  class GeneralTests(TestCase):


Re: [Python-Dev] Is Demo directory removed from python3.2 ?

2011-02-21 Thread Jack Diederich
On Mon, Feb 21, 2011 at 10:02 PM, wen heping wenhep...@gmail.com wrote:
 Hi,

   I found 2 changes in python-3.2 compared to previous python version:
   i) Demo directory removed
   ii) lib/libpython3.2.so.1  changed to lib/libpython3.2mu.so.1

   Would someone tell me why ?

The demo directory was largely out of date (some of it by a decade).
Most of what was in it plain didn't work or was an outdated example of
how you should do things.  The good stuff was moved into the
documentation or the standard library.

-Jack


Re: [Python-Dev] closing files and sockets in a timely manner in the stdlib

2010-10-30 Thread Jack Diederich
On Fri, Oct 29, 2010 at 8:35 PM, Brett Cannon br...@python.org wrote:
 For those of you who have not noticed, Antoine committed a patch that
 raises a ResourceWarning under a pydebug build if a file or socket is
 closed through garbage collection instead of being explicitly closed.

Just yesterday I discovered /proc/<your PID here>/fd/ which is a list
of open file descriptors for your PID on *nix and includes all open
files, pipes, and sockets.  Very handy; I filed some tickets about
company internal libs that were opening file handles as a side effect
of import (logging mostly).  I tried to provoke standard python
imports (non-test) to leave some open handles and came up empty.
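
A quick sketch of the same check from Python (Linux-only; as the follow-up
notes, /proc is a Linux-ism rather than general *nix):

    import os

    def open_fds():
        """Numeric file descriptors currently open in this process."""
        return sorted(int(fd) for fd in os.listdir('/proc/self/fd'))

    print(open_fds())   # usually [0, 1, 2, ...] plus the fd listdir itself used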

-Jack


Re: [Python-Dev] closing files and sockets in a timely manner in the stdlib

2010-10-30 Thread Jack Diederich
On Sat, Oct 30, 2010 at 3:06 PM, Glyph Lefkowitz
gl...@twistedmatrix.com wrote:

 On Oct 30, 2010, at 2:39 PM, Jack Diederich wrote:

 On Fri, Oct 29, 2010 at 8:35 PM, Brett Cannon br...@python.org wrote:

 For those of you who have not noticed, Antoine committed a patch that

 raises a ResourceWarning under a pydebug build if a file or socket is

 closed through garbage collection instead of being explicitly closed.

  Just yesterday I discovered /proc/<your PID here>/fd/ which is a list
 of open file descriptors for your PID on *nix and includes all open
 files, pipes, and sockets.  Very handy, I filed some tickets about
 company internal libs that were opening file handles as a side effect
 of import (logging mostly).  I tried to provoke standard python
 imports (non-test) to leave some open handles and came up empty.

 That path (and anything below /proc, really) is a list of open file
  descriptors specifically on Linux, not *nix.  Also on linux, you can avoid
  <your pid here> by just doing /proc/self.

I was happy to find out that the /proc system came from Plan9 because
I always thought Plan9 was dead in the water.  But in this particular case
Plan9 outdid System7 in the realm of "everything is a file" by
making everything a file.


Re: [Python-Dev] API for binary operations on Sets

2010-09-29 Thread Jack Diederich
I will say something snarky now and (hopefully) something useful tomorrow.

When ABCs went in I was +0 because, like annotations, I was told I
wouldn't have to care about them.  That said, I do actually care about
the set interface and what set-y-ness means for regular duck typing
reasons.  What people expect sets to do is what set-alikes should do.

-Jack


Re: [Python-Dev] Tracker BD Was:Goodbye

2010-09-22 Thread Jack Diederich
On Wed, Sep 22, 2010 at 11:46 PM, Raymond Hettinger
raymond.hettin...@gmail.com wrote:

 On Sep 22, 2010, at 6:24 PM, R. David Murray wrote:

 On Wed, 22 Sep 2010 19:18:35 -0400, Terry Reedy tjre...@udel.edu wrote:

 deputed tracker authority/ies. Not everyone has the same idea about how
 to handle the various fields and processes. Who decides in cases of
 disagreement?

 We discussed this a while back and I don't think we really have a tracker
 BD.  Brett and Martin come closest, but mostly we just sort of evolve
 a rough consensus.

 IMO, Benjamin and Antoine are the closest.  They devote a substantial
 portion of their lives to Python and have been our most active
 contributors in the last year.   They read almost every tracker post,
 read every check-in, and continuously monitor the IRC channel.

Off topic-er.  Does anyone have scripts that pull data on how many
committers commit or how many trac admins admin?  I'm not asking for
punitive reasons - I'd be the first against the wall - but I wouldn't
mind graphing it.  Power law, methinks.  With big, confounding, and
jumbley spikes in the Spring for PyCon.

Likewise for mailing list subscriptions.  Personally I've gone back
and forth between subscribing to everything (-list -dev -commits -bugs
-ideas, et al) and subscribing to almost nothing.

-Jack


Re: [Python-Dev] Goodbye

2010-09-21 Thread Jack Diederich
On Tue, Sep 21, 2010 at 7:58 PM, Mark Lawrence breamore...@yahoo.co.uk wrote:
 I'm rather sad to have been sacked, but such is life.  I won't be doing any
 more work on the bug tracker for obvious reasons, but hope that you who have
 managed to keep your voluntary jobs manage to keep Python going.

Umm, what?  You mean http://bugs.python.org/issue2180  ?

 Mark, please stop closing these based on age.
There needs to be a determination whether this
is a valid bug.  If so, then a patch is needed.
If not, it can be closed.

Am I missing something?

-Jack


Re: [Python-Dev] Thoughts fresh after EuroPython

2010-07-25 Thread Jack Diederich
On Sun, Jul 25, 2010 at 2:26 PM, Jesse Noller jnol...@gmail.com wrote:
 On Sat, Jul 24, 2010 at 10:08 AM, Guido van Rossum gu...@python.org wrote:
 - After seeing Raymond's talk about monocle (search for it on PyPI) I
 am getting excited again about PEP 380 (yield from, return values from
 generators). Having read the PEP on the plane back home I didn't see
 anything wrong with it, so it could just be accepted in its current
 form. Implementation will still have to wait for Python 3.3 because of
 the moratorium. (Although I wouldn't mind making an exception to get
 it into 3.2.)

 I, like others, want PEP 380 to be in and done (it's exciting!).
 However, we knew going into the moratorium that it would negatively
 affect PEP 380 - as a co-author, it was one of the few things which
 made me second-guess the implementation of the moratorium. So; in this
 case I'd have to vote no, we knew going in it would do this.

I was/am pro PEP 380 and pro Moratorium.  We knew going into the
moratorium that PEP 380 wouldn't be included and talked about it
extensively.  We should honor that now for the same reasons we talked
about then: declaring no syntax changes allows for a focus on the
stdlib.

-Jack


Re: [Python-Dev] Peculiar import code in pickle.py

2010-07-13 Thread Jack Diederich
On Tue, Jul 13, 2010 at 1:57 PM, Benjamin Peterson benja...@python.org wrote:
 2010/7/13 Alexander Belopolsky alexander.belopol...@gmail.com:
 On Tue, Jul 13, 2010 at 11:34 AM, Antoine Pitrou solip...@pitrou.net wrote:
 On Tue, 13 Jul 2010 11:25:23 -0400
 ..
 Only for top-level modules:

  >>> __import__("distutils.core", level=0)
  <module 'distutils' from '/home/antoine/py3k/__svn__/Lib/distutils/__init__.py'>
  >>> sys.modules["distutils.core"]
  <module 'distutils.core' from '/home/antoine/py3k/__svn__/Lib/distutils/core.py'>

 That's right, but I believe the recommended way to achieve that
 behavior is to supply a dummy fromlist:

  >>> __import__("distutils.core", fromlist=["dummy"], level=0)
  <module 'distutils.core' from '/Users/sasha/Work/python-svn/py3k/Lib/distutils/core.py'>

 No! That's not recommended and a complete hack. The dance or
 importlib.import_module is preferred.

A complete hack with a long pedigree:
  module = __import__(modname, None, None, 'python2.4 is silly,
revisit this line in 2.5')

I think that line in a code base of mine didn't get altered until 2.6.something.
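
For the record, a sketch of the difference being argued about (distutils is
just the example from the quoted session; substitute any package.submodule
pair on newer Pythons): bare __import__ of a dotted name hands back the
top-level package, while importlib.import_module returns the submodule
itself, which is why the latter is the preferred spelling.

    import importlib

    top = __import__('distutils.core')               # the 'distutils' package
    sub = importlib.import_module('distutils.core')  # distutils.core itself

    assert top.__name__ == 'distutils'
    assert sub.__name__ == 'distutils.core'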

Hack-ily,

-Jack


Re: [Python-Dev] Patch to telnetlib.py

2010-03-13 Thread Jack Diederich
On Sat, Mar 13, 2010 at 12:24 PM, gregory dudek du...@cim.mcgill.ca wrote:
 The Telnet module telnetlib.py can be very slow -- unusably slow -- for
 large automated data transfers.  These are typically done in raw mode.

 The attached patch greatly increased the speed of telnet interactions in
 raw mode.  I submitted this a couple of years ago, but it was for an older
 branch of python.

 There are 2 key things being done:
  1) concatenating strings with string.join instead of '+' (which is
 probably a minor issue)
  2) wholesale appending the raw and processed buffers when the IAC
 character is not found.  This should be examined carefully since I am not
 an expert in the Telnet protocol, but it seems to work very well, giving
 me a 5x speedup.

As others mentioned, please post the bug to the tracker.  Also, please
assign the patch to me (jackdied) and mention the previous bug
number - I thought I had reviewed all the telnetlib bugs including
those that were closed WONTFIX.

Thanks for the kick in the pants, I have a whole new inner loop for
data processing but I haven't applied it.  I've been adding unit tests
to the module so I could be sure I wouldn't break anything but never
finished the job.
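
The first point in the patch is the usual accumulate-then-join pattern; a
sketch (read_chunk is a stand-in, not telnetlib code):

    def read_chunk():
        """Stand-in for a read from the telnet socket."""
        return b'x' * 64

    chunks = []
    for _ in range(1000):
        chunks.append(read_chunk())   # collect the pieces in a list
    buf = b''.join(chunks)            # one O(n) join instead of a copy per '+'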

-Jack


Re: [Python-Dev] Rich Comparison recipe wrong?

2010-01-26 Thread Jack Diederich
On Mon, Jan 25, 2010 at 6:59 AM, Lennart Regebro lrege...@jarn.com wrote:
[snip]
 If class A returns NotImplemented when compared to class B, and class
 B implements the recipe above, then we get infinite recursion, because

 1. A() < B() will call A.__lt__(B) which will return NotImplemented.
 2. which will mean that Python calls B.__ge__(A)
 3. Which B implements by doing A < B
 4. Start over at 1.

A small correction;  For the purposes of NotImplemented the opposite
of __lt__ is __gt__ because if A < B then B > A.  The pairs are ('==',
'!='), ('<', '>'), ('<=', '>=').
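
A minimal sketch of that pairing (Version is a made-up class): when __lt__
returns NotImplemented, Python falls back to the reflected method, __gt__,
on the right-hand operand, not __ge__.

    class Version:
        def __init__(self, n):
            self.n = n
        def __lt__(self, other):
            if not isinstance(other, Version):
                return NotImplemented   # Python then tries other.__gt__(self)
            return self.n < other.n
        def __gt__(self, other):
            if not isinstance(other, Version):
                return NotImplemented
            return self.n > other.n

    assert Version(1) < Version(2)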

-Jack


Re: [Python-Dev] PEP 3146: Merge Unladen Swallow into CPython

2010-01-20 Thread Jack Diederich
On Wed, Jan 20, 2010 at 5:27 PM, Collin Winter collinwin...@google.com wrote:
[big snip]
 In order to support hardware and software platforms where LLVM's JIT does not
 work, Unladen Swallow provides a ``./configure --without-llvm`` option. This
 flag carves out any part of Unladen Swallow that depends on LLVM, yielding a
 Python binary that works and passes its tests, but has no performance
 advantages. This configuration is recommended for hardware unsupported by 
 LLVM,
 or systems that care more about memory usage than performance.

Does disabling LLVM change binary compatibility between modules
targeted at the same version?  At tonight's Boston PIG we had some
binary package maintainers but most people (including myself) only
cared about source compatibility.  I assume linux distros care about
binary compatibility _a lot_.

[snip]
 Managing LLVM Releases, C++ API Changes
 ---
 LLVM is released regularly every six months. This means that LLVM may be
 released two or three times during the course of development of a CPython 3.x
 release. Each LLVM release brings newer and more powerful optimizations,
 improved platform support and more sophisticated code generation.

I don't think this will be a problem in practice as long as the
current rules hold - namely that if someone has already committed a
patch that patch wins unless the later commit is clearly better.  That
puts the onus on people working out-of-sight to incorporate the public
mainline.  I'm sure many internal googler's (and Ubuntu'ers, and
whomever's) patches have already been developed on that timeline and
were integrated into the core without remark or incident.

[snip]
 Open Issues
 ===

 - *Code review policy for the ``py3k-jit`` branch.* How does the CPython
  community want us to procede with respect to checkins on the ``py3k-jit``
  branch? Pre-commit reviews? Post-commit reviews?

  Unladen Swallow has enforced pre-commit reviews in our trunk, but we realize
  this may lead to long review/checkin cycles in a purely-volunteer
  organization. We would like a non-Google-affiliated member of the CPython
  development team to review our work for correctness and compatibility, but we
  realize this may not be possible for every commit.

As above, I don't think this will be a problem in practice -- how
often do two people work on the same part of the core?  So long as the
current "firstest with the mostest" practice holds for public commits
it doesn't matter what googlers do in private.

I like it,

-Jack


Re: [Python-Dev] E3 BEFEHLE

2010-01-16 Thread Jack Diederich
Good lord, did this make it past other people's spam filters too?  I
especially liked the reference to "REGION -2,0 ; Rlyeh".  Ph'nglui
mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn to you too, sir.

-Jack

2010/1/16 Christian Heimes li...@cheimes.de:
 ERESSEA Lord evzp24
 ; TIMESTAMP 1263675732032
 ; Magellan Version 2.0.4 (build 361)
 ; ECHECK -r100 -s -l -w4 -v4.3.2
 LOCALE de
 REGION -2,0 ; Rlyeh
 ; ECheck Lohn 11
 EINHEIT umwc;           Pyrit Eisenbeisser [2,0$]
 // Zwerg: Bergmann
 LERNEN Bergbau
 EINHEIT 4fh4;           Beryllium [2,771$]
 // Halbing: Burgenbau
 LERNEN Armbrustschießen
 EINHEIT zqpv;           Petalit [2,0$]
 // Waffenschmied
 LERNEN Waffenbau
 EINHEIT dt23;           Selenit [1,0$]
 // Halbling: Bergmann
 LERNEN Bergbau
 EINHEIT v5ze;           Steinmetz [2,0$]
 LERNEN Steinbau
 EINHEIT q3vw;           Eisenschürfer [2,0$]
 LERNEN Bergbau
 EINHEIT eph9;           Sediment [3,0$,U250]
 // Zwerg: Steinbau
 MACHEN Stein
 GIB 4kae 36 Stein
 EINHEIT mmhx;           Pferde [2,0$]
 // Pferde
 MACHEN Pferde
 GIB 4kae 11 Pferd
 EINHEIT sm8c;           Transportling [3,0$]
 LERNEN Reiten
 GIB 4kae 3 Pferd
 EINHEIT 4kae;           Wagner [7,0$]
 NACH NW NO
 GIB 4fh4 ALLES Holz
 REGION -3,2 ; Hochplateau von Leng
 ; ECheck Lohn 11
 EINHEIT eqqm;           Schuetzling [4,2059$]
 LERNEN Armbrustschießen
 EINHEIT h8yz;           Axtwache [5,0$]
 // Zwerg: Hiebwaffen
 LERNEN Ausdauer
 EINHEIT c1xk;           Schwertlinge [9,0$]
 // Halbling Schwertkämpfer
 LERNEN Hiebwaffen
 EINHEIT es76;           Maurer [6,0$]
 LERNEN Burgenbau
 EINHEIT cmhv;           Steinsucher [1,0$]
 NACH W
 EINHEIT nzmg;           Eisensucher [1,0$]
 LERNEN Bergbau
 EINHEIT zwfv;           Schützlinge [11,0$]
 LERNEN Armbrustschießen
 REGION -4,2 ; Trockental
 ; ECheck Lohn 11
 EINHEIT ovyv;           Eisenfinder [1,522$]
 // Halbling: besetze Wueste
 LERNEN Burgenbau
 EINHEIT nzqL;           Steinfresser [1,0$]
 LERNEN Steinbau
 REGION -4,1 ; Eryndyn
 ; ECheck Lohn 11
 EINHEIT geoz;           Geode Lapislazuli [1,736$]
 // Zwerg: Magier
 ZAUBERE Stufe 1 Segen der Erde
 EINHEIT 6p3f;           Axtmeister [5,0$]
 LERNEN Ausdauer
 EINHEIT ph9e;           Holzfäller [3,0$]
 // Halblinge: Holzfaeller
 LERNEN Holzfällen
 EINHEIT 4uxp;           Schuetzling [6,0$]
 // Halbling Armbrust
 LERNEN Armbrustschießen
 EINHEIT by3t;           Kyanit [4,0$]
 // Halblinge: Holzfaeller
 LERNEN Holzfällen
 EINHEIT jo9p;           Steinsucher [1,0$]
 LERNEN Steinbau
 EINHEIT btd9;           Maurer [4,0$,U250]
 VERLASSEN
 GIB 7i43 ALLES Stein
 GIB 7i43 ALLES Eisen
 GIB 456o 5 Holz
 NACH O
 EINHEIT 456o;           Zimmermeister [2,0$]
 LERNEN Holzfällen
 EINHEIT sven;           Sven Handelson [1,0$]
 LERNEN Wagenbau
 EINHEIT 7i43;           Maurer [8,34$]
 MACHEN Burg 16pi
 REGION -3,1 ; Wald von Arden
 ; ECheck Lohn 12
 EINHEIT geoa;           Geode Amethyst [1,0$]
 // Zauberer
 ZAUBERE Stufe 1 Segen der Erde
 @RESERVIEREN 20 Silber
 EINHEIT y574;           Niob [5,1775$]
 // Zwerg: Waffenbau
 LERNEN Waffenbau
 EINHEIT 8gs0;           Werfer [1,0$]
 // Zwerg: Katapult, Hiebwaffen
 LERNEN Katapultbedienung
 EINHEIT apq6;           Werfer [1,0$]
 LERNEN Katapultbedienung
 EINHEIT u699;           Wagner [2,0$]
 LERNEN Wagenbau
 EINHEIT mq0v;           Meisterschmied [2,0$]
 LERNEN Rüstungsbau
 EINHEIT gLyd;           Kies [3,0$]
 // Halbling: Armbrust
 LERNEN Armbrustschießen
 EINHEIT L18v;           Pikiener [4,0$]
 LERNEN Stangenwaffen
 EINHEIT xLw5;           Schuetzling [4,0$]
 LERNEN Armbrustschießen
 // Armbrustschuetzen
 EINHEIT nyv5;           Axtmeister [2,0$]
 // Axtheld
 LERNEN Ausdauer
 EINHEIT dpb6;           Kämpfer [9,0$]
 LERNEN Hiebwaffen
 EINHEIT kseh;           Transportling [2,0$]
 LERNEN Reiten
 GIB mq0v ALLES Stein
 NACH SO
 REGION -4,3 ; Ebene von Yuggoth
 ; ECheck Lohn 11
 EINHEIT efzf;           Selenit [14,0$]
 // Halbling: Stangenwaffen
 LERNEN Stangenwaffen
 EINHEIT dgji;           Baryl [2,719$]
 // Burg + Armbrust
 LERNEN Armbrustschießen
 EINHEIT geom;           Geode Malachit [1,10$]
 // Zauberer
 ZAUBERE Stufe 1 Segen der Erde
 @RESERVIEREN 20 Silber
 EINHEIT e5cb;           Zimmerling [8,0$]
 LERNEN Schiffbau
 EINHEIT jvyx;           Maurer [3,0$]
 LERNEN Burgenbau
 EINHEIT y3jj;           Dressur [2,0$]
 // Halbling Dressur
 LERNEN Reiten
 EINHEIT qwkc;           Axtwache [3,0$]
 LERNEN Ausdauer
 EINHEIT 4sjc;           Steinsucher [1,0$]
 LERNEN Steinbau
 EINHEIT bago;           Äxtkämpfer [8,0$]
 LERNEN Hiebwaffen
 EINHEIT i25v;           Kämpfer [9,0$]
 LERNEN Hiebwaffen
 EINHEIT uuny;           Seefahrer [2,0$]
 LERNEN Segeln
 EINHEIT 6ss2;           Schütze [9,0$]
 LERNEN Armbrustschießen
 REGION -4,12 ; Gertal
 ; ECheck Lohn 11
 EINHEIT uxjb;           Pionier [1,64$]
 // Zwerg: Steinbau
 MACHEN Stein
 EINHEIT vvfb;           Pioniere [2,120$]
 // Halbling: Burgenbau / Holz
 MACHEN Burg 79ae
 REGION -4,10 ; Gapoced
 ; ECheck Lohn 11
 EINHEIT 2mkk;           Steinmetz [2,498$]
 MACHEN Wache zzyu
 GIB rirv 60 

Re: [Python-Dev] [RELEASED] Python 2.7 alpha 2

2010-01-11 Thread Jack Diederich
On Mon, Jan 11, 2010 at 7:11 PM, Barry Warsaw ba...@python.org wrote:
 As an example, the one library I've already ported used a metaclass.  I don't
 see any way to specify that the metaclass should be used in a portable way.
 In Python 2.6 it's:

 class Foo:
    __metaclass__ = Meta

 and in Python 3 it's:

 class Foo(metaclass=Meta):

 2to3 made that pain go away.

[sidebar]
1) the metaclass fixer was a PITA to implement.
2) 95% of __metaclass__ definitions searchable via google code were of
the "__metaclass__ = type" variety.  The 2to3 patch exists only
because of the few other uses.
3) 100% of the module-level assignments in public projects were the
"__metaclass__ = type" variety, which is why there isn't a fixer for
that.  Also, a fixer would have been really, really ugly ("munge every
class definition in this module because there is a top-level
assignment").
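
For what it's worth, a sketch of the one spelling that sidesteps the
2.x/3.x syntax split entirely (Meta is a stand-in for Barry's metaclass,
and this is not what 2to3 emits): a class statement is just a call to the
metaclass, so the call can be written out by hand and runs unchanged on
both.

    class Meta(type):
        pass

    # Equivalent to a class statement that uses Meta as the metaclass,
    # in both Python 2 and Python 3:
    Foo = Meta('Foo', (object,), {'answer': 42})
    assert type(Foo) is Meta and Foo.answer == 42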

-Jack


Re: [Python-Dev] PEP 3003 - Python Language Moratorium

2009-11-03 Thread Jack Diederich
+1.  There are no compelling language changes on the horizon ("yield
from" is nice but not necessary).  I see the main benefit of a
moratorium as social rather than technical: it encourages people to
work on the lib instead of the language.  Plus, I'd gladly proxy my
vote to any one of the three PEP authors, so 3/3 is a no-brainer.

-Jack


Re: [Python-Dev] Needing help to change the grammar

2009-04-11 Thread Jack diederich
On Fri, Apr 10, 2009 at 9:58 PM, Harry (Thiago Leucz Astrizi)
thiagoha...@riseup.net wrote:

 Hello everybody. My name is Thiago and currently I'm working as a
 teacher in a high school in Brazil. I have plans to offer a programming
 course to the students at the school, but I had some problems finding a
 good language. As a Python programmer, I really like the language's
 syntax and I think that Python is very good for teaching programming.
 But there's a little problem: the commands and keywords are in English
 and this can be an obstacle to the teenagers who might enter the course.

 Because of this, I decided to create a Python version with keywords in
 portuguese and with some modifications in the grammar to be more
 portuguese-like. To this, I'm using Python 3.0.1 source code.

I love the idea (and I most recently edited PEP 306) so here are a few
suggestions:

Brazil has many python programmers so you might be able to make quick
progress by asking them for volunteer time.

To bug-hunt your technical problem: try switching the "not is"
operator to include an underscore, "not_is".  The python LL(1) grammar
checker works for python but isn't robust, and does miss some grammar
ambiguities.  Making the operator a single word might reveal a bug in
the parser.

Please consider switching your students to 'real' python part way
through the course.  If they want to use the vast amount of python
code on the internet as examples they will need to know the few
English keywords.

Also - most python core developers are not native English speakers and
do OK :)  PyCon speakers are about 25% non-native English speakers and
EuroPython speakers are about the reverse (my rough estimate - I'd
love to see some hard numbers).

Keep up the Good Work,

-Jack


Re: [Python-Dev] Google Summer of Code/core Python projects - RFC

2009-04-10 Thread Jack diederich
On Fri, Apr 10, 2009 at 4:38 PM, C. Titus Brown c...@msu.edu wrote:
[megasnip]
 roundup VCS integration / build tools to support core development --
        a single student proposed both of these and has received some
        support.  See http://slexy.org/view/s2pFgWxufI for details.

From the listed webpage I have no idea what he is promising (a
combination of very high level and very low level tasks).  If he is
offering all the same magic for Hg that Trac does for SVN (autolinking
r2001 text to patches, for example) then I'm +1.  That should be
cake even for a student project.

He says vague things about patches too, but I'm not sure what.  If he
wanted to make that into a 'patchbot' that just applied every patch in
isolation and ran 'make && make test' and posted results in the
tracker I'd be a happy camper.

But maybe those are goals for next year, because I'm not quite sure
what the proposal is.

-Jack


Re: [Python-Dev] decorator module in stdlib?

2009-04-08 Thread Jack diederich
On Wed, Apr 8, 2009 at 12:09 AM, Michele Simionato
michele.simion...@gmail.com wrote:
 On Tue, Apr 7, 2009 at 11:04 PM, Terry Reedy tjre...@udel.edu wrote:

 This probably should have gone to the python-ideas list.  In any case, I
 think it needs to start with a clear offer from Michele (directly or relayed
 by you) to contribute it to the PSF with the usual conditions.

 I have no problem to contribute the module to the PSF and to maintain it.
 I would just prefer to have the ability to change the function signature in
 the core language rather than include in the standard library a clever hack.

Flipping Michele's commit bit (if he wants it) is overdue.  A quick
google doesn't show he refused it in the past, but the same search
shows the things he did do - including the explication of MRO
in 2.3 (http://www.python.org/download/releases/2.3/mro/).  Plus he's
a softie for decorators, as am I.

-Jack


Re: [Python-Dev] slightly inconsistent set/list pop behaviour

2009-04-08 Thread Jack diederich
On Wed, Apr 8, 2009 at 2:44 AM, Mark Dickinson dicki...@gmail.com wrote:
 On Wed, Apr 8, 2009 at 7:13 AM, John Barham jbar...@gmail.com wrote:
 If you play around a bit it becomes clear that what set.pop() returns
 is independent of the insertion order:

 It might look like that, but I don't think this is
 true in general (at least, with the current implementation):

  >>> foo = set([1, 65537])
  >>> foo.pop()
  1
  >>> foo = set([65537, 1])
  >>> foo.pop()
  65537

You wrote a program to find the two smallest ints that would have a
hash collision in the CPython set implementation?  I'm impressed.  And
by impressed I mean frightened.
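
The trick is less magic than it looks; a sketch of the CPython detail (an
implementation detail, not a language guarantee): small ints hash to
themselves and a fresh set has eight slots, so any two ints that are equal
mod 8 start out aiming for the same slot, and which one pop() hands back
then depends on insertion order, exactly as the transcript above shows.

    assert hash(1) == 1 and hash(65537) == 65537  # small ints hash to themselves
    assert 1 % 8 == 65537 % 8 == 1                # both aim for the same slot
                                                  # of a fresh 8-slot table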

-Jack


[Python-Dev] Getting information out of the buildbots

2009-04-06 Thread Jack diederich
I committed some new telnetlib tests yesterday to the trunk and I can
see they are failing on Neal's setup, but not what the failures are.
Ideally I'd like to get the information out of the buildbots but they
all seem to be hanging on stdio tests and quitting out.

Ideas?  TIA,

-Jack


Re: [Python-Dev] Tools

2009-04-05 Thread Jack diederich
On Sun, Apr 5, 2009 at 10:50 PM,  s...@pobox.com wrote:
    Barry> Someone asked me at Pycon about stripping out Demos and Tools.

    Matthias> +1, but please for 2.7 and 3.1 only.

 Is there a list of other demos or tools which should be deleted?  If
 possible the list should be publicized so that people can pick up external
 maintenance if desired.

I liked Brett's (Georg's?) half joking idea at sprints.  Just delete
each subdirectory in a separate commit and then wait to see what
people revert.

-Jack


Re: [Python-Dev] OSError.errno = exception hierarchy?

2009-04-02 Thread Jack diederich
On Thu, Apr 2, 2009 at 4:25 PM, Benjamin Peterson benja...@python.org wrote:
 2009/4/2 Gustavo Carneiro gjcarne...@gmail.com:
 Apologies if this has already been discussed.

 I don't believe it has ever been discussed to be implemented.

 Apparently no one has bothered yet to turn OSError + errno into a hierarchy
 of OSError subclasses, as it should.  What's the problem, no will to do it,
 or no manpower?

 Python doesn't need any more builtin exceptions to clutter the
 namespace. Besides, what's wrong with just checking the errno?

The problem is manpower (this has been no one's itch).  In order to
have a hierarchy of OSError exceptions the underlying code would have
to raise them.  That means diving into all the C code that raises
OSError and cleaning them up.

I'm +1 on the idea but -1 on doing the work myself.

-Jack


[Python-Dev] Not backporting PEP 3115 (metaclass __prepare__)

2008-03-18 Thread Jack Diederich
We can't backport __prepare__ without requiring that the metaclass be
named on the 'class' line.  Because the __metaclass__ definition can be at
the end of the class body in 2.6, we can't find it until after we have
executed the body, and by then it is too late to use a custom dictionary
as the class namespace.
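
For context, a Python 3 sketch of what __prepare__ provides (nothing like
this exists in 2.6): the mapping it returns becomes the namespace the class
body executes in, which is exactly why the metaclass has to be known before
the body runs.

    from collections import OrderedDict

    class OrderedMeta(type):
        @classmethod
        def __prepare__(mcls, name, bases, **kwds):
            return OrderedDict()          # used as the class-body namespace
        def __new__(mcls, name, bases, ns, **kwds):
            cls = super().__new__(mcls, name, bases, dict(ns))
            cls.member_order = [k for k in ns if not k.startswith('__')]
            return cls

    class Demo(metaclass=OrderedMeta):
        b = 1
        a = 2

    assert Demo.member_order == ['b', 'a']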

I wish I had thought of that yesterday,

-Jack


Re: [Python-Dev] Need Survey Answers from Core Developers

2007-05-18 Thread Jack Diederich
On Fri, May 18, 2007 at 10:23:46AM -0500, Jeff Rush wrote:
 Time is short and I'm still looking for answers to some questions about
 cPython, so that it makes a good showing in the Forrester survey.
 
[snip]
 
 4) How many committers to the cPython core are there?
 
I don't have the necessary access to the pydotorg infrastructure
to answer this -- can someone who does help me out here?

http://www.python.org/dev/committers
If the last modified date can be trusted there are currently 77 committers.


Re: [Python-Dev] Contents of test_bool

2007-03-21 Thread Jack Diederich
On Wed, Mar 21, 2007 at 03:37:02PM -0500, Collin Winter wrote:
 Is there any reason for test_bool to contain assertions like these?
 
 self.assertIs({}.has_key(1), False)
 self.assertIs({1:1}.has_key(1), True)
 
 A significant portion of the file is devoted to making sure various
 things return bools (isinstance, operator.*) or handle bools correctly
 (pickle, marshal). Since these don't test the functionality of the
 bool type, is there a reason not to move these tests to more
 appropriate test files (eg, test_pickle) or removing them altogether
 (if they duplicate existing tests)?
 
 I've started on this somewhat, but I thought I'd ask before I spent
 too much time on it.

Most of them could be moved to their specific type's test module.
There are a few (at least on the py3k branch) tests that check if
__bool__ functions really return bools and that the proper exceptions
are raised.  Those should stay in test_bool.py
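
One of those py3k checks, as a sketch (the exact TypeError wording is
CPython's, so treat the comment as illustrative):

    class Broken:
        def __bool__(self):
            return 1      # an int, not a bool

    try:
        bool(Broken())
    except TypeError as exc:
        print(exc)        # complains that __bool__ returned a non-bool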

-Jack


Re: [Python-Dev] [Python-3000] Warning for 2.6 and greater

2007-01-10 Thread Jack Diederich
On Wed, Jan 10, 2007 at 06:04:05PM -0800, Raymond Hettinger wrote:
 [Anthony Baxter]
  I've had a number of people say that this is something they would 
  really, really like to see - the idea is both to let people migrate 
  more easily, and provide reassurance that it won't be that bad to 
  migrate!
 
 If Py3.0 is going to come out before Py2.6, can we table the discussion
 until then?  We may find that a) migration was easier than we thought,
 b) that stand-alone migration tools are sufficient, or c) by the time
 Py2.6 comes-out, no one cares about having 2.x vs 3.x warnings.
 OTOH, if people do care, then we'll have a strong case for loading
 these warnings into Py2.6 before it gets close to being final.

I'm also a fan of not scratching something until it itches, but if
someone else already feels the itch and wants to do the work, +0.
The pro-warnings camp has said it won't add interpreter overhead unless
you ask for it (and they are willing to test that it is so).

 Also, I'm wondering if the desire for 2.6 warnings is based on the notion 
 that 
 it will be possible to get large tools to work under both Py2.x and Py3.x.
 With all the module renaming/packaging, old-style classes disappearing,
 encoded text objects, real division and whatnot; that notion may be
 a pipe-dream.

No one has seriously suggested that it would be easy, or if you prefer,
no one serious has suggested it would be easy ;)

 As far as reassurance that it won't be that bad to migrate, screens full
 of warnings may be less than reassuring.

If folks want to put in the effort (and people heavier than me have 
offered) to support light-weight optional warnings in addition to the
2to3 tool I can't complain.  It seems redundant to me but their time isn't
mine.

-Jack


Re: [Python-Dev] test_itertools fails for trunk on x86 OS X machine

2006-09-21 Thread Jack Diederich
The python binary is out of step with the test_itertools.py version.
You can generate this same error on your own box by reverting the
change to itertoolsmodule.c but leaving the new test in test_itertools.py

I don't know why this only happened on that OSX buildslave

On Thu, Sep 21, 2006 at 02:34:40PM -0700, Grig Gheorghiu wrote:
 One of the Pybots buildslaves has been failing the 'test' step, with
 the culprit being test_itertools:
 
 test_itertools
 test test_itertools failed -- Traceback (most recent call last):
   File
 /Users/builder/pybots/pybot/trunk.osaf-x86/build/Lib/test/test_itertools.py,
 line 62, in test_count
 self.assertEqual(repr(c), 'count(-9)')
 AssertionError: 'count(4294967287)' != 'count(-9)'
 
 This started to happen after
 http://svn.python.org/view?rev=51950view=rev.
 
 The complete log for the test step on that buildslave is here:
 
 http://www.python.org/dev/buildbot/community/all/x86%20OSX%20trunk/builds/19/step-test/0
 
 Grig
 
 
 -- 
 http://agiletesting.blogspot.com


Re: [Python-Dev] test_itertools fails for trunk on x86 OS X machine

2006-09-21 Thread Jack Diederich
On Thu, Sep 21, 2006 at 03:28:04PM -0700, Grig Gheorghiu wrote:
 On 9/21/06, Jack Diederich [EMAIL PROTECTED] wrote:
  The python binary is out of step with the test_itertools.py version.
  You can generate this same error on your own box by reverting the
  change to itertoolsmodule.c but leaving the new test in test_itertools.py
 
  I don't know why this only happened on that OSX buildslave
 
  Not sure what you mean by "out of step". The binary was built out of the
 very latest itertoolsmodule.c, and test_itertools.py was also updated
 from svn. So they're both in sync IMO. That tests passes successfully
 on all the other buildslaves in the Pybots farm (x86 Ubuntu, Debian,
 Gentoo, RH9, AMD-64 Ubuntu)
 

When I saw the failure, first I cursed (a lot).  Then I followed the repr
all the way down into stringobject.c, no dice.  Then I noticed that the
failure is exactly what you get if the test was updated but the old
module wasn't.

Faced with the choice of believing in a really strange platform-specific
bug in a commonly used routine that resulted in exactly the failure caused
by only one of the two files being updated, or believing that a failure
occurred somewhere in the long chain of networks, disks, file systems,
build tools, and operating systems and resulted in only one of the files
being updated, I went with the latter.

I'll continue in my belief until my dying day or until someone with OSX
confirms it is a bug, whichever comes first.

not-gonna-sweat-it-ly,

-Jack


Re: [Python-Dev] test_itertools fails for trunk on x86 OS X machine

2006-09-21 Thread Jack Diederich
On Fri, Sep 22, 2006 at 06:09:41AM +0200, Martin v. Löwis wrote:
 Jack Diederich wrote:
  Faced with the choice of believing in a really strange platform specific 
  bug in a commonly used routine that resulted in exactly the failure caused 
  by one of the two files being updated or believing a failure occurred in the
  long chain of networks, disks, file systems, build tools, and operating 
  systems that would result in only one of the files being updated -
  I went with the latter.
 
 Please reconsider how subversion works. It has the notion of atomic
 commits, so you either get the entire change, or none at all.
 
 Fortunately, the buildbot keeps logs of everything it does:
 
 http://www.python.org/dev/buildbot/trunk/g4%20osx.4%20trunk/builds/1449/step-svn/0
 
 shows
 
  U    Lib/test/test_itertools.py
  U    Modules/itertoolsmodule.c
  Updated to revision 51950.
 
 So it said it updated both files. But perhaps it didn't build them?
 Let's check:
 
 
 http://www.python.org/dev/buildbot/trunk/g4%20osx.4%20trunk/builds/1449/step-compile/0
 
 has this:
 
 building 'itertools' extension
 
 gcc -fno-strict-aliasing -Wno-long-double -no-cpp-precomp
 -mno-fused-madd -g -Wall -Wstrict-prototypes -I.
 -I/Users/buildslave/bb/trunk.psf-g4/build/./Include
 -I/Users/buildslave/bb/trunk.psf-g4/build/./Mac/Include -I./Include -I.
 -I/usr/local/include -I/Users/buildslave/bb/trunk.psf-g4/build/Include
 -I/Users/buildslave/bb/trunk.psf-g4/build -c
 /Users/buildslave/bb/trunk.psf-g4/build/Modules/itertoolsmodule.c -o
 build/temp.macosx-10.3-ppc-2.6/Users/buildslave/bb/trunk.psf-g4/build/Modules/itertoolsmodule.o
 
 gcc -bundle -undefined dynamic_lookup
 build/temp.macosx-10.3-ppc-2.6/Users/buildslave/bb/trunk.psf-g4/build/Modules/itertoolsmodule.o
 -L/usr/local/lib -o build/lib.macosx-10.3-ppc-2.6/itertools.so
 
 So itertools.so is regenerated, as it should; qed.
 

I should leave the tongue-in-cheek bombast to Tim and Frederik, especially
when dealing with what might be an OS & machine specific bug.  The next
checkin and re-test will or won't highlight a failure, and certainly someone
with a G4 will try it out before 2.5.1 goes out, so we'll know if it was a
fluke soonish.  The original error was mine: I typed Size_t instead of
Ssize_t, and while my one-char patch might also be wrong (I hope not, I'm
red-faced enough as is) we should find out soon enough.
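
That one-character difference also explains the exact number in the
failure: read back as an unsigned 32-bit value, -9 comes out as 4294967287,
which matches the buildbot output above.

    assert (-9) & 0xFFFFFFFF == 2**32 - 9 == 4294967287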

-Jack


Re: [Python-Dev] os.spawnlp() missing on Windows in 2.4?

2006-08-19 Thread Jack Diederich
On Sat, Aug 19, 2006 at 04:34:16AM +0100, Steve Holden wrote:
 Scott Dial wrote:
  Guido van Rossum wrote:
  
 I just got a report from a Windows user that os.spawnlp() is missing
 from Python 2.4, despite being mentioned in the docs. Can someone
 confirm this? My Windows box is resting. :-)
 
  
  
  Availability: Unix, Windows. spawnlp(), spawnlpe(), spawnvp() and 
  spawnvpe() are not available on Windows. New in version 1.6.
  
  One could argue that it is presented poorly, but it reads completely
  correctly. Alternatively one could say "The 'p' variants are unavailable
  on Windows." but that would be assuming someone understands there was a
  scheme to the names :-)
  
 How about:
 
 Availability: Unix; Windows PARTIAL (spawnlp(), spawnlpe(), spawnvp() 
 and spawnvpe() are not implemented). New in version 1.6
 
Or

*Availability: Unix: All, Windows: spawnl(), spawnle(), spawnv(), 
spawnve() only.  New in version 1.6

Might as well positively list the half that is there instead of the half
that isn't.
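
A quick way to see which half a given build actually provides (a throwaway
check, nothing official; the name list is just the eight spawn functions from
the docs):

import os

# Probe which spawn* variants this interpreter provides; on Windows the
# 'p' variants (spawnlp, spawnlpe, spawnvp, spawnvpe) are the missing half.
for name in ('spawnl', 'spawnle', 'spawnv', 'spawnve',
             'spawnlp', 'spawnlpe', 'spawnvp', 'spawnvpe'):
    print '%-9s %s' % (name, hasattr(os, name) and 'present' or 'missing')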

-Jack


Re: [Python-Dev] ctypes and win64

2006-08-19 Thread Jack Diederich
On Sat, Aug 19, 2006 at 05:19:40AM -0400, Tim Peters wrote:
 [Steve Holden]
  Reasonable enough, but I suspect that Thomas' suggestion might save us
  from raising false hopes. I'd suggest that the final release
  announcement point out that this is the first release containing
  specific support for 64-bit architectures (if indeed it is)
 
 [Martin v. L?wis]
  It isn't. Python ran on 64-bit Alpha for nearly a decade now (I guess),
  and was released for Win64 throughout Python 2.4. ActiveState has
  been releasing an AMD64 package for some time now.
 
 Python has also been used on 64-bit Crays, and I actually did the
 first 64-bit port in 1993 (to a KSR Unix machine -- took less than a
 day to get it running fine!  Guido's an excellent C coder.).  Win64 is
 the first (and probably only forever more) where sizeof(long) <
 sizeof(void*), and that caused some Win64-unique pain, and may cause
 more.
 
 BTW, at least two of the people at the NFS sprint earlier this year
 were compiling and running Python on Win64 laptops.  It's solid
 enough, and surely nobody expects that Win64 users expect 100%
 perfection of anything they run <0.5 wink>.

It has always just worked for me on Opterons + Debian.
Python 2.4 (#1, May 31 2005, 10:19:45) 
[GCC 3.3.5 (Debian 1:3.3.5-12)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.maxint
9223372036854775807

Thanks to the Alphas for making AMD64 on *nix a no-brainer,

-Jack


Re: [Python-Dev] [Python-checkins] TRUNK IS UNFROZEN, available for 2.6 work if you are so inclined

2006-08-17 Thread Jack Diederich
On Thu, Aug 17, 2006 at 09:07:53PM +0200, Georg Brandl wrote:
 Jack Diederich wrote:
 
  Looks good to me.  While you are on that page do you want to change
  
  l = PyList_New(3);
  x = PyInt_FromLong(1L);
  PySequence_SetItem(l, 0, x); Py_DECREF(x);
  x = PyInt_FromLong(2L);
  PySequence_SetItem(l, 1, x); Py_DECREF(x);
  x = PyString_FromString("three");
  PySequence_SetItem(l, 2, x); Py_DECREF(x);
  
  to
  
  l = PyList_New(3);
  x = PyInt_FromLong(1L);
  PyList_SetItem(l, 0, x);
  x = PyInt_FromLong(2L);
  PyList_SetItem(l, 1, x);
  x = PyString_FromString("three");
  PyList_SetItem(l, 2, x);
  
  The example code causes segfaults and probably always has (at least to 2.2)
 
 Interesting! From a naive POV, the docs' example is quite right.
 
 The segfault occurs because list_ass_item Py_DECREFs the old item (which
 segfaults because the old items are NULL in a newly created list).
 PyList_SetItem does a Py_XDECREF.
 
 The docs to PyList_New, however, do not explicitly say that the new list
 must only be filled using PyList_SetItem.
 
 So please, someone, decide what's broken here!

The docs, this is from a thread yesterday and today on c.l.py

http://groups.google.com/group/comp.lang.python/browse_frm/thread/158c8797ee2dccab/

-Jack


Re: [Python-Dev] Proposal to eliminate PySet_Fini

2006-07-02 Thread Jack Diederich
On Tue, Jun 27, 2006 at 02:09:19PM -0400, Alexander Belopolsky wrote:
 Setobject code allocates several internal objects on the heap that are
 cleaned up by the PySet_Fini function.  This is a fine design choice,
 but it often makes debugging applications with embedded python more
 difficult.
 
 I propose to eliminate the need for PySet_Fini as follows:
 
 1. Make dummy and emptyfrozenset static objects similar to Py_None
 2. Eliminate the free sets reuse scheme.
 
 The second proposal is probably more controversial, but is there any
 real benefit from that scheme when pymalloc is enabled?

These are optimizations and not likely to go away. tuples especially get
reused frequently.  In the case of tuples you can #define MAXSAVEDTUPLES
to zero in a custom python build to disable free-listing.  You can submit
a patch that #ifdef'd the other free list in a similar way (sets don't
currently have the ifdef check, for instance) and hope it gets accepted.
I don't see why it wouldn't.

PyObject_MALLOC does a good job of reusing small allocations but it
can't quite manage the same speed as a free list, especially for things that
have some extra setup involved (tuples have a free list for each length).

-Jack


Re: [Python-Dev] Let's stop eating exceptions in dict lookup

2006-06-01 Thread Jack Diederich
On Wed, May 31, 2006 at 09:10:47PM -0400, Tim Peters wrote:
 [Martin Blais]
  I'm still looking for a benchmark that is not amazingly uninformative
  and crappy.  I've been looking around all day, I even looked under the
  bed, I cannot find it.  I've also been looking around all day as well,
  even looked for it shooting out of the Iceland geysirs, of all
  places--it's always all day out here it seems, day and day-- and I
  still can't find it.  (In the process however, I found Thule beer and
  strangely dressed vikings, which makes it all worthwhile.)
 
 For those who don't know, Martin stayed on in Iceland after the NFS
 sprint.  He shows clear signs above of developing photon madness.
 
 http://www.timeanddate.com/worldclock/astronomy.html?n=211
 
 Where that says sunset, don't read dark -- it just means the top
 of the sun dips a bit below the horizon for a few hours.  It never
 gets dark this time of year.
 
 If you haven't experienced this, no explanation can convey the
 other-worldly sense of it.

The CCP Games CEO said they have trouble retaining talent from more
moderate latitudes for this reason.  18 hours of daylight makes them a
bit goofy and when the Winter Solstice rolls around they are apt to go
quite mad.

-Jack


Re: [Python-Dev] Class decorators

2006-03-29 Thread Jack Diederich
On Wed, Mar 29, 2006 at 01:11:06AM -0500, Fred L. Drake, Jr. wrote:
 On Wednesday 29 March 2006 00:48, Fred L. Drake, Jr. wrote:
   I think the existing usage for classes is perfectly readable.  The
   @-syntax works well for functions as well.
 
 On re-reading what I wrote, I don't think I actually clarified the point I 
 was 
 trying to make originally.
 
 My point wasn't that I desparately need @-syntax for class decorators (I 
 don't), or see it as inherantly superior in some way.  It's much simpler than 
 that:  I just want to be able to use the same syntax for a group of use cases 
 regardless of whether the target is a function or a class.
 
 This fits into the nice-to-have category for me, since the use case can be 
 the 
 same regardless of whether I'm decorating a class or a function.  (I will 
 note that when this use case applies to a function, it's usually a 
 module-level function I'm decorating rather than a method.)

Agreed, let's not have the decorator syntax argument all over again.
Once someone knows how a function decorator works they should be able to guess
how a class decorator works.  In my old patch[1] the grammar production for 
decorators was:

  decorated_thing: decorators (funcdef|classdef)

Which makes sense, once you know how to decorate one thing you know how to
decorate all things.

-jackdied

[1] http://python.org/sf/1007991


Re: [Python-Dev] Class decorators

2006-03-29 Thread Jack Diederich
On Wed, Mar 29, 2006 at 07:23:03PM -0500, Phillip J. Eby wrote:
 At 11:07 AM 3/29/2006 -0800, Guido van Rossum wrote:
 On 3/28/06, Phillip J. Eby [EMAIL PROTECTED] wrote:
   If we're using Zope 3 as an example, I personally find that:
  
    class Foo:
        """Docstring here, blah blah blah"""

        implements(IFoo)
  
   is easier to read than:
  
    @implements(IFoo)
    class Foo:
        """Docstring here, blah blah blah"""

 
 But the former also smells more of magic.
 
 My comment above was only about readable *placement* of the decorators, not 
 the actual syntax.  Many approaches to the actual syntax in the body are 
 possible.
 
 For example, what did you think of Fred Drakes's @class proposal?  To 
 specify it formally, one could say that this:
 
  @class EXPR
 
 in a class scope would expand to the equivalent of:
 
  locals().setdefault('__decorators__',[]).append(EXPR)
 
 and is a syntax error if placed anywhere else.  That, combined with support 
 for processing __decorators__ at class creation time, would fulfill the 
 desired semantics without any implicit magic.
 

A function decorator takes a function as an argument and returns something
(probably a function and maybe even the very same function).
This is exactly what class decorators should do or we should call them
something else and give them a distinct syntax.

A function decorator is there to replace code like:

def myfunc(a, b, c):
  # half a screen of code
myfunc = mangle(myfunc)

Likewise class decorators would save me from typing

class MyClass:
  # many functions taking half a screen of code each
register(MyClass, db_id=20)

I used to do this with metaclasses but stopped because it requires making
'db_id' a member of the class which is magically noticed by a metaclass
two files away.  Using metaclasses also required gross hacks like checking
for a 'DO_NOT_REGISTER' member for subclasses that wanted to inherit from
a class that had a Register metaclass but didn't want to be registered.
Yuck.

If you want to do lots of Zopeish stuff mostly inside the class write
a decorator that looks for it in the class body.

@zope.find_magic_in_attr('mymagic')
class MyClass:
  mymagic = [] # some long hairy thing

Talking about something other than a decorator or proposing all new
syntax is just going to get this pronounced out of existence.

If-I-wanted-zope-I'd-use-zope-ly,

-jackdied


Re: [Python-Dev] Class decorators

2006-03-29 Thread Jack Diederich
[promted by Phillip Eby's post, but not in response so content snipped]

I think we both want class decorators as a more fine grained substitute
for __metaclass__ (fine grained as in declared per-class-instance instead
of this-class-and-all-its-children).  I can think of three ways class
decorators are used:

1) register pattern, use a class attribute or two to stick the class in
   a lookup table and optionally delete the meta-attribs from the class
2) __init__ pattern, examine the class and maybe munge it based on attribs
3) __new__ pattern, consider the class a namespace and use the attribs
   to generate an essentially new class based on the attribs

(the main difference between 2 and 3 is that the __new__ case is a
total tear down and rebuild where 2 tends towards tweaks on the parent)

Classes have a unique property in that they are the easiest way to make
little namespaces in python.  Modules require lots of file clutter,
functions as namespaces require digging into the internals, dicts as
namespaces require more finger typing and don't support attribute access.

It turns out I have two use cases for class decorators and didn't even
know it.  One is the 'register' pattern that started this thread.  In
that case I just want to move the metadata outside the class 
(the @register(db_id=20) case) and the rest of the class definition is
honest to goodness overriding of a method or two from the parent class
to change the behavior of its instances.  The other pattern I hadn't thought
of would be a '@namespace' decorator.  A @namespace decorator would strip 
the attribs of all class-like qualities - it would strip the class of all 
descriptor magic (using descriptors, of course!).  I just want a convenient 
bag to stick related items in.

The first time I ever used metaclasses was to make PLY[1] (a SLR parser)
use per-class namespaces for lexers and grammars instead of per-module.
The metaclass chewed through all the class attributes and returned
a class that shared no attributes with the original - each class was 
basically a config file.  The decorator version would be spelled
'@make_a_lexer_class' or '@make_a_grammar_class'.  

PEAK and Zope seem like they do a mix of __init__ and __new__, my current
use cases are just 'notice' (I'm not a user, so feel free to correct).
I like the func-like decorator syntax because I have just a bit of
metadata that I'd like to move outside the class.  PEAK/Zope sounds
like they use classes as a mix of class and namespace.  Their class
decorator would return a hybrid class that has applied the namespace
parts (post processing) to the class parts.  A partly new class.

I'd like that spelled:
@tweak_this_class
class MyClass:
  namespacey_stuff = ...
  def class_thing(self, foo): pass

That leaves a class decorator behaving like a function decorator.  It
is a declaration that the final product (maybe a completely different
thing as in the PLY case) is the result of the tweak_this_class function.

Writing the above certainly helped me clear up my thoughts on the issue,
I hope it hasn't gone and muddied it for anyone else :)

-jackdied

[1] http://savannah.nongnu.org/projects/ply/


Re: [Python-Dev] Class decorators

2006-03-28 Thread Jack Diederich
On Tue, Mar 28, 2006 at 10:16:01AM -0800, Neal Norwitz wrote:
 On 3/28/06, Guido van Rossum [EMAIL PROTECTED] wrote:
 
  I propose that someone start writing a Py3k PEP for class decorators.
  I don't  think it's fair to the 2.5 release team to want to push this
  into 2.5 though; how about 2.6?
 
 Wasn't there already a (pretty small) patch?  I guess it would be
 different now with the AST though.
 
I submitted one a couple years ago.  The AST makes it obsolete though.
I'd be happy to make a new AST friendly one (time to learn the AST..)

-Jack



Re: [Python-Dev] collections.idset and collections.iddict?

2006-03-06 Thread Jack Diederich
On Mon, Mar 06, 2006, Guido van Rossum wrote:
 On 3/6/06, Raymond Hettinger [EMAIL PROTECTED] wrote:
  [Neil Schemenauer]
  I occasionally need dictionaries or sets that use object identity
   rather than __hash__ to store items.  Would it be appropriate to add
   these to the collections module?
 
  Why not decorate the objects with a class adding a method:
      def __hash__(self):
          return id(self)
 
  That would seem to be more Pythonic than creating custom variants of other
  containers.
 
 I hate to second-guess the OP, but you'd have to override __eq__ too,
 and probably __ne__ and __cmp__ just to be sure. And probably that
 wouldn't do -- since the default __hash__ and __eq__ have the desired
 behavior, the OP is apparently talking about objects that override
 these operations to do something meaningful; overriding them back
 presumably breaks other functionality.

I assumed Neil wanted a container that was id() sensitive, I've done this
occasionally myself to see if an object is in a container and not just
an object equivalent to the one I am checking for.

>>> a = set([1,2,3,4])
>>> b = set([1,2,3,4])
>>> a == b
True
>>> a is b
False
>>> container = [a]
>>> b in container
True
>>> container = [id(a)]
>>> id(b) in container
False
>>>

-Jack



Re: [Python-Dev] Webstats for www.python.org et al.

2006-03-01 Thread Jack Diederich
On Wed, Mar 01, 2006 Brett Cannon wrote:
 On 2/28/06, Fredrik Lundh [EMAIL PROTECTED] wrote:
  Thomas Wouters wrote:
 
   I added webstats for all subsites of python.org:
  
   http://www.python.org/webstats/
 
  what's that Java/1.4.2_03 user agent doing?  (it's responsible for
  10% of all hits in january/february, and 20% of the hits today...)
 
 Most likely a crawler.
 
Youch, if I'm reading it right it consumed fully half of the bandwidth
for today on python.org.  And what 1.6 million pages did it spider on
the site last month?  Something smells broken.

-Jack


Re: [Python-Dev] Proposal: defaultdict

2006-02-17 Thread Jack Diederich
On Thu, Feb 16, 2006 at 01:11:49PM -0800, Guido van Rossum wrote:
[snip]
 Google has an internal data type called a DefaultDict which gets
 passed a default value upon construction. Its __getitem__ method,
 instead of raising KeyError, inserts a shallow copy (!) of the given
 default value into the dict when the value is not found. So the above
 code, after
 
   d = DefaultDict([])
 
 can be written as simply
 
   d[key].append(value)
 
 Note that of all the possible semantics for __getitem__ that could
 have produced similar results (e.g. not inserting the default in the
 underlying dict, or not copying the default value), the chosen
 semantics are the only ones that makes this example work.

Having __getitem__ insert the returned default value allows it to
work with a larger variety of classes.  My own ForgivingDict does not
do this and works fine for ints and lists but not much else.

fd = ForgivingDict(list)
fd[key] += [val] # extends the list and does a __setitem__

The += operator isn't useful for dicts.
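
For reference, a minimal sketch of the ForgivingDict behavior described above
(a guess at the shape, not the real implementation):

class ForgivingDict(dict):
    # Missing keys yield a fresh default, but nothing is stored until an
    # explicit __setitem__; that is why fd[key] += [val] works for lists
    # and ints while in-place mutation of a default dict value is lost.
    def __init__(self, factory):
        dict.__init__(self)
        self.factory = factory
    def __getitem__(self, key):
        try:
            return dict.__getitem__(self, key)
        except KeyError:
            return self.factory()

fd = ForgivingDict(list)
fd['key'] += ['val']          # list() + ['val'] is computed, then stored
assert fd['key'] == ['val']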

How can you make a defaultdict with a defaultdict as the default?
My head asploded when I tried it with the constructor arg.
It does seem possible with the 'd.default = func' syntax

# empty defaultdict constructor
d = defaultdict()
d.default = d
tree = defaultdict()
tree.default = d.copy

-jackdied


Re: [Python-Dev] Counter proposal: multidict (was: Proposal: defaultdict)

2006-02-17 Thread Jack Diederich
On Fri, Feb 17, 2006 at 03:03:06PM -0500, Fred L. Drake, Jr. wrote:
 On Friday 17 February 2006 14:51, Ian Bicking wrote:
   and in the process breaking an important 
   quality of good Python code, that attribute and getitem access not have
   noticeable side effects.
 
 I'm not sure that's quite as well-defined or agreed upon as you do.
 
Without the __getitem__ side effect, default objects that don't support any
operators would have problems.

  d[key] += val

works fine when the default is a list or int but fails for dicts and presumably
many user defined objects.  By assigning the default value in __getitem__ the
returned value can be manipulated via its methods.

-Jack



Re: [Python-Dev] bytes type discussion

2006-02-16 Thread Jack Diederich
On Thu, Feb 16, 2006 at 06:13:53PM +0100, Fredrik Lundh wrote:
 Barry Warsaw wrote:
 
  We know at least there will never be a 2.10, so I think we still have
  time.
 
 because there's no way to count to 10 if you only have one digit?
 
 we used to think that back when the gas price was just below 10 SEK/L,
 but they found a way...
 

Of course they found a way.  The alternative was cutting taxes.

whish-I-was-winking,

-Jack


Re: [Python-Dev] _length_cue()

2006-02-09 Thread Jack Diederich
[Raymond Hettinger]
 [Armin Rigo]
  BTW the reason I'm looking at this is that I'm considering adding
  another undocumented internal-use-only method, maybe __getitem_cue__(),
  that would try to guess what the nth item to be returned will be.  This
  would allow the repr of some iterators to display more helpful
  information when playing around with them at the prompt, e.g.:
 
  >>> enumerate([3.1, 3.14, 3.141, 3.1415, 3.14159, 3.141596])
  <enumerate (0, 3.1), (1, 3.14), (2, 3.141),... length 6>
 
 At one point, I explored and then abandoned this idea.  For objects like 
 itertools.count(n), it worked fine -- the state was readily knowable and the 
 eval(repr(obj)) round-trip was possible.  However, for tools like 
 enumerate(), it didn't make sense to have a preview that only applied in a 
 tiny handful of (mostly academic) cases and was not evaluable in any case.
 

That is my experience too.  Even for knowable sequences people consume
it in series and not just one element.  My permutation module supports 
pulling out just the Nth canonical permutation.  Lots of people have
used the module and no one uses that feature.

>>> import probstat
>>> p = probstat.Permutation(range(4))
>>> p[0]
[0, 1, 2, 3]
>>> len(p)
24
>>> p[23]
[3, 2, 1, 0]
>>>
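
For the curious, the Nth-canonical-permutation trick is short in pure Python
(a sketch only; the ordering convention here is an assumption, though it
agrees with the endpoints in the session above, and it is not probstat's
C code):

def nth_permutation(seq, n):
    # Factorial-number-system walk: peel off one element per "digit".
    items = list(seq)
    k = len(items)
    fact = 1
    for i in range(2, k):
        fact *= i                    # fact == (k-1)!
    result = []
    for i in range(k - 1, 0, -1):
        index, n = divmod(n, fact)
        result.append(items.pop(index))
        fact //= i
    result.append(items.pop())
    return result

assert nth_permutation(range(4), 0) == [0, 1, 2, 3]
assert nth_permutation(range(4), 23) == [3, 2, 1, 0]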

-jackdied


Re: [Python-Dev] str with base

2006-01-17 Thread Jack Diederich
On Tue, Jan 17, 2006 at 04:02:43PM -0800, Guido van Rossum wrote:
 On 1/17/06, Adam Olsen [EMAIL PROTECTED] wrote:
   In-favour-of-%2b-ly y'rs,
 
  My only opposition to this is that the byte type may want to use it.
  I'd rather wait until byte is fully defined, implemented, and released
  in a python version before that option is taken away.
 
 Has this been proposed? What would %b print?
 
It was proposed in this or another thread about the same in the last few
days (gmane search doesn't like the % in '%b').

The suggestion is to add 'b' as a sprintf-like format string
  %[base][.pad]b

Where the optional base is the base to print in and pad is the optional
minimum length of chars to print (as I recall).  Default is base 2.

Me?  I like it.
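
Roughly, the proposed conversion could be emulated today with a helper along
these lines (the name format_b and the exact semantics are guesses based on
the description above, not a spec):

def format_b(n, base=2, pad=0):
    # Emulate the proposed %[base][.pad]b: render abs(n) in 'base',
    # zero-pad to at least 'pad' digits, then restore the sign.
    digits = '0123456789abcdefghijklmnopqrstuvwxyz'
    if not 2 <= base <= len(digits):
        raise ValueError('base must be in 2..36')
    m, out = abs(n), ''
    while True:
        m, r = divmod(m, base)
        out = digits[r] + out
        if m == 0:
            break
    out = out.rjust(pad, '0')
    if n < 0:
        out = '-' + out
    return out

assert format_b(12) == '1100'         # default base 2
assert format_b(12, 16, 4) == '000c'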

-Jack


Re: [Python-Dev] str with base

2006-01-17 Thread Jack Diederich
On Tue, Jan 17, 2006 at 06:11:36PM -0800, Bob Ippolito wrote:
 
 On Jan 17, 2006, at 5:01 PM, Jack Diederich wrote:
 
 On Tue, Jan 17, 2006 at 04:02:43PM -0800, Guido van Rossum wrote:
 On 1/17/06, Adam Olsen [EMAIL PROTECTED] wrote:
 In-favour-of-%2b-ly y'rs,
 
 My only opposition to this is that the byte type may want to use it.
 I'd rather wait until byte is fully defined, implemented, and  
 released
 in a python version before that option is taken away.
 
 Has this been proposed? What would %b print?
 
 It was proposed in this or another thread about the same in the  
 last few
 days (gmane search doesn't like the % in '%b').
 
 The suggestion is to add 'b' as a sprintf-like format string
   %[base][.pad]b
 
 Where the optional base is the base to print in and pad is the  
 optional
 minimum length of chars to print (as I recall).  Default is base 2.
 
 Me?  I like it.
 
 Personally I would prefer the b format code to behave similarly to  
 o, d, and x, except for binary instead of octal, decimal, and  
 hexadecimal.  Something that needs to account for three factors (zero  
 pad, space pad, base) should probably be a function (maybe a  
 builtin).  Hell, maybe it could take a fourth argument to specify how  
 a negative number should be printed (e.g. a number of bits to use for  
 the 2's complement).
 
 However... if %b were to represent arbitrary bases, I think that's  
 backwards.  It should be %[pad][.base]b, which would do this:
 
'%08b %08o %08d %08x' % 12
   '1100 0014 0012 000C'
 

Were I BDFAD (not to be confused with BD-FOAD) I'd add %b, %B and, binary()
to match %x, %X, and hex().  The arbitrary base case isn't even academic
or we would see homework questions about it on c.l.py.  No one asks about
hex or octal because they are there.  No one asks about base seven
formatting because everyone knows numerologists prefer Perl.
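
In the meantime a binary() in the spirit of hex() is a one-screen function
(a sketch; the '0b' prefix is chosen by analogy with '0x' and is not anything
that exists yet):

def binary(n):
    # Mirror hex(): prefixed, lowercase, sign out front.
    if n == 0:
        return '0b0'
    bits, m = [], abs(n)
    while m:
        bits.append('01'[m & 1])
        m >>= 1
    bits.reverse()
    prefix = '0b'
    if n < 0:
        prefix = '-0b'
    return prefix + ''.join(bits)

assert binary(12) == '0b1100'
assert binary(-5) == '-0b101'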

-Jack

nb, that's For A Day.



Re: [Python-Dev] __doc__ behavior in class definitions

2005-10-07 Thread Jack Diederich
On Fri, Oct 07, 2005 at 12:15:04PM -0700, Martin Maly wrote:
 Hello Python-Dev,
  
 My name is Martin Maly and I am a developer at Microsoft, working on the
 IronPython project with Jim Hugunin. I am spending lot of time making
 IronPython compatible with Python to the extent possible.
 
 I came across a case which I am not sure if by design or a bug in Python
 (Python 2.4.1 (#65, Mar 30 2005, 09:13:57)). Consider following Python
 module:
 
 # module begin
 "module doc"
 
 class c:
     print __doc__
     __doc__ = "class doc"   # (1)
     print __doc__

[snip]

 Based on the binding rules described in the Python documentation, I
 would expect the code to throw because binding created on the line (1)
 is local to the class block and all the other __doc__ uses should
 reference that binding. Apparently, it is not the case.
 
 Is this bug in Python or are __doc__ strings in classes subject to some
 additional rules?

Classes behave just like you would expect them to, for proper variations
of what to expect *wink*.

The class body is evaluated first with the same local/global name lookups
as would happen inside another scope (e.g. a function).  The results
of that evaluation are then passed to the class constructor as a dict.
The __new__ method of metaclasses and the less used 'new' module highlight
the final step that turns a bucket of stuff in a namespace into a class.

>>> import new
>>> A = new.classobj('w00t', (object,), {'__doc__':"no help at all", 'myself':lambda x:x})
>>> a = A()
>>> a.myself()
<__main__.w00t object at 0xb7bc32cc>
>>> a
<__main__.w00t object at 0xb7bc32cc>
>>> help(a)
Help on w00t in module __main__ object:

class w00t(__builtin__.object)
 |  no help at all
 |  
 |  Methods defined here:
 |  
 |  myself lambda x
 |


Hope that helps,

-jackdied


Re: [Python-Dev] For/while/if statements/comprehension/generator expressions unification

2005-09-20 Thread Jack Diederich
On Tue, Sep 20, 2005 at 10:20:44AM -0700, Josiah Carlson wrote:
 Try using the code I offered.  It allows the cross of an aribitrary
 number of restartable iterables, in the same order as an equivalent list
 comprehension or generator expression.
 
  >>> list(cross([1,2], [3,4], [5,6]))
 [(1, 3, 5), (1, 3, 6), (1, 4, 5), (1, 4, 6), (2, 3, 5), (2, 3, 6), (2, 4,
 5), (2, 4, 6)]
 
 There were a few hoops I had to jump through in cross in order to be
 able to hande single iterables as well as tuples embedded in the passed
 iterables, but they work as they should.
 
  >>> list(cross([(1,1),(2,2)], [(3,3),(4,4)], [(5,5),(6,6)]))
 [((1, 1), (3, 3), (5, 5)), ((1, 1), (3, 3), (6, 6)), ((1, 1), (4, 4), (5,
 5)), ((1, 1), (4, 4), (6, 6)), ((2, 2), (3, 3), (5, 5)), ((2, 2), (3, 3),
 (6, 6)), ((2, 2), (4, 4), (5, 5)), ((2, 2), (4, 4), (6, 6))]

This comes up on c.l.p every month or two.  Folks offer their own solutions
optimized for speed, memory, or golfed for char length.  I'll throw in my
same two bits as always,

sprat:~# python
Python 2.4.1 (#2, Mar 30 2005, 21:51:10) 
[GCC 3.3.5 (Debian 1:3.3.5-8ubuntu2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import probstat
>>> c = probstat.Cartesian([[1,2], [3,4], [5,6]])
>>> list(c)
[[1, 3, 5], [2, 3, 5], [1, 4, 5], [2, 4, 5], [1, 3, 6], [2, 3, 6], [1, 4, 6], 
[2, 4, 6]]
>>> c = probstat.Cartesian([[(1,1),(2,2)], [(3,3),(4,4)], [(5,5),(6,6)]])
>>> list(c)
[[(1, 1), (3, 3), (5, 5)], [(2, 2), (3, 3), (5, 5)], [(1, 1), (4, 4), (5, 5)], 
[(2, 2), (4, 4), (5, 5)], [(1, 1), (3, 3), (6, 6)], [(2, 2), (3, 3), (6, 6)], 
[(1, 1), (4, 4), (6, 6)], [(2, 2), (4, 4), (6, 6)]]

The signature is slightly different (list of lists) but otherwise does
what you want.  Unchanged since 2002!

http://probstat.sourceforge.net/
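
The same thing as a throwaway generator, for anyone without a C extension
handy (a sketch; the output order follows nested for-loops, not probstat's
order shown above):

def cartesian(*seqs):
    # Recursive cross product; the leftmost sequence varies slowest,
    # matching nested for-loops in a list comprehension.
    if not seqs:
        yield ()
        return
    for head in seqs[0]:
        for tail in cartesian(*seqs[1:]):
            yield (head,) + tail

assert list(cartesian([1, 2], [3, 4], [5, 6]))[0] == (1, 3, 5)
assert len(list(cartesian([1, 2], [3, 4], [5, 6]))) == 8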

-jackdied


Re: [Python-Dev] Adding a conditional expression in Py3.0

2005-09-20 Thread Jack Diederich
On Tue, Sep 20, 2005 at 09:04:53AM -0700, Guido van Rossum wrote:
 On 9/20/05, Michael Hudson [EMAIL PROTECTED] wrote:
  Guido van Rossum [EMAIL PROTECTED] writes:
  
   On 9/19/05, Raymond Hettinger [EMAIL PROTECTED] wrote:
   I propose that in Py3.0, the and and or operators be simplified to
   always return a Boolean value instead of returning the last evaluated
   argument.
  
   While you're at it, maybe we should switch to  and || as well?
   That's another thing I always mistype when switching between
   languages...
  
  You're joking, surely?
 
 I wasn't, but I wasn't pushing for it either. Consider it withdrawn
 given the response.
 
   Also, this proposal needs to be considered together with the addition
   of a proper conditional operator, like x?y:z.
  
  I think this discussion has been had before, you know.
 
 Unfortunately, if we were to accept Raymond's proposal, we'd have to
 revisit the discussion, since his proposal removes several ways we
 currently avoid the need.
 
 In fact, I think Raymond's example is more properly considered an
 argument for adding a conditional expression than for removing the
 current behavior of the and/or shortcut operators; had we had a
 conditional expression, he wouldn't have tried to use the x and y or
 z syntax that bit him.
 
 Given this realization, I'm now -1 on Raymond's idea, and +1 on adding
 a conditional expression. I believe (y if x else z) was my favorite
 last time, wasn't it? I've changed the subject accordingly.

I'm +1 for allowing authors to write
  return bool(thing or default)
when they mean a function to return a bool.  I had the privilege of
working in a large base of perl code (a learning experience) and while
some engineers abused functions that were documented as only returning
true/false by depending on the side effects of 'and' and 'or' this
was easily fixed.  The functions were changed to literally return
a plain true or false value and those engineers were summarily fired.

I'm a dependable Hettinger fanboy but on this one I have to agree
with the we're-all-adults club.  Let the authors type an extra few
chars if they want to make the code match the intent.
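
For anyone who hasn't been bitten yet, the and/or idiom goes wrong exactly
when the middle value is false (an illustration only, not Raymond's actual
code):

# 'x and y or z' is meant to read as 'y if x else z', but when y is
# false the expression silently falls through to z.
x, y, z = True, '', 'fallback'
result = x and y or z
assert result == 'fallback'    # wanted '', got the "else" branch instead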

-jackdied


Re: [Python-Dev] itertools.chain should take an iterable ?

2005-09-01 Thread Jack Diederich
On Thu, Sep 01, 2005 at 07:58:40PM +0200, Paolino wrote:
 Working on a tree library I've found myself writing 
 itertools.chain(*[child.method() for child in self]).
 Well this happened after I tried instinctively 
 itertools.chain(child.method() for child in self).
 
 Is there a reason for this signature ?

This is more suited to comp.lang.python

Consider the below examples (and remember that strings are iterable)

>>> import itertools as it
>>> list(it.chain('ABC', 'XYZ'))
['A', 'B', 'C', 'X', 'Y', 'Z']
>>> list(it.chain(['ABC', 'XYZ']))
['ABC', 'XYZ']
>>> list(it.chain(['ABC'], ['XYZ']))
['ABC', 'XYZ']
>>>
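
What the original poster reached for is easy to spell as a small generator in
the meantime (a sketch; note the same string gotcha applies to it too):

def chain_iterables(iterables):
    # Like itertools.chain(*iterables) but lazy in the outer iterable,
    # so a generator expression can be passed in directly.
    for it in iterables:
        for item in it:
            yield item

# The string gotcha from the session above applies here as well:
assert list(chain_iterables(['ABC', 'XYZ'])) == ['A', 'B', 'C', 'X', 'Y', 'Z']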

Hope that helps,

-jackdied


Re: [Python-Dev] Replacement for print in Python 3.0

2005-09-01 Thread Jack Diederich
On Thu, Sep 01, 2005 at 02:46:13PM -0600, Charles Cazabon wrote:
 Bill Janssen [EMAIL PROTECTED] wrote:
  I don't use print myself much, but for the occasional 3-line script.
  But I think the user-friendliness of it is a good point, and makes up
  for the weirdness of it all.  There's something nice about being able
  to write
  
  print "the answer is", 3*4+10
  
  which is one of the reasons ABC and BASIC have it that way.

I don't use print much.  For online applications I call a socket write
or for web apps store up all the HTML in a buffer and only write it out
at the end (to allow code anywhere to raise a Redirect exception).
I don't use print for quick and dirty debugging, but this instead:

import sys

def dump(*args):
  sys.stderr.write('%s\n' % (repr(args)))

 Providing you can live with adding a pair of parentheses to that, you can
 have:
 
def print(*args):
   sys.stdout.write(' '.join(args) + '\n')
 
 I think the language would be cleaner if it lacked this weird exception for
 `print`.

Me too, for real usage.  Tutorials would get messier but how quickly do
people move on from those anyway?

-jackdied


Re: [Python-Dev] Replacement for print in Python 3.0

2005-09-01 Thread Jack Diederich
On Thu, Sep 01, 2005 at 11:12:57PM +0200, Fredrik Lundh wrote:
 Charles Cazabon wrote:
 
  in fact, it does nothing for the program but merely has the interesting
  side-effect of writing to stdout.
 
 yeah, real programmers don't generate output.
 
I'd say:
  yeah, real programmers don't generate output _to stdout_

sockets, GUI widgets, buffers? sure.  stdout?  Almost never.
Most of these don't have write() methods so I've never had a reason to
use the print  syntax.

-jackdied


Re: [Python-Dev] PEP 347: Migration to Subversion

2005-08-16 Thread Jack Diederich
On Tue, Aug 16, 2005 at 10:08:26PM +1000, Anthony Baxter wrote:
 On Tuesday 16 August 2005 21:42, Michael Hudson wrote:
  I want svn, I think.  I'm open to more sophisticated approaches but am
  not sure that any of them are really mature enough yet.  Probably will
  be soon, but not soon enough to void the effort of moving to svn
  (IMHO).
 
  I'm not really a release manager these days, but if I was, I'd wand
  svn for that reason too.
 
 I _am_ a release manager these days, and I'm in favour of svn. I really
 want to be off CVS, and I would love to be able to go with something
 more sophisticated than svn. Unfortunately, I really don't think any of
 the alternatives are appropriate.

As a non-committer I can say _anything_ is preferable to the current
situation and svn is good enough.  bzr might make it even easier but svn
is familiar and it will work right now.  I haven't submitted a patch in
ages partly because using anonymous SF cvs plain doesn't work.

aside, at work we switched from cvs to svn and the transition was
easy for developers; svn lives up to its billing as a "fixed cvs".

-jack



Re: [Python-Dev] Major revision of PEP 348 committed

2005-08-09 Thread Jack Diederich
On Tue, Aug 09, 2005 at 12:28:08AM -0600, Steven Bethard wrote:
 Raymond Hettinger wrote:
  If the PEP can't resist the urge to create new intermediate groupings,
  then start by grepping through tons of Python code to find-out which
  exceptions are typically caught on the same line.  That would be a
  worthwhile empirical study and may lead to useful insights.
 
 I was curious, so I did a little grepping (ok, os.walking and
 re.findalling) ;-) through the Python source.  The only exceptions
 that were caught together more than 5 times were:
 
 AttributeError and TypeError (23 instances)
 ImportError and AttributeError (9 instances)
 OverflowError and ValueError (9 instances)
 IOError and OSError (6 instances)

I grepped my own source (ok, find, xargs, and grep'd ;) and here is
what I found.  40 KLOCs, it is a web app so I mainly catch multiple
exceptions when interpreting URLs and doing type conversions.  Unexpected
quacks from inside the app are allowed to rise to the top because at
that point all the input should be in a good state.

All of these arise because more than one operation is happening
in the try/except each of which could raise an exception (even if it
is a one-liner).

ValueError, TypeError (6 instances)
Around calls to int() like
  foo = int(cgi_dict.get('foo', None))

This is pretty domain specific, cgi variables are in a dict-alike object
that returns None for missing keys.  If it was a proper dict instead
this pairing would be (ValueError, KeyError).

The rest are a variation on the above where the result is used in the
same couple lines to do some kind of a lookup in a dict, list, or
namespace.
  client_id = int(cgi_dict.get('foo', None))
  client_name = names[client_id]

ValueError, TypeError, AttributeError (2 instances)
ValueError, TypeError, KeyError (3 instances)
ValueError, TypeError, IndexError (3 instances)

And finally this one because bsddb can say Failed in more than one way.

IOError, bsddb.error (2 instances)
  btree = bsddb.btopen(self.filename, open_type)
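
Put together, the shape behind most of those counts looks something like this
(a reconstruction for illustration, not code from the app; the names are made
up):

def lookup_client(cgi_dict, names):
    # Two operations, three failure modes: a missing or garbage cgi value
    # (TypeError / ValueError) or an id with no matching entry (KeyError).
    try:
        client_id = int(cgi_dict.get('client_id', None))
        return names[client_id]
    except (ValueError, TypeError, KeyError):
        return None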


-Jack



Re: [Python-Dev] Chaining try statements: eltry?

2005-07-07 Thread Jack Diederich
On Thu, Jul 07, 2005 at 03:03:35PM -0400, Phillip J. Eby wrote:
 At 02:48 PM 7/7/2005 -0400, Tim Peters wrote:
 [Guido, on {for,while}/else]
 ...
   The question remains whether Python would be easier to learn without
   them. And if so, the question would remain whether that's offset by
   their utility for experienced developers. All hard to assess
   impartially!
 
 That's what I'm here for.  I like loop else clauses, but have to
 admit that (a) I rarely use them; (b) most of the time I use them, my
 control flow is on the way to becoming so convoluted that I'm going to
 rewrite the whole function soon anyway;
 
 Interesting; I usually intentionally write else clauses intending to 
 *clarify* the code.  That is, I often use it in cases where it's not 
 strictly necessary, e.g.:
 
 for thing in something:
     if blah:
         return thing
 else:
     return None
 
 Because to me this clarifies that 'return None' is what happens if the loop 
 fails.

I use else similarly, for defensive programming.

for (thing) in searchlist:
  if (matches(thing)):
    keeper = thing
    break
else:
  raise ValueError("No thing matches()")

I can't say I use it for much else, if I really want a default I do
found = None
for (thing) in searchlist:
  if (matches(thing)):
    found = thing
    break

That could end with 'else: found = None' to assign a default but I like
the default to come first for readability.

-Jack


Re: [Python-Dev] PEP 343 - Abstract Block Redux

2005-05-16 Thread Jack Diederich
On Mon, May 16, 2005 at 06:24:59PM +1200, Greg Ewing wrote:
 Brett C. wrote:
 
  Nick's was obviously directly against looping, but, with no offense to Nick,
  how many other people were against it looping?  It never felt like it was a
  screaming mass with pitchforks but more of a I don't love it, but I can 
  deal
  crowd.
 
 My problem with looping was that, with it, the semantics
 of a block statement would be almost, but not quite,
 exactly like those of a for-loop, which seems to be
 flying in the face of TOOWTDI. And if it weren't for
 the can't-finalise-generators-in-a-for-loop backward
 compatibility problem, the difference would be even
 smaller.

Nodders, the looping construct seemed to work out fine as code people
could use to get their heads around the idea.  It was eye-gougingly bad
as final solution.

Forcing people to write an iterator for something that will almost never
loop is as awkward as forcing everyone to write if statements as

for dummy in range(1):
  if (somevar):
    do_true_stuff()
    break
else:
  do_false_stuff()

I still haven't gotten used to Guido's heart-attack inducing early 
enthusiasm for strange things followed later by a simple proclamation
I like.  Some day I'll learn that the sound of fingernails on the
chalkboard is frequently followed by candy for the whole class.  
For now the initial stages still give me the shivers.

-jackdied


Re: [Python-Dev] PEP 344: Exception Chaining and Embedded Tracebacks

2005-05-16 Thread Jack Diederich
On Mon, May 16, 2005 at 08:09:54PM -0500, Ka-Ping Yee wrote:
 On Mon, 16 May 2005, Aahz wrote:
  I'll comment here in hopes of staving off responses from multiple
  people: I don't think these should be double-underscore attributes.  The
  currently undocumented ``args`` attribute isn't double-underscore, and I
  think that's precedent to be followed.
 
 That isn't the criterion i'm using, though.  Here's my criterion, and
 maybe then we can talk about what the right criterion should be:
 
 System attributes are for protocols defined by the language.
 
 (I'm using the term system attribute here to mean an attribute with
 a double-underscore name, which i picked up from something Guido
 wrote a while back [1].)
 
 For example, __init__, __add__, __file__, __name__, etc. are all
 attributes whose meaning is defined by the language itself as opposed
 to the Python library.  A good indicator of this is the fact that
 their names are hardcoded into the Python VM.  I reasoned that
 __cause__, __context__, and __traceback__ should also be system
 attributes since their meaning is defined by Python.
 
 Exceptions are just classes; they're intended to be extended in
 arbitrary application-specific ways.  It seemed a good idea to leave
 that namespace open.

I prefer trichotomies over dichotomies, but whether single or double 
underscores are the bad or the ugly I'll leave to others.  In python
double underscores can only mean "I don't handle this, my class does" or
"I'm a C++ weenie, can I pretend this is private?"

Excluding the private non-argument the only question is where it goes 
in the class hierarchy.  Is it a property you would normally associate
with the instance, the class of an instance, or the class of a class (type).

To me it feels like a property of the instance.  The values are never
shared by exceptions of the class so just make it a plain variable to remind
other people of that too.

-jackdied


Re: [Python-Dev] anonymous blocks

2005-04-19 Thread Jack Diederich
On Tue, Apr 19, 2005 at 01:33:15PM -0700, Guido van Rossum wrote:
  @acquire(myLock):
  code
  code
  code
 
 It would certainly solve the problem of which keyword to use! :-) And
 I think the syntax isn't even ambiguous -- the trailing colon
 distinguishes this from the function decorator syntax. I guess it
 would morph '@xxx' into user-defined-keyword.
 
 How would acquire be defined? I guess it could be this, returning a
 function that takes a callable as an argument just like other
 decorators:
[snip]
 and the substitution of
 
 @EXPR:
 CODE
 
 would become something like
 
 def __block():
 CODE
 EXPR(__block)
 
 I'm not yet sure whether to love or hate it. :-)
 
I don't know what the purpose of these things is, but I do think
they should be like something else to avoid learning something new.

Okay, I lied, I do know what these are: namespace decorators
Namespaces are currently modules or classes, and decorators currently
apply only to functions.  The dissonance is that function bodies are
evaluated later and namespaces (modules and classes) are evaluated
immediately.  I don't know if adding a namespace that is only evaluated
later makes sense.  It is only an extra case but it is one extra case
to remember.  At best I have only channeled Guido once, and by accident[1]
so I'll stay out of the specifics (for a bit).

-jackdied

[1] during the decorator syntax bru-ha-ha at a Boston PIG meeting I
suggested Guido liked the decorator-before-function because it made
more sense in Dutch.  I was kidding, but someone who knows a little Dutch
(Deibel?) stated this was, in fact, the case.


Re: [Python-Dev] Inconsistent exception for read-only properties?

2005-04-17 Thread Jack Diederich
On Sun, Apr 17, 2005 at 11:53:31AM -0400, Jack Diederich wrote:
 On Sat, Apr 16, 2005 at 07:24:27PM -0400, Barry Warsaw wrote:
  On Thu, 2005-04-14 at 23:46, Barry Warsaw wrote:
   I've noticed an apparent inconsistency in the exception thrown for
   read-only properties for C extension types vs. Python new-style
   classes.
  
  I haven't seen any follow ups on this, so I've gone ahead and posted a
  patch, assigning it to Raymond:
  
  http://sourceforge.net/tracker/index.php?func=detail&aid=1184449&group_id=5470&atid=105470
  
 In 2.4 & 2.3 does it make sense to raise an exception that multiply inherits
 from both TypeError and AttributeError?  If anyone currently does catch the
 error raising only AttributeError will break their code.  2.5 should just
 raise an AttributeError, of course.
 
 If that's acceptable I'll gladly submit a similar patch for mmap.get_byte()
   PyErr_SetString (PyExc_ValueError, "read byte out of range");
 has always irked me (the same thing with mmap[i] is an IndexError).
 I hadn't thought of a clean way to fix it, but MI on the error might work.
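
Spelled out, the transitional exception would look something like this (a
sketch of the idea, not the patch itself):

class ReadOnlyAttributeError(AttributeError, TypeError):
    """Transitional: callers catching either the old TypeError or the
    new AttributeError keep working during 2.3/2.4 maintenance."""

try:
    raise ReadOnlyAttributeError("can't set attribute")
except TypeError:
    pass            # old catchers still work
try:
    raise ReadOnlyAttributeError("can't set attribute")
except AttributeError:
    pass            # and so do new-style ones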
 

I just did a quick grep for raised ValueErrors with range in the
explanation string and didn't find any general consensus.  I dunno what 
that means, if anything.

wopr:~/src/python_head/dist/src# find ./ -name '*.c' | xargs grep ValueError | grep range | wc -l
13
wopr:~/src/python_head/dist/src# find ./ -name '*.c' | xargs grep IndexError | grep range | wc -l
31

(long versions below)

-jackdied

wopr:~/src/python_head/dist/src# find ./ -name '*.c' | xargs grep -n IndexError | grep range
./Modules/arraymodule.c:599:PyErr_SetString(PyExc_IndexError, "array index out of range");
./Modules/arraymodule.c:997:PyErr_SetString(PyExc_IndexError, "pop index out of range");
./Modules/mmapmodule.c:639: PyErr_SetString(PyExc_IndexError, "mmap index out of range");
./Modules/mmapmodule.c:727: PyErr_SetString(PyExc_IndexError, "mmap index out of range");
./Modules/_heapqmodule.c:19:PyErr_SetString(PyExc_IndexError, "index out of range");
./Modules/_heapqmodule.c:58:PyErr_SetString(PyExc_IndexError, "index out of range");
./Modules/_heapqmodule.c:136:   PyErr_SetString(PyExc_IndexError, "index out of range");
./Modules/_heapqmodule.c:173:   PyErr_SetString(PyExc_IndexError, "index out of range");
./Modules/_heapqmodule.c:310:   PyErr_SetString(PyExc_IndexError, "index out of range");
./Modules/_heapqmodule.c:349:   PyErr_SetString(PyExc_IndexError, "index out of range");
./Objects/bufferobject.c:403:   PyErr_SetString(PyExc_IndexError, "buffer index out of range");
./Objects/listobject.c:876: PyErr_SetString(PyExc_IndexError, "pop index out of range");
./Objects/rangeobject.c:94: PyErr_SetString(PyExc_IndexError,
./Objects/stringobject.c:1055:  PyErr_SetString(PyExc_IndexError, "string index out of range");
./Objects/structseq.c:62:   PyErr_SetString(PyExc_IndexError, "tuple index out of range");
./Objects/tupleobject.c:104:PyErr_SetString(PyExc_IndexError, "tuple index out of range");
./Objects/tupleobject.c:310:PyErr_SetString(PyExc_IndexError, "tuple index out of range");
./Objects/unicodeobject.c:5164:PyErr_SetString(PyExc_IndexError, "string index out of range");
./Python/exceptions.c:1504:PyDoc_STRVAR(IndexError__doc__, "Sequence index out of range.");
./RISCOS/Modules/drawfmodule.c:534:  { PyErr_SetString(PyExc_IndexError,"drawf index out of range");
./RISCOS/Modules/drawfmodule.c:555:  { PyErr_SetString(PyExc_IndexError,"drawf index out of range");
./RISCOS/Modules/drawfmodule.c:578:  { PyErr_SetString(PyExc_IndexError,"drawf index out of range");
./RISCOS/Modules/swimodule.c:113:  { PyErr_SetString(PyExc_IndexError,"block index out of range");
./RISCOS/Modules/swimodule.c:124:  { PyErr_SetString(PyExc_IndexError,"block index out of range");
./RISCOS/Modules/swimodule.c:136:  { PyErr_SetString(PyExc_IndexError,"block index out of range");
./RISCOS/Modules/swimodule.c:150:  { PyErr_SetString(PyExc_IndexError,"block index out of range");
./RISCOS/Modules/swimodule.c:164:  { PyErr_SetString(PyExc_IndexError,"block index out of range");
./RISCOS/Modules/swimodule.c:225:  { PyErr_SetString(PyExc_IndexError,"block index out of range");
./RISCOS/Modules/swimodule.c:237:  { PyErr_SetString(PyExc_IndexError,"block index out of range");
./RISCOS/Modules/swimodule.c:248:  { PyErr_SetString(PyExc_IndexError,"block index out of range");
./RISCOS/Modules/swimodule.c:264:  { PyErr_SetString(PyExc_IndexError,"block index out of range");
wopr:~/src/python_head/dist/src# find ./ -name '*.c' | xargs grep -n ValueError | grep range
./Modules/mmapmodule.c:181: PyErr_SetString (PyExc_ValueError, "read byte out of range");
./Modules/mmapmodule.c:301: PyErr_SetString (PyExc_ValueError, "data out of range");
./Modules/mmapmodule.c:524: PyErr_SetString (PyExc_ValueError, "seek out

Re: [Python-Dev] @decoration of classes

2005-03-28 Thread Jack Diederich
On Sat, Mar 26, 2005 at 12:36:08PM -0800, Josiah Carlson wrote:
 
 Eric Nieuwland [EMAIL PROTECTED] wrote:
  
  Given the ideas so far, would it possible to:
  
  def meta(cls):
      ...
  
  @meta
  class X(...):
      ...
 
 It is not implemented in Python 2.4.  From what I understand, making it
 happen in Python 2.5 would not be terribly difficult.  The question is
 about a compelling use case.  Is there a use where this syntax is
 significantly better, easier, etc., than an equivalent metaclass?  Would
 people use the above syntax if it were available?
 
For compelling, I think the code smell put off by the no conflict metaclass
generator recipe (which also appeared in Alex Martelli's PyCon talk) is fairly
compelling from a duck typing point of view.  

# would you rather
class K:
  __metaclass__ = no_conflict(MetaA, MetaB)
# or
@decoA
@decoB
class K: pass

Unless you actually want a class to inherit magic methods from two different
types you don't need two metaclasses.  You just need the class manipulations 
that are done in two different metaclasses.

I get around this[1] by defining a function that calls things that manipulate
classes, the metaclass's init will make the 'register' function static if it
is defined in the __dict__ and then call it with (name, cls).
If I called that method 'decorate' instead and spelled it @decorate I'd be
a happy camper.

-jackdied

[1] Register metatype, define the register method to screw around with
your class definition or leave it out to let your parent class do its thing

class Register(type):
  def __init__(cls, name, bases, dict):
if ('register' in dict):
  setattr(cls, 'register', staticmethod(dict['register']))
cls.register(name, cls)

I call it Register because largely I just use it to check the __dict__ for
special methods and put classes in one or more global buckets.  I have cron
jobs that operate on the different buckets.
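
With class decorators the same bucket-filling could be spelled without a
metaclass at all (hypothetical names; the @ spelling on classes does not
exist yet, so it is shown commented out):

_buckets = {}

def register(bucket, **meta):
    # Put the class in a named bucket and hand it back unchanged;
    # the db_id style metadata stays outside the class body.
    def decorate(cls):
        _buckets.setdefault(bucket, []).append((cls, meta))
        return cls
    return decorate

# @register('cron_daily', db_id=20)      # the eventual spelling
class MyClass:
    pass
MyClass = register('cron_daily', db_id=20)(MyClass)   # today's spelling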



Re: [Python-Dev] Adding any() and all()

2005-03-10 Thread Jack Diederich
On Thu, Mar 10, 2005 at 10:22:45PM -0500, Raymond Hettinger wrote:
 [Bill Janssen]
  I think I'd want them to be:
  
  def any(S):
      for x in S:
          if x:
              return x
      return S[-1]
  
  def all(S):
      for x in S:
          if not x:
              return x
      return S[-1]
  
  Or perhaps these should be called first and last.
 
 -1
 
 Over time, I've gotten feedback about these and other itertools recipes.
 No one has objected to the True/False return values in those recipes or
 in Guido's version.  
 
 Guido's version matches the normal expectation of any/all being a
 predicate.  Also, it avoids the kind of errors/confusion that people
 currently experience with Python's unique implementation of and and
 or.
 
 Returning the last element is not evil; it's just weird, unexpected, and
 non-obvious.  Resist the urge to get tricky with this one.  

Perl returns the last true/false value as well, and it is a subtle trap.
I worked at a perl shop that had a long style guide which outlawed using
that side effect but people did anyway.  I'm +1 on having it return a
true boolean for the same reason if (x = calc_foo()): isn't supported,
if there is a quirky side effect available someone will use it and someone
else won't notice.
[in fact we had a guy who was downsized for doing things like calling
database queries on the return value of a function that was documented
as returning True/False]

Perl shops require long style guides because perl is, ummm, perl. Python
is a boon because YAGN (a style guide).  The fewer side effects the better.

-Jack


Re: [Python-Dev] Let's get rid of unbound methods

2005-01-04 Thread Jack Diederich
On Tue, Jan 04, 2005 at 10:28:03AM -0800, Guido van Rossum wrote:
 In my blog I wrote:
 
 Let's get rid of unbound methods. When class C defines a method f, C.f
 should just return the function object, not an unbound method that
 behaves almost, but not quite, the same as that function object. The
 extra type checking on the first argument that unbound methods are
 supposed to provide is not useful in practice (I can't remember that
 it ever caught a bug in my code) and sometimes you have to work around
 it; it complicates function attribute access; and the overloading of
 unbound and bound methods on the same object type is confusing. Also,
 the type checking offered is wrong, because it checks for subclassing
 rather than for duck typing.
 
 Does anyone think this is a bad idea? Anyone want to run with it?
 
I like the idea, it means I can get rid of this[1]

func = getattr(cls, 'do_command', None)
setattr(cls, 'do_command', staticmethod(func.im_func)) # don't let anyone on c.l.py see this

.. or at least change the comment *grin*,

-Jack

[1] 
http://cvs.sourceforge.net/viewcvs.py/lyntin/lyntin40/sandbox/leantin/mudcommands.py?view=auto