Re: [Python-Dev] Draft PEP for time zone support.

2012-12-11 Thread Robert Brewer
Guido van Rossum wrote:
 Sent: Tuesday, December 11, 2012 4:11 PM
 To: Antoine Pitrou
 Cc: python-dev@python.org
 Subject: Re: [Python-Dev] Draft PEP for time zone support.
 
 On Tue, Dec 11, 2012 at 8:07 AM, Antoine Pitrou solip...@pitrou.net
 wrote:
  On Tue, 11 Dec 2012 16:23:37 +0100,
  Lennart Regebro rege...@gmail.com wrote:
 
  Changes in the ``datetime``-module
  ----------------------------------
 
  A new ``is_dst`` parameter is added to several of the `tzinfo`
  methods to handle time ambiguity during DST changeovers.
 
  * ``tzinfo.utcoffset(self, dt, is_dst=True)``
 
  * ``tzinfo.dst(self, dt, is_dst=True)``
 
  * ``tzinfo.tzname(self, dt, is_dst=True)``
 
  The ``is_dst`` parameter can be ``True`` (default), ``False``, or
  ``None``.
 
  ``True`` will specify that the given datetime should be interpreted
  as happening during daylight savings time, ie that the time
 specified
  is before the change from DST.
 
  Why is it True by default? Do we have statistics showing that Python
  gets more use in summer?
 
 My question exactly.

Summer in the USA, at least, is 238 days in 2012, while Winter into 2013 is 
only 126 days:

>>> import datetime
>>> datetime.date(2012, 11, 4) - datetime.date(2012, 3, 11)
datetime.timedelta(238)
>>> datetime.date(2013, 3, 10) - datetime.date(2012, 11, 4)
datetime.timedelta(126)
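The arithmetic above generalizes into a small sketch (transition dates hard-coded for the US 2012-13 cycle, exactly as in the example), showing DST covering roughly 65% of the year:

```python
import datetime

# US transition dates for the 2012-13 cycle (as in the session above).
dst_start = datetime.date(2012, 3, 11)   # second Sunday in March
dst_end = datetime.date(2012, 11, 4)     # first Sunday in November
next_start = datetime.date(2013, 3, 10)  # second Sunday in March, 2013

summer = (dst_end - dst_start).days      # days spent in DST
winter = (next_start - dst_end).days     # days spent in standard time
share = summer / (summer + winter)       # fraction of the cycle in DST
print(summer, winter, round(share, 2))   # 238 126 0.65
```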


Robert Brewer
fuman...@aminus.org
_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Maintenance burden of str.swapcase

2011-09-12 Thread Robert Brewer
Glyph Lefkowitz wrote:
 On Sep 11, 2011, at 11:49 AM, Michael Foord wrote:
 Does anyone *actually* use .title() for this?
 
 Yes.  Twisted does, in various MIME-ish places (IMAP, SIP),
 although not in HTTP from what I can see.  I imagine other
 similar software would as well.

Not to mention it doesn't work for WWW-Authenticate or TE, to give just a 
couple of examples.


Robert Brewer
fuman...@aminus.org


Re: [Python-Dev] PEP 3333: wsgi_string() function

2011-01-07 Thread Robert Brewer
P.J. Eby wrote:
 At 09:43 AM 1/7/2011 -0500, James Y Knight wrote:
 On Jan 7, 2011, at 6:51 AM, Victor Stinner wrote:
   I don't understand why you are attached to this horrible hack
   (bytes-in-unicode). It introduces more work and more confusing
than
   using raw bytes unchanged.
  
   It doesn't work and so something has to be changed.
 
 It's gross but it does work. This has been discussed ad nauseam on
 web-sig over a period of years.
 
 I'd like to reiterate that it is only even a potential issue for the
 PATH_INFO/SCRIPT_NAME keys. Those two keys are required to have been
 urldecoded already, into byte-data in some encoding. For all the
 other keys (including the ones from os.environ), they are either
 *properly* decoded in 8859-1 or are just ascii (possibly still
 urlencoded, so the app needs to urldecode and decode into a string
 with the correct encoding).
 
 Right.  Also, it should be mentioned that none of this would be
 necessary if we could've gotten a bytes of a known encoding
 type.  If you look back to the last big Python-Dev discussion on
 bytes/unicode and stdlib API breakage, this was the holdup for
 getting a sane WSGI spec.
 
 Since we couldn't change the language to fix the problem (due to the
 moratorium), we had to use this less-pleasant way of dealing with
 things, in order to get a final WSGI spec for Python 3.
 
 (If anybody is wondering about the specifics of the language change
 that was needed, it'd be having a bytes with known encoding type,
 that when combined in any polymorphic operation with a unicode
 string, would result in bytes-with-encoding output, and would raise
 an error if the resulting value could not be encoded in the target
 encoding.  Then we would simply do all WSGI header operations with
 this type, using latin-1 as the target encoding.)

Still looking forward to the day when that moratorium is lifted. Anyone
have any idea when that will be?
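For readers wondering what a "bytes of a known encoding" type might look like, here is a minimal sketch; the name ``ebytes`` and its API are purely hypothetical, not anything actually proposed on web-sig:

```python
class ebytes(bytes):
    """Hypothetical 'bytes with a known encoding' (illustrative sketch)."""

    def __new__(cls, data, encoding="latin-1"):
        if isinstance(data, str):
            data = data.encode(encoding)
        self = super().__new__(cls, data)
        self.encoding = encoding
        return self

    def __add__(self, other):
        if isinstance(other, str):
            # Text combined with encoded bytes must fit the target
            # encoding, or this raises UnicodeEncodeError.
            other = other.encode(self.encoding)
        return ebytes(super().__add__(other), self.encoding)

# Polymorphic concatenation of bytes and text, as described above.
header = ebytes("Status: ") + "200 OK"
```

A value like ``ebytes("x") + "\u2603"`` would raise ``UnicodeEncodeError``, since the snowman cannot be represented in latin-1.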


Bob


Re: [Python-Dev] PEP 3333: wsgi_string() function

2011-01-07 Thread Robert Brewer
Paul Moore wrote:
 Robert Brewer fuman...@aminus.org wrote:
  P.J. Eby wrote:
   Also, it should be mentioned that none of this would be
   necessary if we could've gotten a bytes of a known encoding
   type.
 
  Still looking forward to the day when that moratorium is lifted.
  Anyone have any idea when that will be?
 
 See PEP 3003 (http://www.python.org/dev/peps/pep-3003/) - Python 3.3
 is expected to be post-moratorium.

"This PEP proposes a temporary moratorium (suspension) of all changes to
the Python language syntax, semantics, and built-ins for a period of at
least two years from the release of Python 3.1."

Python 3.1 was released June 27th, 2009. We're coming up faster on the
two-year period than we seem to be on a revised WSGI spec. Maybe we
should shoot for a bytes of a known encoding type first.


Robert Brewer
fuman...@aminus.org


Re: [Python-Dev] Proposal: make float.__str__ identical tofloat__repr__ in Python 3.2

2010-07-29 Thread Robert Brewer
Mark Dickinson wrote:
 Now that we've got the short float repr in Python, there's less value
 in having float.__str__ truncate to 12 significant digits (as it
 currently does).  For Python 3.2, I propose making float.__str__ use
 the same algorithm as float.__repr__ for its output (and similarly for
 complex).
 
 Apart from simplifying the internals a little bit, one nice feature of
 this change is that it removes the differences in formatting between
 printing a float and printing a container of floats:
 
  >>> l = [1/3, 1/5, 1/7]
  >>> print(l)
 [0.3333333333333333, 0.2, 0.14285714285714285]
  >>> print(l[0], l[1], l[2])
 0.333333333333 0.2 0.142857142857
 
 Any thoughts or comments on this?
 
 There's a working patch at http://bugs.python.org/issue9337

Python 2.5.4 (r254:67916, Jan 20 2010, 21:44:03) 
>>> float(0.142857142857) * 7
0.99999999999899991
>>> float(0.14285714285714285) * 7
1.0

I've made a number of tools in the past that needed to round-trip a
float through a string and back. I was under the impression that floats
needed 17 decimal digits to avoid losing precision. How does one do that
efficiently if neither str nor repr return 17 digits?
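For reference, two properties answer the round-trip question: the short repr introduced in 3.1 round-trips by construction, and 17 significant digits (``"%.17g"``) always suffice for an IEEE-754 double, while 12 digits can lose information:

```python
x = 1.0 / 7.0

# The short repr (Python >= 3.1) chooses the shortest string that
# round-trips, so this always holds.
assert float(repr(x)) == x

# 12 significant digits, like the old float.__str__, can lose precision.
assert float("%.12g" % x) != x

# 17 significant digits are always enough to round-trip a double.
assert float("%.17g" % x) == x
```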


Robert Brewer
fuman...@aminus.org


Re: [Python-Dev] web apps in python 3

2009-08-31 Thread Robert Brewer
Chris Withers wrote:
 Robert Brewer wrote:
 you could switch to Python 3.1,
 I would love to, once Python 3 has a viable web app story...

 CherryPy 3.2 is now in beta, and mod_wsgi is nearly ready as well. Both
 support Python 3. :)
 
 My understanding was that the wsgi spec for Python 3 wasn't finished...

The WSGI 1.0 spec has always included Python 3 using unicode strings in the 
environ (decoded via ISO-8859-1, and limited to \x00-\xFF). In addition, the 
CherryPy and mod_wsgi teams are working to interoperably support a modified 
version of WSGI, in which the environ is true unicode for both Python 2 and 3. 
We hope these implementations become references from which a WSGI 1.1 spec can 
be written; since web-sig has not yet reached consensus on certain 
specification details, we are proceeding together with tools that allow users 
to get work done now.
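The bytes-in-unicode convention described above is lossless because ISO-8859-1 maps every byte value 0-255 to the code point of the same number. A small sketch (the helper names are illustrative, not part of any spec):

```python
def environ_str(raw):
    """Bytes -> the 'bytes-in-unicode' str a WSGI 1.0 app would see."""
    return raw.decode("iso-8859-1")

def environ_bytes(s):
    """Recover the original bytes; lossless since latin-1 is a 1:1 byte map."""
    return s.encode("iso-8859-1")

raw = "/caf\u00e9".encode("utf-8")          # UTF-8 path bytes on the wire
path = environ_str(raw)                      # what lands in the environ
real = environ_bytes(path).decode("utf-8")   # app re-decodes with the true charset
```

Every character of ``path`` stays within ``\x00``-``\xFF``, so the original bytes can always be recovered.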


Robert Brewer
fuman...@aminus.org


Re: [Python-Dev] [Web-SIG] py3k, cgi, email, and form-data

2009-05-12 Thread Robert Brewer
Graham Dumpleton wrote:
 2009/5/12 Robert Brewer fuman...@aminus.org:
  There's a major change in functionality in the cgi module between
 Python
  2 and Python 3 which I've just run across: the behavior of
  FieldStorage.read_multi, specifically when an HTTP app accepts a file
  upload within a multipart/form-data payload.
 
  In Python 2, each part would be read in sequence within its own
  FieldStorage instance. This allowed file uploads to be shunted to a
  TemporaryFile (via make_file) as needed:
 
      klass = self.FieldStorageClass or self.__class__
      part = klass(self.fp, {}, ib,
   environ, keep_blank_values, strict_parsing)
      # Throw first part away
      while not part.done:
      headers = rfc822.Message(self.fp)
      part = klass(self.fp, headers, ib,
   environ, keep_blank_values, strict_parsing)
      self.list.append(part)
 
  In Python 3 (svn revision 72466), the whole request body is read into
  memory first via fp.read(), and then broken into separate parts in a
  second step:
 
      klass = self.FieldStorageClass or self.__class__
      parser = email.parser.FeedParser()
      # Create bogus content-type header for proper multipart parsing
      parser.feed('Content-Type: %s; boundary=%s\r\n\r\n' % (self.type,
 ib))
      parser.feed(self.fp.read())
      full_msg = parser.close()
      # Get subparts
      msgs = full_msg.get_payload()
      for msg in msgs:
      fp = StringIO(msg.get_payload())
      part = klass(fp, msg, ib, environ, keep_blank_values,
   strict_parsing)
      self.list.append(part)
 
  This makes the cgi module in Python 3 somewhat crippled for handling
  multipart/form-data file uploads of any significant size (and since
  the client is the one determining the size, opens a server up for an
  unexpected Denial of Service vector).
 
  I *think* the FeedParser is designed to accept incremental writes,
  but I haven't yet found a way to do any kind of incremental reads
  from it in order to shunt the fp.read out to a tempfile again.
  I'm secretly hoping Barry has a one-liner fix for this. ;)
 
 FWIW, Werkzeug gave up on 'cgi' module for form passing and implements
 its own.
 
 Not sure whether this issue in Python 3.0 was one of the reasons or
 not. I know one of the reasons was because cgi.FieldStorage is not
 WSGI 1.0 compliant. One of the main reasons that no one actually
 adheres to WSGI 1.0 is because of the 'cgi' module. This still hasn't
 been addressed by a proper amendment to WSGI 1.0 specification or a
 new WSGI 1.1 specification to allow a hint to readline().
 
 The Werkzeug form processing module is properly WSGI 1.0 compliant,
 meaning that Wekzeug is possibly the only major WSGI framework to be
 WSGI compliant.

FWIW, I just added a replacement for the cgi module to CherryPy over the 
weekend for the same reasons. It's in the python3 branch but will get 
backported to CherryPy 3.2 for Python 2.x.


Robert Brewer
fuman...@aminus.org


[Python-Dev] py3k, cgi, email, and form-data

2009-05-11 Thread Robert Brewer
There's a major change in functionality in the cgi module between Python
2 and Python 3 which I've just run across: the behavior of
FieldStorage.read_multi, specifically when an HTTP app accepts a file
upload within a multipart/form-data payload.

In Python 2, each part would be read in sequence within its own
FieldStorage instance. This allowed file uploads to be shunted to a
TemporaryFile (via make_file) as needed:

klass = self.FieldStorageClass or self.__class__
part = klass(self.fp, {}, ib,
 environ, keep_blank_values, strict_parsing)
# Throw first part away
while not part.done:
headers = rfc822.Message(self.fp)
part = klass(self.fp, headers, ib,
 environ, keep_blank_values, strict_parsing)
self.list.append(part)

In Python 3 (svn revision 72466), the whole request body is read into
memory first via fp.read(), and then broken into separate parts in a
second step:

klass = self.FieldStorageClass or self.__class__
parser = email.parser.FeedParser()
# Create bogus content-type header for proper multipart parsing
parser.feed('Content-Type: %s; boundary=%s\r\n\r\n' % (self.type, ib))
parser.feed(self.fp.read())
full_msg = parser.close()
# Get subparts
msgs = full_msg.get_payload()
for msg in msgs:
fp = StringIO(msg.get_payload())
part = klass(fp, msg, ib, environ, keep_blank_values,
 strict_parsing)
self.list.append(part)

This makes the cgi module in Python 3 somewhat crippled for handling
multipart/form-data file uploads of any significant size (and since
the client is the one determining the size, opens a server up for an
unexpected Denial of Service vector).

I *think* the FeedParser is designed to accept incremental writes,
but I haven't yet found a way to do any kind of incremental reads
from it in order to shunt the fp.read out to a tempfile again.
I'm secretly hoping Barry has a one-liner fix for this. ;)
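One possible workaround is to split the stream on the boundary manually while spooling each part to a temporary file, so the body never sits in memory all at once. This is an illustrative sketch only: it keeps the leading CRLF and headers inside each part and skips the trimming a real cgi replacement would need:

```python
import io
import tempfile

def spool_parts(fp, boundary, chunk_size=64 * 1024):
    """Split a multipart stream on `boundary`, spooling each part's raw
    bytes to a TemporaryFile.  Sketch only: headers and CRLF framing are
    left inside each part, and the final '--' terminator is ignored."""
    delim = b"--" + boundary
    buf = b""
    part = None
    parts = []
    while True:
        chunk = fp.read(chunk_size)
        buf += chunk
        while True:
            i = buf.find(delim)
            if i < 0:
                break
            if part is not None:
                part.write(buf[:i])
                part.seek(0)
                parts.append(part)
            part = tempfile.TemporaryFile()
            buf = buf[i + len(delim):]
        if not chunk:
            break
        if part is not None and len(buf) > len(delim):
            # Flush all but a tail long enough to hold a split delimiter.
            part.write(buf[:-len(delim)])
            buf = buf[-len(delim):]
    return parts

# Demo: two small parts, read in deliberately tiny chunks.
body = b"--B\r\nfirst part data\r\n--B\r\nsecond part data\r\n--B--\r\n"
parts = spool_parts(io.BytesIO(body), b"B", chunk_size=7)
c0, c1 = parts[0].read(), parts[1].read()
```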


Robert Brewer
fuman...@aminus.org


Re: [Python-Dev] RFC: Threading-Aware Profiler for Python

2009-05-04 Thread Robert Brewer
Christian Schubert wrote:
  I've created an alternative profiler module which queries per-thread
  CPU usage via netlink/taskstats, which limits the applicability to
  Linux (which shouldn't be much of an issue, profiling is usually not
  done by end users).

One of the uses for a profiling module is to compare runs on various 
platforms. And please, stop perpetuating the myth that only end-users 
use anything but Linux.


Robert Brewer
fuman...@aminus.org





Re: [Python-Dev] Dropping bytes support in json

2009-04-10 Thread Robert Brewer
On Thu, 2009-04-09 at 22:38 -0400, Barry Warsaw wrote:
 On Apr 9, 2009, at 11:55 AM, Daniel Stutzbach wrote:
 
  On Thu, Apr 9, 2009 at 6:01 AM, Barry Warsaw ba...@python.org wrote:
  Anyway, aside from that decision, I haven't come up with an elegant  
  way to allow /output/ in both bytes and strings (input is I think  
  theoretically easier by sniffing the arguments).
 
  Won't this work? (assuming dumps() always returns a string)
 
  def dumpb(obj, encoding='utf-8', *args, **kw):
  s = dumps(obj, *args, **kw)
  return s.encode(encoding)
 
 So, what I'm really asking is this.  Let's say you agree that there  
 are use cases for accessing a header value as either the raw encoded  
 bytes or the decoded unicode.  What should this return:
 
   message['Subject']
 
 The raw bytes or the decoded unicode?
 
 Okay, so you've picked one.  Now how do you spell the other way?
 
 The Message class probably has these explicit methods:
 
   Message.get_header_bytes('Subject')
   Message.get_header_string('Subject')
 
 (or better names... it's late and I'm tired ;).  One of those maps to  
 message['Subject'] but which is the more obvious choice?
 
 Now, setting headers.  Sometimes you have some unicode thing and  
 sometimes you have some bytes.  You need to end up with bytes in the  
 ASCII range and you'd like to leave the header value unencoded if so.   
 But in both cases, you might have bytes or characters outside that  
 range, so you need an explicit encoding, defaulting to utf-8 probably.
 
   Message.set_header('Subject', 'Some text', encoding='utf-8')
   Message.set_header('Subject', b'Some bytes')
 
 One of those maps to
 
   message['Subject'] = ???
 
 I'm open to any suggestions here!

Syntactically, there's no sense in providing:

Message.set_header('Subject', 'Some text', encoding='utf-16')

...since you could more clearly write the same as:

Message.set_header('Subject', 'Some text'.encode('utf-16'))

The only interesting case is if you provided a *default* encoding, so that:

Message.default_header_encoding = 'utf-16'
Message.set_header('Subject', 'Some text')

...has the same effect.

But it would be far easier to do all the encoding at once in an output()
or serialize() method. Do different headers need different encodings? If
so, make message['Subject'] a subclass of str and give it an .encoding
attribute (with a default). If not, Message.header_encoding should be
sufficient.
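The str-subclass idea might look like this minimal sketch (the ``HeaderValue`` name and API are hypothetical, not the email package's actual design):

```python
class HeaderValue(str):
    """Decoded header text plus the encoding to use on output (sketch)."""

    def __new__(cls, text, encoding="utf-8"):
        self = super().__new__(cls, text)
        self.encoding = encoding       # per-header encoding, with a default
        return self

    def to_bytes(self):
        """Serialize this header using its own encoding."""
        return self.encode(self.encoding)

subj = HeaderValue("Some text", "utf-16")
```

Because it subclasses str, ``subj`` still compares and behaves like ordinary text until serialization time.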


Robert Brewer
fuman...@aminus.org



Re: [Python-Dev] http://bugs.python.org/issue3628

2009-01-09 Thread Robert Brewer
Glenn Linderman wrote:
 I'm getting an error similar to that in
 http://bugs.python.org/issue3628
 when I try to run python2.6 and cherrypy 3.1.1.
 
 I'm too new to see any connection between the symptom and the cure
 described in the above issue... I'd guess that somehow threads imply
an
 extra parameter?
 
 It also seems that the SetDaemon call simply does what the replacement
 code does, so I don't understand how the fix fixes anything, much less
 how it fixes a parameter count in a seemingly unrelated function.
 
 In any case, the issue is against 3.0, where it claims to be fixed.  I
 don't know enough about the tracker to find if it was fixed in 2.6
 concurrently, but the symptom appears there.
 
 I tried hacking all the references I could find to XXX.SetDaemon(True)
 to XXX.daemon = True but it didn't seem to help.

Fixed in http://www.cherrypy.org/changeset/2096.


Robert Brewer
fuman...@aminus.org



Re: [Python-Dev] subprocess insufficiently platform-independent?

2008-08-25 Thread Robert Brewer
Guido van Rossum wrote:
 Several people at Google seem to have independently discovered that
 despite all of the platform-independent goodness in subprocess.py, you
 still need to be platform aware.

I can verify this. For CP we went back to using spawnl, but in an
internal project we checked sys.platform and set shell=True for Windows.
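The platform check described is a one-liner; a hedged sketch of the workaround (not a general-purpose recipe, since shell quoting has its own pitfalls):

```python
import subprocess
import sys

def run_portably(args):
    """Run a command, routing through the shell on Windows as described
    above; POSIX platforms avoid the shell entirely."""
    if sys.platform == "win32":
        # shell=True expects a single command string on Windows.
        return subprocess.call(subprocess.list2cmdline(args), shell=True)
    return subprocess.call(args)

rc = run_portably([sys.executable, "-c", "pass"])
```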


Robert Brewer
[EMAIL PROTECTED]



Re: [Python-Dev] A proposed solution for Issue 502236: Asyncrhonousexceptions between threads

2008-07-12 Thread Robert Brewer
Josiah Carlson wrote:
 This doesn't need to be an interpreter thing; it's easy to implement
 by the user (I've done it about a dozen times using a single global
 flag).  If you want it to be automatic, it's even possible to make it
 happen automatically using sys.settrace() and friends (you can even
 make it reasonably fast if you use a C callback).

Agreed. If someone wants a small library to help do this, especially in
web servers, the latest version of CherryPy includes a 'process'
subpackage under a generous license. It does all the things Andy
describes via a Bus object:

 Andy Scott wrote:
  1. Put in place a new function call sys.exitapplication, what this
  would do is:
   a. Mark a flag in t0's data structure saying a request to
  shutdown has been made

This is bus.exit(), which publishes a 'stop' message to all subscribed
'stop' listeners, and then an 'exit' message to any 'exit' listeners.

   b. Raise a new exception, SystemShuttingDown, in t1.

That's up to the listener.

   2. As the main interpreter executes it checks the shutting down
  flag in the per thread data and follows one of two paths:
  If it is t0:
   a. Stops execution of the current code sequence
   b. Iterates over all extant threads ...
   c. Enters a timed wait loop where it will allow the other
  threads time to see the signal. It will iterate this loop
  a set number of times to avoid being blocked on any given
  thread.

This is implemented as [t.join() for t in threading.enumerate()] in the
main thread.

   d. When all threads have exited, or been forcefully closed,
  raise the SystemShuttingDown exception

The bus just lets the main thread exit at this point.

  P1. If the thread is in a tight loop will it see the exception? Or
  more generally: when should the exception be raised?

That's dependent enough on what work the thread is doing that a
completely generic approach is generally not sufficient. Therefore, the
process.bus sends a 'stop' message, and leaves the implementation of the
receiver up to the author of that thread's logic. Presumably, one
wouldn't register a listener for the 'stop' message unless one knew how
to actually stop.

  P2. When should the interpreter check this flag?
 
  I think the answer to both of these problems is to check the flag,
  and hence raise the exception, in the following circumstances:
- When the interpreter executes a back loop. So this should catch
  the jump back to the top of a while True: loop
- Just before the interpreter makes a call to a hooked in non-
  Python system function, e.g. file I/O, networking, etc.

This is indeed how most well-written apps do it already.

  Checking at these points should be the minimal required, I think, to
  ensure that a given thread can not ignore the exception. It may be
  possible, or even required, to perform the check every time a Python
  function call is made.

PLEASE don't make Python function calls slower.

   1. The Python interpreter has per thread information.
   2. The Python interpreter can tell if the system, t0, thread is
  running.
   3. The Python engine has (or can easily obtain) a list of all
  threads it created.
   4. It is possible to raise exceptions as the byte code is
executing.

Replace 'Python interpreter' with 'your application' and those become
relatively simple architectural issues: maintain a list of threads, have
them expose an interface to determine if they're running, and make them
monitor a flag to know when another thread is asking them to stop.
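A minimal version of that flag-checking pattern, using a ``threading.Event`` rather than a bare global:

```python
import threading

stop = threading.Event()            # the shared "please shut down" flag
results = []

def worker():
    while not stop.is_set():        # check at the top of every loop pass
        stop.wait(0.05)             # stand-in for one unit of real work
    results.append("stopped cleanly")

t = threading.Thread(target=worker)
t.start()
stop.set()                          # another thread requests shutdown
t.join(timeout=5)                   # give the worker time to notice
```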


Robert Brewer
[EMAIL PROTECTED]



Re: [Python-Dev] urllib unicode handling

2008-05-07 Thread Robert Brewer
Martin v. Löwis wrote:
 The proper way to implement this would be IRIs (RFC 3987),
 in particular section 3.1. This is not as simple as just
 encoding it as UTF-8, as you might have to apply IDNA to
 the host part.
 
 Code doing so just hasn't been contributed yet.

But if someone wanted to do so, it's pretty simple:

>>> u'www.\u212bngstr\xf6m.com'.encode('idna')
'www.xn--ngstrm-hua5l.com'


Robert Brewer
[EMAIL PROTECTED]



Re: [Python-Dev] sock.close() not closing?

2008-05-07 Thread Robert Brewer
Sjoerd Mullender wrote:
 On 2008-05-07 13:37, Amaury Forgeot d'Arc wrote:
  2008/5/7 Sjoerd Mullender [EMAIL PROTECTED]:
  I would expect that a system call is done to actually close the
  socket and free the file descriptor.  But that does not happen.
 
  It does close the socket:
 
  In socket.py, when self._sock is replaced, its __del__ method will
be
  called.
 
 I have to question the design of this.  When I close() an object I
 expect it to be closed there and then and not at some indeterminate
 later time (well, it is determinate when you're fully aware of all
 references, but often you aren't--trust me, I understand reference
 counting).

Even if you're fully aware of all references, it's indeterminate in
multithreaded apps. I've just taken to doing:

self.socket._sock.close()
self.socket.close()

...in order to send the FIN I wanted ASAP.
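An alternative that avoids reaching into the private ``_sock`` attribute is ``socket.shutdown()``, which sends the FIN immediately regardless of outstanding references. A sketch using a connected ``socketpair`` (POSIX):

```python
import socket

a, b = socket.socketpair()       # a connected pair of sockets
a.shutdown(socket.SHUT_WR)       # FIN goes out now, whatever refs exist
received = b.recv(1)             # peer sees end-of-stream immediately
a.close()
b.close()
```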


Robert Brewer
[EMAIL PROTECTED]



Re: [Python-Dev] PEP 8: Discourage named lambdas?

2008-05-03 Thread Robert Brewer
Samuele Pedroni wrote:
 I found only an example in my personal recent code:
 
 START = "<!-- *db-payload-start* -->"
 END = "<!-- *db-payload-end* -->"
 TITLEPATTERN = lambda s: "<title>%s</title>" % s
 
 this three are later used in a very few .find() and .replace()
 expressions in the same module. I suppose my point is that while
 I agree it should be discouraged and is really silly to do it
 for the few chars gain, it can be used to some effect in very
 rare cases.

i.e. anywhere you need a portable expression (as opposed to a portable
set of statements). I have a feeling that if it were named 'expr'
instead of 'lambda' we wouldn't be having this discussion.


Robert Brewer
[EMAIL PROTECTED]



Re: [Python-Dev] Wow, I think I actually *get* it now!

2008-03-20 Thread Robert Brewer
Phillip J. Eby wrote:
 Hm.  So it seems to me that maybe one thing that would help is a
 Setuptools Haters' Guide To Setuptools -- that is, *short*
 documentation specifically written for people who don't want to use
 setuptools and want to minimize its impact on their systems.  I could
 probably write something like that fairly easily, now that I have
 some idea of what to go in it, more than, the existing documentation
 sucks.  :)
 
 Can I count on some non-assimilated persons' help in critiquing such
 a document and suggesting any topics I miss?

I'd be glad to help critique such a doc.


Robert Brewer
[EMAIL PROTECTED]



Re: [Python-Dev] PEP 365 (Adding the pkg_resources module)

2008-03-20 Thread Robert Brewer
Phillip J. Eby wrote:
 The other tool that would be handy to have, would be one that unpacks
 eggs into standard distutils-style installation.

Hear, hear. I'm an author of a couple libraries that need to
interoperate with others. Of the many eggs I've downloaded over the past
year, I'd say 80%+ are never installed or even built--I just want to
grep the source code, and using my preferred tools, not some lame Find
command in a ZIP browser menu.


Robert Brewer
[EMAIL PROTECTED]



Re: [Python-Dev] PEP 365 (Adding the pkg_resources module)

2008-03-20 Thread Robert Brewer
Janzert wrote:
 Since there seems to be a fair number of negative responses to
 setuptools, I just wanted to add a bit of positive counterbalance. I'm
 just a random python user that happens to track python-dev a bit, so
 take all this with the realization that I probably shouldn't have much
 input into anything. ;)
 
 I've been using python for somewhere around 10 years to write various
 random small scripts, gui applications and recently web applications.
 For me setuptools is the best thing to happen to python since I've
been
 using it. I develop and deploy on a seemingly constantly changing mix
 of
 various flavors of windows and linux. Unlike for others, I love that
 once I get setuptools installed I can just use the same commands to
get
 the things I need. I guess the contrast for me is that python is the
 common base that I tend to work from not the underlying OS.
 
 So I don't know if I'm part of a large number of quiet users or just
 happen to be an odd case that works really well with setuptools. I was
 disappointed when setuptools didn't make it into 2.5 and I really hope
 it or something very much like it can make it into a release in the
 near future. Because while setuptools certainly isn't perfect, for me
 at least, it is much, much better than nothing at all.

My interpretation of this is that setuptools suffers from the same
malaise all flexible apps do (but especially CLI apps it seems):
frequent users love the power and high volume of options, infrequent
users despise it. If you're installing apps all day, you probably use it
a lot more often than library devs like me who use it once every other
month (if we're forced to).


Robert Brewer
[EMAIL PROTECTED]



Re: [Python-Dev] The Breaking of distutils and PyPI for Python 3000?

2008-03-19 Thread Robert Brewer
Martin v. Löwis wrote:
 I don't see the need to for PyPI. For packages (or distributions,
 to avoid confusion with Python packages), I see two options:
 
 a) provide a single release that supports both 2.x and 3.x.
 The precise strategy to do so might vary. If one is going
 for a single source version, have setup.py run 2to3
 (or perhaps 3to2). For dual-source packages, have setup.py
 just install the one for the right version; setup.py itself
 needs to be written so it runs on both versions (which is
 easy to do).
 b) switch to Python 3 at some point (i.e. burn your bridges).
 
 You seem to be implying that some projects may release separate
 source distributions. I cannot imagine why somebody would want
 to do that.

That's odd. I can't imagine why anybody would *not* want to do that. Given the 
number of issues 2to3 can't fix (because it would be too dangerous to guess), I 
certainly can't imagine a just-in-time porting solution that would work 
reliably. Making two releases means I can migrate once and only once and be 
done with it. Making a single release work on 2.x and 3.x means I have to keep 
all of the details of both Python 2 and 3 in my head all the time as I code, 
not to mention litter my codebase with "# the following ugly hack lets us work 
with Python 2 and 3" comments so someone else doesn't undo all my hard work 
when they run the tests on Python 3 but not 2. No thanks. My brain is too small.


Robert Brewer
[EMAIL PROTECTED]



Re: [Python-Dev] Backporting PEP 3127 to trunk

2008-02-22 Thread Robert Brewer
Raymond Hettinger wrote:
 I thought the whole point of 3.0 was a recognition that all that
 doubling-up was a bad thing and to be rid of it.  Why make the
 situation worse?  ISTM that we need two versions of oct() like
 we need a hole in the head.  Heck, there's potentially a case to be
 made that we don't need oct() at all.  IIRC, unix permissions like
 0666 were the only use case that surfaced.

Postgres bytea coercion is a frequent use case for oct() in my world.
But I agree we don't need two versions.


Robert Brewer
[EMAIL PROTECTED]



Re: [Python-Dev] Backporting PEP 3127 to trunk

2008-02-22 Thread Robert Brewer
Eric Smith wrote:
 Robert Brewer wrote:
  Raymond Hettinger wrote:
  I thought the whole point of 3.0 was a recognition that all that
  doubling-up was a bad thing and to be rid of it.  Why make the
  situation worse?  ISTM that we need two versions of oct() like
  we need a hole in the head.  Heck, there's potentially a case to be
  made that we don't need oct() at all.  IIRC, unix permissions like
  0666 were the only use case that surfaced.
 
  Postgres bytea coercion is a frequent use case for oct() in my
world.
  But I agree we don't need two versions.
 
 Unless you're trying to write code to work with both 2.6 and 3.0.

Who would try that when PEP 3000 says (in bold type no less):

There is no requirement that Python 2.6 code will run unmodified
on Python 3.0. Not even a subset.

? And why should python-dev support such people?


Robert Brewer
[EMAIL PROTECTED]



Re: [Python-Dev] Monkeypatching idioms -- elegant or ugly?

2008-01-15 Thread Robert Brewer
Guido van Rossum:
 I ran into the need of monkeypatching a large number of classes (for
 what I think are very good reasons :-) and invented two new recipes.
 These don't depend on Py3k, and the second one actually works all the
 way back to Python 2.2. Neither of these allows monkeypatching
 built-in types like list. If you don't know what monkeypatching is,
 see see http://en.wikipedia.org/wiki/Monkey_patch.
 
 I think it's useful to share these recipes, if only to to establish
 whether they have been discovered before, or to decide whether they
 are worthy of a place in the standard library. I didn't find any
 relevant hits on the ASPN Python cookbook.
 
 First, a decorator to add a single method to an existing class:
 
 def monkeypatch_method(cls):
 def decorator(func):
 setattr(cls, func.__name__, func)
 return func
 return decorator
 
 To use:
 
 from somewhere import someclass
 
 @monkeypatch_method(someclass)
 def newmethod(self, args):
 return whatever
 
 This adds newmethod to someclass

I like it, but my first thought was, and if that method already
exists? I'd extend monkeypatch_method to store a reference to the old
method(s):

def monkeypatch_method(cls):
    """Add the decorated method to the given class; replace as needed.

    If the named method already exists on the given class, it will
    be replaced, and a reference to the old method appended to a list
    at cls._old_<name>. If the _old_<name> attribute already exists
    and is not a list, KeyError is raised.
    """
    def decorator(func):
        fname = func.__name__

        old_func = getattr(cls, fname, None)
        if old_func is not None:
            # Add the old func to a list of old funcs.
            old_ref = "_old_%s" % fname
            old_funcs = getattr(cls, old_ref, None)
            if old_funcs is None:
                setattr(cls, old_ref, [])
            elif not isinstance(old_funcs, list):
                raise KeyError("%s.%s already exists." %
                               (cls.__name__, old_ref))
            getattr(cls, old_ref).append(old_func)

        setattr(cls, fname, func)
        return func
    return decorator

I chose a renaming scheme somewhat at random. The list allows you (or
someone else ;) to call monkeypatch repeatedly on the same cls.method
(but it's not thread-safe).

And although it might seem to be making monkeypatches easier to perform,
at least it's very explicit about what's going on as long as you keep
monkeypatch in the name.
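For a quick sanity check, here is the decorator (with the old-method bookkeeping described above) exercised on a throwaway class; `Someclass` and `greet` are hypothetical names chosen just for the demo:

```python
def monkeypatch_method(cls):
    """Add the decorated method to cls, keeping any replaced
    method in a list at cls._old_<name>."""
    def decorator(func):
        fname = func.__name__
        old_func = getattr(cls, fname, None)
        if old_func is not None:
            old_ref = "_old_%s" % fname
            old_funcs = getattr(cls, old_ref, None)
            if old_funcs is None:
                setattr(cls, old_ref, [])
            elif not isinstance(old_funcs, list):
                raise KeyError("%s.%s already exists." %
                               (cls.__name__, old_ref))
            getattr(cls, old_ref).append(old_func)
        setattr(cls, fname, func)
        return func
    return decorator

class Someclass:
    def greet(self):
        return "old"

@monkeypatch_method(Someclass)
def greet(self):
    return "new"
```

After patching, `Someclass().greet()` returns "new", and the original method is still reachable via `Someclass._old_greet[0]`.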


Robert Brewer
[EMAIL PROTECTED]


Re: [Python-Dev] context manager - generator interaction?

2007-04-05 Thread Robert Brewer
Guido van Rossum wrote:
 Isn't this violating the rule that a try/except should only enclose
 the smallest expression where the exception is expected?

Yeah, and I keep finding myself wanting to hyperlink to that rule in
the official docs, but it only seems to be written down in developer's
heads. Can we get that into the Language Ref somewhere? Maybe on the
http://docs.python.org/ref/try.html page?


Robert Brewer
System Architect
Amor Ministries
[EMAIL PROTECTED]


Re: [Python-Dev] I vote to reject: Adding timeout tosocket.pyand httplib.py.

2007-03-21 Thread Robert Brewer
Facundo Batista wrote:
 It works, but my only issue is that it gets ugly in the help():
 
  >>> sentinel = object()
  >>> def f(a, b=sentinel):
 ...     pass
 ...
  >>> help(f)
    ...
    f(a, b=<object object at 0xb7d64460>)
 
 I know I can use a class with __str__ instead of object, but 
 what would one print in that case?

I've found "missing" to be the most common (and the most understandable)
thing to print in that case.
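A sentinel whose repr reads "missing" keeps help() output clean; a minimal sketch (the class name, function, and default-computation are hypothetical):

```python
class Missing(object):
    """Sentinel for 'argument not supplied'; the repr keeps
    help() and tracebacks readable."""
    def __repr__(self):
        return "missing"

missing = Missing()

def f(a, b=missing):
    """Treat an omitted b (but not b=None) as 'derive from a'."""
    if b is missing:
        b = a * 2
    return b
```

Unlike a `b=None` default, this lets callers pass None explicitly and still be distinguished from "argument omitted".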


Robert Brewer
System Architect
Amor Ministries
[EMAIL PROTECTED]


Re: [Python-Dev] datetime module enhancements

2007-03-11 Thread Robert Brewer
Jon Ribbens wrote:
  Robert Brewer [EMAIL PROTECTED] wrote:
   One solution that just occurred to me -- and that
   skirts the issue of choosing an interpretation --
   is that, when comparing date and datetime objects,
   the datetime's .date() method is called and the
   result of that call is compared to the original
   date.
  +1
  
  ...and it's a decision that can be made independently
  of how to add or subtract dates.
 
 I don't like it. It sounds suspiciously like when
 comparing integers and floats, discard the non-integer
 portion of the float and then compare as integers.

...which can happen quite often, depending on your domain's requirements for 
significant digits. Integers and floats are numbers, not units, so a more 
proper analogy would have been, "it sounds like when comparing feet and inches, 
divide inches by 12 and discard any remainder." Which, again, can happen quite 
often.

But even feet and inches aren't a good analogy, because people don't say, 
"six-foot-three is six feet." But they *do* say, "is it Thursday right now?" 
quite often, and expect a yes-or-no answer, not "cannot compute." A slightly 
better analogy would be armies and platoons: when you compare armies (on almost 
any scale except size), you can't say an army is more or less than a platoon in 
that same army. The platoon is (part of) the army and the army is (made of) the 
platoons. So order is important only at a given level of granularity:

  Army1 = [P1, P2, P3]
  Army2 = [Px, Py, Pz]
  sorted([Army1, Army2, P1, P2, P3, Px, Py, Pz])
  [Army1, P1, P2, P3, Px, Py, Army2, Pz]

...that is, it doesn't matter where Army2 appears, as long as it's in the 
second half of the list. Note that the platoons can still be ordered relative 
to other platoons in the same army.

Likewise, when comparing dates to date-times (but not when adding them), the 
mental model of dates and times acts more like nested sets than continuous 
functions:

  Date1 = Datetimes(1, 2, 3)
  Date2 = Datetimes(x, y, z)
  sorted([Date1, Date2, ...])
  [Date1, T1, T2, T3, Tx, Ty, Date2, Tz]

...and the above can be achieved by:

class nested_datetime(datetime):
def __cmp__(self, other):
if isinstance(other, datetime):
return datetime.__cmp__(self, other)
elif isinstance(other, date):
return cmp(self.date(), other)
raise TypeError
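The __cmp__ protocol above is Python 2; under Python 3's rich comparisons the same nested ordering might be sketched like this (a rough translation, not a drop-in replacement):

```python
import datetime

class nested_datetime(datetime.datetime):
    """A datetime that orders against a plain date by its date part."""
    def __eq__(self, other):
        if isinstance(other, datetime.datetime):
            return datetime.datetime.__eq__(self, other)
        if isinstance(other, datetime.date):
            # Compare only at date granularity against plain dates.
            return self.date() == other
        return NotImplemented
    def __lt__(self, other):
        if isinstance(other, datetime.datetime):
            return datetime.datetime.__lt__(self, other)
        if isinstance(other, datetime.date):
            return self.date() < other
        return NotImplemented
    # Defining __eq__ suppresses the inherited hash; restore it.
    __hash__ = datetime.datetime.__hash__
```

With this, a nested_datetime at noon on March 9 compares equal to date(2007, 3, 9) and less than date(2007, 3, 10), while still ordering normally against other datetimes.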


Robert Brewer
System Architect
Amor Ministries
[EMAIL PROTECTED]


Re: [Python-Dev] datetime module enhancements

2007-03-10 Thread Robert Brewer
On 3/9/07, Collin Winter [EMAIL PROTECTED] wrote:
  On the subject of datetime enhancements, I came
  across an SF patch (#1673403) the other day that
  proposed making it possible to compare date and
  datetime objects.
 
 One solution that just occurred to me -- and that
 skirts the issue of choosing an interpretation --
 is that, when comparing date and datetime objects,
 the datetime's .date() method is called and the
 result of that call is compared to the original
 date. That is,
 
 datetime_obj  date_obj
 
 is implicitly equivalent to
 
 datetime_obj.date()  date_obj
 
 Seems a ready-made use case for that method.

+1

...and it's a decision that can be made independently
of how to add or subtract dates.


Robert Brewer
System Architect
Amor Ministries
[EMAIL PROTECTED]


Re: [Python-Dev] datetime module enhancements

2007-03-09 Thread Robert Brewer
Brett Cannon wrote:
 On 3/9/07, Collin Winter [EMAIL PROTECTED] wrote:
  On the subject of datetime enhancements, I came across an SF patch
  (#1673403) the other day that proposed making it possible to compare
  date and datetime objects...
 
 I personally like the current solution.  The proposal to just assume
 midnight goes against EIBTI in my mind.

Yeah, but the current solution goes against, um, APBP*. Not in my mind,
but in my code. Repeatedly. I can't count how many times I've written
code like:

if created < fdate:

when I should have written:

if isinstance(created, datetime.date):
date_version_of_created = created
else:
date_version_of_created = created.date()
if date_version_of_created < fdate:

But it gets better, because:

 >>> isinstance(datetime.datetime(x, y, z), datetime.date)
True

So the above won't work, you must remember to reverse the if/else:

if isinstance(created, datetime.datetime):
date_version_of_created = created.date()
else:
date_version_of_created = created
if date_version_of_created < fdate:

That's at least one too many must remembers for dumb (and busy!) ol'
me. EIBTI until it's a PITA.

Is an implicit time of 0 really so surprising? It doesn't seem to be
surprising for the datetime constructor:

 >>> datetime.datetime(2007, 3, 9)
datetime.datetime(2007, 3, 9, 0, 0)

Why should it be surprising for comparisons?
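The must-remember isinstance dance above can be folded into one small helper so the comparison reads naturally again; the helper name `as_date` is hypothetical:

```python
import datetime

def as_date(obj):
    """Collapse a datetime to its date part; pass plain dates through.

    Note the isinstance check must test datetime first, since
    datetime is a subclass of date.
    """
    if isinstance(obj, datetime.datetime):
        return obj.date()
    return obj

# The comparison from the post, without the if/else boilerplate:
created = datetime.datetime(2007, 3, 9, 14, 30)
fdate = datetime.date(2007, 3, 10)
assert as_date(created) < fdate
```

This works whether `created` arrives as a date or a datetime, which is exactly the situation the post complains about.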


Robert Brewer
System Architect
Amor Ministries
[EMAIL PROTECTED]

* APBP = Although, practicality beats purity


Re: [Python-Dev] pysqlite for 2.5?

2006-03-29 Thread Robert Brewer
Martin v. Löwis wrote:
 Guido van Rossum wrote:
  Unless you've recanted on that already, let me point out that I've
  never seen sqlite, and I've ignored this thread, so I don't 
 know what
  the disagreement is all about. Perhaps one person in favor and one
  person against could summarize the argument for me? Otherwise I'll
  have to go with no just to err on the side of safety. I 
 have strong
  feelings about the language. Sometimes I have strong feelings about
  the library. This doesn't seem to be one of those cases though...
 
 Let me try to take both sides simultaneously:
 
 For: would add an SQL library to the standard distribution, and one
 that doesn't depend on additional infrastructure on the target machine
 (such as an existing database server); the author of that library is
 fine with including it in Python
 
 Against: Adds work-load on the release process, adding more libraries
 to the already-large list of new libraries for 2.5. Choice of naming
 things is ad-hoc, but gets cast in stone by the release; likewise,
 choice of specific SQL library might inhibit addition of different
 libraries later.

More "Against":
Explaining "database is locked" errors (due to SQLite's exposed 
multiple-readers/one-writer design) on a daily basis (FAQ entries 
notwithstanding).


Robert Brewer
System Architect
Amor Ministries
[EMAIL PROTECTED]


Re: [Python-Dev] conditional expressions - add parens?

2006-03-07 Thread Robert Brewer
Greg Ewing wrote:
 Jeremy Hylton wrote:
  Perhaps the solution
  is to require parens around all expressions, a simple 
 consistent rule.
 
 I actually designed a language with that feature once.
 It was an exercise in minimality, with hardly anything
 built-in -- all the arithmetic operators, etc. were
 defined in the language.
 
 A result was that there was no built-in notion of
 precedence, and my solution was to require parentheses
 around every infix operation. So instead of
 
dsq = b * b - 4 * a * c
 
 you would have had to write
 
dsq = ((b * b) - ((4 * a) * c))
 
 I never got an implementation working well enough
 to find out how much of a disaster this would
 have been to use, though. :-)

I already do that anyway, and even update other people's code in any
open-source projects I contribute to, because I find it *far* easier to
read and write 'unnecessary' parens than remember precedence rules. But
I can understand why some people would balk at it, so +0.5 from me. ;)


Robert Brewer
System Architect
Amor Ministries
[EMAIL PROTECTED]


[Python-Dev] Unifying trace and profile

2006-02-21 Thread Robert Brewer
There are a number of features I'd like to see happen with Python's
tracing and profiling subsystems (but I don't have the C experience to
do it myself). I started to write an SF feature-request and then
realized it was too much for a single ticket. Maybe a PEP? All of these
would be make my latest side project[1] a lot easier.

Anyway, here they are (most important and easiest-to-implement first):


1. Allow trace hooks to receive c_call, c_return, and c_exception events
(like profile does).

2. Allow profile hooks to receive line events (like trace does).

3. Expose new sys.gettrace() and getprofile() methods, so trace and
profile functions that want to play nice can call
sys.settrace/setprofile(None) only if they are the current hook.

4. Make the same move that sys.exitfunc -> atexit made (from a single
function to multiple functions via registration), so multiple
tracers/profilers can play nice together.

5. Allow the core to filter on the event arg before hook(frame, event,
arg) is called.

6. Unify tracing and profiling, which would remove a lot of redundant
code in ceval and sysmodule and free up some space in the PyThreadState
struct to boot.

7. As if the above isn't enough of a dream, it would be nice to have a
bytecode tracer, which didn't bother with the f_lineno logic in
maybe_call_line_trace, but just called the hook on every instruction.
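For context on items 1-3, here is a minimal line-tracing hook using today's sys.settrace; the `tracer` and `demo` names are just for the sketch:

```python
import sys

events = []

def tracer(frame, event, arg):
    # The global trace function fires on 'call' for each new frame;
    # returning itself enables 'line'/'return' events in that frame.
    events.append((event, frame.f_lineno))
    return tracer

def demo():
    x = 1
    return x + 1

sys.settrace(tracer)
demo()
sys.settrace(None)
```

Note that a trace hook set this way never sees c_call/c_return events (item 1's complaint), and there is no way for it to ask who the current hook is before replacing it (item 3).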


Robert Brewer
System Architect
Amor Ministries
[EMAIL PROTECTED]

[1] PyConquer, a trace hook to help understand and debug concurrent
(threaded) code. http://projects.amor.org/misc/wiki/PyConquer


Re: [Python-Dev] threadsafe patch for asynchat

2006-02-08 Thread Robert Brewer
Barry Warsaw wrote:
 On Tue, 2006-02-07 at 16:01 -0800, Robert Brewer wrote:
 
  Perhaps, but please keep in mind that the smtpd module uses 
  both, currently, and would have to be rewritten if either is 
  removed.
 
 Would that really be a huge loss?

It'd be a huge loss for the random fellow who needs to write an email
fixup proxy between a broken client and Exim in a couple of hours. ;)
But I can't speak for how often this need comes up among users.


Robert Brewer
System Architect
Amor Ministries
[EMAIL PROTECTED]


Re: [Python-Dev] Let's just *keep* lambda

2006-02-08 Thread Robert Brewer
Raymond Hettinger wrote:
  How about (lambda x,y: x**y)?
 
 The purpose of this thread was to conserve brain-power by 
 bringing the issue to a close.  Instead, it is turning into
 syntax/renaming fest.  May I suggest that this be moved to
 comp.lang.python and return only if a community consensus
 emerges from the thousands of random variants?

I'd like to suggest this be moved to comp.lang.python and never return.
Community consensus on syntax is a pipe dream.


Robert Brewer
System Architect
Amor Ministries
[EMAIL PROTECTED]


Re: [Python-Dev] threadsafe patch for asynchat

2006-02-07 Thread Robert Brewer
Guido van Rossum wrote:
 IMO asynchat and asyncore are braindead. The should really be removed
 from the standard library. The code is 10 years old and represents at
 least 10-year-old thinking about how to do this. The amount of hackery
 in Zope related to asyncore was outrageous -- basically most of
 asyncore's guts were replaced with more advanced Zope code, but the
 API was maintained for compatibility reasons. A nightmare.

Perhaps, but please keep in mind that the smtpd module uses both, currently, 
and would have to be rewritten if either is removed.


Robert Brewer
System Architect
Amor Ministries
[EMAIL PROTECTED]


Re: [Python-Dev] Keep default comparisons - or add a second set?

2005-12-28 Thread Robert Brewer
Noam Raphael wrote:
 I don't think that every type that supports equality
 comparison should support order comparison. I think
 that if there's no meaningful comparison (whether
 equality or order), an exception should be raised.

Just to keep myself sane...

 def date_range(start=None, end=None):
 if start == None:
 start = datetime.date.today()
 if end == None:
 end = datetime.date.today()
 return end - start

Are you saying the if statements will raise TypeError if start or end are dates? That would be a sad day for Python. Perhaps you're saying that there is a meaningful comparison between None and anything else, but please clarify if so.
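For what it's worth, the usual defensive spelling sidesteps the question entirely by never invoking `==` on the arguments; a sketch of the same function:

```python
import datetime

def date_range(start=None, end=None):
    # "is None" tests identity, so no type's __eq__ (however strict)
    # can raise here.
    if start is None:
        start = datetime.date.today()
    if end is None:
        end = datetime.date.today()
    return end - start
```

But the point of the post stands: plenty of existing code spells the test `== None`, and it would break if equality comparisons started raising.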


Robert Brewer
System Architect
Amor Ministries
[EMAIL PROTECTED]





Re: [Python-Dev] Adding a conditional expression in Py3.0

2005-09-30 Thread Robert Brewer
Guido:
 Now you've pushed me over the edge. I've made up my mind 
 now, X if C else Y it will be. I hope to find time to
 implement it in Python 2.5.

Steve Holden:
 Not before time, either. If this ends the discussion then 
 I'll consider I've done everyone a favour. Sometimes no
 decision is worse than the wrong decision ;-).

Exactly how I felt about bringing the decorator decision to a close. ;)
Congratulations to you both!


Robert Brewer
System Architect
Amor Ministries
[EMAIL PROTECTED]


Re: [Python-Dev] PEP 8: exception style

2005-08-06 Thread Robert Brewer
A.M. Kuchling wrote:
 PEP 8 doesn't express any preference between the 
 two forms of raise statements:
 raise ValueError, 'blah'
raise ValueError("blah")
 
 I like the second form better, because if the exception arguments are
 long or include string formatting, you don't need to use line
 continuation characters because of the containing parens.  Grepping
 through the library code, the first form is in the majority, used
 roughly 60% of the time.
 
 Should PEP 8 take a position on this?  If yes, which one?

I like the second form better, because even intermediate Pythonistas
sometimes make a mistake between:

raise ValueError, "A"

and

raise (ValueError, "A")

I'd like to see the first form removed in Python 3k, to help reduce the
ambiguity. But PEP 8 taking a stand on it would be a good start for now.


Robert Brewer
System Architect
Amor Ministries
[EMAIL PROTECTED]


Re: [Python-Dev] Request for dev permissions

2005-05-17 Thread Robert Brewer
Reinhold Birkenfeld wrote:
 would anybody mind if I was given permissions on the tracker 
 and CVS, for fixing small
 things like bug #1202475. I feel that I can help you others 
 out a bit with this and
 I promise I won't change the interpreter to accept braces...


I made a patch for that one the next day, by the way. #1203094


Robert Brewer
System Architect
Amor Ministries
[EMAIL PROTECTED]

P.S. Do you have a valid email address, RB? I wasn't able to fix up your
nospam address by hand.


Re: [Python-Dev] PEP 343 - Abstract Block Redux

2005-05-13 Thread Robert Brewer
Guido van Rossum wrote:
 I've written up the specs for my PEP 340 redux proposal as a
 separate PEP, PEP 343.
 
 http://python.org/peps/pep-0343.html
 
 Those who have been following the thread Merging PEP 310 and PEP
 340-redux? will recognize my proposal in that thread, which received
 mostly positive responses there.
 
 Please review and ask for clarifications of anything that's unclear.

There's a typo in the code snippets at the moment.

The translation of the above statement is:

abc = EXPR
exc = ()  # Or (None, None, None) ?
try:
try:
VAR = abc.__enter__()
BLOCK
except:
exc = sys.exc_info()
raise
finally:
abc.__exit__(exc)

I think you meant "abc.__exit__(*exc)". Assuming that, then "exc =
(None, None, None)" makes the most sense. If exc_info() is going to be
passed as a single arg, then I'd rather have the default "exc = ()", so
I can simply check "if exc:" in the __exit__ method.
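Running the PEP's expansion by hand with a toy manager shows the *exc reading working; the `Manager` class is hypothetical, and its variadic __exit__ follows this draft of PEP 343, not necessarily the final signature:

```python
import sys

class Manager:
    """Toy object implementing the draft __enter__/__exit__ protocol."""
    def __init__(self):
        self.log = []
    def __enter__(self):
        self.log.append("enter")
        return self
    def __exit__(self, *exc):
        # exc is () on normal exit, (type, value, traceback) on error.
        self.log.append(("exit", exc[:1]))

abc = Manager()
exc = ()
try:
    try:
        try:
            VAR = abc.__enter__()
            raise ValueError("boom")  # BLOCK raising an exception
        except:
            exc = sys.exc_info()
            raise
    finally:
        abc.__exit__(*exc)
except ValueError:
    pass  # __exit__ ran first, then the exception propagated
```

With the default `exc = ()`, __exit__ receives zero extra arguments on a clean exit and three on an error, so `if exc:`-style checks inside __exit__ stay simple.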


Robert Brewer
System Architect
Amor Ministries
[EMAIL PROTECTED]


RE: [Python-Dev] Anonymous blocks: Thunks or iterators?

2005-04-29 Thread Robert Brewer
 [Greg Ewing]
  Elegant as the idea behind PEP 340 is, I can't shake
  the feeling that it's an abuse of generators. It seems
  to go to a lot of trouble and complication so you
  can write a generator and pretend it's a function
  taking a block argument.

[Guido]
 Maybe. You're not the first one saying this and I'm not saying no
 outright, but I'd like to defend the PEP.
 
 There are a number of separate ideas that all contribute to PEP 340.
 One is turning generators into more general coroutines: continue EXPR
 passes the expression to the iterator's next() method (renamed to
 __next__() to work around a compatibility issue and because it should
 have been called that in the first place), and in a generator this
 value can be received as the return value of yield. Incidentally this
 makes the generator *syntax* more similar to Ruby (even though Ruby
 uses thunks, and consequently uses return instead of continue to pass
 a value back). I'd like to have this even if I don't get the block
 statement.

Completely agree. Maybe we should have PEP 340 push just that, and make
a PEP 341 independently for resource-cleanup (which assumes 340)?

 [snip]
 
 The other problem with thunks is that once we think of them as the
 anonymous functions they are, we're pretty much forced to say that a
 return statement in a thunk returns from the thunk rather than from
 the containing function. Doing it any other way would cause major
 weirdness when the thunk were to survive its containing function as a
 closure (perhaps continuations would help, but I'm not about to go
 there :-).
 
 But then an IMO important use case for the resource cleanup template
 pattern is lost. I routinely write code like this:
 
 def findSomething(self, key, default=None):
 self.lock.acquire()
 try:
  for item in self.elements:
  if item.matches(key):
  return item
  return default
 finally:
self.lock.release()
 
 and I'd be bummed if I couldn't write this as
 
 def findSomething(self, key, default=None):
 block synchronized(self.lock):
  for item in self.elements:
  if item.matches(key):
  return item
  return default

Okay, you've convinced me. The only way I can think of to get the effect
I've been wanting would be to recompile the template function every time
that it's executed with a different block. Call it a Python
_re_processor ;). Although you could memoize the the resultant
bytecode, etc., it would still be pretty slow, and you wouldn't be able
to alter (rebind) the thunk once you'd entered the caller. Even then,
you'd have the cell issues you mentioned, trying to push values from the
thunk's original scope. Bah. It's so tempting on the semantic level, but
the implementation's a bear.


Robert Brewer
MIS
Amor Ministries
[EMAIL PROTECTED]


[Python-Dev] Re: scope-collapse (was: anonymous blocks)

2005-04-26 Thread Robert Brewer
[Jim Jewett]
 (2)  Add a way to say Make this function I'm calling 
 use *my* locals and globals.  This seems to meet all
 the agreed-upon-as-good use cases, but there is disagreement
 over how to sensibly write it.  The calling function is
 the place that could get surprised, but people who want
 thunks seem to want the specialness in the 
 called function.

[Guido]
 I think there are several problems with this. First, it looks
 difficult to provide semantics that cover all the corners for the
 blending of two namespaces. What happens to names that have a
 different meaning in each scope?

[Jim]
 Programming error.  Same name == same object.

[Guido]
 Sounds like a recipe for bugs to me. At the very least it is a total
 breach of abstraction, which is the fundamental basis of the
 relationship between caller and callee in normal circumstances. The
 more I understand your proposal the less I like it.

[Jim]
 If a function is using one of _your_ names for something 
 incompatible, then don't call that function with collapsed
 scope.  The same problem happens with globals today.
 Code in module X can break if module Y replaces (not shadows,
 replaces) a builtin with an incompatible object.
 
 Except ...
 (E.g. 'self' when calling a method of
 another object; or any other name clash.)
 
 The first argument of a method *might* be a special case.  It seems
 wrong to unbind a bound method.  On the other hand, resource
 managers may well want to use unbound methods for the called
 code.

Urg. Please, no. If you're going to blend scopes, the callee should have
nothing passed to it. Why would you possibly want it when you already
have access to both scopes which are to be blended?


[Guido]
 Are the globals also blended?  How?

[Jim]
 Yes.  The callee does not even get to see its normal namespace.
 Therefore, the callee does not get to use its normal name 
 resolution.

[Guido]
 Another breach of abstraction: if a callee wants to use an imported
 module, the import should be present in the caller, not in the callee.

Yes, although if a callee wants to use a module that has not been
imported by the caller, it should be able to do so with a new import
statement (which then affects the namespace of the caller).

[Guido again]
 It really strikes me as an endless source of errors that these
 blended-scope callees (in your proposal) are ordinary
 functions/methods, which means that they can *also* be called without
 blending scopes. Having special syntax to define a callee intended for
 scope-blending seems much more appropriate (even if there's also
 special syntax at the call site).

Agreed. They shouldn't be ordinary functions at all, in my mind. That
means one can also mark the actual call on the callee side, instead of
the caller side; in other words, you wouldn't need a "collapse" keyword
at all if you formed the callee with a "defmacro" or other (better ;)
keyword. I guess if y'all find it surprising, you could keep "collapse".


[Jim]
 If the name normally resolves in locals (often inlined to a 
 tuple, today),
 it looks in the shared scope, which is owned by the 
 caller.  This is
 different from a free variable only because the callee can 
 write to this
 dictionary.

[Guido]
 Aha! This suggests that a blend-callee needs to use different bytecode
 to avoid doing lookups in the tuple of optimized locals, since the
 indices assigned to locals in the callee and the caller won't match up
 except by miracle.

[Guido]
 Third, I expect that if we solve the first two
 problems, we'll still find that for an efficient implementation we
 need to modify the bytecode of the called function.

[Jim]
 Absolutely.  Even giving up the XXX_FAST optimizations would
 still require new bytecode to not assume them.  (Deoptimizing
 *all* functions, in *all* contexts, is not a sensible tradeoff.)

I'm afraid I'm only familiar with CPython, but wouldn't callee locals
just map to XXX_FAST indices via the caller's co_varnames tuple?
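For readers following along, the split under discussion is visible on any CPython code object: names resolved by the `*_FAST` opcodes live in `co_varnames` and are addressed by position, while `co_names` holds global and attribute names looked up at run time — which is exactly why a callee's indices can't be expected to line up with a caller's.

```python
def f(x):
    y = x + 1      # x and y compile to LOAD_FAST/STORE_FAST by index
    return y

print(f.__code__.co_varnames)   # ('x', 'y') -- locals, addressed by position
print(f.__code__.co_names)      # () -- no globals or attributes used

def g(x):
    z = len([x])   # len is a global, looked up by name at call time
    return z

print(g.__code__.co_names)      # ('len',)
```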

Remapping jump targets, on the other hand, would be something to quickly
ban. You shouldn't be able to write trash like:

defmacro keepgoing:
    else:
        continue
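Today's compiler already enforces half of that ban: a bare `continue` can't even be compiled outside a loop, which is one more reason such a body could never be an ordinary function. A small demonstration with `compile`:

```python
# A macro body like "continue" has no meaning as an ordinary function:
# the compiler rejects it before it could ever run.
src = "def keepgoing():\n    continue\n"

try:
    compile(src, "<macro>", "exec")
except SyntaxError as e:
    print("rejected:", e.msg)
```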

[Guido]
 Try to make sure that it can be used in a statement context
 as well as in an expression context.
...
 I'm trying to sensitize you to potential uses like this:
 
 def foo(b):
     c = a

 def bar():
     a = a1
     print collapse foo(b1)

If the callees aren't real functions and don't get passed anything, the
sensible approach would be to disallow expression-context use of them.
Rewrite the above to:

defcallee foo:
    c = a

def bar():
    a = a1
    collapse foo
    print c


Robert Brewer
MIS
Amor Ministries
[EMAIL PROTECTED]
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


RE: [Python-Dev] defmacro (was: Anonymous blocks)

2005-04-25 Thread Robert Brewer
Shane Hathaway wrote:
 Robert Brewer wrote:
  So currently, all subclasses just override __set__, which leads to a
  *lot* of duplication of code. If I could write the base class'
  __set__ to call macros like this:
  
  def __set__(self, unit, value):
      self.begin()
      if self.coerce:
          value = self.coerce(unit, value)
      oldvalue = unit._properties[self.key]
      if oldvalue != value:
          self.pre()
          unit._properties[self.key] = value
          self.post()
      self.end()
  
  defmacro begin:
      pass
  
  defmacro pre:
      pass
  
  defmacro post:
      pass
  
  defmacro end:
      pass
 
 Here is a way to write that using anonymous blocks:
 
 def __set__(self, unit, value):
     with self.setting(unit, value):
         if self.coerce:
             value = self.coerce(unit, value)
         oldvalue = unit._properties[self.key]
         if oldvalue != value:
             with self.changing(oldvalue, value):
                 unit._properties[self.key] = value
 
 def setting(self, unit, value):
     # begin code goes here
     yield None
     # end code goes here
 
 def changing(self, oldvalue, newvalue):
     # pre code goes here
     yield None
     # post code goes here
 
...
 Which do you prefer?  I like fewer methods. ;-)

I still prefer more methods, because my actual use-cases are more
complicated. Your solution would work for the specific case I gave, but
try factoring in:

* A subclass which needs to share locals between begin and post, instead
of pre and post.

or

* A set of 10 subclasses which need the same begin() but different end()
code.

Yielding seems both too restrictive and too inside-out to be readable,
IMO.
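For what it's worth, Shane's yield-based hooks became idiomatic once `contextlib.contextmanager` landed in 2.5. A self-contained modern sketch — the `Property`/`Unit` names and the `log` list are invented here purely for illustration:

```python
from contextlib import contextmanager

class Property:
    """Minimal sketch of the generator-based begin/pre/post/end hooks."""

    def __init__(self, key, coerce=None):
        self.key = key
        self.coerce = coerce
        self.log = []                 # records hook order, for illustration

    @contextmanager
    def setting(self, unit, value):
        self.log.append('begin')      # begin code goes here
        yield
        self.log.append('end')        # end code goes here

    @contextmanager
    def changing(self, oldvalue, newvalue):
        self.log.append('pre')        # pre code goes here
        yield
        self.log.append('post')       # post code goes here

    def __set__(self, unit, value):
        with self.setting(unit, value):
            if self.coerce:
                value = self.coerce(unit, value)
            oldvalue = unit._properties[self.key]
            if oldvalue != value:
                with self.changing(oldvalue, value):
                    unit._properties[self.key] = value

class Unit:
    size = Property('size')           # data descriptor: __set__ intercepts

    def __init__(self):
        self._properties = {'size': 0}

u = Unit()
u.size = 5
print(Unit.__dict__['size'].log)      # ['begin', 'pre', 'post', 'end']
print(u._properties['size'])          # 5
```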


Robert Brewer
MIS
Amor Ministries
[EMAIL PROTECTED]