> PEP 3121 Module Initialization and finalization von Löwis
>
> I like it. I wish the title were changed to "Extension Module ..." though.
Done!
Martin
___
Python-3000 mailing list
Python-3000@python.org
http://mail.python.org/mailman/listinfo/p
Raymond> [Skip]
>> I use it all the time. For example, to build up (what I consider to be)
>> readable SQL queries:
>>
>> rows = self.executesql("select cities.city, state, country"
>>                        "from cities, venues, events, addresses"
>>
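For readers skimming the archive: the feature under discussion, implicit adjacent-literal concatenation, joins the pieces at compile time. A minimal sketch (table names taken from Skip's example):

```python
# Adjacent string literals are concatenated by the compiler,
# so no '+' is needed and no runtime cost is paid.
query = ("select cities.city, state, country "
         "from cities, venues, events, addresses")
print(query)
```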
> PEP 3120 Using UTF-8 as the default source encoding von Löwis
>
> The basic idea seems very reasonable. I expect that the changes to the
> parser may be quite significant though. Also, the parser ought to be
> weaned off C stdio in favor of Python's own I/O library. I wonder if
> it's really p
Georg Brandl wrote:
> FWIW, I'm -1 on both proposals too. I like implicit string literal
> concatenation
> and I really can't see what we gain from backslash continuation removal.
>
> Georg
-1 on removing them also. I find they are helpful.
It could be made optional in block headers that end
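For context, the two continuation styles being weighed can be compared directly; a small sketch:

```python
# Backslash continuation (what the proposal would remove):
total_backslash = 1 + \
                  2 + \
                  3

# The parenthesized alternative, which would remain available:
total_parens = (1 +
                2 +
                3)

assert total_backslash == total_parens
```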
Raymond Hettinger wrote:
>>Raymond> I find that style hard to maintain. What is the advantage over
>>Raymond> multi-line strings?
>>
>>Raymond> rows = self.executesql('''
>>Raymond>     select cities.city, state, country
>>Raymond>     from cities, venues, events, addresses
>>
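Raymond's multi-line-string alternative, spelled out as runnable code (illustrative, not the thread's exact snippet):

```python
# One triple-quoted literal instead of implicit concatenation;
# the embedded newlines and indentation are preserved, which SQL tolerates.
rows_sql = '''
    select cities.city, state, country
    from cities, venues, events, addresses
'''
print(rows_sql)
```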
On 01/05/2007 18.09, Phillip J. Eby wrote:
>> The alternative is to code the automatic finalization steps using
>> weakref callbacks. For those used to using __del__, it takes a little
>> while to learn the idiom but essentially the technique is hold a proxy
>> or ref with a callback to a boundme
"Greg Ewing" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
| Terry Reedy wrote:
| > and hence '=' will not become an operator and hence '=' will not become
| > overloadable.
|
| Actually, '=' *is* overloadable in most cases,
It is not overloadable in the sense I meant, and in the s
On 2 maj 2007, at 20.08, Guido van Rossum wrote:
> [Georg]
> >>> a, *b, c = range(5)
> >>> a
> 0
> >>> c
> 4
> >>> b
> [1, 2, 3]
>
>
> That sounds messy; only allowing *a at the end seems a bit more
> manageable. But I'll hold off until I can shoot holes in your
> impl
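The semantics being debated here became PEP 3132 and were accepted for Python 3.0; in today's Python they work like this:

```python
# *b soaks up whatever the mandatory names on either side don't take.
a, *b, c = range(5)
assert (a, b, c) == (0, [1, 2, 3], 4)

# The starred name is also allowed at the end, the narrower case
# Guido was more comfortable with:
first, *rest = "abcd"
assert first == "a" and rest == ["b", "c", "d"]
```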
Simon Percivall schrieb:
> On 2 maj 2007, at 20.08, Guido van Rossum wrote:
>> [Georg]
>> >>> a, *b, c = range(5)
>> >>> a
>> 0
>> >>> c
>> 4
>> >>> b
>> [1, 2, 3]
>>
>>
>> That sounds messy; only allowing *a at the end seems a bit more
>> manageable. But I'll hold off
Raymond> Another way to look at it is to ask whether we would consider
Raymond> adding implicit string concatenation if we didn't already have
Raymond> it.
As I recall it was a "relatively recent" addition. Maybe 2.0 or 2.1? It
certainly hasn't been there from the beginning.
Skip
Ron Adam wrote:
> The following inconsistency still bothers me, but I suppose it's an edge
> case that doesn't cause problems.
>
> >>> print r"hello world\"
>   File "", line 1
>     print r"hello world\"
>                         ^
> SyntaxError: EOL while scanning single-quoted string
> In
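The behavior Ron points out is still reproducible: the tokenizer treats the backslash-quote pair as part of the literal even in raw strings, so a raw string cannot end with an odd number of backslashes. A sketch using compile() to trap the error:

```python
# r"...\" never terminates: the tokenizer carries \" into the literal
# and runs off the end of the line, so compiling it raises SyntaxError.
try:
    compile('s = r"hello world\\"', "<example>", "exec")
    ended_ok = True
except SyntaxError:
    ended_ok = False
assert not ended_ok

# An even number of trailing backslashes is fine:
assert r"hello world\\" == "hello world" + "\\" * 2
```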
Benji York schrieb:
> Ron Adam wrote:
>> The following inconsistency still bothers me, but I suppose it's an edge
>> case that doesn't cause problems.
>>
>> >>> print r"hello world\"
>>File "", line 1
>> print r"hello world\"
>> ^
>> SyntaxError: EOL while scannin
On Thursday 03 May 2007, Georg Brandl wrote:
> Is that something that can be agreed upon without a PEP?
I expect this to be at least somewhat controversial, so a PEP is warranted.
I'd like to see it fixed, though.
-Fred
--
Fred L. Drake, Jr.
> "skip" == skip <[EMAIL PROTECTED]> writes:
Raymond> Another way to look at it is to ask whether we would consider
Raymond> adding implicit string concatenation if we didn't already have
Raymond> it.
skip> As I recall it was a "relatively recent" addition. Maybe 2.0 or
On 5/3/07, Greg Ewing <[EMAIL PROTECTED]> wrote:
> I don't doubt that things like @before and @after are
> handy. But being handy isn't enough for something to
> get into the Python core.
I hadn't thought of @before and @after as truly core; I had assumed
they were decorators that would be availa
Barry Warsaw writes:
> The problem is that
>
>     _("some string"
>       " and more of it")
>
> is not the same as
>
>     _("some string" +
>       " and more of it")
Are you worried about translators? The gettext functions themselves
will just see the result of the operation.
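Stephen's point, sketched with a stand-in for gettext (the real `_` comes from the gettext module; this fake only records what reaches it):

```python
calls = []

def _(s):
    # Stand-in for gettext.gettext: records exactly what it receives.
    calls.append(s)
    return s

_("some string"
  " and more of it")
_("some string" +
  " and more of it")

# Both spellings deliver the same single, already-joined string,
# so the runtime lookup is identical; the difference only matters
# to source-scanning extraction tools like xgettext.
assert calls == ["some string and more of it"] * 2
```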
On May 3, 2007, at 10:40 AM, Stephen J. Turnbull wrote:
> Barry Warsaw writes:
>
>> The problem is that
>>
>>     _("some string"
>>       " and more of it")
>>
>> is not the same as
>>
>>     _("some string" +
>>       " and more of it")
>
> Are
At 10:16 AM 5/3/2007 -0400, Jim Jewett wrote:
>On 5/3/07, Greg Ewing <[EMAIL PROTECTED]> wrote:
>
> > I don't doubt that things like @before and @after are
> > handy. But being handy isn't enough for something to
> > get into the Python core.
>
>I hadn't thought of @before and @after as truly core;
On 5/3/07, Simon Percivall <[EMAIL PROTECTED]> wrote:
> On 2 maj 2007, at 20.08, Guido van Rossum wrote:
> > [Georg]
> >>> a, *b, c = range(5)
> >>> a
> 0
> >>> c
> 4
> >>> b
> [1, 2, 3]
> >
> >
> > That sounds messy; only allowing *a at the end seems a b
Hi,
One item that I haven't seen mentioned in support of this is that
there is code that uses getattr for accessing things that might be
accessed in other ways.
For example the Attribute access Dictionaries
(http://mail.python.org/pipermail/python-list/2007-March/429137.html),
if one of the keys has a
On 5/3/07, "Martin v. Löwis" <[EMAIL PROTECTED]> wrote:
> Untangling the parser from stdio - sure. I also think it would
> be desirable to read the whole source into a buffer, rather than
> applying a line-by-line input. That might be a bigger change,
> making the tokenizer a multi-stage algorithm:
On 5/2/07, Andrew Koenig <[EMAIL PROTECTED]> wrote:
Looking at PEP-3125, I see that one of the rejected alternatives is to
allow
any unfinished expression to indicate a line continuation.
I would like to suggest a modification to that alternative that has worked
successfully in another programm
On May 3, 2007, at 12:41 PM, Stephen J. Turnbull wrote:
> Barry Warsaw writes:
>
>> IMO, this is a problem. We can make the Python extraction tool work,
>> but we should still be very careful about breaking 3rd party tools
>> like xgettext, since oth
On 5/3/07, Fred L. Drake, Jr. <[EMAIL PROTECTED]> wrote:
> On Thursday 03 May 2007, Georg Brandl wrote:
> > Is that something that can be agreed upon without a PEP?
>
> I expect this to be at least somewhat controversial, so a PEP is warranted.
> I'd like to see it fixed, though.
It's too late fo
On 5/3/07, Georg Brandl <[EMAIL PROTECTED]> wrote:
> > These are raw strings if you didn't notice.
>
> It's all in the implementation. The tokenizer takes it as an escape sequence
> -- it doesn't specialcase raw strings -- the AST builder (parsestr() in ast.c)
> doesn't.
FWIW, it wasn't designed t
Guido van Rossum schrieb:
> On 5/3/07, Fred L. Drake, Jr. <[EMAIL PROTECTED]> wrote:
>> On Thursday 03 May 2007, Georg Brandl wrote:
>> > Is that something that can be agreed upon without a PEP?
>>
>> I expect this to be at least somewhat controversial, so a PEP is warranted.
>> I'd like to see it
>> 1. read input into a buffer
>> 2. determine source encoding (looking at a BOM, else a
>>    declaration within the first two lines, else default
>>    to UTF-8)
>> 3. if the source encoding is not UTF-8, pass it through
>>    a codec (decode to string, encode to UTF-8). Otherwise,
>>    check th
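Steps 1–2 of Martin's sketch can be prototyped in a few lines. This is a simplified illustration only; the real tokenizer also enforces PEP 263's comment-only restriction on the declaration line, which is omitted here:

```python
import codecs
import re

# Loosely based on the PEP 263 coding-declaration pattern.
CODING_RE = re.compile(rb"coding[:=]\s*([-\w.]+)")

def detect_source_encoding(raw: bytes) -> str:
    # 1. the caller has already read the whole source into `raw`
    # 2. a BOM wins, else a declaration in the first two lines,
    #    else default to UTF-8 (the PEP 3120 default)
    if raw.startswith(codecs.BOM_UTF8):
        return "utf-8"
    for line in raw.splitlines()[:2]:
        m = CODING_RE.search(line)
        if m:
            return m.group(1).decode("ascii")
    return "utf-8"
```

For example, `detect_source_encoding(b"# -*- coding: latin-1 -*-\n")` returns `"latin-1"`, while a bare `b"x = 1\n"` falls through to `"utf-8"`.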
Giovanni Bajo wrote:
> On 01/05/2007 18.09, Phillip J. Eby wrote:
> > That means that if 'self' in your example above is collected, then
> > the weakref no longer exists, so the closedown won't be called.
>
> Yes, but as far as I understand it, the GC does special care to ensure that
> the callb
Simon Percivall wrote:
> if the proposal is constrained to only allowing the *name at
> the end, wouldn't a more useful behavior be to not exhaust the
> iterator, making it similar to:
>
> > it = iter(range(10))
> > a = next(it)
> > b = it
>
> or would this be too surprising?
It would surpris
Steven Bethard wrote:
> This brings up the question of why the patch produces lists, not
> tuples. What's the reasoning behind that?
When dealing with an iterator, you don't know the
length in advance, so the only way to get a tuple
would be to produce a list first and then create
a tuple from it
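The argument above in executable form, as a hypothetical helper mimicking the unpacking:

```python
def star_unpack_tail(iterable):
    # Emulates "a, *b = iterable".  The iterator's length is unknown
    # up front, so the tail must be materialized; a list is the direct
    # result, and a tuple would require building this list first anyway.
    it = iter(iterable)
    a = next(it)
    b = list(it)
    return a, b

a, b = star_unpack_tail(range(5))
assert (a, b) == (0, [1, 2, 3, 4])
```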
In all the threads about this PEP I still haven't seen a single
example of how to write a finalizer.
Let's take a specific example of a file object (this occurs in io.py
in the p3yk branch). When a write buffer is GC'ed it must be flushed.
The current way of writing this is simple:
class Buffered
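The `class Buffered...` line is cut off in the archive; here is a hedged sketch of the `__del__` style described as simple (names are illustrative, not the actual io.py code):

```python
class BufferedWriter:
    def __init__(self, raw):
        self.raw = raw      # underlying stream-like object
        self.pending = []   # write buffer

    def write(self, data):
        self.pending.append(data)

    def flush(self):
        if self.pending:
            self.raw.write("".join(self.pending))
            self.pending = []

    def __del__(self):
        # The behavior under discussion: flush pending data
        # when the object is collected.
        self.flush()
```

The thread's question is how to express this same guarantee with weakref callbacks instead of `__del__`, given `__del__`'s interaction with cyclic GC.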
From: "Greg Ewing" <[EMAIL PROTECTED]>
> It has nothing to do with cyclic GC. The point is that
> if the refcount of a weak reference drops to zero before
> that of the object being weakly referenced, the weak
> reference object itself is deallocated and its callback
> is *not* called. So having th
Barry Warsaw writes:
> IMO, this is a problem. We can make the Python extraction tool work,
> but we should still be very careful about breaking 3rd party tools
> like xgettext, since other projects may be using such tools.
But
    _("some string" +
      " and more of it")
is a
On Thursday 03 May 2007 15:40, Stephen J. Turnbull wrote:
> Teaching Python-based extraction tools about it isn't hard, just make
> sure that you slurp in the whole argument, and eval it.
We generate our component documentation based on going through the AST
generated by compiler.ast, finding doc
Michael Sparks writes:
> We generate our component documentation based on going through the AST
> generated by compiler.ast, finding doc strings (and other strings in
> other known/expected locations), and then formatting using docutils.
Are you talking about I18N and gettext? If so, I'm real