Re: [Python-Dev] Recommend accepting PEP 312 -- Simple Implicit Lambda

2005-06-18 Thread Kay Schluehr
Nick Coghlan wrote:
> Guido van Rossum wrote:
> 
>>>Recommend accepting just the basic PEP which only targets simple,
>>>obvious cases.  The discussed extensions are unattractive and should be
>>>skipped.
>>
>>
>>-1. The "unary colon" looks unPythonic to me.
>>
> 
> 
> Step 1 would be to require parentheses around the whole thing (ala 
> generator expressions) to make it easier to see where the deferred 
> expression ends.
> 
> But all my use cases that I can think of off the top of my head involve 
> 'sorted', where it wouldn't help at all because of the need for an 
> argument.
> 
> So I'd rather see a serious discussion regarding giving lambdas a more 
> Pythonic syntax in general, rather than one that only applied to the 
> 'no-argument' case [1]
> 
> Cheers,
> Nick.
> 
> [1] http://wiki.python.org/moin/AlternateLambdaSyntax
> The 'expression-before-args' version using just the 'from' keyword is 
> still my favourite.
> 

Maybe anonymous function closures should be pushed forward right now, not 
only syntactically? Personally I could live with lambda or several
of the alternative syntaxes listed on the wiki page.

But asking for a favourite syntax I would skip the "def" keyword from 
your def-arrow syntax proposal and use:

((a, b, c) -> f(a) + o(b) - o(c))
((x) -> x * x)
(() -> x)
((*a, **k) -> x.bar(*a, **k))
( ((x=x, a=a, k=k) -> x(*a, **k)) for x, a, k in funcs_and_args_list)

The arrow is a straightforward punctuation for function definitions. 
Reusing existing keywords for different semantics seems to me as a kind 
of inbreeding.

For pushing anonymous functions forward I propose enabling explicit 
partial evaluation as a programming technique:

Example 1:

 >>> ((x,y) -> (x+1)*y**2)
((x,y) -> (x+1)*y**2)

 >>> ((x,y) -> (x+1)*y**2)(x=5)
((y) -> 6*y**2)


Example 2:

def f(x):
 return x**2

 >>> ((x,y) -> f(x)+f(y))(x=2)
((y) -> 4 + f(y))


Example 3:

 >>> ((f,x,y) -> f(x)+f(y))(f=((x)-> x**2), y=3)
((x) -> ((x) -> x**2)(x) + 9)

Keyword-style argument passing can be omitted in the case of complete
evaluation, where pattern matching on the argument tuple is applied.
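
For comparison, a rough sketch of what Example 1 can already express with
today's syntax, binding x through a keyword default (this only illustrates
the effect, not the proposed specialization machinery):

    f = lambda x, y: (x + 1) * y ** 2
    g = lambda y, x=5: f(x, y)      # plays the role of ((y) -> 6*y**2)

    print g(3)      # 54
    print f(5, 3)   # 54, same value without the intermediate function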

Regards,
Kay













Re: [Python-Dev] Recommend accepting PEP 312 -- Simple Implicit Lambda

2005-06-18 Thread Josiah Carlson

Kay Schluehr <[EMAIL PROTECTED]> wrote:
> Maybe anonymous function closures should be pushed forward right now, not 
> only syntactically? Personally I could live with lambda or several
> of the alternative syntaxes listed on the wiki page.

> But asking for a favourite syntax I would skip the "def" keyword from 
> your def-arrow syntax proposal and use:
> 
> ((a, b, c) -> f(a) + o(b) - o(c))
...

> The arrow is a straightforward punctuation for function definitions. 
> Reusing existing keywords for different semantics seems to me as a kind 
> of inbreeding.

That's starting to look like the pseudocode from old algorithms
textbooks, which is very similar to bad pseudocode from modern CS theory
papers.  Punctuation as a replacement for words does not always win
(perfect examples being 'and' vs. &&, 'or' vs. ||, 'not' vs. !, ...)

-1 on the syntax offering.

> For pushing anonymous functions forward I propose enabling explicit 
> partial evaluation as a programming technique:

If I remember correctly, we've got rightcurry and leftcurry for that (or
rightpartial and leftpartial, or something).
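
For concreteness, a minimal stand-in for the kind of partial object alluded
to above; the class is hypothetical and merely mirrors the PEP 309 idea, it
is not a description of any shipped API:

    class partial(object):
        """Bind some arguments now, call the underlying function later."""
        def __init__(self, func, *args, **kwargs):
            self.func, self.args, self.kwargs = func, args, kwargs
        def __call__(self, *args, **kwargs):
            merged = dict(self.kwargs)
            merged.update(kwargs)
            return self.func(*(self.args + args), **merged)

    poly = lambda x, y: (x + 1) * y ** 2
    g = partial(poly, x=5)      # binding is explicit; nothing is evaluated yet
    print g(y=3)                # 54
    # g() without y still raises TypeError, unlike the implicit proposal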

>  >>> ((x,y) -> (x+1)*y**2)
> ((x,y) -> (x+1)*y**2)
> 
>  >>> ((x,y) -> (x+1)*y**2)(x=5)
> ((y) -> 6*y**2)

I'll assume that you don't actually want it to rewrite the source, or
actually return the source representation of the anonymous function
(those are almost non-starters).

As for all anonymous functions allowing partial evaluation via keywords:
it would hide errors.  Right now, if you forget an argument or add too
many arguments, you get a TypeError.  Your proposal would make
forgetting an argument in certain ways return a partially evaluated
function.

-1 on partial evaluation.


 - Josiah



[Python-Dev] Propose updating PEP 284 -- Integer for-loops

2005-06-18 Thread Raymond Hettinger
I recommend that the proposed syntax be altered to be more parallel with
the existing for-loop syntax to make it more parsable for both humans
and for the compiler.  Like existing for-statements, the target
expression should immediately follow the 'for' keyword.  Since this is
known to be a range assignment, only an expression is needed, not a full
expression list.  Immediately following should be a token to distinguish
the new and old syntaxes.  Putting that distinction early in the
statement prepares readers (who scan left-to-right) for what follows.

IOW, augment the existing syntax:

  for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite]

with an alternative syntax in the form:

  for_stmt: 'for' expr 'tok' rangespec ':' suite ['else' ':' suite]

Given that form, the PEP authors can choose the best options for 'tok'.
Basically, anything will do as long as it is not 'in'.  Likewise, they
can choose any rangespec format.  Within that framework, there are many
possibilities:

for i between 2 < i <= 10: ...
for i over 2 < i <= 10: ... # chained comparison style
for i over [2:11]: ...  # Slice style
for i = 3 to 10:  ...   # Basic style

The rangespecs with comparison operators offer more flexibility in terms
of open/closed intervals.  In contrast, the slice notation version and
the Basic versions can admit a step argument.
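
For reference, the closest spellings of those loops in current Python (a
sketch; the chained-comparison bounds shift by one because xrange takes a
closed-open interval):

    for i in xrange(3, 11):     # for i between 2 < i <= 10
        pass
    for i in xrange(2, 11):     # for i over [2:11]  (slice style)
        pass
    for i in xrange(3, 11):     # for i = 3 to 10    (Basic style, inclusive)
        pass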

The key changes are stating the target variable first and then using a
specific token to distinguish between the two forms.

Also, I recommend tightening the PEP's motivation.  There are only two
advantages, encoding and readability.  The former is only a minor gain
because all it saves is a function call, an O(1) savings in an O(n)
context.  The latter is where the real benefits lie.

The PEP authors should also put to rest the issues section:

* Decide once and for all on the simplest approach of immediately
evaluating the whole rangespec prior to execution. This best parallels
the range() approach and it is the least surprising.

* Note that the new proposal works equally well with list comps and genexps.

* Decide to always return an iterator rather than a list as there is
never an advantage to doing otherwise.

* If you go for a chained comparison style rangespec, then drop the
issue of a step argument.  If the slice or Basic style rangespecs are
chosen, then there is no reason not to allow a step argument.

* Competition with PEP 276 is no longer an issue.

* Simply disallow floating point bounds.  We've already got deprecation
warnings in place for Py2.5.  There is no need to exacerbate the
problem.  Alternately, simplify the issue by declaring that the values
will be handled as if by xrange().

The above recommendations should get the PEP ready for judgement day.
Good luck.



Raymond



Re: [Python-Dev] Is PEP 237 final -- Unifying Long Integers and Integers

2005-06-18 Thread Keith Dart
Guido van Rossum wrote:

>On 6/17/05, Raymond Hettinger <[EMAIL PROTECTED]> wrote:
>  
>
>>IIRC, there was a decision to not implement phase C and to keep the
>>trailing L in representations of long integers.
>>
>>
>
>Actually, the PEP says phase C will be implemented in Python 3.0 and
>that's still my plan.
>
>  
>
>>If so, I believe the PEP can be marked as final.  We've done all we're
>>going to do.
>>
>>
>
>For 2.x, yes. I'm fine with marking it as Final and adding this to PEP
>3000 instead.
>
>  
>
I am very concerned about something. The following code breaks with 2.4.1:

fcntl.ioctl(self.rtc_fd, RTC_RD_TIME, ...)

Where RTC_RD_TIME = 2149871625L

In Python 2.3 it is -2145095671.

Actually, this is supposed to be an unsigned int, and it was constructed
with hex values and shifts.

Now, with the integer unification, how is ioctl() supposed to work? I
cannot figure out how to make it work in this case.

I suppose the best thing is to introduce an "unsignedint" type for this
purpose. As it is right now, I cannot use 2.4 at all.



-- 

-- 
   Keith Dart <[EMAIL PROTECTED]>
   public key: ID: F3D288E4




Re: [Python-Dev] Propose updating PEP 284 -- Integer for-loops

2005-06-18 Thread Nick Coghlan
Raymond Hettinger wrote:
> Also, I recommend tightening the PEP's motivation.  There are only two
> advantages, encoding and readability.  The former is only a minor gain
> because all it saves is a function call, an O(1) savings in an O(n)
> context.  The latter is where the real benefits lie.

The introduction of 'enumerate', and the proliferation of better 
iterators that reduce the need for pure-integer loops, should be 
addressed in the revised motivation. I know my use of 'range' drops 
close to zero when I'm working with Python versions that supply 
'enumerate'.

Even when I do have a pure integer loop, I'll often assign the result 
of the range/xrange call to a local variable, just so I can give it a 
name that indicates the *significance* of that particular bunch of 
numbers.

Cheers,
Nick.

-- 
Nick Coghlan   |   [EMAIL PROTECTED]   |   Brisbane, Australia
---
 http://boredomandlaziness.blogspot.com


Re: [Python-Dev] Recommend accepting PEP 312 -- Simple Implicit Lambda

2005-06-18 Thread Kay Schluehr
Josiah Carlson wrote:

 > Kay Schluehr <[EMAIL PROTECTED]> wrote:
 >
 >
 >> Maybe anonymous function closures should be pushed forward right now, 
not only syntactically? Personally I could live with lambda or several
 >> of the alternative syntaxes listed on the wiki page.
 >>
 >
 >
 >
 >
 >> But asking for a favourite syntax I would skip the "def" keyword 
from your def-arrow syntax proposal and use:
 >>
 >>((a, b, c) -> f(a) + o(b) - o(c))
 >>
 >
 > ...
 >
 >
 >
 >> The arrow is a straightforward punctuation for function definitions. 
Reusing existing keywords for different semantics seems to me as a kind 
of inbreeding.
 >>
 >
 >
 > That's starting to look like the pseudocode from old algorithms
 > textbooks, which is very similar to bad pseudocode from modern CS theory
 > papers.  Punctuation as a replacement for words does not always win
 > (perfect examples being 'and' vs. &&, 'or' vs. ||, 'not' vs. !, ...)
 >
 >

Writing functions as arrows is very convenient not only in CS but also 
in mathematics. Looking like pseudo-code was not one of Guido's Python 
regrets, if I remember them correctly.

 > -1 on the syntax offering.
 >
 >
 >
 >> For pushing anonymous functions forward I propose enabling explicit 
partial evaluation as a programming technique:
 >>
 >
 >
 > If I remember correctly, we've got rightcurry and leftcurry for that (or
 > rightpartial and leftpartial, or something).
 >
 >

Currying usually does not perform a function evaluation in order to 
create another, more specialized function. Partial evaluation is a dynamic 
programming and optimization technique. Psyco uses specialization and
caching implicitly. I propose to use it explicitly, but in a more 
restricted context.

 >> >>> ((x,y) -> (x+1)*y**2)
 >> ((x,y) -> (x+1)*y**2)
 >>
 >> >>> ((x,y) -> (x+1)*y**2)(x=5)
 >> ((y) -> 6*y**2)
 >>
 >
 >
 > I'll assume that you don't actually want it to rewrite the source, or
 > actually return the source representation of the anonymous function
 > (those are almost non-starters).
 >
 >

Computer algebra systems store expressions in internal tree form, 
manipulate them efficiently, and pretty-print them as text or LaTeX 
output on demand. There would be much more involved than a tree-to-tree 
translation starting with Python's internal parse tree and an evaluator 
dedicated to it.

 > As for all anonymous functions allowing partial evaluation via keywords:
 > it would hide errors. Right now, if you forget an argument or add too
 > many arguments, you get a TypeError. Your proposal would make
 > forgetting an argument in certain ways return a partially evaluated
 > function.


That's why I like to dump the function in a transparent mode. Personally, 
I could dispense with a little security in favor of cheap metainformation.

Regards,
Kay






Re: [Python-Dev] refcounting vs PyModule_AddObject

2005-06-18 Thread Michael Hudson
"Martin v. Löwis" <[EMAIL PROTECTED]> writes:

> Michael Hudson wrote:
>> I've been looking at this area partly to try and understand this bug:
>> 
>> [ 1163563 ] Sub threads execute in restricted mode
>> 
>> but I'm not sure the whole idea of multiple interpreters isn't
>> inherently doomed :-/
>
> That's what Tim asserts, saying that people who want to use the
> feature should fix it themselves.

Well, they've tried, and I think I've worked out a proper fix (life
would be easier if people didn't check in borken code :).

Cheers,
mwh

-- 
  Premature optimization is the root of all evil.
   -- Donald E. Knuth, Structured Programming with goto Statements


Re: [Python-Dev] Propose updating PEP 284 -- Integer for-loops

2005-06-18 Thread Michael Hudson
"Raymond Hettinger" <[EMAIL PROTECTED]> writes:

> I recommend that the proposed syntax be altered to be more parallel
> with the existing for-loop syntax to make it more parsable for both
> humans and for the compiler.

Although all your suggestions are improvements, I'm still -1 on the PEP.

Cheers,
mwh

-- 
  Windows installation day one.  Getting rid of the old windows 
  was easy - they fell apart quite happily, and certainly wont 
  be re-installable anywhere else.   -- http://www.linux.org.uk/diary/
   (not *that* sort of windows...)


Re: [Python-Dev] Is PEP 237 final -- Unifying Long Integers and Integers

2005-06-18 Thread Michael Hudson
Keith Dart <[EMAIL PROTECTED]> writes:

> I am very concerned about something. The following code breaks with 2.4.1:
>
> fcntl.ioctl(self.rtc_fd, RTC_RD_TIME, ...)
>
> Where RTC_RD_TIME = 2149871625L
>
> In Python 2.3 it is -2145095671.

Well, you could always use "-2145095671"...

> Actually, this is supposed to be an unsigned int, and it was constructed
> with hex values and shifts.

But well, quite.

> Now, with the integer unification, how is ioctl() supposed to work? I
> cannot figure out how to make it work in this case.

The shortest way I know of going from 2149871625L to -2145095671 is
the still-fairly-gross:

>>> v = 2149871625L
>>> ~int(~v&0xFFFFFFFFL)
-2145095671
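
An equivalent conversion via struct, which some may find slightly less
gross (a sketch, untested here, using the standard 4-byte codes):

    import struct

    v = 2149871625L
    signed = struct.unpack('=i', struct.pack('=I', v))[0]
    print signed    # -2145095671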

> I suppose the best thing is to introduce an "unsignedint" type for this
> purpose. 

Or some kind of bitfield type, maybe.

C uses integers both as bitfields and to count things, and at least in
my opinion the default assumption in Python should be that this is
what an integer is being used for, but when you need a bitfield it can
all get a bit horrible.

That said, I think in this case we can just make fcntl_ioctl use the
(new-ish) 'I' format argument to PyArg_ParseTuple and then you'll just
be able to use 2149871625L and be happy (I think, haven't tried this).

> As it is right now, I cannot use 2.4 at all.

/Slightly/ odd place to make this report!  Hope this mail helped.

Cheers,
mwh

-- 
  I'm okay with intellegent buildings, I'm okay with non-sentient
  buildings. I have serious reservations about stupid buildings.
 -- Dan Sheppard, ucam.chat (from Owen Dunn's summary of the year)


Re: [Python-Dev] PEP for RFE 46738 (first draft)

2005-06-18 Thread Oren Tirosh
Please don't invent new serialization formats. I think we have enough
of those already.

The RFE suggests that "the protocol is specified in the documentation,
precisely enough to write interoperating implementations in other
languages". If interoperability with other languages is really the
issue, use an existing format like JSON.

If you want an efficient binary format you can use a subset of the
pickle protocol supporting only basic types. I tried this once. I
ripped out all the fancy parts from pickle.py and left only binary
pickling (protocol version 2) of basic types. It took less than an hour
and I was left with something only marginally more complex than your
new proposed protocol.
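
A minimal sketch of that approach, restricting the pure-Python Unpickler so
that only basic types can be loaded (the names below are illustrative, not
an existing API):

    import pickle
    from cStringIO import StringIO

    class SafeUnpickler(pickle.Unpickler):
        # Refuse every GLOBAL reference, so only plain ints, floats, strings,
        # tuples, lists and dicts can be reconstructed.
        def find_class(self, module, name):
            raise pickle.UnpicklingError("global %s.%s is forbidden"
                                         % (module, name))

    def safe_loads(data):
        return SafeUnpickler(StringIO(data)).load()

    record = {"id": 1, "name": "record", "values": [1, 2, 3]}
    blob = pickle.dumps(record, 2)      # binary pickle, protocol 2
    assert safe_loads(blob) == record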

  Oren


Re: [Python-Dev] PEP for RFE 46738 (first draft)

2005-06-18 Thread Simon Wittber
> The RFE suggests that "the protocol is specified in the documentation,
> precisely enough to write interoperating implementations in other
> languages". If interoperability with other languages is really the
> issue, use an existing format like JSON.

JSON is slow (this is true of the python version, at least). Whether
it is slow because of the implementation, or because of its textual
nature, I do not know. The implementation I tested also failed to
encode {1:2}. I am not sure if this is a problem with JSON or the
implementation.

> If you want an efficient binary format you can use a subset of the
> pickle protocol supporting only basic types. I tried this once. I
> ripped out all the fancy parts from pickle.py and left only binary
> pickling (protocol version 2) of basic types. It took less than hour
> and I was left with something only marginally more complex than your
> new proposed protocol.

I think you are missing the point. Is your pickle hack available for
viewing? If it, or JSON, is a better choice, then so be it. The point
of the PEP is not the protocol, but the need for a documented,
efficient, _safe_ serialization module in the standard library.

Do you disagree?


Simon Wittber.


Re: [Python-Dev] Implementing PEP 342 (was Re: Withdrawn PEP 288 and thoughts on PEP 342)

2005-06-18 Thread Phillip J. Eby
At 04:55 PM 6/18/2005 +0300, Oren Tirosh wrote:
>On 6/18/05, Phillip J. Eby <[EMAIL PROTECTED]> wrote:
> > At 08:03 PM 6/16/2005 -0700, Guido van Rossum wrote:
> > It turns out that making 'next(EXPR)' work is a bit tricky; I was going to
> > use METH_COEXIST and METH_VARARGS, but then it occurred to me that
> > METH_VARARGS adds overhead to normal Python calls to 'next()', so I
> > implemented a separate 'send(EXPR)' method instead, and left 'next()' a
> > no-argument call.
>
>Please use the name "feed", not "send". That would make the enhanced
>generators already compatible "out of the box" with existing code
>expecting the de-facto consumer interface (see
>http://effbot.org/zone/consumer.htm).  The other part of the consumer
>interface (the close() method) is already being added in PEP 343.

Hm.  Do you want reset() as well?  :)

More seriously, I'm not sure that PEP 343's close() does something 
desirable for the consumer interface.  Overall, this sounds like something 
that should be added to the PEPs though.  I hadn't really thought of using 
inbound generator communication for parsing; it's an interesting use case.

However, looking more closely at the consumer interface, it seems to me the 
desired semantics for feed() are different than for send(), because of the 
"just-started generator can't receive data" problem.  Also, the consumer 
interface doesn't include handling for StopIteration.

Maybe feed() should prime the generator if it's just started, and throw 
away the yield result as long as it's None?  Maybe it should ignore 
StopIteration?  Perhaps it should raise an error if the generator yields 
anything but None in response?  These seem like questions worth discussing.
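
One way to make those questions concrete is a hypothetical feed() wrapper
written on top of the proposed send(); the priming check and the error
policy below are assumptions, not part of either PEP:

    def feed(gen, data):
        # A just-started generator cannot receive data, so prime it first.
        if gen.gi_frame is not None and gen.gi_frame.f_lasti == -1:
            if gen.next() is not None:
                raise ValueError("consumer yielded a value before any data")
        try:
            result = gen.send(data)
        except StopIteration:
            return      # or re-raise -- one of the open questions above
        if result is not None:
            raise ValueError("consumer yielded %r in response to feed()"
                             % (result,))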





Re: [Python-Dev] Multiple expression eval in compound if statement?

2005-06-18 Thread Gerrit Holl
Hi,

Raymond Hettinger wrote:
> I think it unwise to allow x to be any expression.  Besides altering
> existing semantics, it leads to code redundancy and to a fragile
> construct (where the slightest alteration of any of the expressions
> triggers a silent reversion to O(n) behavior).

What would happen if 'x' were to be an object whose class has a __eq__
that is defined in an odd way, e.g. having side effects?
Might this mean behaviour change even if 'x' is a local variable?

yours,
Gerrit Holl.

-- 
Weather in Twenthe, Netherlands 18/06 17:25:
24.0°C   wind 3.1 m/s NE (57 m above NAP)
-- 
In the councils of government, we must guard against the acquisition of
unwarranted influence, whether sought or unsought, by the
military-industrial complex. The potential for the disastrous rise of
misplaced power exists and will persist.
-Dwight David Eisenhower, January 17, 1961


Re: [Python-Dev] Multiple expression eval in compound if statement?

2005-06-18 Thread Raymond Hettinger
[Raymond Hettinger]
> > I think it unwise to allow x to be any expression.  Besides altering
> > existing semantics, it leads to code redundancy and to a fragile
> > construct (where the slightest alteration of any of the expressions
> > triggers a silent reversion to O(n) behavior).

[Gerrit Holl]
> What would happen if 'x' were to be an object whose class has a __eq__
> that is defined in an odd way, e.g. having side effects?

Every molecule in your body would simultaneously implode at the speed of
light.


Raymond


Re: [Python-Dev] PEP for RFE 46738 (first draft)

2005-06-18 Thread Skip Montanaro

Why this discussion of yet another serialization format?  The wire-encoding
for XML-RPC is quite stable, handles all the basic Python types proposed in
the proto-PEP, and is highly interoperable.  If performance is an issue,
make sure you have a C-based accelerator module like sgmlop installed.  If
size is an issue, gzip it before sending it over the wire or to a file.

Skip


Re: [Python-Dev] Propose updating PEP 284 -- Integer for-loops

2005-06-18 Thread Guido van Rossum
On 6/18/05, Michael Hudson <[EMAIL PROTECTED]> wrote:
> "Raymond Hettinger" <[EMAIL PROTECTED]> writes:
> 
> > I recommend that the proposed syntax be altered to be more parallel
> > with the existing for-loop syntax to make it more parsable for both
> > humans and for the compiler.
> 
> Although all your suggestions are improvments, I'm still -1 on the PEP.

Same here. The whole point (15 years ago) of range() was to *avoid*
needing syntax to specify a loop over numbers. I think it's worked out
well and there's nothing that needs to be fixed (except range() needs
to become an iterator, which it will in Python 3.0).

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)


Re: [Python-Dev] Propose updating PEP 284 -- Integer for-loops

2005-06-18 Thread Raymond Hettinger
[Raymond Hettinger]
> > > I recommend that the proposed syntax be altered to be more parallel
> > > with the existing for-loop syntax to make it more parsable for both
> > > humans and for the compiler.

[Michael Hudson]
> > Although all your suggestions are improvements, I'm still -1 on the PEP.

[Guido]
> Same here. The whole point (15 years ago) of range() was to *avoid*
> needing syntax to specify a loop over numbers. I think it's worked out
> well and there's nothing that needs to be fixed (except range() needs
> to become an iterator, which it will in Python 3.0).

I concur.

Saying that no form of the idea is viable will save the PEP authors from
another round or two of improvements.

Marking as rejected and noting why.



Raymond


[Python-Dev] PyPI: no space left on device

2005-06-18 Thread Gustavo Niemeyer
PyPI seems to be out of space:

% ./setup.py register --show-response
running register
Using PyPI login from /home/niemeyer/.pypirc
---
Error...

There's been a problem with your request

psycopg.ProgrammingError: ERROR:  could not extend relation "releases":
No space left on device
HINT:  Check free disk space.

-- 
Gustavo Niemeyer
http://niemeyer.net


Re: [Python-Dev] PyPI: no space left on device

2005-06-18 Thread Aahz
On Sat, Jun 18, 2005, Gustavo Niemeyer wrote:
>
> PyPI seems to be out of space:

FYI, python-dev is not a good place to send messages like this.  Please
use [EMAIL PROTECTED]  (I've already notified the appropriate
parties.)
-- 
Aahz ([EMAIL PROTECTED])   <*> http://www.pythoncraft.com/

f u cn rd ths, u cn gt a gd jb n nx prgrmmng.


Re: [Python-Dev] PyPI: no space left on device

2005-06-18 Thread Gustavo Niemeyer
> > PyPI seems to be out of space:
> 
> FYI, python-dev is not a good place to send messages like this.  Please
> use [EMAIL PROTECTED]  (I've already notified the appropriate
> parties.)

Before sending the message I thought, "let's see how much time it
takes until someone mentions the right place to deliver the message".
Adding that address to the PyPI page itself would be valuable, and
will probably save python-dev from further misinformed reporters.

Thanks for forwarding it this time,

-- 
Gustavo Niemeyer
http://niemeyer.net


[Python-Dev] gcmodule issue w/adding __del__ to generator objects

2005-06-18 Thread Phillip J. Eby
Working on the PEP 342/343 generator enhancements, I've got working 
send/throw/close() methods, but am not sure how to deal with getting 
__del__ to invoke close().  Naturally, I can add a "__del__" entry to its 
methods list easily enough, but the 'has_finalizer()' function in 
gcmodule.c only checks for a __del__ attribute on instance objects, and for 
tp_del on heap types.

It looks to me like the correct fix would be to check for tp_del always, 
not just on heap types.  However, when I tried this, I started getting 
warnings from the tests, saying that 22 uncollectable objects were being 
created (all generators, in test_generators).

It seems that the tests create cycles via globals(), since they define a 
bunch of generator functions and then call them, saving the generator 
iterators (or objects that reference them) in global variables

after investigating this a bit, it seems to me that either has_finalizer() 
needs to 



Re: [Python-Dev] gcmodule issue w/adding __del__ to generator objects

2005-06-18 Thread Phillip J. Eby
At 05:50 PM 6/18/2005 -0400, Phillip J. Eby wrote:
>Working on the PEP 342/343 generator enhancements, I've got working
>send/throw/close() methods, but am not sure how to deal with getting
>__del__ to invoke close().  Naturally, I can add a "__del__" entry to its
>methods list easily enough, but the 'has_finalizer()' function in
>gcmodule.c only checks for a __del__ attribute on instance objects, and for
>tp_del on heap types.
>
>It looks to me like the correct fix would be to check for tp_del always,
>not just on heap types.  However, when I tried this, I started getting
>warnings from the tests, saying that 22 uncollectable objects were being
>created (all generators, in test_generators).
>
>It seems that the tests create cycles via globals(), since they define a
>bunch of generator functions and then call them, saving the generator
>iterators (or objects that reference them) in global variables
>
>after investigating this a bit, it seems to me that either has_finalizer()
>needs to

Whoops.  I hit send by accident.  Anyway, the issue seems to mostly be that 
the tests create generator-iterators in global variables.  With a bit of 
effort, I've been able to stomp most of the cycles.





Re: [Python-Dev] gcmodule issue w/adding __del__ to generator objects

2005-06-18 Thread Phillip J. Eby
Argh!  My email client's shortcut for Send is Ctrl-E, which is the same as 
end-of-line in the editor I've been using all day.  Anyway, the problem is 
that it seems to me as though actually checking for tp_del is too 
aggressive (conservative?) for generators, because sometimes a generator 
object is finished or un-started, and therefore can't resurrect objects 
during close().  However, I don't really know how to implement another 
strategy; gcmodule isn't exactly my forte.  :)  Any input from the GC gurus 
would be appreciated.  Thanks!

At 05:56 PM 6/18/2005 -0400, Phillip J. Eby wrote:
>At 05:50 PM 6/18/2005 -0400, Phillip J. Eby wrote:
> >Working on the PEP 342/343 generator enhancements, I've got working
> >send/throw/close() methods, but am not sure how to deal with getting
> >__del__ to invoke close().  Naturally, I can add a "__del__" entry to its
> >methods list easily enough, but the 'has_finalizer()' function in
> >gcmodule.c only checks for a __del__ attribute on instance objects, and for
> >tp_del on heap types.
> >
> >It looks to me like the correct fix would be to check for tp_del always,
> >not just on heap types.  However, when I tried this, I started getting
> >warnings from the tests, saying that 22 uncollectable objects were being
> >created (all generators, in test_generators).
> >
> >It seems that the tests create cycles via globals(), since they define a
> >bunch of generator functions and then call them, saving the generator
> >iterators (or objects that reference them) in global variables
> >
> >after investigating this a bit, it seems to me that either has_finalizer()
> >needs to
>
>Whoops.  I hit send by accident.  Anyway, the issue seems to mostly be that
>the tests create generator-iterators in global variables.  With a bit of
>effort, I've been able to stomp most of the cycles.



[Python-Dev] Generator enhancements patch available

2005-06-18 Thread Phillip J. Eby
I've just submitted patch 1223381 (http://python.org/sf/1223381), which 
implements code and test changes for:

* yield expressions
* bare yield (short for yield None)
* yield in try/finally
* generator.send(value) (send value into generator; substituted for PEP 
342's next(arg))
* generator.throw(typ[,val[,tb]]) (raise error in generator)
* generator.close()
* GeneratorExit built-in exception type
* generator.__del__ (well, the C equivalent)
* All necessary mods to the compiler, parser module, and Python 'compiler' 
package to support these changes.

It was necessary to change a small part of the eval loop (well, the 
initialization, not the loop) and the gc module's has_finalizer() logic in 
order to support a C equivalent to __del__.  Specialists in these areas 
should probably scrutinize this patch!
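
For readers who want to see the listed features together, a short usage
sketch assuming the send/throw/close semantics described above (this is
illustrative, not an excerpt from the patch or its tests):

    def averager():
        total, count, average = 0.0, 0, None
        try:
            while True:
                value = (yield average)     # yield expression receives sent values
                total += value
                count += 1
                average = total / count
        finally:
            pass                            # yield inside try/finally is now legal

    avg = averager()
    avg.next()              # advance to the first yield, which produces None
    print avg.send(10)      # 10.0
    print avg.send(30)      # 20.0
    avg.close()             # injects GeneratorExit and runs the finally clause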

There is one additional implementation detail that was not contemplated in 
either PEP. In order to prevent used-up generators from retaining 
unnecessary references to their frame's contents, I set the generator's 
gi_frame member to None whenever the generator finishes normally or with an 
error.  Thus, an exhausted generator cannot be part of a cycle, and it 
releases its frame object sooner than in previous Python versions.  For 
generators used only in a direct "for" loop, this makes no difference, but 
for generators used with the iterator protocol (i.e. "gen.next()") from 
Python, this avoids stranding the generator's frame in a traceback cycle.

Anyway, your comments/questions/feedback/bug reports are welcome.



Re: [Python-Dev] Propose to reject PEP 313 -- Adding Roman Numeral Literals to Python

2005-06-18 Thread BJörn Lindqvist
*cough*
Would it also be possible for the PEP-maintainers not to accept PEPs
that are obvious jokes unless the date is April I?
*uncough*

-- 
mvh Björn


Re: [Python-Dev] gcmodule issue w/adding __del__ to generator objects

2005-06-18 Thread Phillip J. Eby
One more note; I tried changing generators to set their gi_frame to None 
whenever the generator finishes normally or with an error; this eliminated 
most of the reference cycles, and I was able to make test_generators work 
correctly with only 3 explicit close() calls, for the "fun" tests that use 
objects which hold references to generators that in turn reference the object.

So, I think I've got this sorted out, assuming that I'm not doing something 
hideously insane by having 'has_finalizer()' always check tp_del even for 
non-heap types, and defining a tp_del slot for generators to call close() in.

I ended up having to copy a bunch of stuff from typeobject.c in order to 
make this work, as there doesn't appear to be any way to share stuff like 
subtype_del and subtype_dealloc in a meaningful way with the generator code.


At 06:00 PM 6/18/2005 -0400, Phillip J. Eby wrote:
>Argh!  My email client's shortcut for Send is Ctrl-E, which is the same as
>end-of-line in the editor I've been using all day.  Anyway, the problem is
>that it seems to me as though actually checking for tp_del is too
>aggressive (conservative?) for generators, because sometimes a generator
>object is finished or un-started, and therefore can't resurrect objects
>during close().  However, I don't really know how to implement another
>strategy; gcmodule isn't exactly my forte.  :)  Any input from the GC gurus
>would be appreciated.  Thanks!
>
>At 05:56 PM 6/18/2005 -0400, Phillip J. Eby wrote:
> >At 05:50 PM 6/18/2005 -0400, Phillip J. Eby wrote:
> > >Working on the PEP 342/343 generator enhancements, I've got working
> > >send/throw/close() methods, but am not sure how to deal with getting
> > >__del__ to invoke close().  Naturally, I can add a "__del__" entry to its
> > >methods list easily enough, but the 'has_finalizer()' function in
> > >gcmodule.c only checks for a __del__ attribute on instance objects, 
> and for
> > >tp_del on heap types.
> > >
> > >It looks to me like the correct fix would be to check for tp_del always,
> > >not just on heap types.  However, when I tried this, I started getting
> > >warnings from the tests, saying that 22 uncollectable objects were being
> > >created (all generators, in test_generators).
> > >
> > >It seems that the tests create cycles via globals(), since they define a
> > >bunch of generator functions and then call them, saving the generator
> > >iterators (or objects that reference them) in global variables
> > >
> > >after investigating this a bit, it seems to me that either has_finalizer()
> > >needs to
> >
> >Whoops.  I hit send by accident.  Anyway, the issue seems to mostly be that
> >the tests create generator-iterators in global variables.  With a bit of
> >effort, I've been able to stomp most of the cycles.
>



Re: [Python-Dev] PyPI: no space left on device

2005-06-18 Thread Martin v. Löwis
Gustavo Niemeyer wrote:
> Before sending the message I thought, "let's see how much time it
> takes until someone mentions the right place to deliver the message".
> Adding that address to the PyPI page itself would be valuable, and
> will probably save python-dev from further misinformed reporters.
> 
> Thanks for forwarding it this time,

Unfortunately, the "right place" depends on the nature of the problem:
could be a PyPI problem, could be a pydotorg problem, could be a
distutils problem.

As for "adding (that) address to the PyPI page itself": How did you
miss the "Get help" and "Bug reports" links below "Contact Us"
on the PyPI page? They would have brought you to the PyPI SF trackers,
but that would also have been the right place.

Regards,
Martin



Re: [Python-Dev] Propose to reject PEP 313 -- Adding Roman Numeral Literals to Python

2005-06-18 Thread Martin v. Löwis
BJörn Lindqvist wrote:
> Would it also be possible for the PEP-maintainers not to accept PEPs
> that are obvious jokes unless thedate is April I?

I believe this is the current policy. Why do you think the PEP editor
works differently?

Regards,
Martin


Re: [Python-Dev] Recommend accepting PEP 312 -- Simple Implicit Lambda

2005-06-18 Thread Donovan Baarda
Kay Schluehr wrote:
> Josiah Carlson wrote:
> 
>  > Kay Schluehr <[EMAIL PROTECTED]> wrote:
>  >
>  >
 >  >> Maybe anonymous function closures should be pushed forward right now, 
> not only syntactically? Personally I could live with lambda or several
>  >> of the alternative syntaxes listed on the wiki page.

I must admit I ended up deleting most of the "alternative to lambda" 
threads after they flooded my inbox. So it is with some dread that I post 
this, contributing to it...

As I see it, a lambda is an anonymous function. An anonymous function is 
a function without a name. We already have a syntax for a function... 
why not use it. ie:

  f = filter(def (a): return a > 1, [1,2,3])

The implications of this are that both functions and procedures can be 
anonymous. This also implies that, unlike lambdas, anonymous functions 
can have statements, not just expressions. You can even do compound 
stuff like;

   f = filter(def (a): b=a+1; return b>1, [1,2,3])

or if you want you can use indenting;

   f = filter(def (a):
 b=a+1
 return b>1, [1,2,3])

It also means the following becomes valid syntax;

f = def (a,b):
   return a>b

I'm not sure if there are syntactic ambiguities to this. I'm not sure if 
the CS boffins are disturbed by "side effects" from statements. 
Perhaps both can be resolved by limiting anonymous functions to 
expressions, or by requiring brackets or ";" to resolve ambiguity.
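
For contrast, the same filters written with the syntax Python already has (a
sketch; a named helper is only needed where a statement body is wanted):

    f = filter(lambda a: a > 1, [1, 2, 3])      # [2, 3]

    def pred(a):
        b = a + 1
        return b > 1

    f = filter(pred, [1, 2, 3])                 # [1, 2, 3]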

This must have been proposed already and shot down in flames... sorry 
for re-visiting old stuff and contributing noise.

--
Donovan Baarda


Re: [Python-Dev] PEP for RFE 46738 (first draft)

2005-06-18 Thread Simon Wittber
> Why this discussion of yet another serialization format?

Pickle is stated to be unsafe. Marshal is also stated to be unsafe.
XML can be bloated, and XML+gzip is quite slow.

Do size, speed, and security features have to be mutually exclusive? No;
that is possibly why people have had to invent their own formats. I
can list four off the top of my head:

bencode (bittorrent)
jelly (twisted)
banana (twisted)
tofu (soya3d, looks like it is using twisted now... hmmm)

XML is simply not suitable for database applications, real-time data
capture, and game/entertainment applications.

I'm sure other people have noticed this... or am I alone on this issue? :-)

Have a look at this contrived example:

import time

value = (("this is a record",1,2,101,"(08)123123123","some more
text")*1)

import gherkin
t = time.clock()
s = gherkin.dumps(value)
print 'Gherkin encode', time.clock() - t, 'seconds'
t = time.clock()
gherkin.loads(s)
print 'Gherkin decode', time.clock() - t, 'seconds'

import xmlrpclib
t = time.clock()
s = xmlrpclib.dumps(value)
print 'XMLRPC encode', time.clock() - t, 'seconds'
t = time.clock()
xmlrpclib.loads(s)
print 'XMLRPC decode', time.clock() - t, 'seconds'

Which produces the output:

>pythonw -u "bench.py"
Gherkin encode 0.120689361357 seconds
Gherkin decode 0.395871262968 seconds
XMLRPC encode 0.528666352847 seconds
XMLRPC decode 9.01307819849 seconds


Re: [Python-Dev] Recommend accepting PEP 312 -- Simple Implicit Lambda

2005-06-18 Thread Nick Coghlan
Donovan Baarda wrote:
> As I see it, a lambda is an anonymous function. An anonymous function is 
> a function without a name.

And here we see why I'm such a fan of the term 'deferred expression' 
instead of 'anonymous function'.

Python's lambda expressions *are* the former, but they are 
emphatically *not* the latter.

Anyway, the AlternateLambdaSyntax Wiki page has a couple of relevant 
entries under 'real closures'.

Cheers,
Nick.

-- 
Nick Coghlan   |   [EMAIL PROTECTED]   |   Brisbane, Australia
---
 http://boredomandlaziness.blogspot.com


Re: [Python-Dev] gcmodule issue w/adding __del__ to generator objects

2005-06-18 Thread Neil Schemenauer
On Sat, Jun 18, 2005 at 06:24:48PM -0400, Phillip J. Eby wrote:
> So, I think I've got this sorted out, assuming that I'm not doing
> something hideously insane by having 'has_finalizer()' always
> check tp_del even for non-heap types, and defining a tp_del slot
> for generators to call close() in.

That sounds like the right thing to do.

I suspect the "uncollectable cycles" problem will not be completely
solvable.  With this change, all generators become objects with
finalizers.  In reality, a 'file' object, for example, has a
finalizer as well but it gets away without telling the GC that
because its finalizer doesn't do anything "evil".  Since generators
can do arbitrary things, the GC must assume the worst.

Most cycles involving enhanced generators can probably be broken by
the GC because the generator is not in the strongly connected part
of the cycle.  The GC will have to work a little harder to figure that
out but that's probably not too significant.

The real problem is that some cycles involving enhanced generators
will not be breakable by the GC.  I think some programs that used to
work okay are now going to start leaking memory because objects will
accumulate in gc.garbage.
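
This failure mode can already be demonstrated with an ordinary __del__ in a
cycle (a sketch of the existing behaviour, independent of the generator
patch):

    import gc

    class Finalized(object):
        def __del__(self):
            pass    # any finalizer stops the GC from breaking cycles through it

    obj = Finalized()
    obj.ref = obj   # reference cycle through the instance dict
    del obj

    gc.collect()
    print gc.garbage    # the cycle is stranded here instead of being freed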

Now, I could be wrong about all this.  I have not been following
the PEP 343 discussion too closely.  Maybe Guido has some clever
idea.  Also, I find it difficult to hold in my head a complete model
of how the GC now works.  It's an incredibly subtle piece of code.
Perhaps Tim can comment.

  Neil


Re: [Python-Dev] Recommend accepting PEP 312 -- Simple Implicit Lambda

2005-06-18 Thread Josiah Carlson

Kay Schluehr <[EMAIL PROTECTED]> wrote:
> Josiah Carlson wrote:
> 
>  > Kay Schluehr <[EMAIL PROTECTED]> wrote:
>  >> The arrow is a straightforward punctuation for function definitions. 
>  >> Reusing existing keywords for different semantics seems to me as a kind 
>  >> of inbreeding.
>  >
>  > That's starting to look like the pseudocode from old algorithms
>  > textbooks, which is very similar to bad pseudocode from modern CS theory
>  > papers.  Punctuation as a replacement for words does not always win
>  > (perfect examples being 'and' vs. &&, 'or' vs. ||, 'not' vs. !, ...)
> 
> Writing functions as arrows is very convenient not only in CS but also 
> in mathematics. Looking like pseudo-code was not one of Guido's Python 
> regrets, if I remember them correctly.

Apparently you weren't reading what I typed.  I don't find '->' notation
for functions to be readable, either in code, papers, or otherwise.  In
fact, when I teach my CS courses, I use Python, and in the case where I
need to describe mathematical functions, I use a standard mathematical
notation for doing so: 'f(x) = ...'.


>  >> For pushing anonymous functions forward I propose enabling explicit 
>  >> partial evaluation as a programming technique:
>  >
>  > If I remember correctly, we've got rightcurry and leftcurry for that (or
>  > rightpartial and leftpartial, or something).
>  >
> Currying usually does not perform a function evaluation in order to 
> create another, more specialized function. Partial evaluation is a dynamic 
> programming and optimization technique. Psyco uses specialization and
> caching implicitly. I propose to use it explicitly, but in a more 
> restricted context.

I never claimed that currying was partial function evaluation.  In fact,
if I remember correctly (I don't use curried functions), the particular
currying that was implemented and shipped for Python 2.4 was a simple
function that kept a list of args and kwargs that were already passed to
the curried function. When it got enough arguments, it would go ahead
and evaluate it.


>  > I'll assume that you don't actually want it to rewrite the source, or
>  > actually return the source representation of the anonymous function
>  > (those are almost non-starters).
>  >
> 
> Computer algebra systems store expressions in internal tree form, 
> manipulate them efficiently, and pretty-print them as text or LaTeX 
> output on demand. There would be much more involved than a tree-to-tree 
> translation starting with Python's internal parse tree and an evaluator 
> dedicated to it.

Right, and Python is not a CAS.  You want a CAS?  Use Mathematica, Maple,
or some CAS addon to Python.  Keeping the parse tree around when it is
not needed is a waste. In the code that I write, that would be 100% of
the time.


>  > As for all anonymous functions allowing partial evaluation via keywords:
>  > it would hide errors. Right now, if you forget an argument or add too
>  > many arguments, you get a TypeError. Your proposal would make
>  > forgetting an argument in certain ways return a partially evaluated
>  > function.
> 
> 
> That's why I like to dump the function in a transparent mode. Personally, 
> I could dispense with a little security in favor of cheap metainformation.

It's not about security, it's about practicality and backwards
compatibility.

Here is an example:
>>> x = lambda a,b: a+b+1
>>> x(a=1)
Traceback (most recent call last):
  File "", line 1, in ?
TypeError: () takes exactly 2 non-keyword arguments (1 given)
>>>

According to you, that should instead produce...
  lambda b: 1+b+1

New and old users alike have gotten used to the fact that calling a
function without complete arguments is an error.  You are proposing to
make calling a function with less than the required number of arguments
not be an error.

For the sake of sanity, I have to be -1000 on this.


I'm also not aware of an automatic mechanism for code compilation (in
Python) that attaches the parse tree to the compiled function, allows
that parse tree to be converted back to a source form, and allows one
to replace variable references with constants.  If you would
care to write the pieces necessary, I'm sure you will find a use for it.

 - Josiah



Re: [Python-Dev] PEP for RFE 46738 (first draft)

2005-06-18 Thread Josiah Carlson

Simon Wittber <[EMAIL PROTECTED]> wrote:
> > Why this discussion of yet another serialization format?
> 
> Pickle is stated to be unsafe. Marshal is also stated to be unsafe.
> XML can be bloated, and XML+gzip is quite slow.
> 
> Do size,speed, and security features have to mutually exclusive? No,
> that possibly is why people have had to invent their own formats. I
> can list four off the top of my head:

...


Looks to me like the eval(repr(obj)) loop spanks XMLRPC.  It likely also
spanks Gherkin, but I'm not one to run untrusted code.  Give it a shot
on your own machine.  As for parsing, repr() of standard Python objects
is pretty easy to parse, and if you want something a bit easier to read,
there's always pprint (though it may not be quite as fast).


 - Josiah

Python 2.3.4 (#53, May 25 2004, 21:17:02) [MSC v.1200 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> value = (("this is a record",1,2,101,"(08)123123123","some 
>>> moretext")*1)
>>> import time
>>> t = time.time();y = repr(value);time.time()-t
0.030999898910522461
>>> t = time.time();z = eval(y);time.time()-t
0.2663242492676
>>> import xmlrpclib
>>> t = time.time();n = xmlrpclib.dumps(value);time.time()-t
0.4210381469727
>>> t = time.time();m = xmlrpclib.loads(n);time.time()-t
4.4529998302459717
>>>



Re: [Python-Dev] Recommend accepting PEP 312 -- SimpleImplicit Lambda

2005-06-18 Thread Raymond Hettinger
[Donovan Baarda]
> As I see it, a lambda is an anonymous function. An anonymous function is
> a function without a name. We already have a syntax for a function...
> why not use it. ie:
> 
>   f = filter(def (a): return a > 1, [1,2,3])

This approach is entirely too obvious.  If we want to be on the leading
edge, we can't be copying what was done years ago in Lua. ;-)


Raymond


Re: [Python-Dev] gcmodule issue w/adding __del__ to generator objects

2005-06-18 Thread Phillip J. Eby
At 06:50 PM 6/18/2005 -0600, Neil Schemenauer wrote:
>On Sat, Jun 18, 2005 at 06:24:48PM -0400, Phillip J. Eby wrote:
> > So, I think I've got this sorted out, assuming that I'm not doing
> > something hideously insane by having 'has_finalizer()' always
> > check tp_del even for non-heap types, and defining a tp_del slot
> > for generators to call close() in.
>
>That sounds like the right thing to do.
>
>I suspect the "uncollectable cycles" problem will not be completely
>solvable.  With this change, all generators become objects with
>finalizers.  In reality, a 'file' object, for example, has a
>finalizer as well but it gets away without telling the GC that
>because its finalizer doesn't do anything "evil".  Since generators
>can do arbitrary things, the GC must assume the worst.

Yep.  It's too bad that there's no simple way to guarantee that the 
generator won't resurrect anything.  On the other hand, close() is 
guaranteed to give the generator at most one chance to do this.  So, 
perhaps there's some way we could have the GC close() generators in 
unreachable cycles.  No, wait, that would mean they could resurrect things, 
right?  Argh.


>Most cycles involving enhanced generators can probably be broken by
>the GC because the generator is not in the strongly connected part
>of cycle.  The GC will have to work a little harder to figure that
>out but that's probably not too significant.

Yep; by setting the generator's frame to None, I was able to significantly 
reduce the number of generator cycles in the tests.


>The real problem is that some cycles involving enhanced generators
>will not be breakable by the GC.  I think some programs that used to
>work okay are now going to start leaking memory because objects will
>accumulate in gc.garbage.

Yep, unless we .close() generators after adding them to gc.garbage, which 
*might* be an option.  Although, I suppose if it *were* an option, then why 
doesn't GC already have some sort of ability to do this?  (i.e. run __del__ 
methods on items in gc.garbage, then remove them if their refcount drops to 
1 as a result).

[...pause to spend 5 minutes working it out in pseudocode...]

Okay, I think I see why you can't do it.  You could guarantee that all 
relevant __del__ methods get called, but it's bloody difficult to end up 
with only unreachable items in gc.garbage afterwards.   I think gc would 
have to keep a new list for items reachable from finalizers, that don't 
themselves have finalizers.  Then, before creating gc.garbage, you walk the 
finalizers and call their finalization (__del__) methods.  Then, you put 
any remaining items that are in either the finalizer list or the 
reachable-from-finalizers list into gc.garbage.

This approach might need a new type slot, but it seems like it would let us 
guarantee that finalizers get called, even if the object ends up in garbage 
as a result.  In the case of generators, however, close() guarantees that 
the generator releases all its references, and so can no longer be part of 
a cycle.  Thus, it would guarantee eventual cleanup of all 
generators.  And, it would lift the general limitation on __del__ methods.

Hm.  Sounds too good to be true.  Surely if this were possible, Uncle Timmy 
would've thought of it already, no?  Guess we'll have to wait and see what 
he thinks.


>Now, I could be wrong about all this.  I've have not been following
>the PEP 343 discussion too closely.  Maybe Guido has some clever
>idea.  Also, I find it difficult to hold in my head a complete model
>of how the GC now works.  It's an incredibly subtle piece of code.
>Perhaps Tim can comment.

I'm hoping Uncle Timmy can work his usual algorithmic magic here and 
provide us with a brilliant but impossible-for-mere-mortals-to-understand 
solution.  (The impossible-to-understand part being optional, of course. :) )

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Recommend accepting PEP 312 -- Simple Implicit Lambda

2005-06-18 Thread Raymond Hettinger
> [Donovan Baarda]
> > As I see it, a lambda is an anonymous function. An anonymous
> > function is a function without a name. We already have a syntax
> > for a function... why not use it. ie:
> >
> >   f = filter(def (a): return a > 1, [1,2,3])

[Me]
> This approach is entirely too obvious.  If we want to be on the
> leading edge, we can't be copying what was done years ago in Lua. ;-)

Despite the glib comment, the idea as presented doesn't work because it
mixes statement and expression semantics (the inner 'return' is at best
unpleasant).  Also, the naming is off -- 'def' defines some variable
name. In Lua, the 'def' is called 'function' which doesn't imply a
variable assignment.
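
For contrast, the expression-only form we already have reads like this 
(sketch, 2.x filter returning a list):

f = filter(lambda a: a > 1, [1, 2, 3])   # lambda is an expression, no 'return'
print f                                   # [2, 3]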


Raymond
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP for RFE 46738 (first draft)

2005-06-18 Thread Skip Montanaro

Simon> XML is simply not suitable for database applications, real time
Simon> data capture and game/entertainment applications.

I use XML-RPC as the communications protocol between an Apache web server
and a middleware piece that talks to a MySQL database.  The web server
contains a mixture of CGI scripts written in Python and two websites written
in Mason (Apache+mod_perl).  Performance is fine.  Give either of these a
try:

http://www.mojam.com/
http://www.musi-cal.com/

Simon> I'm sure other people have noticed this... or am I alone on this
Simon> issue? :-)

Probably not.  XML-RPC is commonly thought of as slow, and if you operate
with it in a completely naive fashion, I don't doubt that it can be.  It
doesn't have to be though.  Do you have to be intelligent about caching
frequently used information and returning results in reasonably sized
chunks?  Sure, but that's probably true of any database-backed application.

Simon> Have a look at this contrived example:

...

Simon> Which produces the output:

>> pythonw -u "bench.py"
Simon> Gherkin encode 0.120689361357 seconds
Simon> Gherkin decode 0.395871262968 seconds
Simon> XMLRPC encode 0.528666352847 seconds
Simon> XMLRPC decode 9.01307819849 seconds

That's fine, so XML-RPC is slower than Gherkin.  I can't run the Gherkin
code, but my XML-RPC numbers are a bit different than yours:

XMLRPC encode 0.65 seconds
XMLRPC decode 2.61 seconds

That leads me to believe you're not using any sort of C XML decoder.  (I
mentioned sgmlop in my previous post.  I'm sure /F has some other
super-duper accelerator that's even faster.)
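
For anyone who wants to reproduce the comparison without Gherkin, here is a 
rough sketch of the xmlrpclib half of such a benchmark (the payload shape is 
made up):

import time
import xmlrpclib

payload = [{"id": i, "name": "item%d" % i, "price": i * 0.5}
           for i in range(1000)]

t = time.time()
body = xmlrpclib.dumps((payload,), methodresponse=True)
print "XMLRPC encode %.3f seconds" % (time.time() - t)

t = time.time()
data, method = xmlrpclib.loads(body)
print "XMLRPC decode %.3f seconds" % (time.time() - t)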

I'm not saying that XML-RPC is the fastest thing on Earth.  I'd be willing
to bet it's a lot more interoperable than Gherkin is though:

http://xmlrpc.scripting.com/directory/1568/implementations

and probably will be for the foreseeable future.

Also, as you indicated, your example was a bit contrived.  XML-RPC seems to
be fast enough for many real-world applications.  Here's a somewhat
less-contrived example from the above websites:

>>> orca

>>> t = time.time() ; x = orca.search(('MusicEntry', {'city': 'Chicago, 
IL'}), 0, 200, 50, 1) ; print time.time() - t
1.28429102898
>>> len(x[1])
200
>>> x[1][0]
['MusicEntry', {'venue': 'Jazz Showcase', 'address': '', 'price': '', 
'keywords': ['.jz.1369'], 'event': '', 'city': 'Chicago', 'end': 
datetime.datetime(2005, 6, 19, 0, 0), 'zip': '', 'start': 
datetime.datetime(2005, 6, 14, 0, 0), 'state': 'IL', 'program': '', 'email': 
'[EMAIL PROTECTED]', 'info': '', 'update_time': datetime.datetime(2005, 6, 8, 
0, 0), 'address1': '59 West Grand Avenue', 'address2': '', 'address3': '', 
'venueid': 2630, 'key': 816875, 'submit_time': datetime.datetime(2005, 6, 8, 0, 
0), 'active': 1, 'merchandise': '', 'tickets': '', 'name': '', 'addressid': 
17056, 'performers': ['Cedar Walton Quartet'], 'country': '', 'venueurl': '', 
'time': '', 'performerids': [174058]}]
>>> orca.checkpoint()
'okay'
>>> t = time.time() ; x = orca.search(('MusicEntry', {'city': 'Chicago, 
IL'}), 0, 200, 50, 1) ; print time.time() - t
1.91681599617

orca is an xmlrpclib proxy to the aforementioned middleware component.
(These calls are being made to a production server.)  The search() call gets
the first 200 concert listings within a 50-mile radius of Chicago.  x[1][0]
is the first item returned.  All 200 returned items are of the same
complexity.  1.28 seconds certainly isn't Earth-shattering performance.
Even worse is the 1.92 seconds after the checkpoint (which flushes the
caches forcing the MySQL database to be queried for everything).  Returning
200 items is also contrived.  Real users can't ask for that many items at a
time through the web interface.  Cutting it down to 20 items (which is what
users get by default) shows a different story:

>>> orca.checkpoint()
'okay'
>>> t = time.time() ; x = orca.search(('MusicEntry', {'city': 'Chicago, 
IL'}), 0, 20, 50, 1) ; print time.time() - t
0.29478096962
>>> t = time.time() ; x = orca.search(('MusicEntry', {'city': 'Chicago, 
IL'}), 0, 20, 50, 1) ; print time.time() - t
0.0978591442108

The first query after a checkpoint is slow because we have to go to MySQL a
few times, but once everything's cached, things are fast.  The 0.1 second
time for the last call is going to be almost all XML-RPC overhead, because
all the data's already been cached.  I find that acceptable.  If you go to
the Mojam website and click "Chicago", the above query is pretty much what's
performed (several other queries are also run to get corollary info).

I still find it hard to believe that yet another serialization protocol is
necessary.  XML is certainly overkill for almost everything.  I'll be the
first to admit that.  (I have to struggle with it as a configuration file
format at work.)  However, it is certainly widely available.

Skip
___

Re: [Python-Dev] PEP for RFE 46738 (first draft)

2005-06-18 Thread Simon Wittber
> I use XML-RPC as the communications protocol between an Apache web server
> and a middleware piece that talks to a MySQL database.  The web server
> contains a mixture of CGI scripts written in Python and two websites written
> in Mason (Apache+mod_perl).  Performance is fine.  Give either of these a
> try:

I've also implemented some middleware between Apache and SQL Server,
using XMLRPC. Unfortunately, large queries were the order of the day,
and things started moving like treacle. I profiled the code, and
discovered that 30 seconds of the 35 second response time was being
chewed up in building and parsing XML. I hacked things a bit, and
instead of sending XML, sent pickles inside the XML response. This
dropped response times to around the 8 second mark. This was still not
good enough, as multiple queries could still choke up the system.
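
For what it's worth, the "pickles inside the XML response" hack can be done 
without abandoning the transport, e.g. by wrapping the blob in 
xmlrpclib.Binary (the function names here are just illustrative):

import cPickle
import xmlrpclib

def encode_rows(rows):
    # Ship the whole result set as one opaque blob instead of per-field XML.
    return xmlrpclib.Binary(cPickle.dumps(rows, cPickle.HIGHEST_PROTOCOL))

def decode_rows(blob):
    # Only acceptable when both ends are trusted (see below).
    return cPickle.loads(blob.data)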

Typical queries were about 1000-1 records of 3-15 fields of
varying datatypes, including unicode. Don't ask me why people needed
these kind of reports, I'm just the programmer, and don't always get
to control design decisions. :-) This demonstrates that sometimes, in
real world apps, large amounts of data need to be shipped around, and
it needs to happen fast.

I guess when interoperability counts, XMLRPC is the best solution.
When controlling both ends of the connection, pickle _should_ be best,
except it is stated as being insecure.
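
The insecurity is not theoretical, by the way; unpickling untrusted data can 
be made to run arbitrary code, roughly like this:

import cPickle
import os

class Evil(object):
    def __reduce__(self):
        # On unpickling, pickle will call os.system("echo owned")
        return (os.system, ("echo owned",))

payload = cPickle.dumps(Evil())
cPickle.loads(payload)    # executes the shell command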

> That's fine, so XML-RPC is slower than Gherkin.  I can't run the Gherkin
> code, but my XML-RPC numbers are a bit different than yours:
> 
> XMLRPC encode 0.65 seconds
> XMLRPC decode 2.61 seconds
> 
> That leads me to believe you're not using any sort of C XML decoder.  

I used the standard library xmlrpclib only.

> I'm not saying that XML-RPC is the fastest thing on Earth.  I'd be willing
> to bet it's a lot more interoperable than Gherkin is though:

Agreed. Interoperability was a request in the aforementioned RFE,
though it was not an explicit goal of mine. I needed a very fast
serialization routine, which could safely unserialize messages from
untrusted sources, for sending big chunks of data over TCP. I also
needed it to be suitable for use in real-time multiplayer games (my
part time hobby). XMLRPC doesn't suit this application. When I control
the server, and the client applications, I shouldn't have to use XML.

Thanks for the feedback.

Simon.
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Propose to reject PEP 294 -- Type Names in the types Module

2005-06-18 Thread Raymond Hettinger
Introducing a new set of duplicate type names and deprecating old ones
causes a certain amount of disruption.  Given the age of the types
module, the disruption is likely to be greater than any potential
benefit that could be realized.  Plenty of people will have to incur the
transition costs, but no one will likely find the benefit to be
perceptible.

Suggest rejecting this PEP and making a note for Py3.0 to either sync-up
the type names or abandon the types module entirely.
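
(For context, the existing duplication is of this form -- a quick sketch; if 
I read the PEP correctly, it would layer lowercase aliases on top of this:)

import types

print types.ListType is list     # True: the names already refer to the
print types.DictType is dict     # same objects as the builtins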



Raymond

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Question about PEP 330 -- Python Bytecode Verification

2005-06-18 Thread Raymond Hettinger
Do we have *any* known use cases where we would actually run bytecode
that was suspicious enough to warrant running a well-formedness check?

In assessing security risks, the PEP notes, "Practically, it would be
difficult for a malicious user to 'inject' invalid bytecode into a PVM
for the purposes of exploitation, but not impossible."

Can that ever occur without there being a far greater risk of malicious,
but well-formed bytecode?

If you download a file, foo.pyc, from an untrusted source and run it in
a susceptible environment, does its well-formedness give you *any*
feeling of security?  I think not.

There isn't anything wrong with having a verifier module, but I can't
think of any benefit that would warrant changing the bytecode semantics
just to facilitate one of the static stack checks.



Raymond

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Recommend accepting PEP 312 -- Simple Implicit Lambda

2005-06-18 Thread Kay Schluehr
Donovan Baarda wrote:

> I must admit I ended up deleting most of the "alternative to lambda" 
> threads after they flooded my in box. So it is with some dread I post 
> this, contributing to it...

I must admit you are right. And I will stop defending my proposal 
because it seems to create nothing more than pointless polemics. 
Suggesting elements of FP to python-dev may have about the same chance of 
being accepted as raising a Buddha statue in a Catholic church.

> As I see it, a lambda is an anonymous function. An anonymous function is 
> a function without a name. We already have a syntax for a function... 
> why not use it. ie:
> 
>   f = filter(def (a): return a > 1, [1,2,3])

You mix expressions with statements. This is a no-go in Python. 
Permitting those constructs is a radical language change, not just a 
simple syntax replacement.

Kay

___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Recommend accepting PEP 312 -- Simple Implicit Lambda

2005-06-18 Thread Donovan Baarda
Nick Coghlan wrote:
> Donovan Baarda wrote:
> 
>>As I see it, a lambda is an anonymous function. An anonymous function is 
>>a function without a name.
> 
> 
> And here we see why I'm such a fan of the term 'deferred expression' 
> instead of 'anonymous function'.

But isn't a function just a deferred expression with a name? :-)

As a person who started out writing assembler where every "function" I 
wrote was a macro that got expanded inline, the distinction is kinda 
blurry to me.

> Python's lambda expressions *are* the former, but they are 
> emphatically *not* the latter.

Isn't that because lambdas have the limitation of not allowing 
statements, only expressions? I know this limitation avoids side-effects 
and has significance in some formal (functional?) languages... but is 
that what Python is? In the Python I use, lambdas are always used where 
you are too lazy to define a function to do its job.

To me, anonymous procedures/functions would be a superset of "deferred 
expressions", and if the one stone fits perfectly in the slingshot we 
have and can kill multiple birds... why hunt for another stone?

Oh yeah Raymond: on the "def defines some variable name"... are you 
joking? You forgot the smiley :-)

I don't get what the problem is with mixing statement and expression 
semantics... from a practical point of view, statements just offer a 
superset of expression functionality.

If there really is a serious practical reason why they must be limited 
to expressions, why not just raise an exception or something if the 
"anonymous function" is too complicated...

I did some fiddling and it seems lambdas can call methods and stuff 
that can have side effects, which kinda defeats what I thought was the 
point of "statements vs expressions"... I guess I just don't 
understand... maybe I'm just thick :-)
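
For example (a trivial sketch), a lambda stays an expression syntactically 
yet can still mutate state through a method call:

items = []
add = lambda x: items.append(x)   # an expression...
add(1)
add(2)
print items                        # [1, 2] -- ...with obvious side effects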

> Anyway, the AlternateLambdaSyntax Wiki page has a couple of relevant 
> entries under 'real closures'.

Where is that wiki BTW? I remember looking at it ages ago but can't find 
the link anymore.

--
Donovan Baarda
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com