Re: Tail recursion to while iteration in 2 easy steps

2013-10-08 Thread Jussi Piitulainen
random...@fastmail.us writes:

 The entire point of tail call optimization requires not keeping the
 intervening stack frames around, in _any_ form, so as to allow
 arbitrarily deep recursion without ever having the possibility of a
 stack overflow. An implementation which reduced but did not
 eliminate the space used per call would not be worthwhile because it
 would not enable the recursive functional programming patterns that
 mandatory tail call optimization allows.
 
 You could require that an optimizable tail call be made explicit.
 Actually, I suspect it might be possible to do this now, by abusing
 exception handling as a control flow mechanism.

Python code already marks many of the functionally relevant tail calls
with 'return'. It just wants to retain the trace.

Another keyword could be used to indicate that the programmer does not
want a stack frame retained. It's probably too much to suggest 'please
return', but how about 'goto return'?

A tail call is a 'goto that passes arguments', and I think 'goto' is a
keyword already.

(Actually I just wanted to suggest 'please return'. Not seriously.)
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-08 Thread Jussi Piitulainen
Steven D'Aprano writes:

 Far more useful would be a high-level description of Scheme's
 programming model. If names can be rebound on the fly, how does
 Scheme even tell whether something is a recursive call or not?
 
 def foo(arg):
 do stuff here
 foo(arg-1)  # how does Scheme know that this is the same foo?

In general, it doesn't know. It just calls whatever function is bound
to foo. It does know that the call is in a tail position.

If the compiler has access to all code that can possibly change the
value of foo, it can know simply by proving that there is no such
assignment statement in the code. This can happen if the compiler is
told to assume that it has the whole program. It often happens in a
local scope. Module systems create such local scopes for unexported
variables, and even for exported variables by forbidding assignments
outside.

(I'm not sure if your question was rhetorical or if you were looking
for this information.)
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-08 Thread Alain Ketterlin
random...@fastmail.us writes:

 On Mon, Oct 7, 2013, at 13:15, Alain Ketterlin wrote:
 That's fine. My point was: you can't at the same time have full
 dynamicity *and* procedural optimizations (like tail call opt).
 Everybody should be clear about the trade-off.

 Let's be clear about what optimizations we are talking about. Tail call
 optimization, itself, doesn't care _what_ is being called. It can just
 as easily mean "erase its own stack frame and replace it with that of
 another function" as "reassign the arguments and jump to the top of this
 function". Some people have introduced the idea of _further_
 optimizations, transforming near tail recursion (i.e. return self()+1)
 into tail recursion, and _that_ depends on knowing the identity of the
 function (though arguably that could be accounted for at the cost of
 including dead code for the path that assumes it may have been changed),
 but tail call optimization itself does not.

You're right, thanks for the clarification.

-- Alain.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-08 Thread Alain Ketterlin
Antoon Pardon antoon.par...@rece.vub.ac.be writes:

 Op 07-10-13 19:15, Alain Ketterlin schreef:

[...]
 That's fine. My point was: you can't at the same time have full
 dynamicity *and* procedural optimizations (like tail call opt).
 Everybody should be clear about the trade-off.

 You're wrong. Full dynamics is not in contradiction with tail call
 optimisation. Scheme has already done it for years. You can rebind
 names to other functions in scheme and scheme still has working
 tail call optimisations.

See
http://en.wikipedia.org/wiki/Scheme_%28programming_language%29#Lexical_scope

(first sentence, about variable bindings).

-- Alain.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-08 Thread Antoon Pardon
Op 08-10-13 01:50, Steven D'Aprano schreef:
 On Mon, 07 Oct 2013 15:47:26 -0700, Mark Janssen wrote:
 
 I challenge you to get
 down to the machine code in scheme and formally describe how it's doing
 both.
 
 For which machine?
 
 Or are you assuming that there's only one machine code that runs on all 
 computing devices?
 
 
 Frankly, asking somebody to *formally* describe a machine code 
 implementation strikes me as confused. Normally formal descriptions are 
 given in terms of abstract operations, often high level operations, 
 sometimes *very* high level, and rarely in terms of low-level "flip this 
 bit, copy this byte" machine code operations. I'm not sure how one would 
 be expected to generate a formal description of a machine code 
 implementation.
 
 But even putting that aside, even if somebody wrote such a description, 
 it would be reductionism gone mad. What possible light on the problem 
 would be shined by a long, long list of machine code operations, even if 
 written using assembly mnemonics?
 
 Far more useful would be a high-level description of Scheme's programming 
 model. If names can be rebound on the fly, how does Scheme even tell 
 whether something is a recursive call or not?
 
 def foo(arg):
 do stuff here
 foo(arg-1)  # how does Scheme know that this is the same foo?

It doesn't and it doesn't need to. Tail call optimisation is not
limited to recursive functions. All tail calls can be optimised,
recursive calls and others.

-- 
Antoon Pardon


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-08 Thread Steven D'Aprano
On Mon, 07 Oct 2013 20:27:13 -0700, Mark Janssen wrote:

 But even putting that aside, even if somebody wrote such a
 description, it would be reductionism gone mad. What possible light
 on the problem would be shined by a long, long list of machine code
 operations, even if written using assembly mnemonics?

 Only that you've got a consistent, stable (and therefore,
 formalizable) translation from your language to the machine.

 You are mistaken to think that there is a single, one-to-one, mapping
 between high-level code and machine code.
 
 It's not mistaken.

I'm afraid it is. Reality trumps your theory. gcc, clang, Microsoft 
Visual Studio, and all the many, many other C compilers do not generate 
identical machine code even when targeting the same hardware. This is a 
fact. It's not even the case that there is One True Way to implement a 
particular program on a given hardware device and compilers merely are 
buggy for doing something else. There are often different ways to 
implement it which are equally good, the only difference being personal 
preference.


 Given a stable and formalized language definition,
 there should only be continued optimization of the lexical and
 procedural constructs into better machine code.

Better than what? Continued optimization? When you say lexical and 
procedural constructs, do you mean source code?


 In the case of an
 interpreted language like Python (which I'll define as a language
 which includes a layer of indirection between the user and the machine,

Irrelevant. In the case of Python, there is a machine. The fact that it 
is a VM rather than a physical machine is irrelevant. A machine is a 
machine -- we could be talking about a Lisp Machine, a Forth Machine, an 
x86 processor, a Motorola 68000, an Atom processor, one of those old 
Russian mainframes that used three-state trits instead of two-state bits, 
or even Babbage's Analytical Engine. 

Besides, most modern CPUs don't execute machine code directly, they run 
the machine code in a virtual machine implemented in hardware. So the 
difference between Python and x86 machine code is just a matter of degree.



 encouraging the nice benefits of interactivity), such optimization isn't
 really apropos, because it's not the purpose of python to be optimal to
 the machine as much as optimal to the programmer.  In any case, while
 such optimization can continue over time, they generally create new
 compiler releases to indicate such changes.  The one-to-one mapping is
 held by the compiler.
 
 Such determinism *defines* the machine, otherwise you might as well get
 rid of the notion of computer *science*.  All else is error, akin to
 cosmic rays or magic.  Unless the source code changes, all else
 remaining equal, the machine code is supposed to be the same, no matter
 how many times it is compiled.

That is akin to saying that there is *only one* way to measure the speed 
of light (say), standing in exactly the same place, using exactly the 
same equipment, using precisely the same measurement techniques, and that 
if we allow alternative methods for measuring the speed of light, physics 
is no longer a science.


[Only if you use the exact source, compiler, switches, etc] will the
output be the same.
 And even that is not guaranteed.
 
 Oh, and what would cause such non-determinism?

The compiler-writer, of course. A compiler is software, and is written by 
a person, who can program it to do anything the writer wants. If the 
writer wants the compiler to be non-deterministic, it can be.

Some viruses use a similar technique to try to avoid virus scanners. They 
encrypt the payload, which is functionally equivalent to randomizing it 
(except it can be reversed if you have the key) so as to defeat virus 
scanners.

A more whimsical example: perhaps a mischievous compiler writer included 
something like this in her compiler:


when compiling integer multiplication, INT * INT:
  if today is Tuesday:
emit machine code that does multiplication using repeated addition
  otherwise:
emit machine code that does multiplication using ADD and SHIFT


Both implementations of multiplication are perfectly valid. There may be 
a performance difference, or there may not be. Since no sensible 
programming language is going to specify the *detailed* machine code 
implementation of its high-level operations, such a mischievous compiler 
would still be valid.
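
Both strategies are easy to sketch in Python (a sketch for non-negative
operands, not the machine code a real compiler would emit):

def mul_repeated_add(x, y):
    # repeated addition: y additions
    total = 0
    for _ in range(y):
        total += x
    return total

def mul_shift_add(x, y):
    # ADD and SHIFT: one addition per set bit of y
    total = 0
    while y:
        if y & 1:
            total += x
        x <<= 1
        y >>= 1
    return total

assert mul_repeated_add(6, 7) == mul_shift_add(6, 7) == 6 * 7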


 Take, for example, the single high-level operation:

 sort(alist)

 What machine code will be executed? Obviously that will depend on the
 sort algorithm used. There are *dozens*. Here are just a few:
 
 Well, since you didn't specify your programming language, you're then
 merely stating an English construct.

What difference does it make? But if it will make you feel better, I'm 
specifying Hypertalk. You've probably never heard of it, but regardless, 
it exists, and it has a sort command, and the high-level language does 
not specify which of many sort algorithms is to be used.

Re: Tail recursion to while iteration in 2 easy steps

2013-10-08 Thread Antoon Pardon
Op 07-10-13 23:27, random...@fastmail.us schreef:
 On Sat, Oct 5, 2013, at 3:39, Antoon Pardon wrote:
 What does this mean?

 Does it mean that a naive implementation would arbitrarily mess up
 stack traces and he wasn't interested in investigating more
 sophisticated implementations?

 Does it mean he just didn't like the idea a stack trace wouldn't be a
 100% represenatation of the active call history?

 Does it mean he investigated more sphisticated implementations but found
 them to have serious short comings that couldn't be remedied easily?
 
 The entire point of tail call optimization requires not keeping the
 intervening stack frames around, in _any_ form, so as to allow
 arbitrarily deep recursion without ever having the possibility of a
 stack overflow. An implementation which reduced but did not eliminate
 the space used per call would not be worthwhile because it would not
 enable the recursive functional programming patterns that mandatory tail
 call optimization allows.

So? What about an implementation that would keep its stackframes
normally until it detected recursion had occurred. From then on it
would only keep what it already had plus the four top stackframes
(assuming they are all tail calls for the moment). This would allow
for a stack trace of the last four calls and essentially doesn't need
any space per call from then on.
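
A minimal sketch of that idea in plain Python -- the driver keeps only
the last four argument tuples while the computation runs in constant
space (the ('done'/'call') protocol and all names here are invented for
illustration):

from collections import deque

def run_tail_recursive(step, *args, history=4):
    # step returns ('done', value) or ('call', new_args); only the last
    # `history` argument tuples are retained for a diagnostic trace.
    recent = deque(maxlen=history)
    while True:
        recent.append(args)
        tag, payload = step(*args)
        if tag == 'done':
            return payload, list(recent)
        args = payload

def fact_step(n, acc=1):
    if n <= 1:
        return ('done', acc)
    return ('call', (n - 1, n * acc))

value, last_calls = run_tail_recursive(fact_step, 5)
print(value)       # 120
print(last_calls)  # [(4, 5), (3, 20), (2, 60), (1, 120)]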

-- 
Antoon Pardon
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-08 Thread Jussi Piitulainen
Alain Ketterlin writes:
 Antoon Pardon writes:
 
  Op 07-10-13 19:15, Alain Ketterlin schreef:
 
 [...]
  That's fine. My point was: you can't at the same time have full
  dynamicity *and* procedural optimizations (like tail call opt).
  Everybody should be clear about the trade-off.
 
  You're wrong. Full dynamics is not in contradiction with tail call
  optimisation. Scheme has already done it for years. You can rebind
  names to other functions in scheme and scheme still has working
  tail call optimisations.
 
 See
 http://en.wikipedia.org/wiki/Scheme_%28programming_language%29#Lexical_scope
 
 (first sentence, about variable bindings).

# ... Scheme is lexically scoped: all possible variable bindings in a
# program unit can be analyzed by reading the text of the program unit
# without consideration of the contexts in which it may be called ...

The actual procedure to be called is still not known at compile time,
in general. It can be a parameter. It needn't even be the value of any
explicit variable (needn't be bound to a name).

def call(f, a):
   ...
   return f(a)  # tail call
   ...

def wev(...):
   ...
   return (fs if c(k) else gs)[k](a)  # tail call
   ...

In the Scheme reports, a variable is said to be bound to a location,
which is lexically apparent to the language processor; the value is
stored in that location, and assignment to the variable means storing
a new value in that location. It works like Python or Java; Python
just has a different way of talking about how it works - binding names
directly to values in a namespace, and rebinding to different values.

However, Scheme processors know that the local variables are not
accessible from anywhere else but the local code, so there are more
opportunities for compile-time analysis. They can optimize many of
those locations away, for example.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-07 Thread Alain Ketterlin
Terry Reedy tjre...@udel.edu writes:

 On 10/4/2013 5:49 AM, Alain Ketterlin wrote:

 I think allowing rebinding of function names is extremely strange,

 Steven already countered the 'is extremely strange' part by showing
 that such rebinding is common, generally useful, and only occasionally
 dodgy and a candidate for being blocked.

I was talking about rebinding a function name, not about rebinding a
name to a function. Steve's message was pretty clear on this: iirc, his
first two cases were binding a new name to an existing callable, the
last case (rebinding len) was in line with what I meant (except his code
saved the original function).

My example was: the code of fact contains a call to fact, which is
not guaranteed to be bound to the function it appears in. And this
prevents any kind of optimization.

 I want to consider here what it would mean to concretely implement the
 abstract notion 'disallow rebinding of function names' and show what
 would be behind calling the idea 'not feasible'.

Again, I'm more concerned about the function than about the name.

And the fact that 'disallow rebinding of function names' is not feasible
in Python is a design choice, not an essential characteristic of every
programming language.

That's fine. My point was: you can't at the same time have full
dynamicity *and* procedural optimizations (like tail call opt).
Everybody should be clear about the trade-off.

 1. A 'name' is not a 'function name' unless the name is bound, at
 runtime, to a 'function'.

 2. A 'function' in Python is not just one class but any callable
 object -- anything with a __call__ method. That comprises an unbounded
 set.

Right. Then when you do:

 def myfunc(...): ...

myfunc is bound to a callable object. In my example, I was doing the
equivalent to rebinding myfunc, losing the last reference to that piece
of code:

myfunc = somethingelse

BTW, does the original callable object have a ref counter? Is it garbage
collected in that case? If not, would it be considered a bug?

 3. Python does not mandate how namespaces are implemented. CPython
 uses both dicts and, for function local namespaces, internal C arrays.
 So 'names' in code can become either string keys for dicts or integer
 indexes for arrays.

Well, yes, but that's an implementation detail, no?

 4. Rebinding can be explicit ('='), implicit ('import', 'class',
 'def'), *or hidden* (globals()['a'] = 1; ob.__dict__['a'] = 1). The
 'hidden' methods are intentional as they are sometimes needed*.  In
 CPython, these forms remain different in the byte code, but it could
 be otherwise. The point is that it may or may not be possible for the
 interpreter to even recognize a 'rebinding' in order to apply any
 rebinding blocking rule.

Sure (that's exactly why I said it is easier to implement). If you were
to disallow rebinding of global function names, you would need a proper
notion of global scope and various categories of names, something almost
all compiled languages have.

 * There is no trick (that I know of) for hidden rebinding of function
 locals and nonlocals (short of using ctypes), but I am not sure that
 this is a language guarantee. However, I am sure that the absence is
 intentional.

 even though it's easier to implement.

 No kidding.

(See above).

-- Alain.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-07 Thread Antoon Pardon

Op 07-10-13 19:15, Alain Ketterlin schreef:

I want to consider here what it would mean to concretely implement the
abstract notion 'disallow rebinding of function names' and show what
would be behind calling the idea 'not feasible'.


Again, I'm more concerned about the function than about the name.

And the fact that 'disallow rebinding of function names' is not feasible
in Python is a design choice, not an essential characteristic of every
programming language.

That's fine. My point was: you can't at the same time have full
dynamicity *and* procedural optimizations (like tail call opt).
Everybody should be clear about the trade-off.


You're wrong. Full dynamics is not in contradiction with tail call
optimisation. Scheme has already done it for years. You can rebind
names to other functions in scheme and scheme still has working
tail call optimisations.

--
Antoon Pardon

--
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-07 Thread Terry Reedy

On 10/7/2013 1:15 PM, Alain Ketterlin wrote:

Terry Reedy tjre...@udel.edu writes:



3. Python does not mandate how namespaces are implemented. CPython
uses both dicts and, for function local namespaces, internal C arrays.
So 'names' in code can become either string keys for dicts or integer
indexes for arrays.


Well, yes, but that's an implementation detail, no?


That is why I switched from 'Python' to 'CPython'. But I note the 
following: in 2.x, 'from mod import *' in a function meant that the 
compile time mapping of name to index could not be used and that a 
fallback to dict was necessary. So another implementation might take the 
easier path and always use a dict for function locals. In 3.x, such 
import are limited to module scope so that functions can always use an 
array and indexes for function locals. So other implementations can take 
the hint and do the same without needing a dict fallback.


In other words, 3.x changed the language to facilitate the 
implementation detail.
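
A small demonstration of the difference, using the standard dis module
(a sketch; exact bytecode details vary across CPython versions):

import dis

def f():
    x = 1          # function local: compiled to an array slot
    return x

dis.dis(f)         # shows STORE_FAST / LOAD_FAST (integer-indexed)

dis.dis(compile('x = 1; y = x', '<demo>', 'exec'))
                   # module scope: STORE_NAME / LOAD_NAME (dict-keyed)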


--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-07 Thread random832
On Mon, Oct 7, 2013, at 13:15, Alain Ketterlin wrote:
 That's fine. My point was: you can't at the same time have full
 dynamicity *and* procedural optimizations (like tail call opt).
 Everybody should be clear about the trade-off.

Let's be clear about what optimizations we are talking about. Tail call
optimization, itself, doesn't care _what_ is being called. It can just
as easily mean "erase its own stack frame and replace it with that of
another function" as "reassign the arguments and jump to the top of this
function". Some people have introduced the idea of _further_
optimizations, transforming near tail recursion (i.e. return self()+1)
into tail recursion, and _that_ depends on knowing the identity of the
function (though arguably that could be accounted for at the cost of
including dead code for the path that assumes it may have been changed),
but tail call optimization itself does not.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-07 Thread random832
On Sat, Oct 5, 2013, at 3:39, Antoon Pardon wrote:
 What does this mean?
 
 Does it mean that a naive implementation would arbitrarily mess up
 stack traces and he wasn't interested in investigating more
 sophisticated implementations?
 
 Does it mean he just didn't like the idea a stack trace wouldn't be a
 100% representation of the active call history?
 
 Does it mean he investigated more sophisticated implementations but found
 them to have serious shortcomings that couldn't be remedied easily?

The entire point of tail call optimization requires not keeping the
intervening stack frames around, in _any_ form, so as to allow
arbitrarily deep recursion without ever having the possibility of a
stack overflow. An implementation which reduced but did not eliminate
the space used per call would not be worthwhile because it would not
enable the recursive functional programming patterns that mandatory tail
call optimization allows.

You could require that an optimizable tail call be made explicit.
Actually, I suspect it might be possible to do this now, by abusing
exception handling as a control flow mechanism.
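
One way that could look (a sketch -- the TailCall exception and the
tail_recursive decorator are invented names, not an existing API):

class TailCall(Exception):
    def __init__(self, *args):
        self.call_args = args

def tail_recursive(func):
    # Trampoline: each "recursive call" raises, which unwinds the current
    # frame; the loop then re-enters func with the new arguments, so the
    # stack never grows however deep the recursion goes.
    def wrapper(*args):
        while True:
            try:
                return func(*args)
            except TailCall as tc:
                args = tc.call_args
    return wrapper

@tail_recursive
def fact(n, acc=1):
    if n <= 1:
        return acc
    raise TailCall(n - 1, n * acc)   # instead of: return fact(n-1, n*acc)

print(fact(10000))   # well past the default recursion limit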
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-07 Thread MRAB

On 07/10/2013 18:57, Antoon Pardon wrote:

Op 07-10-13 19:15, Alain Ketterlin schreef:

I want to consider here what it would mean to concretely
implement the abstract notion 'disallow rebinding of function
names' and show what would be behind calling the idea 'not
feasible'.


Again, I'm more concerned about the function than about the name.

And the fact that 'disallow rebinding of function names' is not
feasible in Python is a design choice, not an essential
characteristic of every programming language.

That's fine. My point was: you can't at the same time have full
dynamicity *and* procedural optimizations (like tail call opt).
Everybody should be clear about the trade-off.


You're wrong. Full dynamics is not in contradiction with tail call
optimisation. Scheme has already done it for years. You can rebind
names to other functions in scheme and scheme still has working tail
call optimisations.


Consider this code:

def fact(n, acc=1):
    if n <= 1:
        return acc
    return fact(n-1, n*acc)

It compiles to this:

>>> dis.dis(fact)
  2           0 LOAD_FAST                0 (n)
              3 LOAD_CONST               1 (1)
              6 COMPARE_OP               1 (<=)
              9 POP_JUMP_IF_FALSE       16

  3          12 LOAD_FAST                1 (acc)
             15 RETURN_VALUE

  4          16 LOAD_GLOBAL              0 (fact)
             19 LOAD_FAST                0 (n)
             22 LOAD_CONST               1 (1)
             25 BINARY_SUBTRACT
             26 LOAD_FAST                0 (n)
             29 LOAD_FAST                1 (acc)
             32 BINARY_MULTIPLY
             33 CALL_FUNCTION            2 (2 positional, 0 keyword pair)
             36 RETURN_VALUE

I think that CALL_FUNCTION immediately followed by RETURN_VALUE could
be tail-call optimised by dropping the caller's stack frame provided
that (like in this case) the callee doesn't refer to any name in the
caller's stack frame (nonlocal).

You could also consider replacing the caller's stack frame with a
smaller pseudo-frame, perhaps compressing multiple pseudo-frames or
repeated sequences of pseudo-frames into a single pseudo-frame, so that
it could still generate the same traceback as before (or a compressed
traceback like what was discussed on python-ideas in the threads
"Compressing excepthook output" and "Idea: Compressing the stack on the
fly").
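
A rough sketch of how one might spot such candidate sites from Python
itself, using the standard dis module (tail_call_sites is an invented
helper, and opcode names vary by CPython version):

import dis

def tail_call_sites(func):
    # Bytecode offsets where a call is immediately followed by
    # RETURN_VALUE, i.e. candidates for tail-call elimination.
    ops = list(dis.get_instructions(func))
    return [a.offset for a, b in zip(ops, ops[1:])
            if a.opname.startswith('CALL') and b.opname == 'RETURN_VALUE']

def fact(n, acc=1):
    if n <= 1:
        return acc
    return fact(n-1, n*acc)

print(tail_call_sites(fact))   # one offset: the recursive call site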
--
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-07 Thread Piet van Oostrum
Alain Ketterlin al...@dpt-info.u-strasbg.fr writes:

 BTW, does the original callable object have a ref counter? Is it garbage
 collected in that case? If not, would it be considered a bug?

In CPython ALL objects have ref counters.
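
For example (a sketch; the exact count printed is an implementation
detail):

import sys

def myfunc():
    pass

print(sys.getrefcount(myfunc))   # references to the function object
myfunc = None                    # rebinding drops the last reference,
                                 # and CPython reclaims the object at once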
-- 
Piet van Oostrum p...@vanoostrum.org
WWW: http://pietvanoostrum.com/
PGP key: [8DAE142BE17999C4]
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-07 Thread Mark Janssen
 That's fine. My point was: you can't at the same time have full
 dynamicity *and* procedural optimizations (like tail call opt).
 Everybody should be clear about the trade-off.

 You're wrong. Full dynamics is not in contradiction with tail call
 optimisation. Scheme has already done it for years. You can rebind
 names to other functions in scheme and scheme still has working
 tail call optimisations.

Yeah, and this is where two models of computation have been conflated,
creating magical effects, confusing everybody.  I challenge you to get
down to the machine code in scheme and formally describe how it's
doing both.
-- 
MarkJ
Tacoma, Washington
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-07 Thread Steven D'Aprano
On Mon, 07 Oct 2013 15:47:26 -0700, Mark Janssen wrote:

 I challenge you to get
 down to the machine code in scheme and formally describe how it's doing
 both.

For which machine?

Or are you assuming that there's only one machine code that runs on all 
computing devices?


Frankly, asking somebody to *formally* describe a machine code 
implementation strikes me as confused. Normally formal descriptions are 
given in terms of abstract operations, often high level operations, 
sometimes *very* high level, and rarely in terms of low-level flip this 
bit, copy this byte machine code operations. I'm not sure how one would 
be expected to generate a formal description of a machine code 
implementation.

But even putting that aside, even if somebody wrote such a description, 
it would be reductionism gone mad. What possible light on the problem 
would be shined by a long, long list of machine code operations, even if 
written using assembly mnemonics?

Far more useful would be a high-level description of Scheme's programming 
model. If names can be rebound on the fly, how does Scheme even tell 
whether something is a recursive call or not?

def foo(arg):
do stuff here
foo(arg-1)  # how does Scheme know that this is the same foo?



-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-07 Thread Mark Janssen
 Only that you've got a consistent, stable (and therefore,
 formalizable) translation from your language to the machine.  That's
 all.  Everything else is magic.  Do you know that the Warren
 Abstraction Engine used to power the predicate logic in Prolog into
 machine code for a VonNeumann machine is so complex, no one has
 understood it (or perhaps even verified it)?

Sorry, I mean the Warren Abstraction Machine (or WAM).  I refer you to
www.cvc.uab.es/shared/teach/a25002/wambook.pdf.

Now, one can easily argue that I've gone too far to say no one has
understood it (obviously), so it's very little tongue-in-cheek, but
really, when one tries to pretend that one model of computation can be
substituted for another (arguing *for* the Church-Turing thesis), they
are getting into troubling territory (it is only a thesis,
remember)

-- 
MarkJ
Tacoma, Washington
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-07 Thread Mark Janssen
On Mon, Oct 7, 2013 at 4:50 PM, Steven D'Aprano
steve+comp.lang.pyt...@pearwood.info wrote:
 On Mon, 07 Oct 2013 15:47:26 -0700, Mark Janssen wrote:
 I challenge you to get
 down to the machine code in scheme and formally describe how it's doing
 both.

 For which machine?

Right, I should stop assuming a modern implementation of vonNeumann
architecture (even though that, too, is ambiguous) since I'm talking
about theory, but yet it is relevant.  My demarcation point for
arguments between the scheme way and other procedural languages
(which, apart from Pascal variants, I blithely call the C way) gets
down to differing models of computation which shouldn't get conflated,
even though everyone thinks and lumps it all as computation.  They
simply can't get *practically* translated between one and the other,
even though they are *theoretically* translated between each other all
the time.  Humans, of course know how to translate, but that doesn't
count from the pov of computer *science*.

 Frankly, asking somebody to *formally* describe a machine code
 implementation strikes me as confused. Normally formal descriptions are
 given in terms of abstract operations, often high level operations,
 sometimes *very* high level, and rarely in terms of low-level flip this
 bit, copy this byte machine code operations. I'm not sure how one would
 be expected to generate a formal description of a machine code
 implementation.

It's like this: there *should* be one-to-one mappings between the
various high-level constructs to the machine code, varying only
between different chips (that is the purpose of the compiler after
all), yet for some operations, in languages like scheme, well... I
cannot say what happens...  hence my challenge.

 But even putting that aside, even if somebody wrote such a description,
 it would be reductionism gone mad. What possible light on the problem
 would be shined by a long, long list of machine code operations, even if
 written using assembly mnemonics?

Only that you've got a consistent, stable (and therefore,
formalizable) translation from your language to the machine.  That's
all.  Everything else is magic.  Do you know that the Warren
Abstraction Engine used to power the predicate logic in Prolog into
machine code for a VonNeumann machine is so complex, no one has
understood it (or perhaps even verified it)?   One hardly knows where
these things originate.  But here it gets into dark arts best not
entered into too deeply.  It will turn you mad, like that guy in the
movie pi.

-- 
MarkJ
Tacoma, Washington
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-07 Thread Steven D'Aprano
On Mon, 07 Oct 2013 17:16:35 -0700, Mark Janssen wrote:

 It's like this: there *should* be one-to-one mappings between the
 various high-level constructs to the machine code, varying only between
 different chips (that is the purpose of the compiler after all), yet for
 some operations, in languages like scheme, well... I cannot say what
 happens...  hence my challenge.
 
 But even putting that aside, even if somebody wrote such a description,
 it would be reductionism gone mad. What possible light on the problem
 would be shined by a long, long list of machine code operations, even
 if written using assembly mnemonics?
 
 Only that you've got a consistent, stable (and therefore, formalizable)
 translation from your language to the machine.

You are mistaken to think that there is a single, one-to-one, mapping 
between high-level code and machine code.

In practice, only if you use the exact same source code, the exact same 
compiler, the exact same version of that compiler, with the exact same 
compiler options, and the exact same version of the language and all its 
libraries, then and only then will you will get the same machine code. 
And even that is not guaranteed.

Take, for example, the single high-level operation:

sort(alist)

What machine code will be executed? Obviously that will depend on the 
sort algorithm used. There are *dozens*. Here are just a few:

http://en.wikipedia.org/wiki/Category:Sorting_algorithms

Now sorting is pretty high level, but the same principle applies to even 
simple operations like multiply two numbers. There are often multiple 
algorithms for performing the operation, and even a single algorithm can 
often be implemented in slightly different ways. Expecting all compilers 
to generate the same machine code is simply naive.

We can use Python to discuss this, since Python includes a compiler. It 
generates byte code, which just means machine code for a virtual machine 
instead of a hardware machine, but the principle is the same. Python even 
has a disassembler, for taking those raw byte codes and turning them into 
human-readable pseudo-assembly for the Python Virtual Machine.

Here is some machine code corresponding to the high-level instructions:

x = 23
y = 42
z = x + y
del x, y


py> code = compile('x = 23; y = 42; z = x + y; del x, y', '', 'exec')
py> code.co_code
'd\x00\x00Z\x00\x00d\x01\x00Z\x01\x00e\x00\x00e\x01\x00\x17Z\x02\x00[\x00
\x00[\x01\x00d\x02\x00S'


Translated into assembly:

py> from dis import dis
py> dis(code)
  1   0 LOAD_CONST   0 (23)
  3 STORE_NAME   0 (x)
  6 LOAD_CONST   1 (42)
  9 STORE_NAME   1 (y)
 12 LOAD_NAME0 (x)
 15 LOAD_NAME1 (y)
 18 BINARY_ADD
 19 STORE_NAME   2 (z)
 22 DELETE_NAME  0 (x)
 25 DELETE_NAME  1 (y)
 28 LOAD_CONST   2 (None)
 31 RETURN_VALUE


You should be able to see that there are all sorts of changes that the 
compiler could have made, which would have lead to different machine 
code but with the exact same behaviour. This particular compiler emits 
code for x=23; y=42 in the order that they were given, but that's not 
actually required in this case. The compiler might have reversed the 
order, generating something like:

0 LOAD_CONST   1 (42)
3 STORE_NAME   1 (y)
6 LOAD_CONST   0 (23)
9 STORE_NAME   0 (x)

or even:

0 LOAD_CONST   1 (42)
3 LOAD_CONST   0 (23)
6 STORE_NAME   1 (y)
9 STORE_NAME   0 (x)

without changing the behaviour of the code. Or it might have even 
optimized the entire thing to:

0 LOAD_CONST   0 (65)
3 STORE_NAME   0 (z)


since x and y are temporary variables and a smart compiler could perform 
the computation at compile time and throw them away. (Nitpicks about 
"what if x and y already existed?" aside.) CPython even does this sort of 
optimization, although in a more limited fashion:

py> dis(compile('x = 23 + 42', '', 'exec'))
  1   0 LOAD_CONST   3 (65)
  3 STORE_NAME   0 (x)
  6 LOAD_CONST   2 (None)
  9 RETURN_VALUE

This is called keyhole optimization. It's not beyond possibility for a 
smarter compiler to look beyond the keyhole and make optimizations based 
on the whole program.

So you can see that there is no one-to-one correspondence from high-level 
source code to low-level machine code, even for a single machine. The 
same is even more true for languages such as C, Fortran, Pascal, Lisp, 
Scheme, Haskell, Java, etc. where people have spent years or decades 
working on compiler technology. Compilers differ in the quality and 
efficiency of the machine code they emit and the choices they 

Re: Tail recursion to while iteration in 2 easy steps

2013-10-07 Thread Piet van Oostrum
Mark Janssen dreamingforw...@gmail.com writes:

 Yeah, and this is where two models of computation have been conflated,
 creating magical effects, confusing everybody.  I challenge you to get
 down to the machine code in scheme and formally describe how it's
 doing both.

Which two models of computation are you talking about? And what magical 
effects? AFAIK there is no magic in computer science, although every 
sufficiently advanced ...
-- 
Piet van Oostrum p...@vanoostrum.org
WWW: http://pietvanoostrum.com/
PGP key: [8DAE142BE17999C4]
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-07 Thread Piet van Oostrum
Steven D'Aprano steve+comp.lang.pyt...@pearwood.info writes:

 Far more useful would be a high-level description of Scheme's programming 
 model. If names can be rebound on the fly, how does Scheme even tell 
 whether something is a recursive call or not?

Maybe it doesn't have to tell. If you do tail call optimization there is no 
need to do tail recursion optimization.
-- 
Piet van Oostrum p...@vanoostrum.org
WWW: http://pietvanoostrum.com/
PGP key: [8DAE142BE17999C4]
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-07 Thread Mark Janssen
 But even putting that aside, even if somebody wrote such a description,
 it would be reductionism gone mad. What possible light on the problem
 would be shined by a long, long list of machine code operations, even
 if written using assembly mnemonics?

 Only that you've got a consistent, stable (and therefore, formalizable)
 translation from your language to the machine.

 You are mistaken to think that there is a single, one-to-one, mapping
 between high-level code and machine code.

It's not mistaken.  Given a stable and formalized language definition,
there should only be continued optimization of the lexical and
procedural constructs into better machine code. In the case of an
interpreted language like Python (which I'll define as a language
which includes a layer of indirection between the user and the
machine, encouraging the nice benefits of interactivity), such
optimization isn't really apropos, because it's not the purpose of
python to be optimal to the machine as much as optimal to the
programmer.  In any case, while such optimization can continue over
time, they generally create new compiler releases to indicate such
changes.  The one-to-one mapping is held by the compiler.

Such determinism *defines* the machine, otherwise you might as well
get rid of the notion of computer *science*.  All else is error, akin
to cosmic rays or magic.  Unless the source code changes, all else
remaining equal, the machine code is supposed to be the same, no
matter how many times it is compiled.

[Only if you use the exact source, compiler, switches, etc] will the output 
be the same.
 And even that is not guaranteed.

Oh, and what would cause such non-determinism?

 Take, for example, the single high-level operation:

 sort(alist)

 What machine code will be executed? Obviously that will depend on the
 sort algorithm used. There are *dozens*. Here are just a few:

Well, since you didn't specify your programming language, you're then
merely stating an English construct.  As such, there can be no single
mapping from English into the machine, which is why there are so many
different languages and experiments that map your [English] concepts
into source code.

 Now sorting is pretty high level, but the same principle applies to even
 simple operations like multiply two numbers. There are often multiple
 algorithms for performing the operation, and even a single algorithm can
 often be implemented in slightly different ways. Expecting all compilers
 to generate the same machine code is simply naive.

You are both over-simplifying and complexifying things at once.  Pick one.

-- 
MarkJ
Tacoma, Washington
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-07 Thread Mark Janssen
 Yeah, and this is where two models of computation have been conflated,
 creating magical effects, confusing everybody.  I challenge you to get
 down to the machine code in scheme and formally describe how it's
 doing both.

 Which two models of computation are you talking about? And what magical 
 effects?

Well, I delineate all computation involving predicates (like lambda
calculus) between those using digital logic (like C).  These realms of
computation are so different, they are akin to mixing the complex
numbers with the real.  Yet hardly anyone points it out (I've
concluded that hardly anyone has ever noticed -- the Church-Turing
thesis has lulled the whole field into a shortcut in thinking which
actually doesn't pan out in practice).

 AFAIK there is no magic in computer science, although every sufficiently 
 advanced ...

Ha!  That's very good.  I'm glad you catch the spirit of my rant.
Any sufficiently advanced compiler can be substituted with magic to
the neophyte without a change in output.  A mini Liskov substitution.

-- 
MarkJ
Tacoma, Washington
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-05 Thread Antoon Pardon

Op 04-10-13 23:14, Terry Reedy schreef:

On 10/4/2013 6:46 AM, Ian Kelly wrote:


On the other hand, if you start optimizing every tail call and not
just the recursive functions, then I can see where that could start to
get problematic for debugging -- as arbitrary functions get removed
from the stack traces just because they happened to end in tail calls.


The idea of CPython space-optimizing tail calls when the call is made
has been suggested on python-ideas. Guido verified that it is
technically possible with the current bytecode interpreter but rejected
it because it would arbitrarily mess up stack traces.


What does this mean?

Does it mean that a naive implementation would arbitrarily mess up
stack traces and he wasn't interested in investigating more
sophisticated implementations?

Does it mean he just didn't like the idea a stack trace wouldn't be a
100% representation of the active call history?

Does it mean he investigated more sophisticated implementations but found
them to have serious shortcomings that couldn't be remedied easily?

--
Antoon Pardon
--
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-04 Thread 88888 Dihedral
On Thursday, October 3, 2013 5:33:27 AM UTC+8, Terry Reedy wrote:
 On 10/2/2013 8:31 AM, random...@fastmail.us wrote:
 
  On Tue, Oct 1, 2013, at 17:30, Terry Reedy wrote:
  Part of the reason that Python does not do tail call optimization is
  that turning tail recursion into while iteration is almost trivial, once
  you know the secret of the two easy steps. Here it is.
 
  That should be a reason it _does_ do it - saying people should rewrite
  their functions with loops means declaring that Python is not really a
  multi-paradigm programming language but rather rejects functional
  programming styles in favor of imperative ones.
 
 It is true that Python does not encourage the particular functional 
 style that is encouraged by auto optimization of tail recursion. A 
 different functional style would often use reduce (or fold) instead.
 
 Some other points I left out in a post of medium length yet brief for 
 the topic.
 
 1. If one starts with body recursion, as is typical, one must consider 
 commutativity (possibly associativity) of the 'inner' operator in any 
 conversion.
 
 2. Instead of converting to tail recursion, one might convert to while 
 iteration directly.
 
 3. One often 'polishes' the while form in a way that cannot be done 
 automatically.
 
 4. While loops are actually rare in idiomatic Python code. In Python, 
 for loops are the standard way to linearly process a collection. The 
 final version I gave for a factorial while loop,
 
 def fact_while(n):
     if n < 0 or n != int(n):
         raise ValueError('fact input {} is not a count'.format(n))
     fac = 1
     while n > 1:
         fac *= n
         n -= 1
     return fac
 
 should better be written with a for loop:

As I pointed out before, an accelerated version without the limit of
the stack depth for computing factorials can be obtained by storing a
list of products of primes first.

Of course integer divisions are required to transform the stack depth
problem into the size of the 32-64 bit heap space.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-04 Thread Alain Ketterlin
Mark Janssen dreamingforw...@gmail.com writes:

 def fact(n): return 1 if n <= 1 else n * fact(n-1)

 class Strange:
     ...
     def __le__(self, other):
         global fact
         fact = someotherfun  # this is binding
         return False

 You cannot prevent this in python.

 No, but you can't prevent a lot of bad moves in python.  What you just
 did there is a total bonehead (strange?) of an idea.

I think allowing rebinding of function names is extremely strange, even
though it's easier to implement. I tend to agree with you on the quality
of the idea, but optimization has to take this into account (and give
up, usually).

-- Alain.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-04 Thread Duncan Booth
Neil Cerutti ne...@norwich.edu wrote:
 On 2013-10-03, Duncan Booth duncan.booth@invalid.invalid wrote:
 It isn't hard to imagine adding a TAIL_CALL opcode to the
 interpreter that checks whether the function to be called is
 the same as the current function and if it is just updates the
 arguments and jumps to the start of the code block.
 
 Tail call optimization doesn't involve verification that the
 function is calling itself; you just have to verfify that the
 call is in tail position.

You misunderstood me. As usually implemented tail call recursion doesn't 
require verifying that the function is calling itself, but in Python the 
function could be rebound so a check would be a necessary protection 
against this unlikely situation. Consider this Python:

@some_decorator
def fact(n, acc=1):
    if n <= 1:
        return acc
    return fact(n-1, n*acc)

Is that tail recursion or not?

You cannot tell whether or not that is tail recursion unless you know the 
definition of 'some_decorator' and even then the answer may vary from run 
to run.
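
For instance, a some_decorator along these lines (a hypothetical
sketch) rebinds the name fact to a wrapper, so the fact(n-1, n*acc) in
the body is a tail call to the wrapper rather than a direct call to the
original function:

def some_decorator(func):
    def wrapper(n, acc=1):
        wrapper.calls = getattr(wrapper, 'calls', 0) + 1   # invented bookkeeping
        return func(n, acc)
    return wrapper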

-- 
Duncan Booth http://kupuguy.blogspot.com
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-04 Thread Ian Kelly
On Fri, Oct 4, 2013 at 4:16 AM, Duncan Booth
duncan.booth@invalid.invalid wrote:
 Neil Cerutti ne...@norwich.edu wrote:
 On 2013-10-03, Duncan Booth duncan.booth@invalid.invalid wrote:
 It isn't hard to imagine adding a TAIL_CALL opcode to the
 interpreter that checks whether the function to be called is
 the same as the current function and if it is just updates the
 arguments and jumps to the start of the code block.

 Tail call optimization doesn't involve verification that the
 function is calling itself; you just have to verfify that the
 call is in tail position.

 You misunderstood me. As usually implemented tail call recursion doesn't
 require verifying that the function is calling itself, but in Python the
 function could be rebound so a check would be a necessary protection
 against this unlikely situation. Consider this Python:

 @some_decorator
 def fact(n, acc=1):
     if n <= 1:
         return acc
     return fact(n-1, n*acc)

 Is that tail recursion or not?

 You cannot tell whether or not that is tail recursion unless you know the
 definition of 'some_decorator' and even then the answer may vary from run
 to run.

There is no doubt that it's a tail call.  Whether it is recursion is
irrelevant to optimizing it.  The reason we talk about tail call
recursion specifically is because the recursive case is the one that
makes the optimization worthwhile, not because the recursion is
necessary to perform the optimization.

It is possible that fact is recursive but that some_decorator adds
additional stack frames that are not tail calls and not optimizable.
If this were the case, then the recursion would still eventually hit
the stack limit, but there would still be benefit in optimizing the
fact frames, as it would increase the allowable depth before the
stack limit is reached.  So I see no reason to check the identity of
the called function before optimizing the tail call.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-04 Thread Ian Kelly
On Fri, Oct 4, 2013 at 4:41 AM, Ian Kelly ian.g.ke...@gmail.com wrote:
 There is no doubt that it's a tail call.  Whether it is recursion is
 irrelevant to optimizing it.  The reason we talk about tail call
 recursion specifically is because the recursive case is the one that
 makes the optimization worthwhile, not because the recursion is
 necessary to perform the optimization.

 It is possible that fact is recursive but that some_decorator adds
 additional stack frames that are not tail calls and not optimizable.
 If this were the case, then the recursion would still eventually hit
 the stack limit, but there would still be benefit in optimizing the
 fact frames, as it would increase the allowable depth before the
 stack limit is reached.  So I see no reason to check the identity of
 the called function before optimizing the tail call.

On the other hand, if you start optimizing every tail call and not
just the recursive functions, then I can see where that could start to
get problematic for debugging -- as arbitrary functions get removed
from the stack traces just because they happened to end in tail calls.
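
A small sketch of what that would mean for a non-recursive tail call
(load_config, parse and read are invented names):

import traceback

def read(path):
    return 'xxx'

def parse(text):
    raise ValueError('bad config')

def load_config(path):
    return parse(read(path))   # tail call: with tail call optimization,
                               # load_config's frame would already be gone

try:
    load_config('app.cfg')
except ValueError:
    traceback.print_exc()      # today's CPython still shows load_config here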
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-04 Thread rusi
On Thursday, October 3, 2013 10:57:48 PM UTC+5:30, Ravi Sahni wrote:
 On Wed, Oct 2, 2013 at 10:46 AM, rusi wrote:
  4. There is a whole spectrum of such optimizations --
  4a eg a single-call structural recursion example, does not need to push 
  return address on the stack. It only needs to store the recursion depth:
 
  If zero, jump to the outside return address; if > 0, jump to the
  internal return address.
 
  4b An example like quicksort in which one call is a tail call can be 
  optimized with your optimization and the other, inner one with 4a above
 
 
 I am interested in studying more this 'whole spectrum of optimizations'
 Any further pointers?

Mmm… Bummer…

There was a book having these -- at least 5-6 different optimizations of which 
tail-call was the most obvious. Maybe the author was McGettrick -- I don't 
remember for sure... Probably 20 years now!

I'll try and re-remember them and post.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-04 Thread Steven D'Aprano
On Fri, 04 Oct 2013 11:49:26 +0200, Alain Ketterlin wrote:

 I think allowing rebinding of function names is extremely strange,

It's not, it's quite common. Functions in Python are first-class values, 
and we can do things like this:

from somelibrary import somethingwithalonglongname as shortname

def inorder(tree, op=print):
    # Walk the tree in infix order, doing op to each node.
    if tree is None:
        return
    inorder(tree.left, op)
    op(tree.payload)
    inorder(tree.right, op)

_len = len
def len(obj):
do_something_first()
return _len(obj)


Now, the first two aren't the same sort of function-rebinding that you're 
talking about, or that were shown in your post, but the third is. What is 
unusual though is rebinding a function from within itself:

def func(arg):
global func
do_this(arg)
def func(arg):
do_that(arg)


but it's legal and it sometimes can be useful, although it counts as 
clever code, possibly too clever.



-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-04 Thread Jussi Piitulainen
Duncan Booth writes:

 Neil Cerutti wrote:
  On 2013-10-03, Duncan Booth duncan.booth@invalid.invalid wrote:
  It isn't hard to imagine adding a TAIL_CALL opcode to the
  interpreter that checks whether the function to be called is
  the same as the current function and if it is just updates the
  arguments and jumps to the start of the code block.
  
  Tail call optimization doesn't involve verification that the
  function is calling itself; you just have to verify that the
  call is in tail position.
 
 You misunderstood me. As usually implemented tail call recursion
 doesn't require verifying that the function is calling itself, but
 in Python the function could be rebound so a check would be a
 necessary protection against this unlikely situation. Consider this
 Python:
 
 @some_decorator
 def fact(n, acc=1):
      if n <= 1:
          return acc
      return fact(n-1, n*acc)
 
 Is that tail recursion or not?
 
 You cannot tell whether or not that is tail recursion unless you
 know the definition of 'some_decorator' and even then the answer may
 vary from run to run.

Ignoring the decorator, fact(n-1, n*acc) is in a tail position because
it follows the keyword return. It doesn't matter what function fact is
at the time: the remaining action in the caller is to return.

Tail positions, with respect to enclosing code, are defined
syntactically.
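
Concretely (f, g and h here are just illustrative names):

def f(x):
    return x * 2

def g(x):
    return f(x)      # tail position: the call's value is g's entire result

def h(x):
    return 1 + f(x)  # not a tail call: h still has an addition pending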
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-04 Thread Duncan Booth
Ian Kelly ian.g.ke...@gmail.com wrote:

 On Fri, Oct 4, 2013 at 4:41 AM, Ian Kelly ian.g.ke...@gmail.com wrote:
 There is no doubt that it's a tail call.  Whether it is recursion is
 irrelevant to optimizing it.  The reason we talk about tail call
 recursion specifically is because the recursive case is the one that
 makes the optimization worthwhile, not because the recursion is
 necessary to perform the optimization.

 It is possible that fact is recursive but that some_decorator adds
 additional stack frames that are not tail calls and not optimizable.
 If this were the case, then the recursion would still eventually hit
 the stack limit, but there would still be benefit in optimizing the
 fact frames, as it would increase the allowable depth before the
 stack limit is reached.  So I see no reason to check the identity of
 the called function before optimizing the tail call.
 
 On the other hand, if you start optimizing every tail call and not
 just the recursive functions, then I can see where that could start to
 get problematic for debugging -- as arbitrary functions get removed
 from the stack traces just because they happened to end in tail calls.

Quite so. Losing some stack frames in the traceback because tail recursion 
was optimised is probably no big deal. Losing arbitrary stack frames 
because of a more widespread tail call optimisation would not IMHO fit with 
the spirit of Python except possibly as an optional optimisation that was 
off by default.


-- 
Duncan Booth http://kupuguy.blogspot.com
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-04 Thread Terry Reedy

On 10/4/2013 6:46 AM, Ian Kelly wrote:


On the other hand, if you start optimizing every tail call and not
just the recursive functions, then I can see where that could start to
get problematic for debugging -- as arbitrary functions get removed
from the stack traces just because they happened to end in tail calls.


The idea of CPython space-optimizing tail calls when the call is made 
has been suggested on python-ideas. Guido verified that it is 
technically possible with the current bytecode interpreter but rejected 
it because it would arbitrarily mess up stack traces.


--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-04 Thread Terry Reedy

On 10/4/2013 5:49 AM, Alain Ketterlin wrote:


I think allowing rebinding of function names is extremely strange,


Steven already countered the 'is extremely strange' part by showing that 
such rebinding is common, generally useful, and only occasionally dodgy 
and a candidate for being blocked.


I want to consider here what it would mean to concretely implement the 
abstract notion 'disallow rebinding of function names' and show what 
would be behind calling the idea 'not feasible'.


1. A 'name' is not a 'function name' unless the name is bound, at 
runtime, to a 'function'.


2. A 'function' in Python is not just one class but any callable object 
-- anything with a __call__ method. That comprises an unbounded set.


3. Python does not mandate how namespaces are implemented. CPython uses 
both dicts and, for function local namespaces, internal C arrays. So 
'names' in code can become either string keys for dicts or integer 
indexes for arrays.


4. Rebinding can be explicit ('='), implicit ('import', 'class', 'def'), 
*or hidden* (globals()['a'] = 1; ob.__dict__['a'] = 1). The 'hidden' 
methods are intentional, as they are sometimes needed*.  In CPython, 
these forms remain distinct in the byte code, but it could be 
otherwise. The point is that it may or may not be possible for the 
interpreter to even recognize a 'rebinding' in order to apply any 
rebinding-blocking rule.


* There is no trick (that I know of) for hidden rebinding of function 
locals and nonlocals (short of using ctypes), but I am not sure that 
this is a language guarantee. However, I am sure that the absence is 
intentional.
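
To make point 4 concrete, here is a deliberately silly sketch of a hidden 
rebinding that a compiler scanning for 'fact = ...' statements would never see:

def fact(n, acc=1):
    return acc if n <= 1 else fact(n - 1, n * acc)

globals()['fact'] = len  # hidden: no assignment to 'fact' appears anywhere
print(fact([1, 2, 3]))   # prints 3; the original function is now unreachable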


 even though it's easier to implement.

No kidding.

--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-03 Thread random832
On Wed, Oct 2, 2013, at 17:33, Terry Reedy wrote:
 5. Conversion of apparent recursion to iteration assumes that the 
 function really is intended to be recursive.  This assumption is the 
 basis for replacing the recursive call with assignment and an implied 
 internal goto. The programmer can determine that this semantic change is 
 correct; the compiler should not assume that. (Because of Python's late 
 name-binding semantics, recursive *intent* is better expressed in Python 
 with iterative syntax than function call syntax. )

Speaking of assumptions, I would almost say that we should make the
assumption that operators (other than the __i family, and
setitem/setattr/etc) are not intended to have visible side effects. This
would open a _huge_ field of potential optimizations - including that
this would no longer be a semantic change (since relying on one of the
operators being allowed to change the binding of fact would no longer be
guaranteed).
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-03 Thread random832
On Wed, Oct 2, 2013, at 21:46, MRAB wrote:
  The difference is that a tuple can be reused, so it makes sense for the
  compiler to produce it as a const.  (Much like the interning of small
  integers)  The list, however, would always have to be copied from the
  compile-time object.  So that object itself would be a phantom, used
  only as the template with which the list is to be made.
 
 The key point here is that the tuple is immutable, including its items.

Hey, while we're on the subject, can we talk about frozen(set|dict)
literals again? I really don't understand why this discussion fizzles
out whenever it's brought up on python-ideas.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-03 Thread random832
On Wed, Oct 2, 2013, at 22:34, Steven D'Aprano wrote:
 You are both assuming that LOAD_CONST will re-use the same tuple 
 (1, 2, 3) in multiple places. But that's not the case, as a simple test 
 will show you:

>>> def f():
...     return (1, 2, 3)
>>> f() is f()
True

It does, in fact, re-use it when it is _the same LOAD_CONST
instruction_.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-03 Thread Duncan Booth
Alain Ketterlin al...@dpt-info.u-strasbg.fr wrote:

 Terry Reedy tjre...@udel.edu writes:
 
 Part of the reason that Python does not do tail call optimization is
 that turning tail recursion into while iteration is almost trivial,
 once you know the secret of the two easy steps. Here it is.

 Assume that you have already done the work of turning a body recursive
 ('not tail recursive') form like

  def fact(n): return 1 if n <= 1 else n * fact(n-1)

 into a tail recursion like
 [...]
 
  How do you know that either <= or * didn't rebind the name fact to
  something else? I think that's the main reason why Python cannot apply
  any procedural optimization (even things like inlining are impossible,
  or possible only under very conservative assumptions that make it
  worthless).
 

That isn't actually sufficient reason.

It isn't hard to imagine adding a TAIL_CALL opcode to the interpreter that 
checks whether the function to be called is the same as the current 
function and if it is just updates the arguments and jumps to the start of 
the code block. If the function doesn't match it would simply fall through 
to doing the same as the current CALL_FUNCTION opcode.

There is an issue that you would lose stack frames in any traceback. Also 
it means code for this modified Python wouldn't run on other non-modified 
interpreters, but it is at least theoretically possible without breaking 
Python's assumptions.
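
A pure-Python model of that check is the well-known trampoline-decorator 
recipe. A sketch (it assumes the recursive call really is in tail 
position, and all the names are mine):

import sys

class _TailCall(Exception):
    def __init__(self, args):
        Exception.__init__(self)
        self.call_args = args

def tail_call_optimized(func):
    def wrapper(*args):
        frame = sys._getframe()
        # If our grandparent frame is this same wrapper, we are a
        # recursive call: unwind to the outer wrapper and let it loop.
        if (frame.f_back and frame.f_back.f_back
                and frame.f_back.f_back.f_code is frame.f_code):
            raise _TailCall(args)
        while True:
            try:
                return func(*args)
            except _TailCall as exc:
                args = exc.call_args  # update the arguments, 'jump' to the top
    return wrapper

@tail_call_optimized
def fact(n, acc=1):
    if n <= 1:
        return acc
    return fact(n - 1, n * acc)  # must really be a tail call

print(fact(5000))  # would exceed the default stack without the decorator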

-- 
Duncan Booth http://kupuguy.blogspot.com
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-03 Thread Neil Cerutti
On 2013-10-03, Duncan Booth duncan.booth@invalid.invalid wrote:
  How do you know that either <= or * didn't rebind the name
  fact to something else? I think that's the main reason why
  Python cannot apply any procedural optimization (even things
  like inlining are impossible, or possible only under very
  conservative assumptions that make it worthless).

 That isn't actually sufficient reason.

 It isn't hard to imagine adding a TAIL_CALL opcode to the
 interpreter that checks whether the function to be called is
 the same as the current function and if it is just updates the
 arguments and jumps to the start of the code block.

Tail call optimization doesn't involve verification that the
function is calling itself; you just have to verify that the
call is in tail position.

The current frame would be removed from the stack and
replaced with the one that results from calling the function.

 There is an issue that you would lose stack frames in any
 traceback.

I don't think that's a major issue. Frames that got replaced
would be quite uninteresting. 

 Also it means code for this modified Python wouldn't run on
 other non-modified interpreters, but it is at least
 theoretically possible without breaking Python's assumptions.

In any case it's so easy to implement yourself I'm not sure
there's any point.

-- 
Neil Cerutti
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-03 Thread Ravi Sahni
On Wed, Oct 2, 2013 at 10:46 AM, rusi wrote:
 4. There is a whole spectrum of such optimizations --
 4a eg a single-call structural recursion example does not need to push 
 a return address on the stack. It only needs to store the recursion depth:

 If zero, jump to the outside return address; if > 0, jump to the internal return address

 4b An example like quicksort, in which one call is a tail call, can be 
 optimized with your optimization and the other, inner one with 4a above

I am interested in studying more this 'whole spectrum of optimizations'
Any further pointers?

Thanks

-- 
Ravi
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-03 Thread Terry Reedy

On 10/2/2013 10:34 PM, Steven D'Aprano wrote:


You are both assuming that LOAD_CONST will re-use the same tuple
(1, 2, 3) in multiple places.


No I did not. To save tuple creation time, a pre-compiled tuple is 
reused when its display expression is re-executed. If I had been 
interested in multiple occurrences of the same display, I would have tested.


>>> def f():
...     a = 1,'a',"",'bbb'; x = 1,'a',"",'bbb'
...     b = 1,'a',"",'bbb'
...     c = 'a'
...     d = "" + ""

>>> f.__code__.co_consts
(None, 1, 'a', '', 'bbb', (1, 'a', '', 'bbb'), (1, 'a', '', 
'bbb'), '')


Empirically, ints and strings are checked for prior occurrence in 
co_consts before being added. I suspect None is too, but will not assume.


How is the issue of multiple occurrences of constants relevant to my 
topic statement? Let me quote it, with misspellings corrected.


CPython core developers have been very conservative about what
transformations they put into the compiler. [misspellings corrected]

Aha! Your example and that above reinforce this statement. Equal tuples 
are not necessarily identical and cannot necessarily be substituted for 
each other in all code.


>>> (1, 2) == (1.0, 2.0)
True

But replacing (1.0, 2.0) with (1, 2), by only storing the latter, would 
not be valid without some possibly tricky context analysis. The same is 
true for equal numbers, and the optimizer pays attention.


>>> def g():
...     a = 1
...     b = 1.0

>>> g.__code__.co_consts
(None, 1, 1.0)

For numbers, the proper check is relatively easy:

for item in const_list:
    if type(x) is type(item) and x == item:
        break  # identical item already in working list
else:
    const_list.append(x)

Writing a valid recursive function to do the same for tuples, and 
proving its validity to enough other core developers to make it 
accepted, is much harder and hardly seems worthwhile.
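
Just to indicate the shape of the problem, a sketch of such a check (not 
what CPython actually does):

def same_const(x, y):
    # Interchangeable as code constants: equal and of the same type,
    # checked recursively through tuples.
    if type(x) is not type(y):
        return False
    if type(x) is tuple:
        return len(x) == len(y) and all(same_const(a, b)
                                        for a, b in zip(x, y))
    # Still incomplete: 0.0 and -0.0 are equal floats of the same type
    # but are not interchangeable as constants.
    return x == y

print(same_const((1, 2), (1.0, 2.0)))  # False, though (1, 2) == (1.0, 2.0)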


It would probably be easier to compare the parsed AST subtrees for the 
displays rather than the objects created from them.


---
  py def f():
  ...     a = (1, 2, 3)
  ...     b = (1, 2, 3)
[snip]
 So even though both a and b are created by the same LOAD_CONST 
byte-code,


I am not sure what you mean by 'created'. LOAD_CONST puts the address of 
an object in co_consts on the top of the virtual machine stack.


 the object is not re-used (although it could be!)

It can be reused, in this case, because the constant displays are 
identical, as defined above.


 and two distinct tuples are created.

Because it is not easy to make the compiler see that only one is needed.

--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


Literal syntax for frozenset, frozendict (was: Tail recursion to while iteration in 2 easy steps)

2013-10-03 Thread Ben Finney
random...@fastmail.us writes:

 Hey, while we're on the subject, can we talk about frozen(set|dict)
 literals again? I really don't understand why this discussion fizzles
 out whenever it's brought up on python-ideas.

Can you start us off by searching for previous threads discussing it,
and summarise the arguments here?

-- 
 \ “If you ever catch on fire, try to avoid seeing yourself in the |
  `\mirror, because I bet that's what REALLY throws you into a |
_o__) panic.” —Jack Handey |
Ben Finney

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-03 Thread Steven D'Aprano
On Wed, 02 Oct 2013 22:41:00 -0400, Terry Reedy wrote:

 I am referring to constant-value objects included in the code object.
  >>> def f(): return (1,2,3)
 
  >>> f.__code__.co_consts
  (None, 1, 2, 3, (1, 2, 3))

Okay, now that's more clear. I didn't understand what you meant before. 
So long as we understand we're talking about a CPython implementation 
detail.


 None is present as the default return, even if not needed for a
 particular function. Every literal is also tossed in, whether needed or
 not.
 
 which in Python 3.3 understands tuples like (1, 2, 3), but not lists.
 
 The byte-code does not understand anything about types. LOAD_CONST n
 simply loads the (n+1)st object in .co_consts onto the top of the stack.

Right, this is more clear to me now.

As I understand it, the contents of code objects are implementation 
details, not required for implementations. For example, IronPython 
provides a co_consts attribute, but it only contains None. Jython doesn't 
provide a co_consts attribute at all. So while it's interesting to 
discuss what CPython does, we should not be fooled into thinking that 
this is guaranteed by every Python.

I can imagine a Python implementation that compiles constants into some 
opaque object like __closure__ or co_code. In that case, it could treat 
the list in for i in [1, 2, 3]: ... as a constant too, since there is 
no fear that some other object could reach into the opaque object and 
change it.

Of course, that would likely be a lot of effort for very little benefit. 
The compiler would have to be smart enough to see that the list was never 
modified or returned. Seems like a lot of trouble to go to just to save 
creating a small list.

More likely would be implementations that didn't re-use constants, than 
implementations that aggressively re-used everything possible.


-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-03 Thread Steven D'Aprano
On Thu, 03 Oct 2013 10:09:25 -0400, random832 wrote:

 Speaking of assumptions, I would almost say that we should make the
 assumption that operators (other than the __i family, and
 setitem/setattr/etc) are not intended to have visible side effects. This
 would open a _huge_ field of potential optimizations - including that
 this would no longer be a semantic change (since relying on one of the
 operators being allowed to change the binding of fact would no longer be
 guaranteed).

I like the idea of such optimizations, but I'm afraid that your last 
sentence seems a bit screwy to me. You seem to be saying, if we make this 
major semantic change to Python, we can then retroactively declare that 
it's not a semantic change at all, since under the new rules, it's no 
different from the new rules.

Anyway... I think that it's something worth investigating, but it's not 
as straightforward as you might hope. There almost certainly is code out 
in the world that uses operator overloading for DSLs. For instance, I've 
played around with something vaguely like this DSL:

chain = Node('spam') 
chain >> 'eggs'
chain >> 'ham'
chain.head <= 'cheese'

where I read >> as appending and <= as inserting. I was never quite happy 
with the syntax, so my experiments never went anywhere, but I expect that 
some people, somewhere, have. This is a legitimate way to use Python, and 
changing the semantics to prohibit it would be a Bad Thing.

However, I can imagine something like a __future__ directive that 
enables, or disables, such optimizations on a per-module basis. In Python 
3, it would have to be disabled by default. Python 4000 could make the 
optimizations enabled by default and use the __future__ machinery to 
disable it.


-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-02 Thread Alain Ketterlin
Terry Reedy tjre...@udel.edu writes:

 Part of the reason that Python does not do tail call optimization is
 that turning tail recursion into while iteration is almost trivial,
 once you know the secret of the two easy steps. Here it is.

 Assume that you have already done the work of turning a body recursive
 ('not tail recursive') form like

 def fact(n): return 1 if n <= 1 else n * fact(n-1)

 into a tail recursion like
[...]

How do you know that either <= or * didn't rebind the name fact to
something else? I think that's the main reason why Python cannot apply
any procedural optimization (even things like inlining are impossible,
or possible only under very conservative assumptions that make it
worthless).

-- Alain.

P/S: don't take me wrong; your explanation is excellent (and very useful
to python programmers). What I say is that it relies on assumptions that
do not apply to python.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-02 Thread Alain Ketterlin
rusi rustompm...@gmail.com writes:

 On Wednesday, October 2, 2013 3:00:41 AM UTC+5:30, Terry Reedy wrote:
 Part of the reason that Python does not do tail call optimization is 
 that turning tail recursion into while iteration is almost trivial, once 
 you know the secret of the two easy steps. Here it is.

 What happens for mutual tail recursive code like this

 def isodd(x) : return False if x==0 else iseven(x-1)
 def iseven(x): return True if x==0 else isodd(x-1)

It takes one step of inlining to make Terry's technique applicable.

(Actually, out of curiosity, I tried this with gcc 4.6.3: the compiler
does 16 levels of inlining, plus tail call optimization. The final code
has no call.)
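
Spelled out in Python, that one inlining step (my reconstruction) leaves a 
single self-tail-recursive function, to which Terry's two-step rewrite then 
applies:

def isodd(x):
    # iseven(x-1) inlined into isodd:
    if x == 0:
        return False
    x -= 1               # now at iseven's argument
    if x == 0:
        return True
    return isodd(x - 1)  # a direct tail call to isodd itself

# ... and after the tail-recursion-to-while rewrite:
def isodd_iter(x):
    while True:
        if x == 0:
            return False
        x -= 1
        if x == 0:
            return True
        x -= 1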

-- Alain.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-02 Thread random832
On Tue, Oct 1, 2013, at 17:30, Terry Reedy wrote:
 Part of the reason that Python does not do tail call optimization is 
 that turning tail recursion into while iteration is almost trivial, once 
 you know the secret of the two easy steps. Here it is.

That should be a reason it _does_ do it - saying people should rewrite
their functions with loops means declaring that Python is not really a
multi-paradigm programming language but rather rejects functional
programming styles in favor of imperative ones.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-02 Thread rusi
On Wednesday, October 2, 2013 1:53:46 PM UTC+5:30, Alain Ketterlin wrote:
 rusi writes:
 
 
 
  On Wednesday, October 2, 2013 3:00:41 AM UTC+5:30, Terry Reedy wrote:
  Part of the reason that Python does not do tail call optimization is 
  that turning tail recursion into while iteration is almost trivial, once 
  you know the secret of the two easy steps. Here it is.
 
 
  What happens for mutual tail recursive code like this
 
  def isodd(x) : return False if x==0 else iseven(x-1)
  def iseven(x): return True if x==0 else isodd(x-1)
 
 
 It takes one step of inlining to make Terry's technique applicable.

Well I did say my example was trivial!
For a more nontrivial case consider the delta function of an FSM.

The cleanest way of doing it in C is to make one function with a goto label for 
each state and a goto for each transition.

The same thing can be done in a TR-optimized language like Scheme by making 
each state into a function and each transition into a TR-call.
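
In Python, which does no TR optimization, the same shape survives if a small 
driver loop plays the role of the tail calls. A sketch, with a two-state 
machine accepting strings that contain an even number of 'a's (all names mine):

def even(s, i):
    if i == len(s):
        return True
    return (odd if s[i] == 'a' else even), i + 1

def odd(s, i):
    if i == len(s):
        return False
    return (even if s[i] == 'a' else odd), i + 1

def run(s):
    state, i = even, 0
    while True:
        result = state(s, i)
        if isinstance(result, bool):  # a final answer, not a transition
            return result
        state, i = result

print(run("abab"))  # True: two 'a's
print(run("aaa"))   # False: three 'a's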

 
 
 
 (Actually, out of curiosity, I tried this with gcc 4.6.3: the compiler
 does 16 levels of inlining, plus tail call optimization. The final code
 has no call.)

Good to know. 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-02 Thread Steven D'Aprano
On Wed, 02 Oct 2013 08:31:25 -0400, random832 wrote:

 On Tue, Oct 1, 2013, at 17:30, Terry Reedy wrote:
 Part of the reason that Python does not do tail call optimization is
 that turning tail recursion into while iteration is almost trivial,
 once you know the secret of the two easy steps. Here it is.
 
 That should be a reason it _does_ do it - saying people should rewrite
 their functions with loops means declaring that Python is not really a
 multi-paradigm programming language but rather rejects functional
 programming styles in favor of imperative ones.

Python is not as aggressively functional as (say) Haskell, but it is 
surely an exaggeration to suggest that the failure to include tail call 
optimization means that Python rejects functional programming styles. 
You can still write your code in a functional style, it just won't be as 
heavily optimized.


-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-02 Thread random832
On Wed, Oct 2, 2013, at 9:32, Steven D'Aprano wrote:
 Python is not as aggressively functional as (say) Haskell, but it is 
 surely an exaggeration to suggest that the failure to include tail call 
 optimization means that Python rejects functional programming styles. 
  You can still write your code in a functional style, it just won't be as 
  heavily optimized.

IMO, tail call optimization is essential to writing in a functional
style, since otherwise you end up with a stack overflow error on any
input above a trivial size.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-02 Thread Mark Janssen
 def fact(n): return 1 if n <= 1 else n * fact(n-1)

 into a tail recursion like
 [...]

 How do you know that either <= or * didn't rebind the name fact to
 something else? I think that's the main reason why Python cannot apply
 any procedural optimization (even things like inlining are impossible,
 or possible only under very conservative assumptions that make it
 worthless).

It's called operator precedence.

-- 
MarkJ
Tacoma, Washington
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-02 Thread Steven D'Aprano
On Wed, 02 Oct 2013 10:04:49 -0400, random832 wrote:

 On Wed, Oct 2, 2013, at 9:32, Steven D'Aprano wrote:
 Python is not as aggressively functional as (say) Haskell, but it is
 surely an exaggeration to suggest that the failure to include tail call
 optimization means that Python rejects functional programming styles.
  You can still write your code in a functional style, it just won't be as
  heavily optimized.
 
 IMO, tail call optimization is essential to writing in a functional
 style, since otherwise you end up with a stack overflow error on any
 input above a trivial size.

Hardly essential. Here's some functional code that doesn't rely on tail-
call optimization for efficiency:

result = map(some_function, some_iterator)

In Python 3 that even uses lazy evaluation, for free.

Or you can increase the amount of memory available in the stack. Another 
alternative would be for the compiler to convert your recursive code into 
iterative code. Well, that wouldn't work for Python.
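
For the record, raising those limits in CPython looks something like this 
(the sizes here are arbitrary):

import sys
import threading

sys.setrecursionlimit(100000)  # raise the interpreter's frame limit

def deep(n):
    return 0 if n == 0 else 1 + deep(n - 1)

# The C-level stack must be big enough too; CPython lets you request a
# larger one for new threads.
threading.stack_size(64 * 1024 * 1024)
worker = threading.Thread(target=lambda: print(deep(50000)))
worker.start()
worker.join()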

Not all functional code is recursive, and not all recursive functional 
code is tail-recursive. So while Python's lack of tail-call optimization 
is a failure of Best Practice functional *implementation*, it doesn't 
prevent you from writing in a functional *style*.

Ultimately, Python does not pretend to be a pure-functional language. If 
you want Haskell, you know where to get it. Python steals a few good 
ideas from functional programming, like comprehensions and lazy-evaluated 
iterators, provides a few functional programming constructs like map, 
reduce, and filter, and gives you first-class functions as values. You 
can write code in a functional style in Python, but don't mistake that 
for Python being a functional language.

-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-02 Thread Mark Janssen
On Wed, Oct 2, 2013 at 1:23 PM, Alain Ketterlin al...@unistra.fr wrote:
 On 10/02/2013 08:59 PM, Mark Janssen wrote:

  def fact(n): return 1 if n <= 1 else n * fact(n-1)
 
  How do you know that either <= or * didn't rebind the name fact to
  something else? I think that's the main reason why Python cannot apply
  any procedural optimization

 It's called operator precedence.

 Operator precedence is totally irrelevant here, you misunderstand what
 bind means.

 Imagine that you call fact(x) where x is an instance of the following class:

  class Strange:
      ...
      def __le__(self, other):
          global fact
          fact = someotherfun  # this is binding
          return False
 
  i.e., executing n <= 1 calls __le__, which rebinds the name fact to
  something else. Then, there is no recursive call at all. At the time
  fact(x-1) is executed, fact is not the same function any more.

 You cannot prevent this in python.


No, but you can't prevent a lot of bad moves in python.  What you just
did there is a total bonehead (strange?) of an idea.

-- 
MarkJ
Tacoma, Washington
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-02 Thread Terry Reedy

On 10/2/2013 8:31 AM, random...@fastmail.us wrote:

On Tue, Oct 1, 2013, at 17:30, Terry Reedy wrote:

Part of the reason that Python does not do tail call optimization is
that turning tail recursion into while iteration is almost trivial, once
you know the secret of the two easy steps. Here it is.


That should be a reason it _does_ do it - saying people should rewrite
their functions with loops means declaring that Python is not really a
multi-paradigm programming language but rather rejects functional
programming styles in favor of imperative ones.


It is true that Python does not encourage the particular functional 
style that is encouraged by auto optimization of tail recursion. A 
different functional style would often use reduce (or fold) instead.
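
For instance, the factorial that keeps appearing in this thread, in that style:

from functools import reduce
from operator import mul

def fact(n):
    return reduce(mul, range(2, n + 1), 1)

print(fact(5))  # 120, with no recursion and no deep stack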


Some other points I left out in a post of medium length yet brief for 
the topic.


1. If one starts with body recursion, as is typical, one must consider 
commutativity (possibly associativity) of the 'inner' operator in any 
conversion.


2. Instead of converting to tail recursion, one might convert to while 
iteration directly.


3. One often 'polishes' the while form in a way that cannot be done 
automatically.


4. While loops are actually rare in idiomatic Python code. In Python, 
for loops are the standard way to linearly process a collection. The 
final version I gave for a factorial while loop,


def fact_while(n):
  if n  0 or n != int(n):
raise ValueError('fact input {} is not a count'.format(n))
  fac = 1
  while n  1:
fac *= n
n -= 1
  return fac

should better be written with a for loop:

def fact_for(n):
    if n < 0 or n != int(n):
        raise ValueError('fact input {} is not a count'.format(n))
    fac = 1
    for i in range(2, n + 1):
        fac *= i
    return fac

When the input to a function is an iterable instead of n, the iterable 
should be part of the for loop source expression. For loops are 
integrated with Python's iterator protocol in much the same way that 
recursion is integrated with list first:rest pattern matching in some 
functional languages. It is true that Python encourages the use of for 
loops and for clauses in comprehensions (a functional construct).


5. Conversion of apparent recursion to iteration assumes that the 
function really is intended to be recursive.  This assumption is the 
basis for replacing the recursive call with assignment and an implied 
internal goto. The programmer can determine that this semantic change is 
correct; the compiler should not assume that. (Because of Python's late 
name-binding semantics, recursive *intent* is better expressed in Python 
with iterative syntax than function call syntax. )


--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-02 Thread Mark Janssen
 Part of the reason that Python does not do tail call optimization is
 that turning tail recursion into while iteration is almost trivial, once
 you know the secret of the two easy steps. Here it is.

 That should be a reason it _does_ do it - saying people should rewrite
 their functions with loops means declaring that Python is not really a
 multi-paradigm programming language but rather rejects functional
 programming styles in favor of imperative ones.

Yes, but that's fine.  A language that includes every programming
paradigm would be a total mess, if it were even possible.  Python has
functional programming where it does not conflict with its overall
design.  The only place I find that this is not the case is with
lambda, but that is now adequately fixed with the addition of the
ternary operator.

-- 
MarkJ
Tacoma, Washington
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-02 Thread Terry Reedy

On 10/2/2013 4:17 AM, Alain Ketterlin wrote:

Terry Reedy tjre...@udel.edu writes:


Part of the reason that Python does not do tail call optimization is
that turning tail recursion into while iteration is almost trivial,
once you know the secret of the two easy steps. Here it is.

Assume that you have already done the work of turning a body recursive
('not tail recursive') form like

def fact(n): return 1 if n <= 1 else n * fact(n-1)


As I said in response to randomxxx, even this 0th step (recursion to 
recursion transformation) requires assumptions or careful reasoning 
about the properties of the operations.



into a tail recursion like

[...]

How do you know that either <= or * didn't rebind the name fact to
something else? I think that's the main reason why Python cannot apply
any procedural optimization (even things like inlining are impossible,
or possible only under very conservative assumptions that make it
worthless).

-- Alain.

P/S: don't take me wrong; your explanation is excellent (and very useful
to python programmers). What I say is that it relies on assumptions that
do not apply to python.


Program transformations (usually intended to be optimizations), whether 
automatic or manual, are infamous for being buggy -- not always correct, 
because of hidden assumptions that are not always true.


CPython core developers have be very conservative about what 
tranformations they put into the compiler. (1,2,3) can always be 
compiled as a constant, and so it is. [1,2,3] might or might not be a 
constant, depending on the context, and no attempt is made to analyze that.


--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-02 Thread Steven D'Aprano
On Wed, 02 Oct 2013 18:17:06 -0400, Terry Reedy wrote:

 CPython core developers have be very conservative about what
 tranformations they put into the compiler. (1,2,3) can always be
 compiled as a constant, and so it is. [1,2,3] might or might not be a
 constant, depending on the context, and no attempt is made to analyze
 that.

The first sentence of this is correct. The next two don't quite make 
sense to me, since I don't understand what you mean by constant in this 
context. I *think* you might be referring to the LOAD_CONST byte-code, 
which in Python 3.3 understands tuples like (1, 2, 3), but not lists. So 
a literal (1, 2, 3) gets created at compile-time with a single LOAD_CONST 
call:

py from dis import dis
py dis(compile("x = (1, 2, 3)", '', 'exec'))
  1   0 LOAD_CONST   4 ((1, 2, 3))
  3 STORE_NAME   0 (x)
  6 LOAD_CONST   3 (None)
  9 RETURN_VALUE


while a literal [1, 2, 3] does not:


py dis(compile("x = [1, 2, 3]", '', 'exec'))
  1   0 LOAD_CONST   0 (1)
  3 LOAD_CONST   1 (2)
  6 LOAD_CONST   2 (3)
  9 BUILD_LIST   3
 12 STORE_NAME   0 (x)
 15 LOAD_CONST   3 (None)
 18 RETURN_VALUE


But I don't think this is a necessary language limitation. Both (1, 2, 3) 
and [1, 2, 3] are known at compile time: the first cannot be anything 
other than a tuple of three ints, and the second a list of three ints. It 
seems to me that an implementation might provide a single byte-code to 
build list literals, perhaps even LOAD_CONST itself. The byte-codes used 
by the Python VM are not part of the language definition, and are subject 
to change without warning.

And in fact, if we go all the way back to Python 1.5, even tuple literals 
weren't handled by a single byte-code, they were assembled at runtime 
like lists still are:

[steve@ando ~]$ python1.5
Python 1.5.2 (#1, Aug 27 2012, 09:09:18)  [GCC 4.1.2 20080704 (Red Hat 
4.1.2-52)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> from dis import dis
>>> dis(compile("x = (1, 2, 3)", '', 'exec'))
  0 SET_LINENO  0

  3 SET_LINENO  1
  6 LOAD_CONST  0 (1)
  9 LOAD_CONST  1 (2)
 12 LOAD_CONST  2 (3)
 15 BUILD_TUPLE 3
 18 STORE_NAME  0 (x)
 21 LOAD_CONST  3 (None)
 24 RETURN_VALUE




-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-02 Thread Dave Angel
On 2/10/2013 21:24, Steven D'Aprano wrote:

 On Wed, 02 Oct 2013 18:17:06 -0400, Terry Reedy wrote:

 CPython core developers have be very conservative about what
 tranformations they put into the compiler. (1,2,3) can always be
 compiled as a constant, and so it is. [1,2,3] might or might not be a
 constant, depending on the context, and no attempt is made to analyze
 that.

 The first sentence of this is correct. The next two don't quite make 
 sense to me, since I don't understand what you mean by constant in this 
 context. I *think* you might be referring to the LOAD_CONST byte-code, 
 which in Python 3.3 understands tuples like (1, 2, 3), but not lists. So 
 a literal (1, 2, 3) gets created at compile-time with a single LOAD_CONST 
 call:

 py from dis import dis
  py dis(compile("x = (1, 2, 3)", '', 'exec'))
   1   0 LOAD_CONST   4 ((1, 2, 3))
   3 STORE_NAME   0 (x)
   6 LOAD_CONST   3 (None)
   9 RETURN_VALUE


 while a literal [1, 2, 3] does not:



The difference is that a tuple can be reused, so it makes sense for the
compiler to produce it as a const.  (Much like the interning of small
integers)  The list, however, would always have to be copied from the
compile-time object.  So that object itself would be a phantom, used
only as the template with which the list is to be made.


-- 
DaveA


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-02 Thread MRAB

On 03/10/2013 02:39, Dave Angel wrote:

On 2/10/2013 21:24, Steven D'Aprano wrote:


On Wed, 02 Oct 2013 18:17:06 -0400, Terry Reedy wrote:


CPython core developers have be very conservative about what
tranformations they put into the compiler. (1,2,3) can always be
compiled as a constant, and so it is. [1,2,3] might or might not be a
constant, depending on the context, and no attempt is made to analyze
that.


The first sentence of this is correct. The next two don't quite make
sense to me, since I don't understand what you mean by constant in this
context. I *think* you might be referring to the LOAD_CONST byte-code,
which in Python 3.3 understands tuples like (1, 2, 3), but not lists. So
a literal (1, 2, 3) gets created at compile-time with a single LOAD_CONST
call:

py from dis import dis
py dis(compile("x = (1, 2, 3)", '', 'exec'))
  1   0 LOAD_CONST   4 ((1, 2, 3))
  3 STORE_NAME   0 (x)
  6 LOAD_CONST   3 (None)
  9 RETURN_VALUE


while a literal [1, 2, 3] does not:




The difference is that a tuple can be reused, so it makes sense for the
compiler to produce it as a const.  (Much like the interning of small
integers)  The list, however, would always have to be copied from the
compile-time object.  So that object itself would be a phantom, used
only as the template with which the list is to be made.


The key point here is that the tuple is immutable, including its items.

--
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-02 Thread Steven D'Aprano
On Thu, 03 Oct 2013 02:46:53 +0100, MRAB wrote:

 On 03/10/2013 02:39, Dave Angel wrote:
 On 2/10/2013 21:24, Steven D'Aprano wrote:

 On Wed, 02 Oct 2013 18:17:06 -0400, Terry Reedy wrote:

 CPython core developers have be very conservative about what
 tranformations they put into the compiler. (1,2,3) can always be
 compiled as a constant, and so it is. [1,2,3] might or might not be a
 constant, depending on the context, and no attempt is made to analyze
 that.

 The first sentence of this is correct. The next two don't quite make
 sense to me, since I don't understand what you mean by constant in
 this context. I *think* you might be referring to the LOAD_CONST
 byte-code, which in Python 3.3 understands tuples like (1, 2, 3), but
 not lists. So a literal (1, 2, 3) gets created at compile-time with a
 single LOAD_CONST call:

 py from dis import dis
  py dis(compile("x = (1, 2, 3)", '', 'exec'))
   1   0 LOAD_CONST   4 ((1, 2, 3))
   3 STORE_NAME   0 (x)
   6 LOAD_CONST   3 (None)
   9 RETURN_VALUE


 while a literal [1, 2, 3] does not:



 The difference is that a tuple can be reused, so it makes sense for the
  compiler to produce it as a const.  (Much like the interning of small
 integers)  The list, however, would always have to be copied from the
 compile-time object.  So that object itself would be a phantom, used
 only as the template with which the list is to be made.

 The key point here is that the tuple is immutable, including its items.

You are both assuming that LOAD_CONST will re-use the same tuple 
(1, 2, 3) in multiple places. But that's not the case, as a simple test 
will show you:


# Python 3.3

py def f():
...     a = (1, 2, 3)
...     b = (1, 2, 3)
...     return a is b
...
py f()  # Are the tuples the same object?
False
py from dis import dis
py dis(f)
  2   0 LOAD_CONST   4 ((1, 2, 3))
  3 STORE_FAST   0 (a)

  3   6 LOAD_CONST   5 ((1, 2, 3))
  9 STORE_FAST   1 (b)

  4  12 LOAD_FAST0 (a)
 15 LOAD_FAST1 (b)
 18 COMPARE_OP   8 (is)
 21 RETURN_VALUE



So even though both a and b are created by the same LOAD_CONST byte-code, 
the object is not re-used (although it could be!) and two distinct tuples 
are created.

Re-use of objects (caching or interning) is an implementation 
optimization, not a language feature. An implementation might choose to 
cache ints, tuples, strings, floats, or none of them at all. That in no 
way affects whether the LOAD_CONST byte-code is used.

In fact, LOAD_CONST may behave differently inside functions than outside: 
in CPython, functions will cache some literals in the function attribute 
__code__.co_consts, and LOAD_CONST may use that, while outside of a 
function, only small ints and identifier-like strings are cached but very 
little else. Other implementations may do differently -- I'm not sure 
whether __code__ is a public language feature or an implementation 
feature, but what goes into co_consts certainly isn't fixed. IronPython 
2.6 doesn't appear to put anything in co_consts except None.

And of course, *all of this is subject to change*, since it is not 
language semantics but implementation details. If HyperPython8.5 
aggressively caches every tuple it can, while SimplePython1.1 uses 
BUILD_TUPLE rather than LOAD_CONST, both are still perfectly compliant 
Python implementations.

(HyperPython probably will require a huge amount of memory, and 
SimplePython will probably be slow, but those are quality of 
implementation issues.)


Aside: a sufficiently smart optimizing compiler could optimize function f 
above to either 

LOAD_CONST(True)
RETURN_VALUE

or

LOAD_CONST(False)
RETURN_VALUE


and still be a correct Python implementation. Since Python the language 
doesn't specify when, if ever, objects should be cached, it could even 
randomly choose between those two options at compile time, or even at 
runtime, and still be correct.


-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-02 Thread Terry Reedy

On 10/2/2013 9:46 PM, MRAB wrote:

On 03/10/2013 02:39, Dave Angel wrote:

On 2/10/2013 21:24, Steven D'Aprano wrote:


On Wed, 02 Oct 2013 18:17:06 -0400, Terry Reedy wrote:


CPython core developers have be very conservative about what
tranformations they put into the compiler. (1,2,3) can always be
compiled as a constant, and so it is. [1,2,3] might or might not be a
constant, depending on the context, and no attempt is made to analyze
that.


To be specific: in

for i in [1,2,3]: print(i)

it looks like it might be safe to pre-compile the list and just use it 
when the code is run. But if the above is in a function object whose 
code object is ever introspected, the list object could be accessed and 
mutated. Letting the compiler know that it can do the optimization is 
one reason to use tuples in situations like the above.



The first sentence of this is correct. The next two don't quite make
sense to me, since I don't understand what you mean by constant in
this context.

 I *think* you might be referring to the LOAD_CONST byte-code,

I am referring to constant-value objects included in the code object.
>>> def f(): return (1,2,3)

>>> f.__code__.co_consts
(None, 1, 2, 3, (1, 2, 3))

None is present as the default return, even if not needed for a 
particular function. Every literal is also tossed in, whether needed or not.



which in Python 3.3 understands tuples like (1, 2, 3), but not lists.


The byte-code does not understand anything about types. LOAD_CONST n 
simply loads the (n+1)st object in .co_consts onto the top of the stack.



So a literal (1, 2, 3) gets created at compile-time with a single
LOAD_CONST call:

py from dis import dis
py dis(compile("x = (1, 2, 3)", '', 'exec'))
  1   0 LOAD_CONST   4 ((1, 2, 3))
  3 STORE_NAME   0 (x)
  6 LOAD_CONST   3 (None)
  9 RETURN_VALUE


while a literal [1, 2, 3] does not:



The difference is that a tuple can be reused, so it makes sense for the
compiler to produce it as a const.  (Much like the interning of small
integers)  The list, however, would always have to be copied from the
compile-time object.  So that object itself would be a phantom, used
only as the template with which the list is to be made.


The key point here is that the tuple is immutable, including its items.


The items of the tuple I gave as an example are all constant. If they 
were not, the tuple would not be a constant for the purpose of 
compile-time creation.




--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-02 Thread Terry Reedy

On 10/2/2013 9:24 PM, Steven D'Aprano wrote:

On Wed, 02 Oct 2013 18:17:06 -0400, Terry Reedy wrote:


CPython core developers have be very conservative about what
tranformations they put into the compiler. (1,2,3) can always be
compiled as a constant, and so it is. [1,2,3] might or might not be a
constant, depending on the context, and no attempt is made to analyze
that.


The first sentence of this is correct. The next two don't quite make
sense to me, since I don't understand what you mean by constant in this
context. I *think* you might be referring to the LOAD_CONST byte-code,
which in Python 3.3 understands tuples like (1, 2, 3), but not lists. So
a literal (1, 2, 3) gets created at compile-time with a single LOAD_CONST
call:


Answered in another response.


py from dis import dis
 py dis(compile("x = (1, 2, 3)", '', 'exec'))
   1   0 LOAD_CONST   4 ((1, 2, 3))
   3 STORE_NAME   0 (x)
   6 LOAD_CONST   3 (None)
   9 RETURN_VALUE


while a literal [1, 2, 3] does not:


 py dis(compile("x = [1, 2, 3]", '', 'exec'))
   1   0 LOAD_CONST   0 (1)
   3 LOAD_CONST   1 (2)
   6 LOAD_CONST   2 (3)
   9 BUILD_LIST   3
  12 STORE_NAME   0 (x)
  15 LOAD_CONST   3 (None)
  18 RETURN_VALUE


But I don't think this is a necessary language limitation. Both (1, 2, 3)
and [1, 2, 3] are known at compile time: the first cannot be anything
other than a tuple of three ints, and the second a list of three ints.


Given introspectable code objects, the list must be built at runtime 
from the three ints to guarantee that.



seems to me that an implementation might provide a single byte-code to
build list literals, perhaps even LOAD_CONST itself.


There are list displays, but not list literals. The distinction is 
important. The BUILD_LIST byte code is used above.


LOAD_CONST 4 (1,2,3)
BUILD_LIST_FROM_TUPLE_CONSTANT

would be possible for the special case but hardly worthwhile.

 The byte-codes used by the Python VM are not part of the language 
definition,


which is why I specified CPython as the context, with 'current' as the 
default.



and are subject to change without warning.

And in fact, if we go all the way back to Python 1.5, even tuple literals
weren't handled by a single byte-code, they were assembled at runtime
like lists still are:

[steve@ando ~]$ python1.5
Python 1.5.2 (#1, Aug 27 2012, 09:09:18)  [GCC 4.1.2 20080704 (Red Hat
4.1.2-52)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam

>>> from dis import dis
>>> dis(compile("x = (1, 2, 3)", '', 'exec'))

   0 SET_LINENO  0

   3 SET_LINENO  1
   6 LOAD_CONST  0 (1)
   9 LOAD_CONST  1 (2)
  12 LOAD_CONST  2 (3)
  15 BUILD_TUPLE 3
  18 STORE_NAME  0 (x)
  21 LOAD_CONST  3 (None)
  24 RETURN_VALUE


Extending pre-compilation to tuples with nested constant tuples is even 
more recent. In 3.3.2, we have


>>> f.__code__.co_consts
(None, 1, 2, 3, (1, 2, 3))
>>> def f(): return ((1,2,3), (4,5))

>>> f.__code__.co_consts
(None, 1, 2, 3, 4, 5, (1, 2, 3), (4, 5), ((1, 2, 3), (4, 5)))

but I am sure if you go back you can find versions that lack the last item.

--
The language is as conservative about mandating optimizations as the 
implementation is about doing them. I consider making None, False, True 
un-rebindable keywords to be an optimization. This is not even done for 
the other singletons Ellipsis and NotImplemented. I cannot think of 
much else. Tuple constant optimization is not mandated. It would be as 
out of character for the language to require tail-recursion optimization 
as for CPython to do it.


--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-02 Thread Chris Angelico
On Thu, Oct 3, 2013 at 12:34 PM, Steven D'Aprano
steve+comp.lang.pyt...@pearwood.info wrote:
  py def f():
  ...     a = (1, 2, 3)
  ...     b = (1, 2, 3)
  ...     return a is b
  ...
  py f()  # Are the tuples the same object?
  False

That just means the compiler doesn't detect reuse of the same tuple.
But compare:

>>> def f():
...     return (1,2,3)

>>> f() is f()
True

Every time the function's called, it returns the same tuple (which
obviously can't be done with lists). And of course if that would be
dangerous, it's not done:

>>> def f():
...     return (1,[2],3)

>>> f()[1].append("Hello")
>>> f()
(1, [2], 3)
>>> import dis
>>> dis.dis(f)
  2   0 LOAD_CONST   1 (1)
  3 LOAD_CONST   2 (2)
  6 BUILD_LIST   1
  9 LOAD_CONST   3 (3)
 12 BUILD_TUPLE  3
 15 RETURN_VALUE

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Tail recursion to while iteration in 2 easy steps

2013-10-01 Thread Terry Reedy
Part of the reason that Python does not do tail call optimization is 
that turning tail recursion into while iteration is almost trivial, once 
you know the secret of the two easy steps. Here it is.


Assume that you have already done the work of turning a body recursive 
('not tail recursive') form like


def fact(n): return 1 if n <= 1 else n * fact(n-1)

into a tail recursion like

def fact(n, _fac=1):
    '''Return factorial for any count n.

    Users are not expected to override private parameter _fac.
    '''
    if n <= 1:
        return _fac
    else:  # optional
        return fact(n-1, n * _fac)

(This conversion nearly always requires adding an accumulator parameter, 
as done here.)


Turn this into while iteration with two easy steps.

1. If not already done, make if-else a statement, rather than an 
expression, with the recursive call in the if branch. If nothing else, 
just use 'not condition' to invert the condition.


def fact(n, _fac=1):
    if n > 1:  # not n <= 1
        return fact(n-1, n * _fac)
    else:  # optional
        return _fac

Contrary to what is often published, this order makes logical 
sense. The recursive call means 'go to the top of this function with new 
bindings for the parameters'. So put it closest to the top. The base 
case means 'we are done, time to leave'. So put it at the bottom.


2. This order also makes the following substeps work:
2a. Replace 'if' with 'while'.
2b. Replace the recursive call with multiple assignment, using the 
parameters as targets and the arguments as source.

For our example:

def fact(n, _fac=1):
    while n > 1:
        n, _fac = n-1, n * _fac
    else:
        return _fac

The proof of correctness for this conversion might argue that the 
recursive form is equivalent to the following pseudo-Python:


def fact(n, _fac=1):
    label top
    if n > 1:
        n, _fac = n-1, n * _fac
        goto top
    else:
        return _fac

and that this is equivalent to the real Python with while.

At this point, you can pull the initialization of the private parameter 
into the body, remove the else, and even add a value check.


def fact(n):
    if n < 0 or n != int(n):
        raise ValueError('fact input {} is not a count'.format(n))
    fac = 1
    while n > 1:
        n, fac = n-1, n * fac
    return fac

With care, the multiple-assignment statement can usually be turned into 
a sequence of single assignment statements for greater efficiency.


  fac *= n
  n -= 1

But note that the reverse order would be a bug. So I would not do this, 
especially in more complicated situations, without having tests first.
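
To see the bug, run the wrong order on a small input:

def fact_wrong(n):
    fac = 1
    while n > 1:
        n -= 1      # decrementing first...
        fac *= n    # ...multiplies by the *new* n, skipping the original n
    return fac

print(fact_wrong(3))  # 2, not 6: the factor 3 never enters the product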


--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


Re: Tail recursion to while iteration in 2 easy steps

2013-10-01 Thread rusi
On Wednesday, October 2, 2013 3:00:41 AM UTC+5:30, Terry Reedy wrote:
 Part of the reason that Python does not do tail call optimization is 
 that turning tail recursion into while iteration is almost trivial, once 
 you know the secret of the two easy steps. Here it is.

What happens for mutual tail recursive code like this

def isodd(x) : return False if x==0 else iseven(x-1)
def iseven(x): return True if x==0 else isodd(x-1)


Notes:
1. I am not advocating writing odd/even like the above -- just giving a trivial 
example
2. General mutual structural (what you call 'body') recursion is more general 
than mutual tail recursion, which is more general than single tail recursion.
3. IOW tail recursion optimization is intermediate between the optimization you 
are showing and the maximally pessimistic implementation -- stack, activation 
record etc -- of programming languages
4. There is a whole spectrum of such optimizations -- 
4a eg a single-call structural recursion example does not need to push a return 
address on the stack. It only needs to store the recursion depth (see the 
sketch after this list): 

If zero, jump to the outside return address; if > 0, jump to the internal return address

4b An example like quicksort, in which one call is a tail call, can be optimized 
with your optimization and the other, inner one with 4a above

5. Programming language implementations don't do optimizations like TR, not 
because these analyses are in themselves hard, but because inter-procedural 
analysis per se is a headache.  For Python-like languages it's a much bigger 
headache because the recursive name must be dynamically fetched for every call

6. [Personal pov] TR optimization is not really a big beef for me:
As you can see, it does not figure in the All things every programmer should 
learn from FP list of mine
http://blog.languager.org/2012/10/functional-programming-lost-booty.html

7. Pedagogically, understanding general recursion is far more important than 
exploiting tail recursion
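
As a sketch of 4a (my own reconstruction, in Python for concreteness): for a 
body-recursive factorial, the only per-level state the stack really holds is 
'one more pending multiply', so a depth counter suffices and the argument can 
be recovered on the way back up:

def fact_depth(n):
    # Descend: no return addresses saved, only how deep we went.
    depth = 0
    while n > 1:
        n -= 1
        depth += 1
    # Base case reached (n <= 1); unwind, redoing the pending multiplies.
    result = 1
    while depth > 0:
        n += 1
        result *= n
        depth -= 1
    return result

print(fact_depth(5))  # 120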
-- 
https://mail.python.org/mailman/listinfo/python-list