TCO would indeed be nice to have, but there are some cool J tricks as
alternatives to recursion. Namely / and ^:
Factorial has the obvious
*/ >: i. 6
but also,
(>:@:{.([ , *) {:)^:5 ] 1 1
6 720
Though the latter isn't exactly recursive, it happens to use the same "format"
that tail call optimization produces: all state is carried forward in the
arguments, with no pending work to return to. It is almost as fast, but uses
much less space.
timespacex '*/ >: i. 155x'
0.0001504 52864
timespacex '(>:@:{.([ , *) {:)^:(155) 1 1x'
0.00028544 7552
By using a boxed power, ^:(<n), you can get the whole sequence of iterates.
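For instance, applying the same factorial verb with a boxed power collects every iterate, with the running factorial in the second column:

```j
   (>:@:{.([ , *) {:)^:(<6) ] 1 1
1   1
2   2
3   6
4  24
5 120
6 720
```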
Fibonacci has a couple of power versions:
( , +/@:(_2&{.))(^:5) 1 1x
1 1 2 3 5 8 13
({: , +/)^:(<5) 1 1x
1 1
1 2
2 3
3 5
5 8
The latter is "TCO"d in the sense that it focuses on simply generating the next
item rather than building a list, and so is much faster; if you want the
sequence rather than just the last item, let ^:(<n) generate it.
timespacex ' ({: , +/)(^:545) 1 1x'
0.00858721 178048
timespacex ' ({: , +/)^:(<545) 1 1x'
0.0007184 6784
timespacex ' ({: , +/)^:(<545) 1 1x'
0.00070976 271616
The latter approach for Fibonacci has an interesting counterpart that steps backwards:
({. ,~ -~/) 8 13x
5 8
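Iterated with a boxed power, that backward step walks the pairs in reverse:

```j
   ({. ,~ -~/)^:(<6) ] 8 13x
8 13
5  8
3  5
2  3
1  2
1  1
```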
For Catalan numbers, there is a direct formula:
(>: %~ (! 2&*)) 5x
42
but also the "recursive" version, which encodes the recurrence C(n+1) = C(n)*(4n+2)/(n+2):
(>:@:{. , (2 + {.) %~ {: * 2 + 4 * {.)^:(4) 1 1x
5 42
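With a boxed power, the same verb lists the index alongside each Catalan number:

```j
   (>:@:{. , (2 + {.) %~ {: * 2 + 4 * {.)^:(<5) ] 1 1x
1  1
2  2
3  5
4 14
5 42
```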
timespacex '(>: %~ (! 2&*)) 135x'
9.66401e_5 196608
timespacex '(>: %~ (! 2&*)) >: i.135x'
0.00674016 323200
timespacex '(>:@:{. , (2 + {.) %~ {: * 2 + 4 * {.)^:(134) 1 1x'
0.00055264 7296
timespacex '(>:@:{. , (2 + {.) %~ {: * 2 + 4 * {.)^:(<134) 1 1x'
0.00054976 80000
The TCO iterative approach is just as fast gathering the whole sequence as it
is taking just the last item. It is faster than the direct approach at building
a whole sequence, and uses less space than the direct approach when taking one
item.
I doubt I taught you anything. TCO on $: would be nice, for sure, but ^:
usually provides elegant enough alternatives.
----- Original Message -----
From: Dan Bron <[email protected]>
To: [email protected]
Cc:
Sent: Monday, February 23, 2015 4:04 PM
Subject: Re: [Jprogramming] Can whatever be written tacitly?
Raul wrote:
> I would still advise care, when using $: (or even ^:_), because
> sometimes you'll need to shut down J (or sometimes even reboot your
> machine) to deal with the resulting resource consumption.
Agreed; and it's worth quoting this just to say "I agree!" because it's
worth people reading this warning more than once.
> In the $: cases, you can only have tail call elimination when the
> result of $: is the result of the verb.
Well yeah, that's what "tail call" means in "tail call elimination"!
For what it's worth, the specifications of Scheme literally *require*
implementations to provide TCO [1]. Why? I won't presume to speculate,
other than to note that Scheme was developed after years of experience
with and research in similar languages which did not enforce this
stricture.
Don't know whether you'd consider the Scheme standards committee to be
members of the "problem-loving sets", but a number of them appear to be
fairly well-credentialed guys [2] (though the drawback of having such
credentials is they sometimes prompt people to wave their hands and
dismiss you as "just an academic").
But even if you feel Scheme went too far in mandating TCO in its
specification, it's worth noting that *many* other languages --
particularly in the "functional programming" arena which J either inhabits
or abuts (depending on who you ask) -- implement TCO.
-Dan
[1] Scheme specifications, Chapter 5, Section 11:
http://www.r6rs.org/final/html/r6rs/r6rs-Z-H-8.html#node_sec_5.11
" A Scheme implementation is properly tail-recursive if it supports an
unbounded number of active tail calls."
For identifying tail calls (of the kind where "the result of $: is the
result of the verb"), see Chapter 11, Section 20:
http://www.r6rs.org/final/html/r6rs/r6rs-Z-H-14.html#node_sec_11.20 (though
of course to make use of these ideas, we'd have to do some work to
translate the concepts of "tail context" in Scheme to analogous ones in J.)
[2] Some authors and editors of [1]:
R. KENT DYBVIG is a professor of Computer Science at Indiana University in
Bloomington, Indiana.
For his contributions to both the practical and theoretical aspects of
computing and information technology, in particular his design and
development of Chez Scheme, the ACM named Dybvig a Distinguished Engineer
in 2006, the first year the association awarded distinguished ranks.
MATTHEW FLATT is an American computer scientist, currently a professor of
computer science at the University of Utah in Salt Lake City. He is also a
member of the core development team for the Racket programming language.
Flatt received his PhD at Rice University in 1999, under the direction of
Matthias Felleisen. His dissertation is on the mechanics of first-class
modules and mixin classes.
WILLIAM D. CLINGER is an Associate Professor in the College of Computer and
Information Science at Northeastern University. Clinger is known for his
work on higher-order and functional programming languages.
Clinger obtained his PhD from the Massachusetts Institute of Technology
(MIT) under the supervision of Carl Hewitt. His doctoral research revolved
around defining a denotational semantics for the Actor model of concurrent
computation.
JONATHAN REES is a software architect at the National Evolutionary
Synthesis Center, Duke University, and a member of the W3C Technical
Architecture Group (persistence, web semantics). He has written on Evolution
of formal languages, among a number of other interesting topics.
ROBERT BRUCE FINDLER is an American computer scientist, currently an
associate professor of electrical engineering and computer science at
Northwestern University. Findler received his PhD at Rice University under
the direction of Matthias Felleisen.
----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm