On 21.06.2011 2:22, Nick Morgan wrote:
> Hi all,
>
> I've been messing about with streams in JavaScript, trying to
> implement some of the code from section 3.5
> (http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-24.html#%_sec_3.5)
> of SICP.
A book I recommend to _every_ programmer. This is the book, and the
style of explanation, that we should all aspire to.
> Here's a fiddle showing the Sieve of Eratosthenes, displaying each
> prime as it is found: http://jsfiddle.net/skilldrick/vT2mC/1/
>
> For the uninitiated, a stream is like a list where only the first
> element is calculated. The rest of the list is held as a "promise" to
> compute it later. The way I've implemented streams in JS is as an
> array, where the first element is the value and the second element is
> a function that, when called, returns an array where the first
> element is the next value, and so on.
As I mentioned on Twitter, yep, this is interesting stuff.
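To make the representation concrete, here's a minimal sketch of
forcing such a "promise" (the helper names are mine, not from your
fiddle):

// an infinite stream of ones in the [value, promise] representation
function ones() {
  return [1, function () { return ones(); }];
}

// read the first n values, forcing one promise per step
function streamTake(stream, n) {
  var result = [];
  while (n-- > 0) {
    result.push(stream[0]); // the already-computed head
    stream = stream[1]();   // force the promise for the rest
  }
  return result;
}

console.log(streamTake(ones(), 3)); // [1, 1, 1]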
> There are two problems with implementing streams in JavaScript. The
> first is that there's no way (as far as I know) to abstract away the
> syntax in making a stream - so a function to make a stream looks like
> this:
>
> function integersFrom(start) {
>   return [start, function () {
>     return integersFrom(start + 1);
>   }];
> }
>
> whereas in Scheme it would be simply:
>
> (define (integersFrom start)
>   (cons-stream start (integersFrom (+ start 1))))
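True - the closest JS can get is a helper that takes an explicit
thunk; the `function () { ... }` wrapper can't be hidden, since JS
evaluates arguments eagerly and has no macros. A sketch (the helper
name is my own):

// cons-stream emulated as an ordinary function: the caller still
// has to wrap the tail in a thunk by hand
function consStream(head, tailThunk) {
  return [head, tailThunk];
}

function integersFrom(start) {
  return consStream(start, function () {
    return integersFrom(start + 1);
  });
}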
Yeah, from a learning viewpoint it's an interesting approach. However,
you should know that JS has generators for exactly this: one of the
purposes of generators is precisely such "infinite" streams. For
example (test in Firefox with JS 1.7):
function fibonacci() {
  let [prev, curr] = [0, 1];
  while (true) {
    [prev, curr] = [curr, prev + curr];
    yield curr;
  }
}
Having such a generator function (one whose body contains the `yield`
keyword), we can enter its body many times, each time resuming from
the point where it was previously suspended. This is the classic
concept of a coroutine (that is, a function with multiple entry
points, in contrast with an ordinary function, which is known as a
`subroutine`).
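For comparison, your integersFrom stream maps onto a generator
directly (a sketch):

// integersFrom as a JS 1.7 generator: the loop is suspended at
// `yield` and resumed on each request for the next value
function integersFrom(start) {
  while (true) {
    yield start++;
  }
}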
We can get the next value of the stream either manually (by calling
the `next` method on the generator object):
let seq = fibonacci();
console.log(seq.next()); // 1
console.log(seq.next()); // 2
console.log(seq.next()); // 3
console.log(seq.next()); // 5
console.log(seq.next()); // 8
Or (and the same mechanism scales up to complex iterators/enumerators
for our own objects) we can pass it to a `for-in` loop, which at each
iteration calls that same `next` method of the passed generator behind
the scenes:
for (let n in fibonacci()) {
  // truncate the sequence at 1000
  if (n > 1000)
    break;
  console.log(n);
}
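What I mean by iterators/enumerators for our own objects: in JS 1.7 an
object can define the legacy `__iterator__` hook (usually returning a
generator), and `for-in` will use it. A sketch, with illustrative
names:

// a Range object with a custom enumerator via SpiderMonkey's
// legacy __iterator__ hook (JS 1.7)
function Range(low, high) {
  this.low = low;
  this.high = high;
}

Range.prototype.__iterator__ = function () {
  for (let i = this.low; i <= this.high; i++) {
    yield i;
  }
};

for (let n in new Range(3, 6)) {
  console.log(n); // 3, 4, 5, 6
}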
The main difference from your learning example is, of course,
efficiency: the generator object reuses its execution context, keeping
the state (that is, all local variables and parameters are preserved
between calls).
> The second issue is that JS doesn't have tail-call optimisation, so
> working with long streams overflows the stack. If you look at the
> foreachStream function you'll see I'm doing a setTimeout each time I
> get a new value from the stream, but that then means the only way to
> return values is to go async.
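For readers without the fiddle open, the trick is roughly the
following (an illustrative reconstruction; the actual foreachStream
may differ):

// deferring each step via setTimeout resets the call stack, at the
// cost of making the iteration asynchronous
function foreachStream(stream, callback) {
  callback(stream[0]);
  setTimeout(function () {
    foreachStream(stream[1](), callback);
  }, 0);
}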
Yes, proper tail calls are planned for the next version of ES by the
standard, so (theoretically) all implementations will have to support
them.
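With proper tail calls, a synchronous walk such as this sketch would
run in constant stack space; today it overflows on long streams, hence
the setTimeout workaround:

// `return walk(...)` is in tail position: with proper tail calls
// the engine could reuse the current stack frame
function walk(stream, callback) {
  callback(stream[0]);
  return walk(stream[1](), callback);
}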
Dmitry.