Obviously, yes: with a big/huge source you don't have to tokenize the whole thing if a parse error occurs right after the first token. So the lexer should be "piped" to the parser and provide tokens on demand. OTOH, in an academic/educational implementation like this one, you may choose either way.
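For illustration, here is a rough sketch of what "providing tokens on demand" could look like; the `Lexer`/`next()` names and the token shapes are made up here and are not taken from lesson-3.js:

```js
// A minimal sketch of the "piped" approach: the parser calls lexer.next()
// only when it needs another token, so a syntax error on the second token
// means the rest of the (possibly huge) source is never scanned.
function Lexer(source) {
  var pos = 0;
  return {
    next: function () {
      while (pos < source.length && /\s/.test(source[pos])) pos++; // skip spaces
      if (pos >= source.length) return {type: 'EOF'};
      var ch = source[pos];
      if (/\d/.test(ch)) {
        var start = pos;
        while (pos < source.length && /\d/.test(source[pos])) pos++;
        return {type: 'NUMBER', value: source.slice(start, pos)};
      }
      if ('+-*/()'.indexOf(ch) !== -1) {
        pos++;
        return {type: 'PUNCTUATOR', value: ch};
      }
      throw new SyntaxError('Unexpected character "' + ch + '" at ' + pos);
    }
  };
}

// The parser drives the lexer: tokens are produced strictly on demand.
function parseNumber(lexer) {
  var token = lexer.next();
  if (token.type !== 'NUMBER') {
    throw new SyntaxError('Number expected, got ' + token.type);
  }
  return {type: 'NumericLiteral', value: Number(token.value)};
}
```

Tokenizing everything upfront gives the same result for valid programs; the difference shows up on early errors and on very large inputs, where the on-demand version does less work and keeps fewer tokens in memory.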

P.S.: I can see other very interesting solutions in forks of the project, e.g. using generators (`yield`) to handle the scanning process.
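As a rough sketch of that idea (written with ES6 `function*` generators; not copied from any particular fork), the scanner can yield tokens lazily and the parser pulls them one by one:

```js
// Generator-based scanner: tokens are produced lazily, one per .next() call.
function* tokenize(source) {
  // Sticky regex: each match must start exactly where the previous one ended.
  var re = /\s*(\d+|[+\-*\/()])/y;
  source = source.trim();
  while (re.lastIndex < source.length) {
    var pos = re.lastIndex;
    var match = re.exec(source);
    if (match === null) {
      throw new SyntaxError('Unexpected character near position ' + pos);
    }
    var value = match[1];
    yield /^\d/.test(value)
      ? {type: 'NUMBER', value: value}
      : {type: 'PUNCTUATOR', value: value};
  }
  yield {type: 'EOF'};
}

// Usage: the parser (or any consumer) iterates over the tokens on demand.
var tokens = tokenize('1 + (2 * 3)');
console.log(tokens.next().value); // {type: 'NUMBER', value: '1'}
console.log(tokens.next().value); // {type: 'PUNCTUATOR', value: '+'}
```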

Dmitry.

On 13.08.2011 16:58, Jarek Foksa wrote:
Would it make any difference if I tokenized all the data upfront and
then just passed an array of all tokens to the parser? Could this
approach be slower or faster than yours?

On Fri, Aug 12, 2011 at 9:10 PM, Dmitry A. Soshnikov
<[email protected]>  wrote:
https://github.com/DmitrySoshnikov/Essentials-of-interpretation/blob/master/src/lesson-3.js
