On 1/30/13, David A. Wheeler <dwhee...@dwheeler.com> wrote:
> I said:
>> > Found and fixed.  The BNF action rule for "head SUBLIST rhs" used the
>> > monify
>> > function, which was completely unnecessary (I think this was a
>> > cut-and-paste
>> > from elsewhere and I didn't remove the monify).
>
> Alan Manuel Gloria:
>> That's the first thing I thought when I saw your bug report ^^.
>
> Sigh.  I'm only human, a fact I try to demonstrate daily :-).

Don't worry; I designed SUBLIST, so it's only to be expected that I'd
spot implementation bugs in SUBLIST easily.  The bug involves the
tricky "a $ b" case: it means exactly the same thing as plain "a b",
which many people might find surprising.  The rationale is to allow
consistent formatting of constructs like cond, especially when mixing
branches that do a complex computation with branches that have only a
simple variable reference or constant.
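
To make that concrete, here's roughly the use I have in mind (my own
illustration of the point, not text lifted from the draft spec):

    cond
      (negative? x) $ - x
      #t            $ x

which reads as

    (cond ((negative? x) (- x))
          (#t x))

The second branch is the "a $ b" case: "#t $ x" yields (#t x), exactly
what plain "#t x" would give, so both branches can share the same $
layout even though one right-hand side is a computation and the other
is just a variable reference.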

>
> So I'm trying to use a mixture of approaches to make the
> final SRFI spec and implementation really high-quality:
> 1. ANTLR grammar checks (so grammar's more likely to be right)
> 2. Two implementations of the new spec (so grammar is widely implementable)
> 3. Big automated test suite, including checks with the old version results
>     (so that the actual interpretation is what was intended, and that the
>      implementation is more likely to be right).
> 4. Peer review.  That'd be you guys :-).
>
>> Looks good.  My approach is still getting modded several times in my
>> head.  I think my approach will allow us to use a simple parser
>> combinators sublibrary (but really requires SAME due to the branch
>> after head).  I'm a bit busy IRL; I'll try to hack together something
>> using my alternative approach this weekend, but no promises.
>
> As an experiment, that approach sounds interesting, but I really do *NOT*
> want to use that approach for either the SRFI spec or the SRFI
> implementation.
> As I mentioned before, such a spec won't have the additional ANTLR
> grammar checks (unless you implement it in ANTLR).

Well, currently I'm planning on hacking together a parser-combinator
library and essentially converting from ANTLR syntax to
Scheme+parser-combinator syntax.  I'll have to change the tokenizer
around quite a bit so that it emits tokens that (part of) the ANTLR
spec will accept, but that's doable.  So I'd argue that, if I get it
working, it'll be even closer to the ANTLR spec, since it will *be*
the ANTLR spec, just in a Schemely syntax, plus a tokenizer.

^^,
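
To give a rough idea of the translation I mean, here's a hypothetical
sketch (the combinator names and the token representation are made up
for illustration; this isn't code from either implementation):

    ;; A parser is a procedure: token-list -> (value . remaining-tokens),
    ;; or #f on failure.
    (define (tok t)                        ; match one literal token
      (lambda (toks)
        (and (pair? toks)
             (equal? (car toks) t)
             (cons t (cdr toks)))))

    (define (seq . parsers)                ; ANTLR juxtaposition: p1 p2 ...
      (lambda (toks)
        (let loop ((ps parsers) (toks toks) (vals '()))
          (if (null? ps)
              (cons (reverse vals) toks)
              (let ((r ((car ps) toks)))
                (and r
                     (loop (cdr ps) (cdr r) (cons (car r) vals))))))))

    (define (alt . parsers)                ; ANTLR alternation: p1 | p2 | ...
      (lambda (toks)
        (let loop ((ps parsers))
          (and (pair? ps)
               (or ((car ps) toks)
                   (loop (cdr ps)))))))

With those in place, an ANTLR alternative like "head SUBLIST rhs"
transcribes almost word for word as (seq head (tok 'SUBLIST) rhs),
which is why I say the result would still *be* the ANTLR grammar.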

> Also, I want to ensure that the shown-implementation has properties like
> (1) it doesn't depend on advanced Scheme capabilities (so it can port to
> not-quite-Schemes and other Lisps) and

Well, the only advanced Scheme capabilities it actually uses are
lexical scoping and anonymous functions.  That rules out elisp,
admittedly.

I did mention using call/cc, but only as an even more theoretical
third approach, which I am not currently pursuing.  If I get time,
maybe I will, as I suspect it would make the tokenizing structure even
clearer.
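
For what it's worth, the idea there is just the usual coroutine trick:
the scanner stays an ordinary loop and uses call/cc to suspend itself
each time it has a token ready, so no explicit state machine is
needed.  A minimal hypothetical sketch (all names are mine; none of
this is real code from either implementation):

    (define (make-token-generator chars)
      (define return #f)             ; continuation back to the caller
      (define (yield tok)            ; suspend the scan, hand TOK back
        (call/cc
          (lambda (resume)
            (set! run resume)        ; remember where to pick up again
            (return tok))))
      (define (run ignored)          ; the scan loop itself
        (let loop ((cs chars))
          (if (null? cs)
              (return 'eof)
              (begin
                (yield (car cs))     ; a real scanner would build tokens here
                (loop (cdr cs))))))
      (lambda ()                     ; each call resumes the scan
        (call/cc
          (lambda (k)
            (set! return k)
            (run #f)))))

Calling the resulting thunk repeatedly returns each "token" in turn
and then 'eof; the point is only that the scanning loop reads
top-to-bottom as straight-line code.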

> (2) it closely matches the spec.
> Also, the stronger separation of the pieces, while making each part simpler,
> will
> hide from humans how they combine, the very issue I want to make
> crystal-clear.

Hmm, granted.  I still think separating them is better because of the
conceptual simplicity, and the only combination involved is one
calling the other.  Basically, all I'm doing is applying the "message
passing == function call" insight from the Lambda the Ultimate papers.
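
Concretely, all that combination amounts to is something like this
(hypothetical names again, reusing the generator sketched above):

    ;; The parser receives the token source as an ordinary procedure
    ;; and simply calls it whenever it wants the next token.
    (define (parse-all next-token)
      (let loop ((tok (next-token)) (acc '()))
        (if (eq? tok 'eof)
            (reverse acc)
            (loop (next-token) (cons tok acc)))))

    ;; e.g. (parse-all (make-token-generator (string->list "ab")))
    ;;      => (#\a #\b)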

Sincerely,
AmkG
