This is most probably my last post on this subject. As you may know, Joey Hurst 
has already done the JS version of the runtime, so there's no need for me to 
attempt the same.

However, I wanted to capture my observations on this class. If the 
observations are correct, you may want to look at how I addressed these 
"problems" in a different (and, in my opinion, simplified) version of the class.

Issue 1. The insertion of "insertOps" appears to be FILO (last in, first out) 
rather than the presumably intended FIFO. Mentioned in another thread but not 
confirmed.
Issue 2. ReplaceOps before the "start" do not take effect. I mentioned this in 
another thread.
Issue 3. Inserts beyond the last token in the stream are executed anyway. 
This comes from the following piece of code, and the test case 
"testInsertAfterLastIndex" verifies it. If this observation is correct and the 
behavior is intended, one would expect it to also hold when an "end" is 
supplied to toString(start, end); however, it does not:

        // now see if there are operations (append) beyond last token index
        for (int opi=rewriteOpIndex; opi<rewrites.size(); opi++) {
            RewriteOperation op =
                    (RewriteOperation)rewrites.get(opi);
            if ( op.index>=size() ) {
                op.execute(buf); // must be insertions if after last token
            }
            //System.out.println("execute "+op+" at "+opi);
            //op.execute(buf); // must be insertions if after last token
        }

Issue 4. If the assumption in Issue 3 is valid, one would expect, by analogy, 
the same to apply to insertions before "start". This is not the case.
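To make Issue 1 concrete, here is a minimal stand-alone sketch of the two orderings at a single token index, independent of the ANTLR classes (the class name and helper methods are purely illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch contrasting the two possible orderings of insert
// operations registered at the same token index.
public class InsertOrderDemo {

    // FILO/LIFO: every new op is inserted at the same list position,
    // pushing earlier ops back, so "foo" then "bar" renders "barfoo".
    static String renderFilo() {
        List<String> ops = new ArrayList<String>();
        ops.add(0, "foo");
        ops.add(0, "bar");
        StringBuilder buf = new StringBuilder();
        for (String s : ops) buf.append(s);
        return buf.toString();
    }

    // FIFO: ops are simply appended, so "foo" then "bar" renders
    // "foobar" -- the behavior argued for in this post.
    static String renderFifo() {
        List<String> ops = new ArrayList<String>();
        ops.add("foo");
        ops.add("bar");
        StringBuilder buf = new StringBuilder();
        for (String s : ops) buf.append(s);
        return buf.toString();
    }

    public static void main(String[] args) {
        System.out.println("FILO: " + renderFilo()); // barfoo
        System.out.println("FIFO: " + renderFifo()); // foobar
    }
}
```

Whether the runtime intends the first or the second ordering is exactly what is unconfirmed.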

Assuming that the above are genuine issues within TokenRewriteStream, the 
following shows how they can be overcome:

1. The insertions all follow a simple rule: at a given index, all insertions 
are recorded and executed in the order they were received. A simple List, 
Stack, or Array/Vector provides this easily.
2. At an index, Replace operations, including Deletes, overwrite each other. 
Therefore, all I would need to maintain is one single ReplaceOp. 
3. If I then create a structure that groups the insertions together with the 
single ReplaceOp, I can handle all the rewrite operations for that index. A 
simple class holding an array (for insertions) and a single op (for the 
replace) meets the requirements for a specific index.
4. All I need now is to ensure that this structure can easily be stored and 
retrieved for a given index. This implies a Map of some sort, in which the key 
is the "index" and the value is the structure described above.
5. Based on the above, my "rewrites" becomes a Map. Here's how I implemented 
it as a SortedMap (using a TreeMap); the sorted order matters for the looping 
we'll see later. I have provided helper methods in my structure to encapsulate 
the logic further, but I avoided over-encapsulation so the code can still be 
examined easily:

    static class OpsAtIndexEntry {
        public int index;
        public Vector<RewriteOperation> inserts;
        public ReplaceOp replace;
        
        public OpsAtIndexEntry(int index){
            this.index = index;
            this.inserts = new Vector<RewriteOperation>();
        }
        
        public void addInsertOp(RewriteOperation insertOp) {
            // inserts is a FIFO list so just add to the end
            if(this.index == insertOp.index) this.inserts.add(insertOp);
        }
        public void addReplaceOp(ReplaceOp replaceOp) {
            // replace operations (replace or delete) just overwrite the previous one
            if(this.index == replaceOp.index) this.replace = replaceOp;
        }
        public void addOperation(RewriteOperation op) {
            if(op instanceof ReplaceOp) this.addReplaceOp((ReplaceOp)op);
            else this.addInsertOp(op);
        }
    }

and then the method "addToSortedRewriteList" would turn into:

    protected void addToSortedRewriteList(String programName, RewriteOperation op) {
        TreeMap<Integer, OpsAtIndexEntry> rewrites = getProgram(programName);
        OpsAtIndexEntry entry = rewrites.get(op.index);
        if(entry == null) {
            entry = new OpsAtIndexEntry(op.index);
            rewrites.put(op.index, entry);
        } 
        entry.addOperation(op);
    }

6. Now we come to executing these operations against the output, in the 
toString() method. Assuming that my Issues 2 through 4 describe intended 
features, that is, operations beyond "start" and "end" must be executed, this 
method has to perform three tasks. I call these segments:

    Segment 1: Pre-start -- This segment executes all insertions that fall 
before the "start". ReplaceOps are ignored unless they have a "lastIndex" at 
or beyond "start"; in that case, the replacement is performed and the loop 
ends. This addresses Issues 2 and 4:

        subMap = rewrites.headMap(start);
        for (OpsAtIndexEntry entry : (Collection<OpsAtIndexEntry>)subMap.values()) {
            // output all inserts regardless
            for (int i=0; i<entry.inserts.size();i++) {
                entry.inserts.get(i).execute(buf);
            }
            if(entry.replace != null                        // if we do have a replace operation
                    && entry.replace.lastIndex >= start) {  // and start falls in the replace range
                start = entry.replace.execute(buf);
                break;
            }
        }

    Segment 2: The main segment. This is where we loop between start and end, 
performing the insertions. Replaces cause the cursor to jump ahead; if there 
is no replace in a cycle, the token itself is emitted. This is pretty much 
business as usual, no surprises here:

        int tokenCursor = start;
        while (tokenCursor <= end) {
            OpsAtIndexEntry entry = (OpsAtIndexEntry)rewrites.get(tokenCursor);
            if (entry != null) {
                for (int i = 0; i < entry.inserts.size(); i++) {
                    entry.inserts.get(i).execute(buf);
                }
                if(entry.replace != null) {
                    tokenCursor = entry.replace.execute(buf);
                    continue;
                }
            }

            // dump the token at this index
            buf.append(this.get(tokenCursor).getText());
            tokenCursor++;
        }

    Segment 3: The Post-end segment. This corresponds to Issue 3:

        subMap = rewrites.tailMap(tokenCursor);
        for (OpsAtIndexEntry entry : (Collection<OpsAtIndexEntry>)subMap.values()) {
            // output all inserts regardless
            for (int i=0; i<entry.inserts.size();i++) {
                entry.inserts.get(i).execute(buf);
            }
            // replace operations have no effect in this segment
        }


That's it, plus a few minor changes here and there to switch the rewrites from 
the existing List to the TreeMap.
It is quite possible that we could get away without the SortedMap and just use 
a normal Map, provided toString() is called once rather than repeatedly. In 
that case, we could sort the Map's keys just before starting the loops.
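That plain-Map variant could look something like this sketch (the class and method names here are illustrative, not part of the runtime):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the plain-Map alternative: keep rewrites in a HashMap
// and sort the keys once, just before the rendering loops.
public class PlainMapSketch {

    // collect and sort the token indexes that have operations
    static List<Integer> sortedKeys(Map<Integer, ?> rewrites) {
        List<Integer> keys = new ArrayList<Integer>(rewrites.keySet());
        Collections.sort(keys);
        return keys;
    }

    public static void main(String[] args) {
        Map<Integer, String> rewrites = new HashMap<Integer, String>();
        rewrites.put(5, "opsAt5");
        rewrites.put(0, "opsAt0");
        rewrites.put(2, "opsAt2");

        // walk the entries in token-index order, as the TreeMap loops would
        for (Integer index : sortedKeys(rewrites)) {
            System.out.println(index + " -> " + rewrites.get(index));
        }
    }
}
```

The one-time sort costs O(n log n) just like maintaining the TreeMap, so this only wins if toString() is truly called once.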

I also added a few more test cases to the unit test file (see the end of the 
file) and a few tweaks to account for Issue 1.

I hope I haven't wasted anybody's time. 
Regards,
Jeff


/*
 [The "BSD licence"]
 Copyright (c) 2005-2006 Terence Parr
 All rights reserved.

 Redistribution and use in source and binary forms, with or without
 modification, are permitted provided that the following conditions
 are met:
 1. Redistributions of source code must retain the above copyright
    notice, this list of conditions and the following disclaimer.
 2. Redistributions in binary form must reproduce the above copyright
    notice, this list of conditions and the following disclaimer in the
    documentation and/or other materials provided with the distribution.
 3. The name of the author may not be used to endorse or promote products
    derived from this software without specific prior written permission.

 THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
 IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
 OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
 IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
 INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
 NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
 THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
package org.antlr.runtime;

import java.util.*;

/** Useful for dumping out the input stream after doing some
 *  augmentation or other manipulations.
 *
 *  You can insert stuff, replace, and delete chunks.  Note that the
 *  operations are done lazily--only if you convert the buffer to a
 *  String.  This is very efficient because you are not moving data around
 *  all the time.  As the buffer of tokens is converted to strings, the
 *  toString() method(s) check to see if there is an operation at the
 *  current index.  If so, the operation is done and then normal String
 *  rendering continues on the buffer.  This is like having multiple Turing
 *  machine instruction streams (programs) operating on a single input tape. :)
 *
 *  Since the operations are done lazily at toString-time, operations do not
 *  screw up the token index values.  That is, an insert operation at token
 *  index i does not change the index values for tokens i+1..n-1.
 *
 *  Because operations never actually alter the buffer, you may always get
 *  the original token stream back without undoing anything.  Since
 *  the instructions are queued up, you can easily simulate transactions and
 *  roll back any changes if there is an error just by removing instructions.
 *  For example,
 *
 *   CharStream input = new ANTLRFileStream("input");
 *   TLexer lex = new TLexer(input);
 *   TokenRewriteStream tokens = new TokenRewriteStream(lex);
 *   T parser = new T(tokens);
 *   parser.startRule();
 *
 * 	 Then in the rules, you can execute
 *      Token t,u;
 *      ...
 *      input.insertAfter(t, "text to put after t");
 *      input.insertAfter(u, "text after u");
 *      System.out.println(tokens.toString());
 *
 *  Actually, you have to cast the 'input' to a TokenRewriteStream. :(
 *
 *  You can also have multiple "instruction streams" and get multiple
 *  rewrites from a single pass over the input.  Just name the instruction
 *  streams and use that name again when printing the buffer.  This could be
 *  useful for generating a C file and also its header file--all from the
 *  same buffer:
 *
 *      tokens.insertAfter("pass1", t, "text to put after t");
 *      tokens.insertAfter("pass2", u, "text after u");
 *      System.out.println(tokens.toString("pass1"));
 *      System.out.println(tokens.toString("pass2"));
 *
 *  If you don't use named rewrite streams, a "default" stream is used as
 *  the first example shows.
 */
public class TokenRewriteStream extends CommonTokenStream {
	public static final String DEFAULT_PROGRAM_NAME = "default";
    public static final int PROGRAM_INIT_SIZE = 100;
	public static final int MIN_TOKEN_INDEX = 0;

	// Define the rewrite operation hierarchy

	static class RewriteOperation {
		protected int index;
		protected Object text;
		protected RewriteOperation(int index, Object text) {
			this.index = index;
			this.text = text;
		}
		/** Execute the rewrite operation by possibly adding to the buffer.
		 *  Return the index of the next token to operate on.
		 */
		public int execute(StringBuffer buf) {
			return index;
		}
		public String toString() {
			String opName = getClass().getName();
			int $index = opName.indexOf('$');
			opName = opName.substring($index+1, opName.length());
			return opName+"@"+index+'"'+text+'"';
		}
	}

	static class InsertBeforeOp extends RewriteOperation {
		public InsertBeforeOp(int index, Object text) {
			super(index,text);
		}
		public int execute(StringBuffer buf) {
			buf.append(text);
			return index;
		}
	}

	/** TODO: make insertAfters append after each other.
	static class InsertAfterOp extends InsertBeforeOp {
		public InsertAfterOp(int index, String text) {
			super(index,text);
		}
	}
	 */

	/** A ReplaceOp replaces the token range from..to with new text
	 *  (or deletes it when the text is null).
	 */
	static class ReplaceOp extends RewriteOperation {
		protected int lastIndex;
		public ReplaceOp(int from, int to, Object text) {
			super(from,text);
			lastIndex = to;
		}
		public int execute(StringBuffer buf) {
			if ( text!=null ) {
				buf.append(text);
			}
			return lastIndex+1;
		}
	}

	static class DeleteOp extends ReplaceOp {
		public DeleteOp(int from, int to) {
			super(from, to, null);
		}
	}
	
	/**
	 * JS: groups all rewrite operations registered at a single token
	 * index: insertions kept in FIFO order, plus at most one
	 * replace/delete op, which overwrites any previous one.
	 */
	static class OpsAtIndexEntry {
		public int index;
		public Vector<RewriteOperation> inserts;
		public ReplaceOp replace;
		
		public OpsAtIndexEntry(int index){
			this.index = index;
			this.inserts = new Vector<RewriteOperation>();
		}
		
		public void addInsertOp(RewriteOperation insertOp) {
			// inserts is a FIFO list so just add to the end
			if(this.index == insertOp.index) this.inserts.add(insertOp);
		}
		public void addReplaceOp(ReplaceOp replaceOp) {
			// replace operations (replace or delete) just overwrite the previous one
			if(this.index == replaceOp.index) this.replace = replaceOp;
		}
		public void addOperation(RewriteOperation op) {
			if(op instanceof ReplaceOp) this.addReplaceOp((ReplaceOp)op);
			else this.addInsertOp(op);
		}
	}
	
	/** You may have multiple, named streams of rewrite operations.
	 *  I'm calling these things "programs."
	 *  Maps String (name) -> TreeMap of token index -> OpsAtIndexEntry
	 */
	protected Map programs = null;

	/** Map String (program name) -> Integer index */
	protected Map lastRewriteTokenIndexes = null;

	public TokenRewriteStream() {
		init();
	}

	protected void init() {
		programs = new HashMap();
//		programs.put(DEFAULT_PROGRAM_NAME, new ArrayList(PROGRAM_INIT_SIZE));
		lastRewriteTokenIndexes = new HashMap();
	}

	public TokenRewriteStream(TokenSource tokenSource) {
	    super(tokenSource);
		init();
	}

	public TokenRewriteStream(TokenSource tokenSource, int channel) {
		super(tokenSource, channel);
		init();
	}

	public void rollback(int instructionIndex) {
		rollback(DEFAULT_PROGRAM_NAME, instructionIndex);
	}

	/** Roll back the rewrites for a program so that operations at or
	 *  beyond the indicated token index are no longer in the stream.
	 *  UNTESTED!  (JS: with rewrites stored in a TreeMap keyed by token
	 *  index, rollback truncates by token index rather than by
	 *  instruction number; the old List-based cast would fail.)
	 */
	public void rollback(String programName, int instructionIndex) {
		TreeMap<Integer, OpsAtIndexEntry> is =
				(TreeMap<Integer, OpsAtIndexEntry>)programs.get(programName);
		if ( is!=null ) {
			programs.put(programName,
					new TreeMap<Integer, OpsAtIndexEntry>(is.headMap(instructionIndex)));
		}
	}

	public void deleteProgram() {
		deleteProgram(DEFAULT_PROGRAM_NAME);
	}

	/** Reset the program so that no instructions exist */
	public void deleteProgram(String programName) {
		rollback(programName, MIN_TOKEN_INDEX);
	}

	/** Add op to the default program's rewrite map. */
	protected void addToSortedRewriteList(RewriteOperation op) {
		addToSortedRewriteList(DEFAULT_PROGRAM_NAME, op);
	}

	/** Add an instruction to the program's rewrites, which are kept in
	 *  a TreeMap of token index -> OpsAtIndexEntry so that toString()
	 *  can walk them in index order efficiently.
	 *
	 *  When there are multiple instructions at the same index, the entry
	 *  orders them to ensure proper behavior: insert operations are kept
	 *  in FIFO order, so "insert foo" then "insert bar" yields "foobar"
	 *  rather than "barfoo", and all inserts execute before any replace
	 *  or delete at that index.  A ReplaceOp kills any previous replace
	 *  op at the same index.  Since delete is the same as replace with
	 *  null text, checking for ReplaceOp covers DeleteOp at the same
	 *  time. :)
	 */
	protected void addToSortedRewriteList(String programName, RewriteOperation op) {
		TreeMap<Integer, OpsAtIndexEntry> rewrites = getProgram(programName);
		OpsAtIndexEntry entry = rewrites.get(op.index);
		if(entry == null) {
			entry = new OpsAtIndexEntry(op.index);
			rewrites.put(op.index, entry);
		} 
		entry.addOperation(op);
	}

	public void insertAfter(Token t, Object text) {
		insertAfter(DEFAULT_PROGRAM_NAME, t, text);
	}

	public void insertAfter(int index, Object text) {
		insertAfter(DEFAULT_PROGRAM_NAME, index, text);
	}

	public void insertAfter(String programName, Token t, Object text) {
		insertAfter(programName,t.getTokenIndex(), text);
	}

	public void insertAfter(String programName, int index, Object text) {
		// to insert after, just insert before next index (even if past end)
		insertBefore(programName,index+1, text);
		//addToSortedRewriteList(programName, new InsertAfterOp(index,text));
	}

	public void insertBefore(Token t, Object text) {
		insertBefore(DEFAULT_PROGRAM_NAME, t, text);
	}

	public void insertBefore(int index, Object text) {
		insertBefore(DEFAULT_PROGRAM_NAME, index, text);
	}

	public void insertBefore(String programName, Token t, Object text) {
		insertBefore(programName, t.getTokenIndex(), text);
	}

	public void insertBefore(String programName, int index, Object text) {
		addToSortedRewriteList(programName, new InsertBeforeOp(index,text));
	}

	public void replace(int index, Object text) {
		replace(DEFAULT_PROGRAM_NAME, index, index, text);
	}

	public void replace(int from, int to, Object text) {
		replace(DEFAULT_PROGRAM_NAME, from, to, text);
	}

	public void replace(Token indexT, Object text) {
		replace(DEFAULT_PROGRAM_NAME, indexT, indexT, text);
	}

	public void replace(Token from, Token to, Object text) {
		replace(DEFAULT_PROGRAM_NAME, from, to, text);
	}

	public void replace(String programName, int from, int to, Object text) {
		if ( from > to || from<0 || to<0 ) {
			return;
		}
		addToSortedRewriteList(programName, new ReplaceOp(from, to, text));
		/*
		// replace from..to by deleting from..to-1 and then do a replace
		// on last index
		for (int i=from; i<to; i++) {
			addToSortedRewriteList(new DeleteOp(i,i));
		}
		addToSortedRewriteList(new ReplaceOp(to, to, text));
		*/
	}

	public void replace(String programName, Token from, Token to, Object text) {
		replace(programName,
				from.getTokenIndex(),
				to.getTokenIndex(),
				text);
	}

	public void delete(int index) {
		delete(DEFAULT_PROGRAM_NAME, index, index);
	}

	public void delete(int from, int to) {
		delete(DEFAULT_PROGRAM_NAME, from, to);
	}

	public void delete(Token indexT) {
		delete(DEFAULT_PROGRAM_NAME, indexT, indexT);
	}

	public void delete(Token from, Token to) {
		delete(DEFAULT_PROGRAM_NAME, from, to);
	}

	public void delete(String programName, int from, int to) {
		replace(programName,from,to,null);
	}

	public void delete(String programName, Token from, Token to) {
		replace(programName,from,to,null);
	}

	public int getLastRewriteTokenIndex() {
		return getLastRewriteTokenIndex(DEFAULT_PROGRAM_NAME);
	}

	protected int getLastRewriteTokenIndex(String programName) {
		Integer I = (Integer)lastRewriteTokenIndexes.get(programName);
		if ( I==null ) {
			return -1;
		}
		return I.intValue();
	}

	protected void setLastRewriteTokenIndex(String programName, int i) {
		lastRewriteTokenIndexes.put(programName, new Integer(i));
	}

	protected TreeMap<Integer, OpsAtIndexEntry> getProgram(String name) {
		TreeMap<Integer, OpsAtIndexEntry> is = (TreeMap<Integer, OpsAtIndexEntry>)programs.get(name);
		if ( is==null ) {
			is = initializeProgram(name);
		}
		return is;
	}

	private TreeMap<Integer, OpsAtIndexEntry> initializeProgram(String name) {
		TreeMap<Integer, OpsAtIndexEntry> is = new TreeMap<Integer, OpsAtIndexEntry>();
		programs.put(name, is);
		return is;
	}

	public String toOriginalString() {
		return toOriginalString(MIN_TOKEN_INDEX, size()-1);
	}

	public String toOriginalString(int start, int end) {
		StringBuffer buf = new StringBuffer();
		for (int i=start; i>=MIN_TOKEN_INDEX && i<=end && i<tokens.size(); i++) {
			buf.append(get(i).getText());
		}
		return buf.toString();
	}

	public String toString() {
		return toString(MIN_TOKEN_INDEX, size()-1);
	}

	public String toString(String programName) {
		return toString(programName, MIN_TOKEN_INDEX, size()-1);
	}

	public String toString(int start, int end) {
		return toString(DEFAULT_PROGRAM_NAME, start, end);
	}

	public String toString(String programName, int start, int end) {
		TreeMap rewrites = (TreeMap)programs.get(programName);
		if ( rewrites==null || rewrites.size()==0 ) {
			return toOriginalString(start,end); // no instructions to execute
		}
		StringBuffer buf = new StringBuffer();

		SortedMap subMap;
		
		end = Math.min(end, size()-1);
		start = Math.max(MIN_TOKEN_INDEX, start);

		// "Pre-Start" segment
		// Execute Replace operations whose range spans the start index
		// (there should be zero or one eligible operation), and execute
		// any insertions before start as well.  This segment ends when
		// we reach start or a Replace op has taken us beyond it.
		subMap = rewrites.headMap(start);
		for (OpsAtIndexEntry entry : (Collection<OpsAtIndexEntry>)subMap.values()) {
			// output all inserts regardless
			for (int i=0; i<entry.inserts.size();i++) {
				entry.inserts.get(i).execute(buf);
			}
			if(entry.replace != null							// if we do have a replace operation
					&& entry.replace.lastIndex >= start) 	// and start falls in the replace range
			{
				start = entry.replace.execute(buf);
				break;
				
			}			
		}

		// Main segment
		// output all inserts, replaces, and tokens falling into start..end
		int tokenCursor = start;
		while (tokenCursor <= end) {
			OpsAtIndexEntry entry = (OpsAtIndexEntry)rewrites.get(tokenCursor);
			if (entry != null) {
				for (int i = 0; i < entry.inserts.size(); i++) {
					entry.inserts.get(i).execute(buf);
				}
				if(entry.replace != null) {
					tokenCursor = entry.replace.execute(buf);
					continue;
				}
			}

			// dump the token at this index
			buf.append(this.get(tokenCursor).getText());
			tokenCursor++;
		}
		
		// "Post-End" segment
		// Output all Insert operations beyond the last token index
		// This segment ends when there are no more insertions beyond end
		subMap = rewrites.tailMap(tokenCursor);
		for (OpsAtIndexEntry entry : (Collection<OpsAtIndexEntry>)subMap.values()) {
			// output all inserts regardless
			for (int i=0; i<entry.inserts.size();i++) {
				entry.inserts.get(i).execute(buf);
			}
			// replace operations have no effect in this segment	
		}

		return buf.toString();
	}

	public String toDebugString() {
		return toDebugString(MIN_TOKEN_INDEX, size()-1);
	}

	public String toDebugString(int start, int end) {
		StringBuffer buf = new StringBuffer();
		for (int i=start; i>=MIN_TOKEN_INDEX && i<=end && i<tokens.size(); i++) {
			buf.append(get(i));
		}
		return buf.toString();
	}
}
package org.antlr.test;

import org.antlr.runtime.ANTLRStringStream;
import org.antlr.runtime.CharStream;
import org.antlr.runtime.TokenRewriteStream;
import org.antlr.tool.Grammar;
import org.antlr.tool.Interpreter;

public class TestTokenRewriteStream extends BaseTest {

    /** Public default constructor used by TestRig */
    public TestTokenRewriteStream() {
    }
	public void testInsertBeforeIndex0() throws Exception {
		Grammar g = new Grammar(
			"lexer grammar t;\n"+
			"A : 'a';\n" +
			"B : 'b';\n" +
			"C : 'c';\n");
		CharStream input = new ANTLRStringStream("abc");
		Interpreter lexEngine = new Interpreter(g, input);
		TokenRewriteStream tokens = new TokenRewriteStream(lexEngine);
		tokens.LT(1); // fill buffer
		tokens.insertBefore(0, "0");
		String result = tokens.toString();
		String expecting = "0abc";
		assertEquals(result, expecting);
	}

	public void testInsertAfterLastIndex() throws Exception {
		Grammar g = new Grammar(
			"lexer grammar t;\n"+
			"A : 'a';\n" +
			"B : 'b';\n" +
			"C : 'c';\n");
		CharStream input = new ANTLRStringStream("abc");
		Interpreter lexEngine = new Interpreter(g, input);
		TokenRewriteStream tokens = new TokenRewriteStream(lexEngine);
		tokens.LT(1); // fill buffer
		tokens.insertAfter(2, "x");
		String result = tokens.toString();
		String expecting = "abcx";
		assertEquals(result, expecting);
	}

	public void test2InsertBeforeAfterMiddleIndex() throws Exception {
		Grammar g = new Grammar(
			"lexer grammar t;\n"+
			"A : 'a';\n" +
			"B : 'b';\n" +
			"C : 'c';\n");
		CharStream input = new ANTLRStringStream("abc");
		Interpreter lexEngine = new Interpreter(g, input);
		TokenRewriteStream tokens = new TokenRewriteStream(lexEngine);
		tokens.LT(1); // fill buffer
		tokens.insertBefore(1, "x");
		tokens.insertAfter(1, "x");
		String result = tokens.toString();
		String expecting = "axbxc";
		assertEquals(result, expecting);
	}

	public void testReplaceIndex0() throws Exception {
		Grammar g = new Grammar(
			"lexer grammar t;\n"+
			"A : 'a';\n" +
			"B : 'b';\n" +
			"C : 'c';\n");
		CharStream input = new ANTLRStringStream("abc");
		Interpreter lexEngine = new Interpreter(g, input);
		TokenRewriteStream tokens = new TokenRewriteStream(lexEngine);
		tokens.LT(1); // fill buffer
		tokens.replace(0, "x");
		String result = tokens.toString();
		String expecting = "xbc";
		assertEquals(result, expecting);
	}

	public void testReplaceLastIndex() throws Exception {
		Grammar g = new Grammar(
			"lexer grammar t;\n"+
			"A : 'a';\n" +
			"B : 'b';\n" +
			"C : 'c';\n");
		CharStream input = new ANTLRStringStream("abc");
		Interpreter lexEngine = new Interpreter(g, input);
		TokenRewriteStream tokens = new TokenRewriteStream(lexEngine);
		tokens.LT(1); // fill buffer
		tokens.replace(2, "x");
		String result = tokens.toString();
		String expecting = "abx";
		assertEquals(result, expecting);
	}

	public void testReplaceMiddleIndex() throws Exception {
		Grammar g = new Grammar(
			"lexer grammar t;\n"+
			"A : 'a';\n" +
			"B : 'b';\n" +
			"C : 'c';\n");
		CharStream input = new ANTLRStringStream("abc");
		Interpreter lexEngine = new Interpreter(g, input);
		TokenRewriteStream tokens = new TokenRewriteStream(lexEngine);
		tokens.LT(1); // fill buffer
		tokens.replace(1, "x");
		String result = tokens.toString();
		String expecting = "axc";
		assertEquals(result, expecting);
	}

	public void test2ReplaceMiddleIndex() throws Exception {
		Grammar g = new Grammar(
			"lexer grammar t;\n"+
			"A : 'a';\n" +
			"B : 'b';\n" +
			"C : 'c';\n");
		CharStream input = new ANTLRStringStream("abc");
		Interpreter lexEngine = new Interpreter(g, input);
		TokenRewriteStream tokens = new TokenRewriteStream(lexEngine);
		tokens.LT(1); // fill buffer
		tokens.replace(1, "x");
		tokens.replace(1, "y"); // this one should overwrite the previous replace
		String result = tokens.toString();
		String expecting = "ayc";
		assertEquals(result, expecting);
	}

	public void testReplaceThenDeleteMiddleIndex() throws Exception {
		Grammar g = new Grammar(
			"lexer grammar t;\n"+
			"A : 'a';\n" +
			"B : 'b';\n" +
			"C : 'c';\n");
		CharStream input = new ANTLRStringStream("abc");
		Interpreter lexEngine = new Interpreter(g, input);
		TokenRewriteStream tokens = new TokenRewriteStream(lexEngine);
		tokens.LT(1); // fill buffer
		tokens.replace(1, "x");
		tokens.delete(1); // this should overwrite the previous replace
		String result = tokens.toString();
		String expecting = "ac";
		assertEquals(result, expecting);
	}

	public void testReplaceThenInsertSameIndex() throws Exception {
		Grammar g = new Grammar(
			"lexer grammar t;\n"+
			"A : 'a';\n" +
			"B : 'b';\n" +
			"C : 'c';\n");
		CharStream input = new ANTLRStringStream("abc");
		Interpreter lexEngine = new Interpreter(g, input);
		TokenRewriteStream tokens = new TokenRewriteStream(lexEngine);
		tokens.LT(1); // fill buffer
		tokens.replace(0, "x");
		tokens.insertBefore(0, "0"); // inserts are performed before replace operations
		String result = tokens.toString();
		String expecting = "0xbc";
		assertEquals(result, expecting);
	}

	/**
	 * WARNING: the test is inconsistent with the documentation
	 * @throws Exception
	 */
	public void testReplaceThen2InsertSameIndex() throws Exception {
		Grammar g = new Grammar(
			"lexer grammar t;\n"+
			"A : 'a';\n" +
			"B : 'b';\n" +
			"C : 'c';\n");
		CharStream input = new ANTLRStringStream("abc");
		Interpreter lexEngine = new Interpreter(g, input);
		TokenRewriteStream tokens = new TokenRewriteStream(lexEngine);
		tokens.LT(1); // fill buffer
		tokens.replace(0, "x"); // replaces are done after the inserts for the same index
		tokens.insertBefore(0, "y"); // inserts are performed before replace operations
		tokens.insertBefore(0, "z"); // inserts are ordered by the order they're added: first come, first printed
		String result = tokens.toString();
		String expecting = "yzxbc"; // JS: Corrected for proper insertion
		assertEquals(result, expecting);
	}

	public void testInsertThenReplaceSameIndex() throws Exception {
		Grammar g = new Grammar(
			"lexer grammar t;\n"+
			"A : 'a';\n" +
			"B : 'b';\n" +
			"C : 'c';\n");
		CharStream input = new ANTLRStringStream("abc");
		Interpreter lexEngine = new Interpreter(g, input);
		TokenRewriteStream tokens = new TokenRewriteStream(lexEngine);
		tokens.LT(1); // fill buffer
		tokens.insertBefore(0, "0"); // the insert is executed first
		tokens.replace(0, "x");		// then followed by a replace
		String result = tokens.toString();
		String expecting = "0xbc";
		assertEquals(result, expecting);
	}

	/**
	 * WARNING: the test is inconsistent with the documentation
	 * @throws Exception
	 */
	public void test2InsertMiddleIndex() throws Exception {
		Grammar g = new Grammar(
			"lexer grammar t;\n"+
			"A : 'a';\n" +
			"B : 'b';\n" +
			"C : 'c';\n");
		CharStream input = new ANTLRStringStream("abc");
		Interpreter lexEngine = new Interpreter(g, input);
		TokenRewriteStream tokens = new TokenRewriteStream(lexEngine);
		tokens.LT(1); // fill buffer
		tokens.insertBefore(1, "x");
		tokens.insertBefore(1, "y");
		String result = tokens.toString();
		String expecting = "axybc";	//JS: corrected for proper insertion
		assertEquals(result, expecting);
	}

	/**
	 * WARNING: the test is inconsistent with the documentation
	 * @throws Exception
	 */
	public void test2InsertThenReplaceIndex0() throws Exception {
		Grammar g = new Grammar(
			"lexer grammar t;\n"+
			"A : 'a';\n" +
			"B : 'b';\n" +
			"C : 'c';\n");
		CharStream input = new ANTLRStringStream("abc");
		Interpreter lexEngine = new Interpreter(g, input);
		TokenRewriteStream tokens = new TokenRewriteStream(lexEngine);
		tokens.LT(1); // fill buffer
		tokens.insertBefore(0, "x");
		tokens.insertBefore(0, "y");
		tokens.replace(0, "z");
		String result = tokens.toString();
		String expecting = "xyzbc";	//JS: corrected for proper insertion
		assertEquals(result, expecting);
	}

	public void testReplaceThenInsertBeforeLastIndex() throws Exception {
		Grammar g = new Grammar(
			"lexer grammar t;\n"+
			"A : 'a';\n" +
			"B : 'b';\n" +
			"C : 'c';\n");
		CharStream input = new ANTLRStringStream("abc");
		Interpreter lexEngine = new Interpreter(g, input);
		TokenRewriteStream tokens = new TokenRewriteStream(lexEngine);
		tokens.LT(1); // fill buffer
		tokens.replace(2, "x");		// the replace will be done after any inserts at that index
		tokens.insertBefore(2, "y");	// the inserts are executed before any replaces at that index
		String result = tokens.toString();
		String expecting = "abyx";
		assertEquals(result, expecting);
	}

	public void testInsertThenReplaceLastIndex() throws Exception {
		Grammar g = new Grammar(
			"lexer grammar t;\n"+
			"A : 'a';\n" +
			"B : 'b';\n" +
			"C : 'c';\n");
		CharStream input = new ANTLRStringStream("abc");
		Interpreter lexEngine = new Interpreter(g, input);
		TokenRewriteStream tokens = new TokenRewriteStream(lexEngine);
		tokens.LT(1); // fill buffer
		tokens.insertBefore(2, "y"); // the inserts are executed before any replaces at that index
		tokens.replace(2, "x");	// the replace will be done after any inserts at that index
		String result = tokens.toString();
		String expecting = "abyx";
		assertEquals(result, expecting);
	}

	public void testReplaceThenInsertAfterLastIndex() throws Exception {
		Grammar g = new Grammar(
			"lexer grammar t;\n"+
			"A : 'a';\n" +
			"B : 'b';\n" +
			"C : 'c';\n");
		CharStream input = new ANTLRStringStream("abc");
		Interpreter lexEngine = new Interpreter(g, input);
		TokenRewriteStream tokens = new TokenRewriteStream(lexEngine);
		tokens.LT(1); // fill buffer
		tokens.replace(2, "x");
		tokens.insertAfter(2, "y"); // equivalent to insertBefore(3, "y")
		String result = tokens.toString();
		String expecting = "abxy";
		assertEquals(result, expecting);
	}


	public void testReplaceRangePrintAfter() throws Exception {
		Grammar g = new Grammar(
			"lexer grammar t;\n"+
			"A : 'a';\n" +
			"B : 'b';\n" +
			"C : 'c';\n");
		CharStream input = new ANTLRStringStream("abcccba");
		Interpreter lexEngine = new Interpreter(g, input);
		TokenRewriteStream tokens = new TokenRewriteStream(lexEngine);
		tokens.LT(1); // fill buffer
		tokens.replace(2, 4, "x"); 	// replaces tokens from index 2 to 4 inclusive
		String result = tokens.toString(3,5);
		String expecting = "xb"; // JS: corrected for start/end
		assertEquals(expecting, result);
	}
	public void testReplaceRangeThenInsertInMiddle() throws Exception {
		Grammar g = new Grammar(
			"lexer grammar t;\n"+
			"A : 'a';\n" +
			"B : 'b';\n" +
			"C : 'c';\n");
		CharStream input = new ANTLRStringStream("abcccba");
		Interpreter lexEngine = new Interpreter(g, input);
		TokenRewriteStream tokens = new TokenRewriteStream(lexEngine);
		tokens.LT(1); // fill buffer
		tokens.replace(2, 4, "x");
		tokens.insertBefore(3, "y"); // no effect; can't insert in middle of replaced region
		String result = tokens.toString();
		String expecting = "abxba";
		assertEquals(result, expecting);
	}
	
	public void testReplaceRangeThenInsertAtLeftEdge() throws Exception {
		Grammar g = new Grammar(
			"lexer grammar t;\n"+
			"A : 'a';\n" +
			"B : 'b';\n" +
			"C : 'c';\n");
		CharStream input = new ANTLRStringStream("abcccba");
		Interpreter lexEngine = new Interpreter(g, input);
		TokenRewriteStream tokens = new TokenRewriteStream(lexEngine);
		tokens.LT(1); // fill buffer
		tokens.replace(2, 4, "x");
		tokens.insertBefore(2, "y");
		String result = tokens.toString();
		String expecting = "abyxba";
		assertEquals(result, expecting);
	}

	public void testReplaceRangeThenInsertAtRightEdge() throws Exception {
		Grammar g = new Grammar(
			"lexer grammar t;\n"+
			"A : 'a';\n" +
			"B : 'b';\n" +
			"C : 'c';\n");
		CharStream input = new ANTLRStringStream("abcccba");
		Interpreter lexEngine = new Interpreter(g, input);
		TokenRewriteStream tokens = new TokenRewriteStream(lexEngine);
		tokens.LT(1); // fill buffer
		tokens.replace(2, 4, "x");
		tokens.insertBefore(4, "y"); // no effect; within range of a replace
		String result = tokens.toString();
		String expecting = "abxba";
		assertEquals(result, expecting);
	}

	public void testReplaceRangeThenInsertAfterRightEdge() throws Exception {
		Grammar g = new Grammar(
			"lexer grammar t;\n"+
			"A : 'a';\n" +
			"B : 'b';\n" +
			"C : 'c';\n");
		CharStream input = new ANTLRStringStream("abcccba");
		Interpreter lexEngine = new Interpreter(g, input);
		TokenRewriteStream tokens = new TokenRewriteStream(lexEngine);
		tokens.LT(1); // fill buffer
		tokens.replace(2, 4, "x");
		tokens.insertAfter(4, "y");
		String result = tokens.toString();
		String expecting = "abxyba";
		assertEquals(result, expecting);
	}

	public void testReplaceAll() throws Exception {
		Grammar g = new Grammar(
			"lexer grammar t;\n"+
			"A : 'a';\n" +
			"B : 'b';\n" +
			"C : 'c';\n");
		CharStream input = new ANTLRStringStream("abcccba");
		Interpreter lexEngine = new Interpreter(g, input);
		TokenRewriteStream tokens = new TokenRewriteStream(lexEngine);
		tokens.LT(1); // fill buffer
		tokens.replace(0, 6, "x");
		String result = tokens.toString();
		String expecting = "x";
		assertEquals(result, expecting);
	}

	public void testReplaceSubsetThenFetch() throws Exception {
		Grammar g = new Grammar(
			"lexer grammar t;\n"+
			"A : 'a';\n" +
			"B : 'b';\n" +
			"C : 'c';\n");
		CharStream input = new ANTLRStringStream("abcccba");
		Interpreter lexEngine = new Interpreter(g, input);
		TokenRewriteStream tokens = new TokenRewriteStream(lexEngine);
		tokens.LT(1); // fill buffer
		tokens.replace(2, 4, "xyz");
		String result = tokens.toString(0,6);
		String expecting = "abxyzba";
		assertEquals(result, expecting);
	}

	public void testReplaceThenReplaceSuperset() throws Exception {
		Grammar g = new Grammar(
			"lexer grammar t;\n"+
			"A : 'a';\n" +
			"B : 'b';\n" +
			"C : 'c';\n");
		CharStream input = new ANTLRStringStream("abcccba");
		Interpreter lexEngine = new Interpreter(g, input);
		TokenRewriteStream tokens = new TokenRewriteStream(lexEngine);
		tokens.LT(1); // fill buffer
		tokens.replace(2, 4, "xyz");
		tokens.replace(2, 5, "foo"); // kills previous replace
		String result = tokens.toString();
		String expecting = "abfooa";
		assertEquals(result, expecting);
	}

	public void testReplaceThenReplaceLowerIndexedSuperset() throws Exception {
		Grammar g = new Grammar(
			"lexer grammar t;\n"+
			"A : 'a';\n" +
			"B : 'b';\n" +
			"C : 'c';\n");
		CharStream input = new ANTLRStringStream("abcccba");
		Interpreter lexEngine = new Interpreter(g, input);
		TokenRewriteStream tokens = new TokenRewriteStream(lexEngine);
		tokens.LT(1); // fill buffer
		tokens.replace(2, 4, "xyz");
		tokens.replace(1, 3, "foo"); // executes first since 1<2; then ignores the replace@2 as it skips over 1..3
		String result = tokens.toString();
		String expecting = "afoocba";
		assertEquals(result, expecting);
	}
    
	public void testReplaceSingleMiddleThenOverlappingSuperset() throws Exception {
		Grammar g = new Grammar(
			"lexer grammar t;\n"+
			"A : 'a';\n" +
			"B : 'b';\n" +
			"C : 'c';\n");
		CharStream input = new ANTLRStringStream("abcba");
		Interpreter lexEngine = new Interpreter(g, input);
		TokenRewriteStream tokens = new TokenRewriteStream(lexEngine);
		tokens.LT(1); // fill buffer
		tokens.replace(2, 2, "xyz");
		tokens.replace(0, 3, "foo");
		String result = tokens.toString();
		String expecting = "fooa";
		assertEquals(result, expecting);
	}
	
	// JS: added for toString with start and end
	public void testTokenRewriteStreamReplaceAndInsertBeyondStartEnd() throws Exception {
		Grammar g = new Grammar(
			"lexer grammar t;\n"+
			"A : 'a';\n" +
			"B : 'b';\n" +
			"C : 'c';\n");
		CharStream input = new ANTLRStringStream("abccba");
		Interpreter lexEngine = new Interpreter(g, input);
		TokenRewriteStream tokens = new TokenRewriteStream(lexEngine);
		tokens.LT(1); // fill buffer
		tokens.insertBefore(0, "x"); // Insert before start: should be inserted regardless
		tokens.replace(0, 1, "y"); // No effect: the range falls entirely before the start
		tokens.replace(1, 3, "z"); // The range overlaps the start
		tokens.insertBefore(2, "f"); // No effect: falls in the previous replace range
		tokens.insertBefore(4, "g"); // valid insertion
		tokens.insertBefore(5, "h"); // Insert beyond end: should be inserted regardless
		tokens.replace(5, 7, "i"); // No effect: Replace beyond end
		String result = tokens.toString(2, 4);
		String expecting = "xzgbh";
		assertEquals(result, expecting);
	}

	// JS: added for toString with start and end
	public void testTokenRewriteStreamOverlappingPriorReplacesBeforeStart() throws Exception {
		Grammar g = new Grammar(
			"lexer grammar t;\n"+
			"A : 'a';\n" +
			"B : 'b';\n" +
			"C : 'c';\n");
		CharStream input = new ANTLRStringStream("abccba");
		Interpreter lexEngine = new Interpreter(g, input);
		TokenRewriteStream tokens = new TokenRewriteStream(lexEngine);
		tokens.LT(1); // fill buffer
		tokens.insertBefore(0, "x"); // Insert before start: should be inserted regardless
		tokens.replace(0, 2, "y"); // The range overlaps the start
		tokens.replace(1, 2, "z"); // No effect: falls in the previous replace range
		tokens.insertBefore(2, "f"); // No effect: falls in the previous replace range
		tokens.insertBefore(4, "g"); // valid insertion
		tokens.insertBefore(5, "h"); // Insert beyond end: should be inserted regardless
		tokens.replace(5, 7, "i"); // No effect: Replace beyond end
		String result = tokens.toString(2, 4);
		String expecting = "xycgbh";
		assertEquals(result, expecting);
	}

	// JS: added for toString with start and end
	public void testTokenRewriteStreamInsertBeforeStart() throws Exception {
		Grammar g = new Grammar(
			"lexer grammar t;\n"+
			"A : 'a';\n" +
			"B : 'b';\n" +
			"C : 'c';\n");
		CharStream input = new ANTLRStringStream("abccba");
		Interpreter lexEngine = new Interpreter(g, input);
		TokenRewriteStream tokens = new TokenRewriteStream(lexEngine);
		tokens.LT(1); // fill buffer
		tokens.insertBefore(0, "x"); // Insert before start: should be inserted regardless
		tokens.insertBefore(2, "y"); // valid insertion at the start index
		tokens.insertBefore(4, "g"); // valid insertion
		tokens.insertBefore(5, "h"); // Insert beyond end: should be inserted regardless
		tokens.replace(5, 7, "i"); // No effect: Replace beyond end
		String result = tokens.toString(2, 4);
		String expecting = "xyccgbh";
		assertEquals(result, expecting);
	}
}
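
The FIFO insertion logic described in point 1 above can be sketched as follows. This is a minimal, hypothetical rewriter, not the actual TokenRewriteStream; the class and method names (SimpleRewriter, render) are invented for illustration. Insertions at an index are queued in a plain List in arrival order, and a replace simply maps a range start to its replacement text:

```java
import java.util.*;

// Minimal sketch of the proposed simplification: insertions are recorded
// per index in a List and emitted first-come-first-printed; a replace
// marks a token range as superseded by its replacement text.
class SimpleRewriter {
	private final String[] tokens;
	// insertions per index, executed in the order they were received (FIFO)
	private final Map<Integer, List<String>> inserts = new HashMap<>();
	// replacement text and inclusive end index, keyed by range start
	private final Map<Integer, String> replaceText = new HashMap<>();
	private final Map<Integer, Integer> replaceEnd = new HashMap<>();

	SimpleRewriter(String[] tokens) { this.tokens = tokens; }

	void insertBefore(int i, String text) {
		inserts.computeIfAbsent(i, k -> new ArrayList<>()).add(text);
	}

	void replace(int from, int to, String text) {
		replaceText.put(from, text);
		replaceEnd.put(from, to);
	}

	String render() {
		StringBuilder buf = new StringBuilder();
		int i = 0;
		while (i <= tokens.length) {
			// FIFO: inserts at this index appear in the order recorded
			for (String s : inserts.getOrDefault(i, Collections.emptyList())) {
				buf.append(s);
			}
			if (i == tokens.length) break; // handled trailing inserts; done
			if (replaceText.containsKey(i)) {
				buf.append(replaceText.get(i));
				i = replaceEnd.get(i) + 1; // skip the replaced range;
				// inserts inside it are never reached, i.e. have no effect
			} else {
				buf.append(tokens[i]);
				i++;
			}
		}
		return buf.toString();
	}

	public static void main(String[] args) {
		// mirrors testReplaceThen2InsertSameIndex: inserts before the
		// replace at the same index, in arrival order
		SimpleRewriter r = new SimpleRewriter(new String[]{"a", "b", "c"});
		r.replace(0, 0, "x");
		r.insertBefore(0, "y");
		r.insertBefore(0, "z");
		System.out.println(r.render()); // prints "yzxbc"
	}
}
```

Note that because the per-index Lists preserve arrival order, the FIFO behavior falls out for free, and inserts landing inside a replaced range are naturally dropped because the render loop skips over that range.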
_______________________________________________
antlr-dev mailing list
[email protected]
http://www.antlr.org:8080/mailman/listinfo/antlr-dev
