On Wed, Aug 01, 2007 at 07:50:51PM +0200, Axel Liljencrantz wrote:
> > maybe the problem is not one of the right algorithm to figure out what
> > to delete, but rather one of structure, and you just need to split the
> > commandline into arguments before you apply the word-boundary detection.
> This makes excellent sense. Unfortunately, the fish tokenizer splits a
> command into separate tokens on argument boundaries. Using the
> tokenizer would lead to emulating the bash behaviour of only stopping
> on non-escaped whitespace, whereas I like the fish behaviour of also
> stopping at non-alphanumeric characters.

i like it too.
what i was trying to get at is:
use the tokenizer AND the move_word function.

first figure out what token the cursor is in, using the tokenizer,
then take that token and pass it to move_word. that way, move_word can
never jump token boundaries, because it never gets to see more than one
token.
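to make the idea concrete, here is a minimal sketch in python (fish
itself is C, and its real tokenizer handles quoting and escaping, so
the tokenize and move_word stand-ins below are hypothetical
simplifications): the tokenizer only supplies the span of the token
under the cursor, and the fish-style word movement runs inside that
span, so it can never cross into a neighbouring token.

```python
import re


def tokenize(cmdline):
    """Hypothetical stand-in for the fish tokenizer: return (start, end)
    spans of whitespace-separated tokens."""
    return [(m.start(), m.end()) for m in re.finditer(r"\S+", cmdline)]


def move_word_back(text, pos):
    """Fish-style move_word: step left past non-alphanumeric characters,
    then past one run of alphanumeric characters. Operates on `text`
    only, so it cannot see beyond the token it was given."""
    while pos > 0 and not text[pos - 1].isalnum():
        pos -= 1
    while pos > 0 and text[pos - 1].isalnum():
        pos -= 1
    return pos


def prev_word_start(cmdline, cursor):
    """Find the token containing the cursor, then apply move_word within
    that token alone; the jump is clamped to the token's start."""
    for start, end in tokenize(cmdline):
        if start < cursor <= end:
            return start + move_word_back(cmdline[start:end], cursor - start)
    return 0  # cursor in leading whitespace or on an empty line
```

with a line like `cp /usr/local/bin dest` and the cursor at the end of
the path token, repeated calls stop at `bin`, then `local`, and so on,
but never jump back into the `cp` token, because move_word only ever
sees the path token's text.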

greetings, martin.
-- 
Martin Bähr       http://www.iaeste.or.at/~mbaehr/

_______________________________________________
Fish-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/fish-users
