On Dec 19 2011, 6:07 am, Дмитрий Франк <[email protected]> wrote:
> line2byte() does not care about multi-byte characters.
>
> For example, if my buffer has file-encoding utf-8, and there are some
> cyrillic characters in the buffer (each cyrillic character takes 2 bytes),
> then line2byte('.') returns a wrong result (it doesn't care about multi-byte
> characters)
>
It later came out that the OP was using a "weird" encoding configuration, with 'enc' set to cp1251 and 'fenc' set to utf-8. But in case anyone is dismissing the issue as a misconfiguration or misuse of a documented function: I ran into the same problem with eclim (also the driving force behind the OP's report) under a fairly normal configuration of enc=utf-8 and fenc=latin1.

The patch to the eclim source that works around this defines a Vim script function which loops over the entire buffer, converting each line with iconv() to count its bytes, as suggested earlier in this thread. I haven't tried it out yet; I assume it works, but I also expect some sort of performance hit.

I don't see anything about line2byte() in the todo list, so I'm expressing my support again for either a new function or an optional argument to line2byte(), either telling it to use 'fileencoding' or letting the caller pass in an encoding to use.
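For the record, here is a rough sketch of the kind of workaround I mean. This is my own illustration, not the actual eclim patch; the function name FileLine2Byte is made up, and it assumes fileformat=unix (one-byte line terminators) and no BOM:

function! FileLine2Byte(lnum) abort
  " Use 'fenc' if set; an empty 'fenc' means the file is written in 'enc' as-is.
  let l:fenc = empty(&fileencoding) ? &encoding : &fileencoding
  let l:offset = 1
  for l:line in getline(1, a:lnum - 1)
    " Re-encode the line from Vim's internal encoding to the file's
    " encoding and count its bytes with strlen() (which counts bytes,
    " not characters).
    let l:converted = iconv(l:line, &encoding, l:fenc)
    " iconv() returns an empty string when the conversion fails, so
    " fall back to the unconverted line in that case.
    if empty(l:converted) && !empty(l:line)
      let l:converted = l:line
    endif
    " +1 for the line terminator; 'fileformat=dos' would need +2.
    let l:offset += strlen(l:converted) + 1
  endfor
  return l:offset
endfunction

Usage would be ":echo FileLine2Byte(line('.'))" in place of ":echo line2byte(line('.'))". It gives 'fenc'-relative offsets, but it re-encodes every preceding line on every call, which is exactly the performance concern mentioned above.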
