Nikolai Weibull wrote:
On 1/30/07, Martin Krischik <[EMAIL PROTECTED]> wrote:
On Tuesday, 30 January 2007, [EMAIL PROTECTED] wrote:

> - being able to open very large files quickly and
> without using too much memory. This could possibly
> be achieved by not loading the entire file immediately.
> File could be loaded lazily when required.

The last (and only) editor to have that feature was The Atari Editor,
which ran on 8-bit Atari computers. It was a full-screen, modal editor
like Vim as well :-).

How do you mean?  A lot of editors work like this.  The Atari Editor
is hardly the first, or last, editor to work this way.  Sam works this
way, Wily works this way, my editor ned works this way, James Brown's
example editor "Neatpad" [1] works this way.

Whether this feature can be implemented usually comes down to the data
structure used to manage buffer contents. It can also come down to
support for various encodings, which can require preprocessing of the
files.
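To make that concrete, here is a minimal C sketch of a buffer that
reads fixed-size chunks from disk only on first access. This is
emphatically not Vim's code; every name in it (lazybuf, CHUNK_SIZE and
so on) is made up for illustration, and error handling is mostly
omitted:

/* Lazily loaded buffer: the file is divided into fixed-size chunks
 * that are read from disk only when first touched. */
#include <stdio.h>
#include <stdlib.h>

#define CHUNK_SIZE 4096

typedef struct {
    FILE   *fp;         /* underlying file, kept open             */
    long    size;       /* total file size in bytes               */
    long    nchunks;    /* number of CHUNK_SIZE pieces            */
    char  **chunks;     /* chunks[i] == NULL until first accessed */
} lazybuf;

lazybuf *lazybuf_open(const char *path)
{
    lazybuf *b = calloc(1, sizeof *b);
    if (b == NULL)
        return NULL;
    b->fp = fopen(path, "rb");
    if (b->fp == NULL) {
        free(b);
        return NULL;
    }
    fseek(b->fp, 0L, SEEK_END);
    b->size = ftell(b->fp);
    b->nchunks = (b->size + CHUNK_SIZE - 1) / CHUNK_SIZE;
    b->chunks = calloc(b->nchunks, sizeof *b->chunks);
    return b;
}

/* Return the byte at offset `off`, faulting in its chunk if needed. */
int lazybuf_byte(lazybuf *b, long off)
{
    long i;
    if (off < 0 || off >= b->size)
        return EOF;
    i = off / CHUNK_SIZE;
    if (b->chunks[i] == NULL) {      /* first touch: read the chunk */
        b->chunks[i] = malloc(CHUNK_SIZE);
        fseek(b->fp, i * CHUNK_SIZE, SEEK_SET);
        fread(b->chunks[i], 1, CHUNK_SIZE, b->fp);
    }
    return (unsigned char)b->chunks[i][off % CHUNK_SIZE];
}

A real editor would of course also need to handle insertions and
deletions (which is where structures like piece tables come in) and
encoding conversion, which is exactly the preprocessing mentioned
above; the point here is only that chunk-at-a-time access is what
makes lazy loading possible at all.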

I'm not sure to what degree Vim requires the whole file to be read
before editing can commence.  I am, however, sure that it could be
made to load files without too much preprocessing; but I'm also pretty
sure that this would require a lot of work, and I don't think it's
worth the kind of time Bram would have to invest in such a feature.
Vim is, when it all comes down to it, designed to be a programmer's
editor, which means it will mostly work with files smaller, often much
smaller, than a megabyte, for which preprocessing works fine.

 nikolai

[1] http://www.catch22.net/


IIUC, Vim loads the whole file in memory before starting to edit. It might be possible (but not necessarily worth the trouble on modern computers, with their large memory and huge virtual-memory addressing ranges) to keep only parts of the file in memory; but:

- Depending on the syn-sync setting, it may be necessary to scan part or all of the file in front of the edit point, even repeatedly, in order to synchronize the syntax highlighting properly.

- If many scattered changes are made without saving the file, they may have to be written to the (Vim) swapfile, then later read back from swap, causing a performance degradation over time. (I realize that for files larger than the available RAM, "reading the whole file in memory" always involves some virtual memory, i.e. OS swap, which is not necessarily better managed than Vim's swap.)

- A command such as :$ or G (go to last line) can be implemented by seeking to EOF and scanning backwards; but for :8752 or 8752G (go to line 8752) I see no other possibility than counting the first 8751 ends-of-lines (if there are that many, of course), which means scanning the whole file up to that point. Of course, any search also requires scanning from the current location to the next match in the search direction (and the whole file if there is no match and wraparound is set).

Loading the whole file in memory at the start allows building an index (or something like it) which later gives lightning-fast access to any line given by number; the sketch below illustrates the difference. I see this as an advantage when line numbers are known, e.g. when evaluating a patch by looking at the parts of the source it would change if applied, or when using a tagfile with line numbers (as opposed to a tagfile with search patterns). (And yes, the index could be built incrementally as later parts of the file are accessed, but then a forward seek might seem to "hang" just because it goes to a part of the file not yet read from disk.)
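Here is a small C sketch contrasting the two approaches. It is
illustrative only (the names are mine, not Vim's): the first function
finds line N by counting newlines from the start on every call, the
second builds a line-start index in one pass so that any later lookup
is a single fseek():

#include <stdio.h>
#include <stdlib.h>

/* Scan from the start, counting newlines, to find the offset where
 * line `lnum` (1-based) begins.  O(file size) on every call. */
long line_offset_by_scanning(FILE *fp, long lnum)
{
    long off = 0, line = 1;
    int  c;
    rewind(fp);
    while (line < lnum && (c = getc(fp)) != EOF) {
        off++;
        if (c == '\n')
            line++;
    }
    return (line == lnum) ? off : -1;   /* -1: fewer lines than that */
}

/* Build an index of line-start offsets in one pass; afterwards any
 * line is reachable with one fseek().  This is the payoff of reading
 * the whole file up front.  (A trailing newline records a start for
 * one line past the end; a real implementation would account for
 * that, and for allocation failures.) */
long *build_line_index(FILE *fp, long *nlines)
{
    long  cap = 1024, n = 0, off = 0;
    long *idx = malloc(cap * sizeof *idx);
    int   c;
    rewind(fp);
    idx[n++] = 0;                       /* line 1 starts at offset 0 */
    while ((c = getc(fp)) != EOF) {
        off++;
        if (c == '\n') {
            if (n == cap)
                idx = realloc(idx, (cap *= 2) * sizeof *idx);
            idx[n++] = off;             /* next line starts here */
        }
    }
    *nlines = n;
    return idx;
}

The incremental variant mentioned above would simply extend the index
on demand instead of in one pass, with the apparent "hang" on a long
forward seek as the visible cost.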

Vim is not only a programmer's editor (in the sense of an editor which can be used to edit source programs: even Notepad can do that). It can do any kind of editing, and it is particularly useful for complex editing tasks. If it is a programmer's editor, it is most importantly an editor which can be programmed (in five programming languages [six in version 7] including vimscript, which is a full structured-programming language for text, string, and integer processing). Unlike many other editors, it can handle any kind of text, including Unicode text, even if the underlying OS has no input method usable for arbitrary Unicode codepoints.

The biggest file I'm currently using it for is 33.8 million bytes long. That file does take some time to load, and searching when there is no match, or no nearby match, does take a measurable time; but IMHO it remains "bearable".


Best regards,
Tony.
