First, thanks very much for creating VIM!  I have been using it on Linux 
systems for years, and now use it via Cygwin at home as well.  I vastly prefer 
VIM to EMACS, especially at home.  I learned vi on a VAX/VMS system long ago (a 
friend of mine had ported it), when our computer science department was loading 
so many people on the VAXen that EDT was rendered unusably slow.  I still like 
VIM largely because I can do so much with so little effort in so little time.

That brings me to my question.  I have noticed that when editing large files 
(millions of lines), deleting a large number of lines (say, hundreds of 
thousands to millions) takes an unbelievably long time in VIM--at least on my 
systems.  This struck me as so odd, I looked you up (for the first time in all 
my years of use) so I could ask why!

Seriously, going to line 1 million of a 2-million-line file and typing the 
command ":.,$d" takes _minutes_ on my system (Red Hat Linux on a 2 GHz Athlon 
(i686) with 512 KB cache and 3 GB of memory), far longer than searching the 
entire 2-million-line file for a single word (":g/MyQueryName/p").  Deleting 
inside VIM fits far better into my usual workflow than using "head -n 1000000", 
because of course I'm using a regular expression search to determine that I 
want to truncate the file at line 1000000 in the first place.
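In case it helps to reproduce the behavior, the sequence I use is roughly the 
following (the file name and pattern are just stand-ins for my real data):

    vim big.log            (open the 2-million-line file)
    /MyQueryName           (pattern search locates the truncation point)
    :.,$d                  (delete current line through end: takes minutes)
    :g/MyQueryName/p       (yet printing every match in the file is fast)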

I looked in the archive and couldn't see that this issue had been raised 
before.  Is there any chance it could be added to the list of performance 
enhancement requests?

Thanks,

Max Robinson, PhD
