Re: A performance question

2007-05-25 Thread Robert M Robinson


No, I implied vim has more uses than any one person could possibly imagine.

I also meant that any question like "Why would anyone want ...?" really just
means "I can't imagine wanting ...", so if that isn't what you meant to
say you might want to rephrase your question.  I would ask why anyone
would want to say they had a limited imagination, but if I did I'd be
doing it myself!

If anyone took offense, my apologies; I meant it as a wry observation on
how people in general use language, not as anything personal.

Max

On Fri, 25 May 2007, Yakov Lerner wrote:

|I think Robert implied that it takes a lot of imagination
|to use vim on multi-gigabyte files. I might be wrong.
|
|I don't exactly understand the connection between the size of one's
|imagination and the size of the file on which one applies vim.
|But the connection is perfectly possible. For example, I never tried to
|run vim on anything bigger than 0.5GB and I do indeed have
|average or lesser than average imagination.
|
|Hell, starting tomorrow I am going to vim files of size
|2+0.2*day_count GB, every day.
|It only remains to buy an imagine-o-meter and apply it daily.
|
|Yakov "average-sized imagination" Lerner
|





Re: A performance question

2007-05-25 Thread Robert M Robinson

My statements were meant to say I find vim very useful.  grep and sed are
great; I use grep all the time, and sed occasionally (because I'm usually
looking at large files rather than editing them).  vim is just more
convenient for looking at the lines above and below a regular expression
match, especially when I don't know how many lines might be relevant.
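grep can show a fixed amount of context around a match with its -A/-B options (supported by both GNU and BSD grep); vim's advantage is interactive scrolling when the relevant span is unknown. A minimal illustration, with a made-up file and pattern:

```shell
# Show one line of context before (-B) and after (-A) each match.
# demo.txt and MATCH are invented for illustration only.
printf 'a\nb\nMATCH\nc\nd\n' > demo.txt
grep -B 1 -A 1 'MATCH' demo.txt
# prints:
# b
# MATCH
# c
```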

I believe vim treats the entire text file as if it were in a single,
contiguous block of (virtual) memory; I'm less clear on how vim manages
how much of that memory stays resident in physical memory.  If a 2 GB PC
has only 2 GB of swap space in addition to its 2 GB of physical memory, my
prediction is that you will have trouble editing files of 4 GB or larger.
I'm catching up on email, so someone may already have covered this in
more detail or with more accuracy.
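That prediction can be sanity-checked before opening the file. The sketch below is my own assumption, not something from the thread: on Linux it compares the file's size against total RAM plus swap as reported by /proc/meminfo.

```shell
# Rough pre-flight check: will this file fit in RAM + swap?
# Assumes Linux (/proc/meminfo); meminfo values are in kB.
file=${1:-/etc/hostname}        # file you intend to open in vim (illustrative default)
file_kb=$(( ( $(wc -c < "$file") + 1023 ) / 1024 ))
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
swap_kb=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
echo "file: ${file_kb} kB; RAM+swap: $(( mem_kb + swap_kb )) kB"
if [ "$file_kb" -gt $(( mem_kb + swap_kb )) ]; then
    echo "warning: file exceeds RAM+swap; editing will likely thrash or fail"
fi
```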

On Thu, 24 May 2007, Yongwei Wu wrote:
|
|I do not understand your statements: what's your problem of using
|regular expressions in grep and sed?
|
|Other related questions are: Does Vim really load the entire text file
|in memory? Has anybody experience editing files that are much bigger
|than available memory, say, an 8 GB file on a 2 GB PC?
|
|Best regards,
|
|Yongwei
|
|-- 
|Wu Yongwei
|URL: http://wyw.dcweb.cn/
|





Re: A performance question

2007-05-24 Thread Robert M Robinson


On Wed, 23 May 2007, fREW wrote:
|Someone recently was emailing the list about looking at a small
|section of DNA with vim as text and it was a number of gigs.  I think
|he ended up using other unix tools (sed and grep I think), but
|nonetheless, text files can be big too ;-)
|
|-fREW
|

A maxim that comes up here is "A lack of imagination doesn't prove anything."
The fact that Condoleezza Rice couldn't imagine the degree of chaos that would
ensue if we invaded Iraq does not prove that Iraq is not currently in chaos!

I use vim for _structured_ text files, largely because regular expression
search is much more useful than word search when the text is structured.
Whether those files are large or not usually depends on whether I'm editing
programs (small) or viewing/editing their output (often quite large).  Emacs
also provides regular expression search, but I find vim's commands simpler
and easier to type--and therefore faster to use.

I happen to be working with DNA sequences now, but I've used vim for the
same purposes working on compilers, sonar data, seismic data, and databases.
The larger the file, the more likely I am to choose vim to look at it.

Max




A performance question

2007-05-22 Thread Robert M Robinson


First, thanks very much for creating VIM!  I have been using it on Linux 
systems for years, and now use it via cygwin at home as well.  I vastly prefer 
VIM to EMACS, especially at home.  I learned vi on a VAX/VMS system long ago (a 
friend of mine had ported it), when our computer science department was loading 
so many people on the VAXen that EDT was rendered unusably slow.  I still like 
VIM largely because I can do so much with so little effort in so little time.

That brings me to my question.  I have noticed that when editing large files 
(millions of lines), deleting a large number of lines (say, hundreds of 
thousands to millions) takes an unbelievably long time in VIM--at least on my
systems.  This struck me as so odd, I looked you up (for the first time in all 
my years of use) so I could ask why!

Seriously, going to line 1 million of a 2-million-line file and typing the
command ":.,$d" takes _minutes_ on my system (Red Hat Linux on a 2 GHz Athlon
processor (i686), 512 KB cache, 3 GB memory), far longer than searching the
entire 2-million-line file for a single word (":g/MyQueryName/p").  Doing it
this way fits way better into my usual workflow than using "head -n 100",
because of course I'm using a regular expression search to determine that I
want to truncate my file at line 100 in the first place.
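For the batch case, sed can express the same go-to-the-match-then-delete-to-end edit non-interactively; the sample file below is invented for illustration, and the pattern reuses the thread's MyQueryName example.

```shell
# sed equivalent of /MyQueryName/ followed by :.,$d in vim:
# delete from the first matching line through the end of the file.
printf 'keep1\nkeep2\nMyQueryName\ndrop1\n' > sample.txt
sed '/MyQueryName/,$d' sample.txt > truncated.txt
cat truncated.txt
# prints:
# keep1
# keep2
```

Like the vim command, this deletes the matching line itself as well as everything after it.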

I looked in the archive, and couldn't see that this issue had been raised 
before.  Is there any chance it can get added to the list of performance 
enhancement requests?

Thanks,

Max Robinson, PhD