On Thursday, September 11, 2014 4:48:24 PM UTC+12, ZyX wrote:

> Is this reproducible if you run vim multiple times in a row for each file 
> separately? How many times could this file be duplicated in your free RAM?

Completely reproducible; here are three runs in a row:

$ time vim -u NONE g2.txt -c q
real    0m54.847s
user    0m54.478s
sys     0m0.356s
$ time vim -u NONE g2.txt -c q
real    0m54.802s
user    0m54.375s
sys     0m0.392s
$ time vim -u NONE g2.txt -c q
real    0m54.903s
user    0m54.510s
sys     0m0.376s

The file is only 104 MiB and my computer has about 2 GiB of RAM unused, so 
the file is always coming from cache.

I've tried lots of permutations.  Interestingly, with the file reverse-sorted 
the slowdown disappears:
$ sort -r google_5000000.txt -o gr.txt
$ time vim -u NONE gr.txt -c q

real    0m2.456s
user    0m2.171s
sys     0m0.276s
 
Also, I tried chopping up the file to see if I could identify some text that 
causes the slowdown.  After removing a large chunk of the beginning and end, 
about 1,000,000 lines, the time dropped to 27 s.  But when I chopped the 
remaining 4,000,000 lines into two equal pieces, each piece loaded in about 
1 s.  If I start vim on the first half and then :r the second, it's finished 
in a second or two.  If I concatenate the two halves in reverse order, the 
result loads in 2 s; concatenated the other way round, it takes 25 s.
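
To be concrete, that last experiment was along these lines (the file names 
here are just placeholders, and the line counts are approximate):

$ head -n 2000000 middle.txt > half1.txt   # first half of the 4,000,000 lines
$ tail -n 2000000 middle.txt > half2.txt   # second half
$ time vim -u NONE half1.txt -c q          # about 1 s; half2.txt is similar
$ cat half2.txt half1.txt > reversed.txt
$ time vim -u NONE reversed.txt -c q       # about 2 s
$ cat half1.txt half2.txt > forward.txt
$ time vim -u NONE forward.txt -c q        # about 25 s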

Regards, John Little
