Thanks for the responses.  I should give more context.

I was in the process of tracking down how some data files our
applications use became so fragmented.  We use Windows Server 2003
and have not historically used defragmenting software (although we
have now started to do so).  We had recently copied many gigabytes of
files from one server to another using cp.exe from the cygwin
utilities.  After some research using filemon.exe, I realized cp was
writing the files out in 4K chunks, which caused the data to be
hopelessly fragmented on disk.  While looking into this, I happened
to notice that gvim was doing the same thing with its .swp file when
I opened an 800MB log file.  (Incidentally, I also noticed that it
read and wrote the swap file in 4K chunks when I did a search in the
log file.)
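To make the pattern concrete, here is a minimal sketch of the kind of
buffered copy loop involved.  This is not the actual cygwin or vim
source; copy_file and BUFSIZE are just illustrative names.  With
BUFSIZE at 4096, each fread/fwrite pair shows up in filemon as a
separate 4K request, and the filesystem extends the destination file
a few clusters at a time:

    /* Hedged sketch of a chunked copy loop, not actual cygwin/vim
     * code.  BUFSIZE is the knob filemon exposes as the request
     * size. */
    #include <stdio.h>

    #define BUFSIZE 4096    /* 4K: many small writes, so the file
                               grows in small steps and fragments */

    int copy_file(const char *src, const char *dst)
    {
        static char buf[BUFSIZE];
        size_t n;
        FILE *in, *out;

        if ((in = fopen(src, "rb")) == NULL)
            return -1;
        if ((out = fopen(dst, "wb")) == NULL) {
            fclose(in);
            return -1;
        }
        while ((n = fread(buf, 1, sizeof buf, in)) > 0)
            fwrite(buf, 1, n, out);
        fclose(out);
        fclose(in);
        return 0;
    }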

In experimenting with various utilities, we discovered that the MSDOS
copy command copies files in 64K chunks, produces much less
fragmented files, and finishes the job much faster (although it has
the less desirable behavior of holding an exclusive lock on the file
while it works).  The cygwin project has since changed cp to use 64K
chunks for performance reasons.
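For comparison, raising the buffer in a loop like the one above to
64 * 1024 cuts the number of write requests by a factor of sixteen
and lets the filesystem allocate larger contiguous runs per call.
The exclusive-lock behavior of copy corresponds to opening the file
with no share mode on Win32; a hedged sketch follows (I have not
inspected copy's actual implementation, and open_exclusive is an
illustrative name):

    /* Hedged Win32 sketch: dwShareMode = 0 requests exclusive
     * access, which is what copy's locking behavior looks like
     * from the outside. */
    #include <windows.h>

    #define COPYBUF (64 * 1024)  /* 64K chunks, as copy appears
                                    to use */

    HANDLE open_exclusive(const char *path)
    {
        return CreateFileA(path,
                           GENERIC_READ,
                           0,              /* no sharing: exclusive
                                              until handle closes */
                           NULL,
                           OPEN_EXISTING,
                           FILE_ATTRIBUTE_NORMAL,
                           NULL);
    }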

I thought it was at least worth discussing, and I appreciate all the
feedback.

sk