On Jan 3 09:56, Brian Ford wrote:
> On Wed, 3 Jan 2007, Corinna Vinschen wrote:
>
> > So it appears to make much sense to set the blocksize to 64K.
>
> blocksize is not really the proper term here, as it is very confusing.
> Preferred or optimal I/O size is a better choice in my opinion.
>
> > The only question would be whether to use getpagesize() or a hard coded
> > value.  It seems to me that the 64K allocation granularity and using 64K
> > as buffer size in disk I/O coincide, so I tend to agree that it makes
> > sort of sense to use getpagesize at this point.
>
> More supporting evidence from
> http://research.microsoft.com/BARC/Sequential_IO/Win2K_IO_MSTR_2000_55.doc :
>
>   ...each (8KB) buffered random write is actually a 64KB random read and
>   then an 8KB write.  When a buffered write request is received, the cache
>   manager memory maps a 256KB view into the file.  It then pages in the
>   64KB frame containing the changed 8KB, and modifies that 8KB of data.
>   This means that each buffered random write includes one or more 64KB
>   reads.  The right side of Figure 11 shows this 100% IO penalty.
Interesting.  I just applied a patch along the lines of your patch and
what we discussed in this thread.

Thanks,
Corinna

--
Corinna Vinschen                  Please, send mails regarding Cygwin to
Cygwin Project Co-Leader          cygwin AT cygwin DOT com
Red Hat
