> On Mon, 11 Jul 2016, Theo de Raadt wrote:
> > > No, I didn't know that. I assumed that having a few more GBs of bufcache
> > > would help the performance. Until that is the case, 64bit dma does not
> > > make much sense.
> >
> > BTW, my tests were on a 128GB sun4v machine.  Sun T5140.  They are
> > actually fairly cheap used these days.
> >
> > A maximum sized buffer cache should be fast.  However there is no need
> > for it to be dma-reachable.  Bob's buffer cache flipper can bounce it
> > to high memory easily after it is read the first time, and preserve it
> > in otherwise unused memory.  A buffer cache object of that sort is
> > never written back to the io path.  Also, it can be discarded in any
> > memory shortage condition without cost.
>
> But flipping buffers is not without cost. Especially for a SSD at rates of
> >200 MB/s (or even > 500 MB/s). With 64bit DMA, one could have a large
> buffer cache without this cost. But actual benchmarks would be required to
> see how relevant this is.
Stefan -- you don't understand the system.  Buffers are not flipped at the moment of read or write.  They are read into available dma memory.  They are used by the process immediately, without latency.  At a later time, when they are about to be thrown away (to conserve dma memory), they are not thrown away but asynchronously / low-cost flipped to high memory, and conserved.  Then future reads can find that the on-disk blocks are still cached in (high) memory.  DMA reachability is not required to copy that memory to processes.  You are suggesting that buf storage is latency sensitive.  That is not the case.
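[To make the mechanism concrete, here is a rough userland sketch of the idea, assuming a two-queue LRU.  It is not the actual OpenBSD buf.c code; all names (bread, buf_flip_one, dma_alloc, high_alloc, etc.) are made up for illustration.]

/*
 * Conceptual sketch (not OpenBSD's actual implementation) of the buffer
 * cache "flip": buffers are read into DMA-reachable memory, used
 * immediately, and only when they are about to be evicted are they copied
 * ("flipped") to high memory, off the I/O path.
 */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/queue.h>

#define BUFSZ 65536

struct buf {
	uint64_t		 blkno;		/* on-disk block this caches */
	void			*data;		/* backing storage */
	int			 in_dma;	/* 1: DMA-reachable, 0: high mem */
	TAILQ_ENTRY(buf)	 lru;
};

TAILQ_HEAD(bufq, buf);
static struct bufq dma_lru  = TAILQ_HEAD_INITIALIZER(dma_lru);
static struct bufq high_lru = TAILQ_HEAD_INITIALIZER(high_lru);

/* Stand-ins for allocating from the DMA-reachable and high regions. */
static void *dma_alloc(size_t n)  { return malloc(n); }
static void *high_alloc(size_t n) { return malloc(n); }

/*
 * Read path: the buffer lands in DMA memory and is usable by the process
 * right away; no flip happens here, so there is no added read latency.
 */
struct buf *
bread(uint64_t blkno)
{
	struct buf *bp = calloc(1, sizeof(*bp));

	bp->blkno = blkno;
	bp->data = dma_alloc(BUFSZ);
	bp->in_dma = 1;
	/* disk_read(blkno, bp->data, BUFSZ);  -- device DMAs into bp->data */
	TAILQ_INSERT_HEAD(&dma_lru, bp, lru);
	return bp;
}

/*
 * Eviction path: when DMA memory runs short, the oldest buffer is not
 * discarded but copied to otherwise unused high memory and kept there.
 * This runs asynchronously, away from the read/write path.
 */
void
buf_flip_one(void)
{
	struct buf *bp = TAILQ_LAST(&dma_lru, bufq);
	void *high;

	if (bp == NULL)
		return;
	high = high_alloc(BUFSZ);
	if (high == NULL)
		return;			/* shortage: could simply discard bp */
	memcpy(high, bp->data, BUFSZ);
	free(bp->data);			/* DMA memory returned to the pool */
	bp->data = high;
	bp->in_dma = 0;
	TAILQ_REMOVE(&dma_lru, bp, lru);
	TAILQ_INSERT_HEAD(&high_lru, bp, lru);
}

/*
 * A later cache hit on a flipped buffer: copying to the process does not
 * require DMA reachability, so the data is served straight from high
 * memory.  Such a buffer is never written back to the io path, so it can
 * be dropped at any time without cost.
 */
void
buf_copyout(struct buf *bp, void *uaddr)
{
	memcpy(uaddr, bp->data, BUFSZ);	/* copyout(9) in a real kernel */
}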