On 02/07/2017 02:32 AM, Niklas Edmundsson wrote:
O_DIRECT also bypasses any read-ahead logic, so you'll have to do nice,
big IO etc. to get good performance.

Yep, confirmed... my naive approach to O_DIRECT, which reads from the file in the 8K chunks we're used to from the file bucket brigade, absolutely mutilates our performance (80% slowdown) *and* rails the disk during the load test. Not good.

(I was hoping that combining the O_DIRECT approach with in-memory caching would give us the best of both worlds. Nope. A plain read() with no explicit caching at all is much, much faster on my machine.)
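For anyone curious, the read loop that O_DIRECT seems to want looks roughly like the sketch below (Linux-flavored; the 4K alignment and the 1MB block size are illustrative guesses on my part, not tuned values):

    #define _GNU_SOURCE            /* for O_DIRECT on Linux */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define ALIGN  4096            /* buffer/offset/length alignment */
    #define BLOCK  (1024 * 1024)   /* big reads instead of 8K chunks */

    static ssize_t read_direct(const char *path)
    {
        void *buf;
        ssize_t n, total = 0;
        int fd = open(path, O_RDONLY | O_DIRECT);

        if (fd < 0)
            return -1;
        if (posix_memalign(&buf, ALIGN, BLOCK) != 0) {
            close(fd);
            return -1;
        }
        /* Each read is aligned and large; with O_DIRECT there is no
         * kernel read-ahead to hide the cost of small requests. */
        while ((n = read(fd, buf, BLOCK)) > 0)
            total += n;            /* hand data off to the brigade here */

        free(buf);
        close(fd);
        return (n < 0) ? -1 : total;
    }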

We've played around with O_DIRECT to optimize the caching process in our
large-file caching module (our backing store is NFS). However, since all
our hosts are running Linux we had much better results with doing plain
reads and utilizing posix_fadvise with POSIX_FADV_WILLNEED to trigger
read-ahead and POSIX_FADV_DONTNEED to drop the original file from cache
when read (as future requests will be served from local disk cache).
We're doing 8MB fadvise chunks to get full streaming performance when
caching large files.

Hmm, I will keep the file advisory API in the back of my mind; thanks for that.
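
If I ever go down that road, I assume the pattern looks something like the sketch below: POSIX_FADV_WILLNEED on the upcoming 8MB window before reading it, then POSIX_FADV_DONTNEED on what has been consumed. (The function and its arguments here are my own illustration, not your module's actual code.)

    #include <fcntl.h>
    #include <unistd.h>

    #define CHUNK (8 * 1024 * 1024)   /* 8MB fadvise window */

    static int copy_with_fadvise(int in_fd, int out_fd, off_t len)
    {
        char buf[64 * 1024];
        off_t off = 0;

        while (off < len) {
            off_t chunk_start = off;

            /* Kick off read-ahead for the whole upcoming chunk. */
            posix_fadvise(in_fd, chunk_start, CHUNK, POSIX_FADV_WILLNEED);

            while (off < chunk_start + CHUNK && off < len) {
                ssize_t n = read(in_fd, buf, sizeof(buf));
                if (n <= 0)
                    return (n < 0) ? -1 : 0;
                if (write(out_fd, buf, n) != n)
                    return -1;
                off += n;
            }

            /* Drop the chunk we just consumed; future requests are
             * served from the local disk cache copy, not the source. */
            posix_fadvise(in_fd, chunk_start, off - chunk_start,
                          POSIX_FADV_DONTNEED);
        }
        return 0;
    }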

For the next step, I want to find out why TLS connections see such a big performance hit when I switch off mmap(), but unencrypted connections don't... it's such a huge difference that I feel like I must be missing something obvious.

--Jacob
