Andreas,

Thanks for the response.  I ran the job on my laptop on an XFS file system.  I changed the file size from 100MB to 1000MB to get the buffering numbers out of the noise.  The write of the file behaves as one would expect: writing the first 500MB non-direct increases buffer cache usage by 500MB, and writing the second 500MB with O_DIRECT does not increase buffer cache usage, nor does it dump the first half that was written buffered.

The second image below is from the Lustre run, but it plots the amount of buffer cache used by each OSC instead of the total meminfo buffer cache; you can see each OSC's cache usage drop as it handles its first direct write.

John


On 1/9/25 14:34, Andreas Dilger wrote:
Hi John,
Can you trace to determine whether this cache flush is triggered by Lustre DLM lock
management or by the kernel (i.e. does ext4 or xfs show the same behavior
with two fds)?

There was work done recently to be able to pin pages on the client so that they 
are not evicted by e.g. LDLM LRU aging:
https://jira.whamcloud.com/browse/LU-17463

but this hasn't finished landing yet.

On Dec 19, 2024, at 15:53, John Bauer <[email protected]> wrote:
I have a file where I would like to keep part of the file in system cache and 
the other part not.  So I open the file twice, once with O_DIRECT, and once 
without O_DIRECT.
I have written a simple testcase where I write the first 50 MB of the file
without O_DIRECT, and the second 50 MB with O_DIRECT.  The file is striped 4x1M.
It would appear that when the second 50 MB of the file is written O_DIRECT, the
first 50 MB of the file is dropped from system cache.  Both fds remain open
during the entire process.  From what I can tell, once a given stripe is
touched by an O_DIRECT write, the entire stripe is dropped from cache.
Is this the expected behavior?
Cheers, Andreas
—
Andreas Dilger
Lustre Principal Architect
Whamcloud/DDN



_______________________________________________
lustre-discuss mailing list
[email protected]
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org