On Aug 5, 2021, at 13:29, Nathan Dauchy - NOAA Affiliate 
<[email protected]> wrote:

Andreas, thanks as always for your insight.  Comments inline...

On Thu, Aug 5, 2021 at 10:48 AM Andreas Dilger 
<[email protected]> wrote:
On Aug 5, 2021, at 09:28, Nathan Dauchy via lustre-discuss 
<[email protected]> wrote:
Question:  Is it possible that a flash journal device on an ext4 filesystem can 
reach a point where there are not enough clean blocks to write to, and it 
suffers very degraded write performance?
For the external journal device, this _shouldn't_ happen, in the sense that the 
writes to this device are pretty much always sequential (except updates to the 
journal superblock), so as long as there is an erase block that can be cleaned 
in advance of the next overwrite, it should be essentially "ideal" usage for 
flash.

I have seen flash devices in the past that are badly behaved and have large 
spikes in latency when they have to erase blocks when under write pressure.  
(this is why periodic fstrim is useful in general)  My concern is that we are 
dealing with another such device.  It _might_ be the case that the journal 
device has finally been in service long enough that it has used up all its 
clean blocks and now needs to erase "on demand", and hence is performing worse. 
(Evidence for this is that some newer OSTs added to the same file system do 
not have the same slowdown.)
I know that "fstrim" can be run for mounted ldiskfs file systems, but when I 
try that it doesn't see the OSTs as using flash, because they are primarily 
HDD-based.  Is there some other way to tell the system which blocks can be 
discarded on the journal flash device?  (I found "blkdiscard" but that seems 
heavyweight and dangerous.)
I don't _think_ you can run fstrim against the journal device directly while it 
is mounted.  However, you could unmount the filesystem cleanly (which flushes 
everything from the journal, check no "needs_recovery" feature is set), remove 
the journal from the filesystem, trim/discard the journal block device, then 
reformat it as an external journal device again and add it back to the 
filesystem.
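For concreteness, that unmount / detach / discard / reformat / reattach cycle might look roughly like the dry-run sketch below.  The device paths and mount point are hypothetical placeholders, and the tune2fs/mke2fs invocations should be checked against your own setup before running them for real:

```shell
#!/bin/sh
# Dry-run sketch of rebuilding an external ext4/ldiskfs journal.
OST_DEV=/dev/mapper/ost0     # main OST filesystem device (hypothetical)
JRNL_DEV=/dev/nvme0n1p1      # external flash journal device (hypothetical)

run() { echo "+ $*"; }       # dry-run; change to: run() { "$@"; } to execute

run umount /mnt/ost0                            # clean unmount flushes the journal
run tune2fs -O '^has_journal' "$OST_DEV"        # detach the external journal
run blkdiscard "$JRNL_DEV"                      # discard (TRIM) the whole device
run mke2fs -O journal_dev -b 4096 "$JRNL_DEV"   # reformat as an external journal
run tune2fs -J device="$JRNL_DEV" "$OST_DEV"    # reattach it to the filesystem
```

Before detaching, "dumpe2fs -h" on the OST device can confirm that the 
needs_recovery feature is not set, per the caveat above.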

That confirms my understanding.  I may end up going down that path as a good 
test.  Doesn't sound like much fun though. ;)

Another related question would be how to benchmark the journal device on its 
own, particularly write performance, without losing data on an existing file 
system; similar to the very useful obdfilter-survey tool, but at a lower level. 
But I am primarily looking to understand the nuances of flash devices and 
ldiskfs external journals a bit better.
While the external journal device has an ext4 superblock header for 
identification (UUID/label), and a feature flag that prevents it from being 
mounted/used directly, it is not really an ext4 filesystem, just a flat "file". 
You'd need to remove it from the main ext4/ldiskfs filesystem, reformat it as 
ext4 and mount locally, and then run benchmarks (e.g. "dd" would best match the 
JBD2 workload, or fio if you want random IOPS) against it.  You could do this 
before/after trim (could use fstrim at this point) to see if it affects the 
performance or not.
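With the journal device temporarily reformatted as a plain ext4 filesystem, the dd/fio pass might look like the dry-run sketch below.  The mount point /mnt/jtest and the sizes are made up, and the fio flags are just one plausible random-write profile, not a prescribed benchmark:

```shell
#!/bin/sh
# Dry-run sketch of benchmarking the temporarily repurposed journal device.
run() { echo "+ $*"; }   # dry-run; change to: run() { "$@"; } to execute

# Sequential streaming writes approximate the JBD2 commit pattern:
run dd if=/dev/zero of=/mnt/jtest/seq.dat bs=1M count=4096 oflag=direct conv=fsync
# Random 4k writes help expose erase-on-demand latency spikes:
run fio --name=randwrite --filename=/mnt/jtest/fio.dat --rw=randwrite \
    --bs=4k --size=4G --iodepth=32 --ioengine=libaio --direct=1
```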

OK, thanks for confirming that there is no magic ext4 journal benchmarking 
tool.  I'll stop searching.  ;-)

Note that there *are* some journal commit statistics - /proc/fs/jbd2/<dev>/info 
that you might be able to compare between devices.  Probably the most 
interesting is "average transaction commit time", which is how long it takes to 
write the blocks to the journal device after the transaction starts to commit.
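For reference, those per-device stats can be pulled with something like the snippet below.  The directory name under /proc/fs/jbd2/ is usually "<dev>-<n>" (e.g. "sda1-8"), the helper name is made up for this sketch, and the grep simply keeps the millisecond timing lines, whose exact wording varies by kernel version:

```shell
#!/bin/sh
# Print the per-phase average timings from each jbd2 info file.
show_jbd2_info() {            # helper name is hypothetical
    echo "== $1 =="
    grep 'ms ' "$1"           # keep the millisecond timing lines
}

for f in /proc/fs/jbd2/*/info; do
    [ -e "$f" ] && show_jbd2_info "$f"   # skip if no jbd2 devices present
done
```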

Cheers, Andreas
--
Andreas Dilger
Lustre Principal Architect
Whamcloud

_______________________________________________
lustre-discuss mailing list
[email protected]
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
