On Tue, 2017-11-21 at 05:58 +0000, Changwei Ge wrote:
> Can you tell me how you formatted your volume?
> What are your _cluster size_ and _block size_?
> You can obtain such information via
> debugfs.ocfs2 <your volume> -R 'stats' | grep 'Cluster Size'
>
> It would be better if you could provide a way to reproduce this issue
> so that we can perform some tests.
>
The issue recurred in our cluster today, so at best my patch only reduces the frequency of the crashes.

Our setup has 10 machines sharing two OCFS2 mountpoints over fibre channel. Both OCFS2 partitions have block size bits of 12 and cluster size bits of 20. The two partitions contain around 310 files in total, roughly 200 of which are qcow2 files, and those qcow2 files are the only inodes seeing any read or write activity. The qcow2 files were created as sparse files (preallocation=metadata), and some are reflinked copies; a rough sketch of how the geometry was checked and the images created is below.

It's not clear to me exactly why the write path is passing through ocfs2_lock_allocators() without allocating meta_ac. These qcow2 files wouldn't be written concurrently by different nodes in the OCFS2 cluster. Is it possible the 2 x multiplier in the ocfs2_lock_allocators() call is not large enough?
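For reference, the geometry above is what debugfs.ocfs2 reports (device path here is just illustrative):

    # debugfs.ocfs2 /dev/mapper/ocfs2-vol -R 'stats' | grep 'Cluster Size'

which on these volumes shows block size bits 12 and cluster size bits 20, i.e. 4 KB blocks and 1 MB clusters. The qcow2 images were provisioned roughly like this; the paths and size are only illustrative, not our exact commands, and the clone step assumes the reflink(1) utility from ocfs2-tools:

    # qemu-img create -f qcow2 -o preallocation=metadata /mnt/ocfs2/vm01.qcow2 100G
    # reflink /mnt/ocfs2/vm01.qcow2 /mnt/ocfs2/vm01-clone.qcow2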