Hi,
 
I have a problem with my OCFS2 cluster filesystem. I have 3 nodes running CentOS 7.
 
The cluster has been running for a year without problems. Last week, a colleague deleted about 100,000 files on the filesystem with "rm *" in several folders.
Since then, kworker I/O usage has been at about 50-80% (iotop); iostat shows the same values.
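If it helps, the utilization can be reproduced with something along these lines (standard iotop/iostat invocations, adjust the flags as needed):

iotop -o -b -n 3 | grep kworker   # batch mode, only threads actually doing I/O
iostat -x sdb 5 3                 # extended stats for the OCFS2 device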
 
If I list all fs_locks, I get about 190,000 entries:
echo "fs_locks"  | debugfs.ocfs2 -n /dev/sdb | grep "Lockres" | wc -l
 
Is that too many locks?
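
In case a breakdown by lock type helps, the entries can be grouped roughly like this (a sketch, assuming the first character of the lockres name encodes the lock type, e.g. M for inode metadata, W for read/write, O for open, N for dentry):

echo "fs_locks" | debugfs.ocfs2 -n /dev/sdb \
  | awk '/Lockres:/ { print substr($2, 1, 1) }' \
  | sort | uniq -c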
 
As an example, I get the following output:
Lockres: M00000000000000062e723300000000  Mode: Exclusive
Flags: Initialized Attached
RO Holders: 0  EX Holders: 0
Pending Action: None  Pending Unlock Action: None
Requested Mode: Exclusive  Blocking Mode: Invalid
PR > Gets: 16  Fails: 0    Waits Total: 657us  Max: 654us  Avg: 41072ns
EX > Gets: 2  Fails: 0    Waits Total: 322us  Max: 322us  Avg: 161398ns
Disk Refreshes: 0
 
If I try to get the filename, the command hangs and I never get any output:
echo "locate <M00000000000000062e723300000000>"  | debugfs.ocfs2 -n /dev/sdb
 
How can I solve this problem?
 
Best regards,
Thomas Manninger