Hi Sunil,
Thanks for the response. Do you mean OCFS2 is blocking writes from multiple
clients? Is that how OCFS2 works? I can understand that writing the two 20G
files might take longer with the ordered option, since data needs to be
flushed to the filesystem before the journal commit, but why is that blocking a
Because in this case the cluster lock may be waiting for the journal
commit to complete. It depends on where the file is being created,
what internal metadata blocks need to be locked, etc. Your dd is not
a simple write; it is a create + allocation + write. If the file already
exists, then the
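One practical consequence of the create + allocation + write path: a probe that overwrites a pre-existing file each cycle avoids taking the create/allocation locks every minute. A minimal sketch (path and size are assumptions, and /tmp is used only so the example is self-contained; in practice the probe file would sit on the OCFS2 volume):

```shell
probe=/tmp/ocfs2_probe_existing
# One-time setup: create and allocate the probe file up front.
dd if=/dev/zero of="$probe" bs=512 count=1 conv=fsync 2>/dev/null
# Per-cycle write: overwrite in place. notrunc keeps dd from truncating
# the file (a metadata update); fsync still forces the data to disk.
dd if=/dev/zero of="$probe" bs=512 count=1 conv=notrunc,fsync 2>/dev/null
```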
Hi,
OS - SLES 11.1 with HAE
OCFS2 - 1.4.3-0.16.7
Cluster stack - Pacemaker
I have a Heartbeat Filesystem monitor that checks the OCFS2 file system for
availability. This monitor kicks in every minute and tries to write a file
using dd, as below.
dd
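The dd command is cut off above; a monitor-style probe write of this kind typically looks something like the following (the path, block size, and flags here are assumptions, not the actual command from the resource agent, and /tmp is used only so the sketch is self-contained):

```shell
# Hypothetical monitor probe: write one 512-byte block and flush it.
# On the real cluster the output file would live on the OCFS2 mount.
probe=/tmp/ocfs2_monitor_probe
dd if=/dev/zero of="$probe" bs=512 count=1 conv=fsync 2>/dev/null
```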
Use writeback. Ordered data requires the data to be flushed
before the journal commit, and flushing 40G takes time.
mount -t ocfs2 -o data=writeback DEVICE PATH
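Since this volume is managed by Pacemaker rather than fstab, the option would normally go into the Filesystem resource definition instead of a manual mount. A sketch (the resource name, device, and directory below are placeholders, not taken from the actual cluster configuration):

```shell
# Hypothetical Pacemaker resource with writeback journaling enabled via
# the Filesystem agent's "options" parameter.
crm configure primitive fs_ocfs2 ocf:heartbeat:Filesystem \
    params device="/dev/sdb1" directory="/mnt/ocfs2" fstype="ocfs2" \
    options="data=writeback"
```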
On 10/20/2011 03:05 PM, Prakash Velayutham wrote: