@Sunil
So could you recommend mount options to set in general? At the moment
I've set none.
Is it always a good idea (we don't need atime) to set
data=writeback,noatime,nodiratime? Any other options?
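For example, something like this in /etc/fstab (device and mount point
here are just placeholders):

/dev/sdX1  /u03  ocfs2  _netdev,noatime,nodiratime,data=writeback  0 0

(On recent kernels noatime already implies nodiratime, so the latter may
be redundant.)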
Stefan
> Did you set the mount option on both nodes or only on the node
> on which you were doing the ls?
Yes, I have the mount options the same on both nodes.
Am I understanding you correctly that this is normal
behaviour because node2 is forcing the journal commit?
If so, that is fine; I just need to know so I can stop
searching for options.
I REALLY appreciate your looking at this. Thank you.
Did you set the mount option on both nodes or only on the node
on which you were doing the ls?
Setting it on both nodes, or on the node that is doing the cp, should
solve the perf issue. What's happening is that the ls on node2 is forcing
node1 to journal commit. With the ordered data journal mode, the data has
to be flushed to disk before the journal can commit, which makes that
commit expensive; writeback mode drops that ordering requirement.
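One way to see that it is the stat that hurts, not the directory read
itself (mount point is illustrative):

node2$ time ls /u03      # reads the directory only; should stay fast
node2$ time ls -l /u03   # stats every inode, forcing node1 to flush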
Yes, I am finding that if I do the large file copy on node1 and
do an ls -l on node1, it is very fast, as expected.
If I do the large file copy on node1 and do an ls -l on node2,
ls -l is showing multi-second times, 5+ seconds at least.
If I do a file listing on any other file, it is fast regardless.
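For reference, roughly what I am running (file and mount point names are
just examples):

node1$ cp /tmp/bigfile.iso /u03/
node2$ time ls -l /u03/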
There should be no conflict.
On 05/24/2011 11:32 AM, Keith W wrote:
> I have a lab system that is currently running Oracle RAC 11g
> with ASM volumes and grid infrastructure.
>
> Is it possible to have an ocfs2 cluster running and accessing
> a different disk as well as the Oracle clustering for RAC with
> the ASMs? Or will there be a conflict?
Writeback will help if the writes are on one node and the ls is on
another. It is not clear whether that is the case or not.
If both ops are on the same node, then it could just be that the disk is
slow. The times show almost all wall time, very little sys, and no user.
top will show io wait times.
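A quick way to check (mount point is illustrative):

time ls -l /u03    # compare real (wall) time against user+sys
top                # the %wa figure in the CPU summary line is io wait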
On 05/24/2
No change in behavior.
My mount options:

/dev/sdj1 /u03 ocfs2 _netdev,noatime,data=writeback,nointr 0 0
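To confirm what is actually in effect on each node, the live options can
be read back from the kernel:

grep ocfs2 /proc/mounts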
+---+
+ Keith +
+---+
On Tue, 24 May 2011, Sunil Mushran wrote:
> Repeat the same test but with volumes mounted with the data=writeback
> mount option.
I have a lab system that is currently running Oracle RAC 11g
with ASM volumes and grid infrastructure.
Is it possible to have an ocfs2 cluster running and accessing
a different disk as well as the Oracle clustering for RAC with
the ASMs? Or will there be a conflict?
Repeat the same test but with volumes mounted with the data=writeback
mount option.
mount -o data=writeback /dev/sdX /path
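For clarity, that would be done on both nodes; if the volume is already
mounted, unmount it first (device and path are placeholders):

node1# umount /path && mount -o data=writeback /dev/sdX /path
node2# umount /path && mount -o data=writeback /dev/sdX /path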
On 05/24/2011 07:11 AM, Keith W wrote:
> Hello list.
> Apologies in advance, this may be a bit long. Just trying to give
> as much info as I can at the outset.
>
> I have a two node setup that shares a 500Gig SAS drive via ocfs2.
Hello list.
Apologies in advance, this may be a bit long. Just trying to give
as much info as I can at the outset.
I have a two node setup that shares a 500Gig SAS drive via ocfs2.
When I move either large files (300Megs+) or a large number of smaller
files onto or off of the volume, my terminal s