The file system on-disk image has not changed. So the 1.6 file system
software can mount the volume created with 1.2 mkfs. What you cannot do
is concurrently mount the same volume with nodes running 1.2 and 1.6
versions of the file system software.
It is not mixed mode. The 1.6 fs software will
On 02/29/2012 04:10 PM, David Johle wrote:
I too have seen some serious performance issues under 1.4, especially
with writes. I'll share some info I've gathered on this topic, take
it however you wish...
In the past I never really thought about running benchmarks against
the shared block
ocfs2console has been obsoleted. Just use the utilities directly.
To detect ocfs2 volumes, use blkid. You can use it to restrict
the lookup paths. Refer to its man page.
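As a sketch, the device column can be pulled out of blkid-style output like this (the sample lines below are made up; on a real system pipe `blkid` itself, or use `blkid -t TYPE=ocfs2 -o device` directly):

```shell
# Filter blkid-style output down to the ocfs2 devices.
# Sample output is inlined here so the snippet is self-contained.
blkid_output='/dev/sda1: UUID="0f1a" TYPE="ext4"
/dev/sdc1: UUID="9c2b" TYPE="ocfs2"'
printf '%s\n' "$blkid_output" | awk -F: '/TYPE="ocfs2"/ { print $1 }'
```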
On 03/09/2012 06:15 PM, John Major wrote:
Hi,
Hope this is the right place to ask this.
I have set up 2 ubuntu lts machines
ocfs2 1.4 will not build with 2.6.32. A better solution is to
just enable ocfs2 in the 2.6.32 kernel src tree and build.
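Concretely, that means switching the ocfs2 options on in the 2.6.32 tree before building; a plausible .config fragment (check the exact option names against your tree's fs/ocfs2/Kconfig):

```
CONFIG_OCFS2_FS=m
CONFIG_OCFS2_FS_O2CB=m
CONFIG_OCFS2_FS_STATS=y
CONFIG_OCFS2_DEBUG_MASKLOG=y
```

Then rebuild the kernel (or just the modules) and install as usual.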
On 03/11/2012 07:37 AM, зоррыч wrote:
Hi.
I use scientific linux 6.2:
[root@noc-1-m77 ocfs2-1.4.7]# cat /etc/redhat-release
Scientific Linux release 6.2 (Carbon)
strace may show more. I would first confirm that my perms are correct.
On 03/15/2012 07:58 AM, ?? wrote:
I am testing the scheme of drbd and ocfs2
If you attempt to write to the cluster error:
[root@noc-1-m77 share]# mkdir 12
mkdir: cannot create directory `12': Permission denied
[mailto:ocfs2-users-boun...@oss.oracle.com] On Behalf Of зоррыч
Sent: Thursday, March 15, 2012 11:26 PM
To: 'Sunil Mushran'
Cc: ocfs2-users@oss.oracle.com
Subject: Re: [Ocfs2-users] Permission denied on ocfs2 cluster
[root@noc-1-synt /]# ls -lh | grep ocfs
drwxr-xr-x. 3 root root 3.9K Mar 15 02
Online add/remove of nodes and of global heartbeat devices has been in mainline
for over a year. I think 2.6.38+ and tools 1.8. The ocfs2-tools tree hosted on
oss.oracle.com/git has a 1.8.2 tag that can be used safely. It has been fully
tested. The user's guide has been moved to man pages
be:
else
-tmp = g_list_append(elem, cfs);
+g_list_append(elem, cfs);
Attached patch.
Thanks.
Acked-by: Sunil Mushran sunil.mush...@gmail.com
___
Ocfs2-users mailing list
Ocfs2-users@oss.oracle.com
https://oss.oracle.com/mailman
The 4 journal inodes got zeroed out. Do you know how/why?
Have you tried running fsck with -fy (enable writes)?
fsck.ocfs2 does have a check for bad journals that it will regenerate.
JOURNAL_FILE_INVALID
OCFS2 uses JBD for journalling and some journal files exist in the system
directory. Fsck
oh crap. The dlm lock needs to lock the journals. So you need to recreate
the journal inodes with i_size 0.
dd a good journal inode and edit it using a binary editor. Change the inode
num to the block number, zero out the i_size and next_free_extent. Repeat
for the 4 inodes.
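A sketch of the block-copy step, demonstrated on a scratch image file so it is safe to run. On the real volume, the image would be the device itself, 4096 would be the volume's block size, and the block numbers would come from debugfs.ocfs2; all three are assumptions here:

```shell
# Demonstrate copying one filesystem block out, and writing an edited
# copy back at a different block, using dd. Block numbers are hypothetical.
BLKSZ=4096
GOOD_BLK=2      # block holding a known-good journal inode (made up)
TARGET_BLK=3    # block of the damaged journal inode (made up)
dd if=/dev/zero of=disk.img bs=$BLKSZ count=8 2>/dev/null
printf 'GOOD' | dd of=disk.img bs=$BLKSZ seek=$GOOD_BLK conv=notrunc 2>/dev/null
# Copy the good inode's block out...
dd if=disk.img bs=$BLKSZ skip=$GOOD_BLK count=1 of=journal.blk 2>/dev/null
# ...edit journal.blk with a binary editor as described above, then write
# it over the bad inode's block (conv=notrunc leaves the rest untouched):
dd if=journal.blk of=disk.img bs=$BLKSZ seek=$TARGET_BLK count=1 conv=notrunc 2>/dev/null
```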
Hopefully someone on
You may want to run a full fsck on the fs.
fsck.ocfs2 -fy /dev/
On Tue, Aug 21, 2012 at 12:49 AM, Pawel pzl...@mp.pl wrote:
Hi,
After upgrading ocfs2, my cluster is unstable.
At least once per week I can see:
kernel panic: Null pointer dereference at 00048
o2dlm_blocking_ast_wrapper +
You are probably mounting the volume with the datavolume option. Instead
use the init.ora param filesystemio_options to force odirect, and mount the
volume without the datavolume option. This is documented in the user's guide.
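In init.ora terms that is a one-line parameter (the parameter name is real Oracle init.ora; pick the value that suits the setup, e.g. directIO for direct I/O only, or setall for direct plus async):

```
# init.ora fragment
filesystemio_options = setall
```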
On Thu, Aug 23, 2012 at 8:14 AM, Maki, Nancy nancy.m...@suny.edu
On Thu, Aug 23, 2012 at 10:58 AM, Maki, Nancy nancy.m...@suny.edu wrote:
By default we mount all our OCFS2 volumes with datavolume. To be more
specific, the volume that we are having the issue with is not a database
volume but a shared drive for developers to read and write other types of
What is the version of the kernel, ocfs2 and ocfs2 tools?
uname -a
modinfo ocfs2
mkfs.ocfs2 --version
On Fri, Aug 24, 2012 at 1:09 PM, Rory Kilkenny rory.kilke...@ticoon.comwrote:
We have an HP P2000 G3 Storage array, fiber connected. The storage
array has a RAID5 array broken into 2
AM, Sunil Mushran sunil.mush...@gmail.comwrote:
Isn't the mount point local to the machine?
I use iSCSI for the Block device and I mount the device (/dev/sdc1) at
/var/lib/nova/instances.
I've formatted /dev/sdc1 as an OCFS2 FS.
Should I use Pacemaker to manage OCFS2 ?
Thanks,
-Emilien
Forgot to add that this issue is limited to metaecc. So you could avoid the
issue in your same setup by not enabling metaecc on the volume. And last I
checked, mkfs did not enable it by default.
On Mon, Aug 27, 2012 at 10:35 AM, Sunil Mushran sunil.mush...@gmail.comwrote:
So you are running
nfsd encountered an error reading the device. So something in the io path
below the fs encountered a problem. If it just happened once, then you can
ignore it.
On Fri, Aug 31, 2012 at 2:23 AM, Hideyasu Kojima hid.koj...@ms.scsk.jpwrote:
Hi
I am using an ocfs2 cluster as an NFS server.
Only once, I got
On Wed, Sep 12, 2012 at 9:45 AM, Asanka Gunasekera
asanka_gunasek...@yahoo.co.uk wrote:
Load O2CB driver on boot (y/n) [y]:
Cluster stack backing O2CB [o2cb]:
Cluster to start on boot (Enter none to clear) [ocfs2]:
Specify heartbeat dead threshold (>=7) [31]:
Specify network idle timeout in
cfs != storage
You need to get a highly available storage that is concurrently accessible
from multiple nodes.
ocfs2 will allow multiple nodes to concurrently access the same storage.
With posix semantics.
If a node dies, the remaining nodes will pause to recover and then continue
functioning.
IO error on channel means the system cannot talk to the block device. The
problem is in the block layer. Maybe a loose cable or a setup problem.
dmesg should show errors.
On Fri, Nov 9, 2012 at 10:46 AM, Laurentiu Gosu l...@easic.ro wrote:
Hi,
I'm using ocfs2 cluster in a production
On 10.11.2012 02:06, Sunil Mushran wrote:
It's either that or a checksum problem. Disable metaecc. Not sure which
kernel you are running. We had fixed a few problems a few years ago around
this. If your kernel is older, then it could be a known issue.
On Fri, Nov 9, 2012
at ocfs2_validate_meta_ecc in order to bypass the ECC checks?
On 10.11.2012 03:55, Sunil Mushran wrote:
If the global bitmap is gone, then the fs is unusable. But you can extract
data using the rdump command in debugfs.ocfs2. The success depends on how
much of the device is still usable.
On Fri, Nov 9
strace -p PID -ttt -T
Attach and get some timings. The simplest guess is that the system lacks
memory to cache all the inodes and thus has to hit disk (and more
importantly take cluster locks) for the same inode repeatedly. The user
guide has a section in NOTES explaining this.
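As a sketch, the per-call elapsed times that -T appends in angle brackets can be ranked like this (the strace lines below are fabricated samples; feed the real capture instead):

```shell
# Rank strace -T output by the elapsed time it appends as <seconds>.
trace='open("/mnt/ocfs2/a", O_RDONLY) = 3 <0.000045>
stat("/mnt/ocfs2/b", 0x7ffd) = 0 <0.412330>
read(3, "", 4096) = 0 <0.000012>'
printf '%s\n' "$trace" |
  awk -F'[<>]' '{ print $(NF-1), $0 }' | sort -rn | head -3
```

The slowest calls float to the top; repeated slow stat()s on the same paths would fit the cluster-lock explanation above.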
On Tue, Dec 4,
From: Sunil Mushran [mailto:sunil.mush...@gmail.com]
Sent: Tuesday, December 4
This is normal. My only concern is the use of very old kernel/fs versions.
On Wed, Dec 5, 2012 at 3:08 AM, Neil campbell.n...@hotmail.com wrote:
Anyone?
On 2012-11-28 00:47:56 + neil campbell campbell.n...@hotmail.com
wrote:
Hi list,
I am running
The fs does not care about time. It should have no effect on the cluster.
However the apps may care and may behave erratically.
On Jan 3, 2013, at 3:13 PM, Medienpark, Jakob Rößler
roess...@medienpark.net wrote:
Hello list,
today I noticed huge differences between the hardware clocks in
1.2.5 is 6+ year old release. You may want to use something more current.
On Mon, Jan 14, 2013 at 12:06 PM, Bill Zha lfl200...@yahoo.com wrote:
Hi Sunil and All,
We have a 10-node Redhat 4.2 OCFS cluster running on version 1.2.5-6. One
of the nodes started to reboot almost every day since
This is probably a directory. debugfs.ocfs2 -R 'stat 52663' /dev/ will
dump the inode.
Are you sure fsck is fixing it? Does the output show this block getting
fixed?
If not, you may want to run fsck.ocfs2 v1.8. I think a fix was added
for it.
On Wed, Feb 20, 2013 at 1:01 AM, Fiorenza
[ 1481.620253] o2hb: Unable to stabilize heartbeart on region
1352E2692E704EEB8040E5B8FF560997 (vdb)
What this means is that the device is suspect. o2hb writes are not hitting
the disk. vdb is accepting and acknowledging the write but spitting out
something else during the next read. Heartbeat
Are you mounting -o writeback?
On Fri, Mar 29, 2013 at 12:28 PM, Andy ary...@allantgroup.com wrote:
I have been having performance issues from time to time on our
production ocfs2 volumes, so I set up a test system to try to reproduce
what I was seeing on the production systems. This is
-N 16 means 16 journals. I think it defaults to 256M journals. So that's
4G. Do you plan to mount it on 16 nodes? If not, reduce that. Another
option is a smaller journal. But you have to be careful, as a small journal
could limit your write throughput.
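The arithmetic above as a one-liner (256M is the default journal size as stated in the mail; verify against the mkfs.ocfs2 man page for your version):

```shell
# Space consumed by journals alone: node slots x per-journal size.
slots=16
journal_mb=256
echo "$(( slots * journal_mb )) MB"    # 4096 MB, i.e. 4 GB
```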
On Mon, Apr 15, 2013 at 1:37 PM, Jerry Smith
Support for global heartbeat was added in ocfs2-tools-1.8.
On Tue, Jun 4, 2013 at 8:31 AM, Vineeth Thampi vineeth.tha...@gmail.comwrote:
Hi,
I have added heartbeat mode as global, but when I do a mkfs and mount, and
then check the mount, it says I am in local mode. Even
Can you dump the following using the 1.8 binary.
debugfs.ocfs2 -R stats /dev/mapper/.
On Fri, Jun 21, 2013 at 6:17 AM, Ulf Zimmermann u...@openlane.com wrote:
We have a production cluster of 6 nodes, which are currently running
RHEL 5.8 with OCFS2 1.4.10. We snapclone these volumes to
How did you figure this out? Also, which version of the kernel are you
using?
On Wed, Jul 3, 2013 at 1:05 AM, Nicolas Michel
be.nicolas.mic...@gmail.comwrote:
Hello guys,
I'm using OCFS2 for a shared storage (on SAN). I just saw that the inode
usage is really high although these filesystems
it is not causing any problem but I found it
weird).
2013/7/3 Sunil Mushran sunil.mush...@gmail.com
That is old. It could just be a minor bug in that release. Is it causing
you any problems?
On Wed, Jul 3, 2013 at 12:31 PM, Nicolas Michel
be.nicolas.mic...@gmail.com wrote:
Hello Sunil,
I
If the storage connectivity is not stable, then dlm issues are to be
expected. In this case, the processes are all trying to take the readlock.
One possible scenario is that the node holding the writelock is not able to
relinquish the lock because it cannot flush the updated inodes to disk. I
It is encountering SCSI errors reading the device. Fixing that will fix
the issue. If you want to stop the logging, I don't believe there is a
method right now. But it could be trivially added.
Allow user to disable mlog(ML_ERROR) logging.
On Thu, Oct 31, 2013 at 7:38 PM, Guozhonghua
debugfs.ocfs2 -R frag filespec DEVICE will show you the fragmentation
level on an inode basis. You could run that for all inodes and figure out
the value for the entire volume.
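A sketch of the aggregation step, once per-inode figures have been collected (the scores below are made-up sample values, not real frag output):

```shell
# Average a list of per-inode fragmentation percentages.
scores='12.5
0.0
47.1
3.2'
printf '%s\n' "$scores" | awk '{ s += $1; n++ } END { printf "%.1f\n", s / n }'
```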
On Fri, Nov 1, 2013 at 3:00 PM, Andy ary...@allantgroup.com wrote:
How can I check the amount on fragmentation on
Cloning the inode means inode + data. Let it finish.
On Sat, Mar 22, 2014 at 3:44 PM, Eric Raskin eras...@paslists.com wrote:
Hi:
I am running a two-node Oracle VM Server 2.2.2 installation. We were
having some strange problems creating new virtual machines, so I shut down
the systems
inode?
What is the output of the commands? The protocol is supposed to do the
unlocking on its own. See what is it blocked on. It could be that the node
that has the lock cannot unlock it because it cannot flush the journal to
disk.
On Tue, Sep 9, 2014 at 7:55 PM, Guozhonghua guozhong...@h3c.com wrote:
https://www.activecollab.com/
On February 9, 2015 at 8:09:06 PM, Sunil Mushran (sunil.mush...@gmail.com)
wrote:
On node 2, do:
ps aux | grep o2hb
I suspect you have multiple o2hb threads running. If so, restart the o2cb
cluster on that node.
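A sketch of that check (the ps lines are fabricated; on the node itself run `ps aux` directly). More than one heartbeat thread for the same region is the stale-thread symptom:

```shell
# Count o2hb heartbeat threads in ps-style output.
ps_out='root  812  0.0  [o2hb-1352E2692E]
root  813  0.0  [o2hb-1352E2692E]
root  901  0.0  [o2net]'
printf '%s\n' "$ps_out" | grep -c 'o2hb-'
```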
On Mon, Feb 9, 2015 at 10:08 AM, Danijel Krmar
This is because you are specifying a 128k cluster size. Refer to man
mkfs.ocfs2 for more.
On Mar 17, 2015 8:04 PM, Umarzuki Mochlis umarz...@gmail.com wrote:
Hi,
What I meant by total size is output of 'du -hs'
I can see output of fdisk on mpath1 of ocfs2 LUN similar to logical
volume of