Sorry for missing that.
Signed-off-by: xiaowei...@oracle.com
On Fri, 24 Jan 2014 14:11:09 +0800 xiaowei...@oracle.com wrote:
From: Xiaowei.Hu xiaowei...@oracle.com
Suppress log messages like this:
(open_delete,8328,0):ocfs2_unlink:951 ERROR: status = -2
Orabug:17445485
---
fs/ocfs2/namei.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/ocfs2/namei.c b/fs/ocfs2/namei.c
index 0ba1cf0..ca6c5ba
From: Xiaowei.Hu xiaowei...@oracle.com
Suppress the error message from being printed in ocfs2_rename.
This does the same thing as Goldwyn Rodrigues' last patch.
While removing a non-empty directory, the kernel dumps a message:
(mv,29521,1):ocfs2_rename:1474 ERROR: status = -39
Signed-off-by: Xiaowei Hu
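
Neither snippet above shows the actual hunk, but both fixes describe the same
technique: skip the ERROR mlog when the status is an expected, user-triggerable
error (-2 is -ENOENT, -39 is -ENOTEMPTY). Below is only a minimal userspace
sketch of that filtering pattern with a made-up log_err() macro, not the real
ocfs2 change:

#include <errno.h>
#include <stdio.h>

/* Hypothetical stand-in for ocfs2's error-logging macro. */
#define log_err(status) \
	fprintf(stderr, "%s:%d ERROR: status = %d\n", __func__, __LINE__, (status))

/* Log only unexpected failures: -ENOENT (name already gone on unlink) and
 * -ENOTEMPTY (rename over a non-empty directory) are normal user errors. */
static void log_unexpected(int status)
{
	if (status < 0 && status != -ENOENT && status != -ENOTEMPTY)
		log_err(status);
}

int main(void)
{
	log_unexpected(-ENOENT);    /* -2: suppressed  */
	log_unexpected(-ENOTEMPTY); /* -39: suppressed */
	log_unexpected(-EIO);       /* still reported  */
	return 0;
}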
From: Xiaowei.Hu xiaowei...@oracle.com
ocfs2_block_group_alloc_discontig() disables chain relink by setting
ac->ac_allow_chain_relink = 0 because it grabs clusters from multiple
cluster groups. It doesn't keep the credits for all chain relink, but
ocfs2_claim_suballoc_bits overrides this in this case.
Steps to reproduce:
1. Minimal install of OL5.8 x86_64
2. Install 2.6.39-300.17.1.el5uek.bug14842737
3. Configure ocfs2:
[root@ca-ostest284 ~]# cat /etc/ocfs2/cluster.conf
cluster:
	node_count = 1
	name = ocfs2
node:
	ip_port =
	ip_address = 139.185.50.68
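
The fragment above is cut off, but the problem it describes is a callee
re-enabling behaviour the caller deliberately turned off. The following is a
minimal userspace sketch of that override pattern; struct alloc_ctx and
claim_bits() are hypothetical stand-ins, not the real ocfs2 allocation context:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical allocation context standing in for the real ocfs2 one. */
struct alloc_ctx {
	bool allow_chain_relink;
};

/* A callee that unconditionally turns relinking back on: this is the kind of
 * override described above. The fix is to respect the caller's setting
 * instead of overwriting it. */
static void claim_bits(struct alloc_ctx *ctx)
{
	ctx->allow_chain_relink = true;   /* clobbers the caller's choice */
}

int main(void)
{
	/* The discontig path disables relinking before calling down. */
	struct alloc_ctx ctx = { .allow_chain_relink = false };

	claim_bits(&ctx);
	printf("allow_chain_relink after claim: %d (caller expected 0)\n",
	       ctx.allow_chain_relink);
	return 0;
}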
From: Xiaowei.Hu xiaowei...@oracle.com
---
fs/ocfs2/cluster/heartbeat.c | 4 +++-
1 files changed, 3 insertions(+), 1 deletions(-)
diff --git a/fs/ocfs2/cluster/heartbeat.c b/fs/ocfs2/cluster/heartbeat.c
index 61561c6..94193ac 100644
--- a/fs/ocfs2/cluster/heartbeat.c
+++ b/fs/ocfs2/cluster/heartbeat.c
From: Xiaowei.Hu xiaowei...@oracle.com
Pid: 4508, comm: mkfs.ocfs2 Not tainted 2.6.39-300.17.1.el5uek.bug14842737 #1 Dell Inc. PowerEdge 1950/0M788G
RIP: 0010:[81098bff] [81098bff] exit_creds+0x1f/0xb0
RSP: 0018:880222b4dd58 EFLAGS: 00010292
RAX: RBX:
From: Xiaowei.Hu xiaowei...@oracle.com
When the recovery master requests locks, but one or more of the live nodes
dies after receiving the request message and before sending out its lock
packages, recovery falls into an endless loop, waiting for the status to
change to finalize.
NodeA
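
The NodeA/NodeB timeline is cut off above, but the failure mode is clear:
recovery keeps waiting for a status message that the dead node can no longer
send. A minimal userspace sketch of such a wait loop follows; it assumes (an
assumption, not the actual patch) that the repair is to treat a node that died
mid-request as already finalized:

#include <stdbool.h>
#include <stdio.h>

#define NUM_NODES 2

/* Hypothetical per-node recovery state; names are illustrative only. */
struct node_state {
	bool sent_locks;  /* node has sent its lock packages back to the master */
	bool dead;        /* node died after receiving the request              */
};

/* Without the 'dead' check the loop below never terminates, which is the
 * endless loop described above; skipping dead nodes lets recovery finalize. */
static bool all_finalized(const struct node_state *nodes, int n)
{
	for (int i = 0; i < n; i++)
		if (!nodes[i].dead && !nodes[i].sent_locks)
			return false;
	return true;
}

int main(void)
{
	struct node_state nodes[NUM_NODES] = {
		{ .sent_locks = true,  .dead = false },
		{ .sent_locks = false, .dead = true  }, /* died before sending */
	};

	while (!all_finalized(nodes, NUM_NODES))
		;  /* in the real code this is a wait, not a busy spin */

	printf("recovery finalized\n");
	return 0;
}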
From: Xiaowei.Hu xiaowei...@oracle.com
NodeA                                        NodeB
ocfs2_cluster_lock on a new lockres M
  spin_lock_irqsave(&lockres->l_lock, flags);
  gen = lockres_set_pending(lockres);
  lockres->l_action = OCFS2_AST_ATTACH;
  lockres_or_flags(lockres, OCFS2_LOCK_BUSY);
I am trying to fix bug13611997. CT's machine ran into a BUG in the ocfs2dc thread:
BUG_ON(lockres->l_action != OCFS2_AST_CONVERT && lockres->l_action != OCFS2_AST_DOWNCONVERT);
I analyzed the vmcore: lockres->l_action = OCFS2_AST_ATTACH and l_flags = 326 (which means
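
For readers without the kernel source at hand, here is a minimal userspace
sketch of the assertion that fired: the downconvert thread only expects
CONVERT or DOWNCONVERT actions, so a lockres still carrying the initial ATTACH
action trips the check. The enum values and assert() are stand-ins for the
kernel's OCFS2_AST_* constants and BUG_ON():

#include <assert.h>
#include <stdio.h>

/* Stand-ins for the kernel's OCFS2_AST_* action values. */
enum ast_action { AST_INVALID, AST_ATTACH, AST_CONVERT, AST_DOWNCONVERT };

/* Downconvert-thread style check: anything other than a (down)convert in
 * flight means the lockres state machine is in an unexpected state. */
static void process_pending_ast(enum ast_action action)
{
	assert(action == AST_CONVERT || action == AST_DOWNCONVERT);
	printf("handled action %d\n", action);
}

int main(void)
{
	process_pending_ast(AST_DOWNCONVERT); /* fine */
	process_pending_ast(AST_ATTACH);      /* aborts, like the BUG_ON above */
	return 0;
}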
From: Xiaowei.Hu xiaowei...@oracle.com
With indexed_dir enabled, ocfs2 maintains a list of dirblocks that have free
space.
The credit calculation in ocfs2_link_credits() did not correctly account
for adding an entry that exactly fills a dirblock, which triggers removing
that dirblock by changing the
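
The description is truncated, but the gist is that the transaction needs
extra journal credits when the new entry exactly fills a dirblock, because
that dirblock then also has to be dropped from the free-space list. A minimal
sketch of that kind of credit accounting follows; the credit counts are made
up, the real numbers live in ocfs2_link_credits():

#include <stdio.h>

/* Hypothetical credit counts; the real values depend on the filesystem. */
#define BASE_LINK_CREDITS  10  /* inode, dirblock, index block, ...        */
#define FREE_LIST_CREDITS   2  /* blocks touched when unlinking a dirblock
                                * from the "has free space" list           */

/* If the entry exactly fills the block, the block also has to be removed
 * from the free-space list, so the journal needs credits for that too. */
static int link_credits(int entry_size, int free_space_in_block)
{
	int credits = BASE_LINK_CREDITS;

	if (entry_size == free_space_in_block)
		credits += FREE_LIST_CREDITS;
	return credits;
}

int main(void)
{
	printf("partial fill: %d credits\n", link_credits(40, 200));
	printf("exact fill:   %d credits\n", link_credits(200, 200));
	return 0;
}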
From: Xiaowei.Hu xiaowei...@oracle.com
In function dlm_migrate_lockres, dlm_add_migration_mle() is called before
dlm_mark_lockres_migrating(). dlm_mark_lockres_migrating() should make sure
the lockres is not dirty; if it is dirty, wait until it becomes undirty,
and then mark this lockres
From: XiaoweiHu xiaowei...@oracle.com
In function dlm_migrate_lockres, it calls dlm_add_migration_mle() before
the dlm_add_migration_mle(); this dlm_add_migration_mle() should make sure
the lockres is not dirty, if it is dirty wait until it becomes undirty,
and then mark this lockres as
Sorry for some mistakes in that message; please use this description instead:
In function dlm_migrate_lockres, dlm_add_migration_mle() is called before
dlm_mark_lockres_migrating(). dlm_mark_lockres_migrating() should make sure
the lockres is not dirty; if it is dirty, wait until it becomes
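
A minimal userspace sketch of the ordering the corrected description asks for:
wait until the lockres is no longer dirty before marking it as migrating. The
flag names and the polling loop are stand-ins for the kernel's dirty/migrating
state bits and wait queues:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical lock resource; the real flags live in the dlm lockres state. */
struct lockres {
	pthread_mutex_t lock;
	bool dirty;      /* still has queued work to flush           */
	bool migrating;  /* may only be set once the res is not dirty */
};

/* The ordering described above: do not mark the res as migrating while it
 * is dirty; wait until it becomes undirty first. */
static void mark_lockres_migrating(struct lockres *res)
{
	pthread_mutex_lock(&res->lock);
	while (res->dirty) {
		pthread_mutex_unlock(&res->lock);
		usleep(1000);           /* the kernel waits on an event instead */
		pthread_mutex_lock(&res->lock);
	}
	res->migrating = true;
	pthread_mutex_unlock(&res->lock);
}

int main(void)
{
	struct lockres res = { .lock = PTHREAD_MUTEX_INITIALIZER, .dirty = false };

	mark_lockres_migrating(&res);
	printf("migrating = %d\n", res.migrating);
	return 0;
}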