Hi everyone;

 

The problem below has now turned into an "o2net_connect_expired" error. After 
examining the logs and searching the web, we reconfigured o2cb with the 
command

/etc/init.d/o2cb configure

 

Our scenario is as follows:

 

After the RAC installation, the problem below occurs, and after reconfiguring 
o2cb the problem node can mount the SAN disks again. What I want to ask is: is 
reconfiguring o2cb something we should expect to do after a RAC installation, 
or what else could the relation be between this problem and the RAC 
installation?

Only one node has this problem, by the way. The others can mount the disks, 
and if the others have the disks mounted, the problem node cannot; but if the 
problem node mounts them first, the others cannot.
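In case it helps rule something out: since only one node misbehaves, it may be worth verifying that the problem node's cluster.conf is byte-for-byte identical to the copies on the other nine nodes. This is just a sketch; the two temp files below stand in for the local copy and a copy fetched from another node (e.g. with scp), and the stock path /etc/ocfs2/cluster.conf is assumed.

```shell
# Compare two copies of cluster.conf; the second copy would normally be
# fetched from another node. Here we simulate both with temp files whose
# contents deliberately differ.
a=$(mktemp); b=$(mktemp)
printf 'node:\n\tnumber = 0\n\tname = fa01\n' > "$a"
printf 'node:\n\tnumber = 1\n\tname = fa01\n' > "$b"

# cmp -s exits 0 only if the files are byte-for-byte identical.
if cmp -s "$a" "$b"; then
    result=identical
else
    result=different
fi
echo "cluster.conf copies are $result"
rm -f "$a" "$b"
```

If the copies differ, pushing one consistent cluster.conf to every node and restarting o2cb would be the obvious next step.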

 

What do you think?

 

Mehmet Can ÖNAL

________________________________

From: Mehmet Can ÖNAL 
Sent: Thursday, September 18, 2008 1:17 PM
To: [email protected]
Subject: o2hb_do_disk_heartbeat:982:ERROR

 

Hi everyone;

 

I have a problem on my 10-node cluster with OCFS2 1.2.9; the OS is RHEL 4.7 
AS.

 

9 nodes can start the o2cb service and mount the SAN disks on startup, but one 
node cannot. My cluster configuration is:

 

 

node:

        ip_port = 7777

        ip_address = 192.168.5.1

        number = 0

        name = fa01

        cluster = ocfs2

 

node:

        ip_port = 7777

        ip_address = 192.168.5.2

        number = 1

        name = fa02

        cluster = ocfs2

 

node:

        ip_port = 7777

        ip_address = 192.168.5.3

        number = 2

        name = fa03

        cluster = ocfs2

 

node:

        ip_port = 7777

        ip_address = 192.168.5.4

        number = 3

        name = fa04

        cluster = ocfs2

 

node:

        ip_port = 7777

        ip_address = 192.168.5.5

        number = 4

        name = fa05

        cluster = ocfs2

 

node:

        ip_port = 7777

        ip_address = 192.168.5.6

        number = 5

        name = fa06

        cluster = ocfs2

 

node:

        ip_port = 7777

        ip_address = 192.168.5.7

        number = 6

        name = fa07

        cluster = ocfs2

 

node:

        ip_port = 7777

        ip_address = 192.168.5.8

        number = 7

        name = fa08

        cluster = ocfs2

 

node:

        ip_port = 7777

        ip_address = 192.168.5.10

        number = 8

        name = fa10

        cluster = ocfs2

 

node:

        ip_port = 7777

        ip_address = 192.168.5.9

        number = 9

        name = fa09

        cluster = ocfs2

 

cluster:

        node_count = 10

        name = ocfs2

 

When I manually try to mount the disks, I get an error saying:

"mount.ocfs2: Transport endpoint is not connected while mounting 
/dev/emcpowerc1 on /oradisk/conf. Check 'dmesg' for more information on this 
error."

 

And when I check dmesg I see:

 

(21125,1):o2hb_do_disk_heartbeat:982 ERROR: Device "emcpowerc1": another node 
is heartbeating in our slot!

(the same ERROR line repeats seven more times)
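For what it's worth, one common cause of "another node is heartbeating in our slot!" is two machines ending up with the same node number (and hence the same heartbeat slot), or the same IP, in cluster.conf. The sketch below scans a config for duplicated "number" or "ip_address" values; the sample config is a made-up two-node fragment with a deliberate collision, and in practice you would point CONF at /etc/ocfs2/cluster.conf on each node.

```shell
# Scan a cluster.conf for duplicate node numbers or IP addresses.
# The sample below deliberately gives fa01 and fa02 the same number.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
node:
        ip_port = 7777
        ip_address = 192.168.5.1
        number = 0
        name = fa01
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.5.2
        number = 0
        name = fa02
        cluster = ocfs2
EOF

# Any "number" or "ip_address" value that appears twice is a misconfiguration.
dups=$(awk -F'= ' '/number/     {print "number", $2}
                   /ip_address/ {print "ip_address", $2}' "$CONF" | sort | uniq -d)
echo "${dups:-no duplicates found}"
rm -f "$CONF"
```

Running this against the real config on every node (the files should also be identical everywhere) would either surface a collision or rule this cause out.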

 

I have searched for similar problems but couldn't find anything. Could you help me?

 

Thanks

 

Mehmet Can ONAL
_______________________________________________
Ocfs2-users mailing list
[email protected]
http://oss.oracle.com/mailman/listinfo/ocfs2-users
