This is not a coherent shared-disk environment. The iSCSI target
has no idea that the device is also being updated from another
source.
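
If the DRBD device must still be exported over iSCSI, one commonly suggested mitigation (a sketch only, not verified against this setup; the IQN is a placeholder) is to configure the target in blockio mode so it bypasses the target host's page cache, and then have every node, node1 included, reach the storage through the initiator path, giving a single data path. With iscsitarget (IET), that would look like:

# /etc/ietd.conf on node1 -- hypothetical IQN
# Type=blockio avoids the target-side page cache, so writes from
# initiators and writes from the local /dev/drbd1 path are less
# likely to diverge; it does not by itself make mixed access safe.
Target iqn.2011-03.local.test:drbd1
        Lun 0 Path=/dev/drbd1,Type=blockio

Even then, mixing direct /dev/drbd1 mounts with iSCSI mounts of the same device remains unsupported; the safe configurations are either all nodes via DRBD or all nodes via the initiator.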

On 03/02/2011 04:30 PM, Nikola Savic wrote:

  Hello,

  I have a 3-node setup using CentOS 5.5 and OCFS2 1.4. Disks from two nodes 
(node1 and node2) are mirrored using DRBD (primary/primary mode), and this 
shared storage should be accessible from all three nodes. Because of that, 
node1 has an iSCSI target set up on top of the /dev/drbd1 device. I was also 
able to format the shared storage using OCFS2.

  When I mount the OCFS2 disk on node1 and node2 using the /dev/drbd1 device, 
they work well. After mounting it on node3 using the imported iSCSI /dev/sda 
device, the system gets into trouble. It looks like node3 can't join the 
cluster and is heartbeating in a slot taken by node1 or node2.

  Similarly, when I import the iSCSI target from node1 on node2, node2 and 
node3 work well, while mounting /dev/drbd1 on node1 causes a heartbeating 
problem. When that happens, node2 and node3 log the following line in 
/var/log/messages:

  node3 kernel: ocfs2_dlm: Nodes in domain 
("72F96BF71A3A41B6BD5540CE773A9E9D"): 2 3

while node1 reports:

  node1 kernel: ocfs2_dlm: Nodes in domain 
("72F96BF71A3A41B6BD5540CE773A9E9D"): 1
...
  node1 kernel: (o2hb-72F96BF71A,2456,0):o2hb_do_disk_heartbeat:777 ERROR: Device 
"drbd1": another node is heartbeating in our slot!

  I tried to set up an iSCSI initiator on node1 so it could connect to its own 
iSCSI target, but that doesn't work.

  Does anyone know why this is happening and how I can solve the problem? I 
think everything worked well when I used GNBD to export the DRBD device to the 
third node, but I would like to do it without RHCS.

Cluster configuration file (/etc/ocfs2/cluster.conf):
cluster:
        node_count = 3
        name = testcluster

node:
        ip_port = 7777
        ip_address = 192.168.100.2
        number = 1
        name = node1
        cluster = testcluster

node:
        ip_port = 7777
        ip_address = 192.168.100.3
        number = 2
        name = node2
        cluster = testcluster

node:
        ip_port = 7777
        ip_address = 192.168.100.4
        number = 3
        name = node3
        cluster = testcluster

  Thanks,
  Nikola


_______________________________________________
Ocfs2-users mailing list
Ocfs2-users@oss.oracle.com
http://oss.oracle.com/mailman/listinfo/ocfs2-users

