Hi all,

we use:

two Supermicro servers, 8 GB RAM, 2x quad-core Xeon 5x
3ware 9550 RAID 5 with 6x 320 GB Seagate disks
Network for DRBD between the two systems is a direct 10 GbE link (Intel 82598EB 10GbE AF network adapter), used only for DRBD sync
SLES 10 SP2, kernel 2.6.16.60-0.39.3-xen #1 SMP x86_64
DRBD 8.2.6

Stack: RAID 5 > phys. disk > LVM > DRBD > LVM > Xen > VBD
One LV on top of DRBD, formatted as OCFS2, for the DomUs.
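For clarity, the upper LVM layer on top of DRBD was created roughly like this (the upper VG/LV names and the LV size are simplified examples, not the exact ones from our setup; only vglocal/lvlocal below matches the drbd.conf):

```shell
# Lower layer: a local LV serves as the DRBD backing device
#   /dev/vglocal/lvlocal  ->  /dev/drbd0

# Upper layer: LVM on top of the DRBD device
pvcreate /dev/drbd0                  # make the DRBD device an LVM PV
vgcreate vgdrbd /dev/drbd0           # VG name "vgdrbd" is an example
lvcreate -L 100G -n lvocfs2 vgdrbd   # LV for the shared OCFS2 filesystem

# OCFS2 on that LV, mounted on both nodes (primary/primary)
mkfs.ocfs2 /dev/vgdrbd/lvocfs2
mount /dev/vgdrbd/lvocfs2 /vm
```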
Then we tuned for performance:
# blockdev --setra 16384 /dev/sda (only change)
# echo 512 > /sys/block/sda/queue/nr_requests
# echo deadline > /sys/block/sda/queue/scheduler
# echo 20 > /proc/sys/vm/dirty_background_ratio
# echo 60 > /proc/sys/vm/dirty_ratio
Node 1 was running with 2 active DomUs; node 2 was rebooted.
After the reboot, node 2 was in sync with node 1:
# /etc/init.d/drbd status
0:drbd0 Connected Primary/Primary UpToDate/UpToDate C
But the VG and the LV on top of DRBD had vanished,
and the same on node 1.
Then I listed the mounted filesystems on node 1:
# df
/dev/sda2 / (mounted)
df: '/vm': Input/output error
But the DomUs on node 1 are still running and still have their LVs on top of DRBD mounted as xvd devices.
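If it helps with diagnosis, I can run further checks and post the output (I have not captured it yet; these are the standard LVM/DRBD tools as I understand them):

```shell
pvscan               # does LVM still see a PV on /dev/drbd0?
vgscan               # rescan for volume groups
lvs                  # list LVs, if the VG reappears
drbdadm dump drbd0   # dump the active DRBD configuration
dmesg | tail -n 50   # kernel messages around the I/O error
```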
Here is my drbd.conf:
.................................
global {
usage-count yes;
disable-ip-verification;
}
common {
syncer { rate 400M; al-extents 3389; }
}
resource drbd0 {
protocol C;
handlers {
split-brain "/usr/lib/drbd/notify-split-brain.sh root";
}
startup {
wfc-timeout 0;
degr-wfc-timeout 120;
become-primary-on both;
}
disk {
on-io-error detach;
max-bio-bvecs 1;
}
net {
allow-two-primaries;
max-buffers 8000;
max-epoch-size 8000;
sndbuf-size 512k;
after-sb-0pri discard-zero-changes;
after-sb-1pri discard-secondary;
after-sb-2pri disconnect;
}
on enterprise5 {
device /dev/drbd0;
disk /dev/vglocal/lvlocal;
address 192.168.1.15:7788;
flexible-meta-disk internal;
}
on enterprise6 {
device /dev/drbd0;
disk /dev/vglocal/lvlocal;
address 192.168.1.16:7788;
flexible-meta-disk internal;
}
}
.................................

What can I do? Please help.
Regards
Andreas