Hello:
I configured two resources, r0 and r1. Device /dev/drbd0 can be used, but
/dev/drbd1 cannot. The error message is:
/sbin/drbdsetup /dev/drbd1 disk /dev/sda14 internal -1 --on-io-error=detach
ioctl(,SET_DISK_CONFIG,) failed: Device or resource busy
cat /etc/drbd.conf
resource r0 {
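For reference, a minimal two-resource drbd.conf along these lines might look as follows. This is a sketch, not the poster's actual config: the hostnames, IPs, and the r0 backing partition are placeholders (only /dev/sda14 for r1 comes from the error above). A "Device or resource busy" on SET_DISK_CONFIG usually means the backing device is already held by something else (mounted, in another resource, or already attached).

```
# /etc/drbd.conf -- sketch with two resources; names/IPs are placeholders.
resource r0 {
  protocol C;
  on alpha {
    device    /dev/drbd0;
    disk      /dev/sda13;
    address   192.168.1.1:7788;
    meta-disk internal;
  }
  on beta {
    device    /dev/drbd0;
    disk      /dev/sda13;
    address   192.168.1.2:7788;
    meta-disk internal;
  }
}
resource r1 {
  protocol C;
  on alpha {
    device    /dev/drbd1;
    disk      /dev/sda14;
    address   192.168.1.1:7789;   # each resource needs its own port
    meta-disk internal;
  }
  on beta {
    device    /dev/drbd1;
    disk      /dev/sda14;
    address   192.168.1.2:7789;
    meta-disk internal;
  }
}
```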
On 2011-02-16 11:41, Adam Kliment wrote:
Hello DRBD users,
I'd like to present our prototype of the drbd gem, a Ruby wrapper around
drbdadm. The gem is available on rubygems [1] (gem install drbd) and the
source is hosted on GitHub [2]. Please give us some feedback on the GitHub
issue tracker [3]. We will
Hello List,
I have one machine at the moment and will receive another machine in a
month. The issue is, I would like DRBD set up on this machine so that when
the other one arrives I can connect them and have the second machine up
in no time.
How would you configure this, i.e. make DRBD ignore
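One common approach (a sketch; all hostnames, IPs, and disks below are made up) is to declare both nodes in drbd.conf from the start, bring the resource up on the existing machine, and force it primary without a peer. When the second box arrives and connects, it performs an initial full sync automatically.

```
# /etc/drbd.conf fragment -- both peers declared up front; "node2"
# does not exist yet. Names, IPs, and partitions are placeholders.
resource r0 {
  protocol C;
  on node1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on node2 {                      # the machine arriving next month
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
# On node1 only (DRBD 8.3 syntax):
#   drbdadm create-md r0
#   drbdadm up r0                 # sits in WFConnection until the peer exists
#   drbdadm -- --overwrite-data-of-peer primary r0
```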
[root@0-0-0:/] pvs --segments --units s -o vg_name,vg_extent_size,pv_name,pvseg_start,pvseg_size,lv_name,lv_size,lv_attr,pe_start,seg_pe_ranges
  VG  Ext    PV         Start  SSize  LV          LSize  Attr  1st PE  PE Ranges
  vm  8192S  /dev/dm-3  0      25000  freeradius
IMHO this has nothing to do with DRBD.
It is probably a problem with your LVM.
You may add r|/dev/dm-.*| to your LVM filter list.
But that should be equivalent, as your current LVM filter
filter = [ a|drbd.*|, a|md.*|, r|.*| ]
already ignores it.
But you get /dev/dm-3 as PV for VG vm and dm-3 is the
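For reference, a first-match lvm.conf filter along the lines discussed might look like this (a sketch; adjust the patterns to your device layout):

```
# /etc/lvm/lvm.conf -- filter rules are evaluated first-match-wins.
# The explicit dm reject is redundant given the trailing r|.*|, but
# it makes the intent visible.
filter = [ "a|^/dev/drbd.*|", "a|^/dev/md.*|", "r|^/dev/dm-.*|", "r|.*|" ]
```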
Is there any solution for high availability between two firewalls on Linux?
With high availability between two firewalls, can the conntrack state
of one server be saved for the other?
regards,
Cristiano - Brazil.
2011/2/16 Felix Frank f...@mpexnet.de:
Hi,
these tools work (together, more or less)
hi,
debian lenny,
pacemaker 1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b,
drbd 8.3.10 5c0b046982443d4785d90a2c603378f9017b,
ocf ra 1.3 shipped with (self-compiled drbd debian package)
kernel 2.6.27.57+ipax
Every couple of hours, I encounter a digest mismatch:
Digest mismatch, buffer
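For context (a sketch, not necessarily the poster's config): DRBD only computes and checks per-request digests, and hence only logs "Digest mismatch", when a data integrity algorithm is enabled in the net section:

```
# drbd.conf fragment -- with data-integrity-alg set, DRBD checksums
# every data block on the wire and logs "Digest mismatch" when
# verification fails. The algorithm choice here is illustrative.
net {
  data-integrity-alg sha1;
}
```

A mismatch then means the data changed between submission and transmission (e.g. pages rewritten in flight) or was corrupted on the wire.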
On Wed, Feb 16, 2011 at 11:41:18AM -0300, Cristiano Bosenbecker Hellwig wrote:
Is there any solution for high availability between two firewalls on Linux?
With high availability between two firewalls, can the conntrack state
of one server be saved for the other?
no offense, but
On Wed, Feb 16, 2011 at 03:49:34PM +0100, Raoul Bhatia [IPAX] wrote:
hi,
debian lenny,
pacemaker 1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b,
drbd 8.3.10 5c0b046982443d4785d90a2c603378f9017b,
ocf ra 1.3 shipped with (self-compiled drbd debian package)
kernel 2.6.27.57+ipax
On Wed, Feb 16, 2011 at 11:44:50AM +, Alessandro Bono wrote:
Hi
I'm trying to compile drbd 8.3.10 from the git repository against kernel
2.6.32-28-server from Ubuntu lucid, but I receive this error
/usr/bin/make -C drbd KERNEL_SOURCES=/usr/src/linux-headers-2.6.32-28-server
On Wed, 16 Feb 2011 17:35:51 +0100, Lars Ellenberg wrote:
Yeah right.
Some standard Linux kernel header cannot be found, and that is obviously
DRBD's fault. Not.
The source tarball you use there is broken, or more likely some symlink
in your linux headers is broken.
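A quick way to check for the broken-symlink case is a sketch like the following (the headers path in the comment is an assumption matching Ubuntu's package layout):

```shell
# List broken symlinks under a kernel headers tree; a dangling symlink
# there is a common cause of "header not found" when building
# out-of-tree modules against distro headers.
find_broken_links() {
    # -xtype l matches symlinks whose target does not exist
    find "$1" -xtype l
}
# Typical use:
#   find_broken_links /usr/src/linux-headers-2.6.32-28-server
```

If this prints anything, reinstalling the headers package is usually the fix.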
building
I have several successful DRBD clusters in production, including two
RHEL 5.3 servers running drbd 8.3.2. They have been running fine for
more than a year. Today we saw very high iowait (99%) on the primary
node (possibly on the secondary too, but I neglected to look) and users
could not work. We
Corrections:
I said both servers are on the same GigE switch. Actually they are
connected back-to-back via a crossover cable.
I set the syncer rate to 110M not 333M.
--
Eric Robinson
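In DRBD 8.3 the resync bandwidth cap mentioned in the correction lives in the syncer section; a sketch of the 110M setting would be:

```
# drbd.conf fragment -- caps resynchronization (not application I/O)
# bandwidth; 110M matches the corrected figure above.
syncer {
  rate 110M;
}
```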
While performing a resync, iostat shows that nodeA (primary) is doing
almost all reads while nodeB (secondary) is doing almost all writes,
which is expected. However, I'm trying to see the names of the processes
that are doing the reading and writing. I tried enabling
/proc/sys/vm/block_dump and
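Once block_dump is enabled, the kernel logs one line per request in the form "name(pid): READ|WRITE block N on dev". A sketch for summarizing those dmesg lines by process and direction (the pipeline is an illustration, not the poster's method):

```shell
# Summarize block_dump output by "comm(pid):" and READ/WRITE.
summarize_block_dump() {
    # expects block_dump lines on stdin, e.g.
    #   mysqld(2201): READ block 123456 on dm-0
    grep -E ' (READ|WRITE) block ' | awk '{print $1, $2}' \
        | sort | uniq -c | sort -rn
}
# Typical use:
#   dmesg | summarize_block_dump
```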
In file included from include/linux/notifier.h:13,
from include/linux/memory_hotplug.h:6,
from include/linux/mmzone.h:666,
from include/linux/gfp.h:4,
from include/linux/kmod.h:22,
from