Hi,
- Original message -
From: Andreas Bauer a...@voltage.de
To: drbd-user@lists.linbit.com
Sent: Tuesday, 6 March 2012 00:27:33
Subject: Re: [DRBD-user] Kernel hung on DRBD / MD RAID
From: Florian Haas flor...@hastexo.com
Sent: Mon 05-03-2012 23:59
On Mon, Mar 5
Hi Andreas,
From: Lars Ellenberg lars.ellenberg [at] linbit
Ok, corrected to 8.3.11:
Kernel 3.1.0 / DRBD 8.3.11
Nothing directly DRBD related in the stack traces.
Yes, that makes sense for the md_resync, but what about the KVM process? It does
access its device via DRBD,
so is the stacktrace incomplete?
From: Micha Kersloot mi...@kovoks.nl
Sent: Mon 05-03-2012 11:31
Hi Andreas,
Looks like MD is stepping on its own toes there.
Stepping on one's own toes is something one isn't supposed to do, right? ;-)
I might raise the issue with the MD developers but at the moment I am
From: Micha Kersloot mi...@kovoks.nl
Sent: Mon 05-03-2012 16:36
No, I haven't gotten any further and do not have equipment for testing
at the moment, but I put my effort into carefully scheduling all
cron jobs / backups so that there are no RAID resync/verify runs
when heavy I/O can be expected.
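For what it's worth, on a Debian-style setup that kind of scheduling mostly comes down to keeping the monthly MD check (the mdadm cron job) and the backup window apart. The times and the backup script below are only placeholders, not taken from this thread:

# /etc/cron.d/mdadm -- Debian's stock monthly redundancy check,
# moved to a slot where no backups run (times are only an example)
57 0 * * 0 root if [ -x /usr/share/mdadm/checkarray ] && [ $(date +\%d) -le 7 ]; then /usr/share/mdadm/checkarray --cron --all --idle --quiet; fi

# hypothetical backup job, kept well away from that window
0 22 * * 1-5 root /usr/local/sbin/backup-vms.sh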
On Mon, Mar 5, 2012 at 11:45 PM, Andreas Bauer a...@voltage.de wrote:
I can share an observation:
(Disclaimer: my knowledge of the Linux I/O stack is very limited)
Kernel 3.1.0, DRBD 8.3.11, DRBD-LVM-MD-RAID1-SATA DISKS
(Disks use CFQ scheduler)
issue command: drbdadm verify all
(with
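For readers trying to reproduce this, starting and watching an online verify looks roughly like the following; the resource name r0 and the kern.log path are assumptions, and verify-alg has to be configured in the resource's net section for verify to do anything:

drbdadm verify all                     # or a single resource: drbdadm verify r0
cat /proc/drbd                         # connection state shows VerifyS / VerifyT plus a progress bar
grep "Out of sync" /var/log/kern.log   # the kernel logs any blocks that differ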
From: Florian Haas flor...@hastexo.com
Sent: Mon 05-03-2012 23:59
On Mon, Mar 5, 2012 at 11:45 PM, Andreas Bauer a...@voltage.de wrote:
I can share an observation:
(Disclaimer: my knowledge of the Linux I/O stack is very limited)
Kernel 3.1.0, DRBD 8.3.11, DRBD-LVM-MD-RAID1-SATA
Hi,
On 02/21/2012 12:03 AM, Andreas Bauer wrote:
So when vm-master is Primary, vm-slave is Secondary, and I force-detach the
backing device on vm-master, DRBD will automatically make vm-slave the
Primary and direct writes to that host?
no.
The secondary remains secondary. However, the
From: Felix Frank f...@mpexnet.de
Sent: Tue 21-02-2012 10:05
On 02/21/2012 12:03 AM, Andreas Bauer wrote:
So when vm-master is Primary, vm-slave is Secondary, and I force-detach the
backing device on vm-master, DRBD will automatically make vm-slave the
Primary
and direct writes to
On 02/21/2012 12:03 AM, Andreas Bauer wrote:
So when vm-master is Primary, vm-slave is Secondary, and I
force-detach the
backing device on vm-master, DRBD will automatically make vm-slave the
Primary and direct writes to that host?
no.
The secondary remains secondary. However, the
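My reading of the handbook (not a quote from this thread): with on-io-error detach the Primary simply drops to Diskless and keeps serving I/O from the peer's disk over the network; actually moving the role to vm-slave is a separate manual (or cluster-manager) step, roughly like this, with r0 as a placeholder resource name:

# in the resource's disk section:
#   disk { on-io-error detach; }
# after the Primary has lost its backing device:
drbdadm secondary r0    # on vm-master, once the guest is shut down
drbdadm primary r0      # on vm-slave, then start the guest there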
From: Lars Ellenberg lars.ellenb...@linbit.com
Sent: Fri 17-02-2012 11:52
On Tue, Feb 14, 2012 at 12:21:07PM +0100, Andreas Bauer wrote:
Yes, that makes sense for the md_resync, but what about the KVM process? It does
access its device via DRBD, so is the stacktrace incomplete? (missing DRBD
From: Andreas Bauer a...@voltage.de
Sent: Mon 20-02-2012 22:18
From: Lars Ellenberg lars.ellenb...@linbit.com
Sent: Fri 17-02-2012 11:52
Do you have another kvm placed directly on the MD?
Are you sure your kvms are not bypassing DRBD?
The lvm volumes are definitely all placed on
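One quick way to double-check that a guest really goes through DRBD and not the backing LV is to look at the block devices its qemu/kvm process has open; the PID and device names below are just examples, not output from this setup:

ls -l /proc/<pid-of-kvm>/fd | grep -E 'drbd|dm-|md'   # should point at /dev/drbdX only
lsof /dev/drbd0                                       # or ask the other way around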
On Mon, Feb 20, 2012 at 10:16:50PM +0100, Andreas Bauer wrote:
From: Lars Ellenberg lars.ellenb...@linbit.com
Sent: Fri 17-02-2012 11:52
On Tue, Feb 14, 2012 at 12:21:07PM +0100, Andreas Bauer wrote:
Yes, that makes sense for the md_resync, but what about the KVM process? It does
access its
From: Lars Ellenberg lars.ellenb...@linbit.com
Sent: Mon 20-02-2012 23:14
On Mon, Feb 20, 2012 at 10:16:50PM +0100, Andreas Bauer wrote:
The underlying device should never get stuck in the first place, so it
would be sufficient to handle it manually when it happens. But when I
On Tue, Feb 14, 2012 at 12:21:07PM +0100, Andreas Bauer wrote:
From: Lars Ellenberg lars.ellenb...@linbit.com
Ok, corrected to 8.3.11:
Kernel 3.1.0 / DRBD 8.3.11
Nothing directly DRBD related in the stack traces.
Yes, that makes sense for the md_resync, but what about the KVM process? It
On Mon, Feb 13, 2012 at 11:13:48PM +0100, Andreas Bauer wrote:
Can you describe the configuration to us? Where is the software RAID1? If you
can, send the mount points. What filesystem?
The RAID:
root@vm-master:~# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1
From: Lars Ellenberg lars.ellenb...@linbit.com
Ok, corrected to 8.3.11:
Kernel 3.1.0 / DRBD 8.3.11
Nothing directly DRBD related in the stack traces.
Yes, that makes sense for the md_resync, but what about the KVM process? It does
access its device via DRBD, so is the stacktrace incomplete?
Maybe it is an error in DRBD; you have many syncs in DRBD :) As I read it, I see that you
made this configuration:
http://www.drbd.org/users-guide-8.3/s-nested-lvm.html
Did you make any configuration changes, or kernel changes?
The failure comes from kvm; did you make any upgrade recently?
On Mon, Feb 13,
From: Eduardo Diaz - Gmail ediaz...@gmail.com
Maybe it is an error in DRBD; you have many syncs in DRBD :) As I read it, I see that you
made this configuration:
http://www.drbd.org/users-guide-8.3/s-nested-lvm.html
Did you make any configuration changes, or kernel changes?
The failure comes from
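As far as I recall, the point of that guide section is to make LVM scan PVs only through the intended path and to disable its cache; a rough sketch of the lvm.conf excerpt, with device patterns adapted to a DRBD-on-LVM-on-MD stack (so treat them as assumptions):

# /etc/lvm/lvm.conf (excerpt, inside the devices { } section)
filter = [ "a|/dev/md.*|", "a|/dev/drbd.*|", "r|.*|" ]   # accept MD and DRBD devices, reject the raw sd* paths
write_cache_state = 0
# afterwards: rm /etc/lvm/cache/.cache && vgscan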
From: Andreas Bauer a...@voltage.de
Maybe it is an error in DRBD; you have many syncs in DRBD :) As I read it, I see that you
made this configuration:
http://www.drbd.org/users-guide-8.3/s-nested-lvm.html
Did you make any configuration changes, or kernel changes?
The failure comes from
Can you describe the configuration to us? Where is the software RAID1? If you
can, send the mount points. What filesystem?
The RAID:
root@vm-master:~# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[0] sdb1[1]
976759672 blocks super 1.2 [2/2] [UU]
md1 : active
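When a check or resync on such an array collides with guest I/O, the background rate can be throttled; the values below are only examples, not anything recommended in this thread:

echo 10000 > /proc/sys/dev/raid/speed_limit_max   # global cap in KB/s
echo 10000 > /sys/block/md0/md/sync_speed_max     # or per array
cat /sys/block/md0/md/sync_action                 # idle / check / resync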
Hello,
this morning I had a nasty situation on a Primary/Secondary DRBD cluster. First
the setup:
Kernel 3.1.0 / DRBD 8.4.11
KVM virtual machines, running on top of
DRBD (protocol A, sndbuf-size 10M, data-integrity-alg md5, external meta data,
meta data on different physical disk), running on
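For readers following along, the options described above would correspond roughly to a resource definition like this; the resource name, host names, devices and the external meta-data disk are placeholders, not the poster's actual configuration:

resource vm1 {
  protocol A;
  net {
    sndbuf-size 10M;
    data-integrity-alg md5;
  }
  on vm-master {
    device    /dev/drbd0;
    disk      /dev/vg0/vm1;       # LV on top of the MD RAID1
    address   192.168.1.1:7788;
    meta-disk /dev/sdc1[0];       # external meta data on a separate physical disk
  }
  on vm-slave {
    device    /dev/drbd0;
    disk      /dev/vg0/vm1;
    address   192.168.1.2:7788;
    meta-disk /dev/sdc1[0];
  }
}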