Hi folks,
I have introduced sync checks of my drbd volumes, run by cron once
per month. The result: most of my volumes are out of sync.
AFAIU the fix is to disconnect the secondary volume and connect it
again. drbd will sync the lost chunks of data automagically. The
output of cat /proc/drbd shows
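A minimal sketch of that workflow, assuming a single resource named
"r0" (the name and the schedule are placeholders, not from the post):

    # /etc/cron.d/drbd-verify -- run an online verify once per month
    0 3 1 * *  root  /sbin/drbdadm verify r0

    # afterwards, disconnecting and reconnecting resyncs the blocks
    # that verify marked as out-of-sync
    drbdadm disconnect r0
    drbdadm connect r0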
Hi folks,
I ran a drbdadm verify on all my drbd clusters (2 nodes each). It
is still running, but by now the huge oos numbers look pretty
scary:
il06:~# ssh node24a cat /proc/drbd
version: 8.4.11 (api:1/proto:86-101)
srcversion: 32DFEF1F0DADCBF174877F7
1: cs:VerifyS ro:Secondary/Primary
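Once the verify pass finishes, the out-of-sync count can be read
directly from the status line, e.g.:

    # oos: is reported in KiB; 0 means the volume pair is in sync
    grep oos: /proc/drbd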
I stumbled over this today, too. AFAIU linstor won't work without drbd9,
i.e. in case of emergency I have to either rely upon a 3rd-party kernel
module, or build the drbd9 module on my own, hoping the code still builds.
This is a pretty high risk for a production system.
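For what it's worth, a rough sketch of building the out-of-tree drbd9
module from source, assuming kernel headers for the running kernel are
installed (exact steps may differ per release):

    git clone https://github.com/LINBIT/drbd.git
    cd drbd
    make                # builds against /lib/modules/$(uname -r)/build
    sudo make install
    sudo modprobe drbd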
Regards
Harri
Hi folks,
I would be highly interested in an explanation (and a possible workaround
or fix?) for this inconsistency, too. How come drbd doesn't ring all the
alarm bells it has access to in such a situation? Is this considered
"not important"?
Regards
Harri
Hi folks,
drbd in Primary/Secondary mode, using drbd 8.4.10 on Debian 10:
I had a problem with a RAID controller; all disks became unavailable.
Drbd took over, giving the node access to the disks on the other node
over the network. Amazing. I took down the containers running on the
affected host,
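A quick way to confirm a node is running diskless off its peer, with
"r0" as a placeholder resource name:

    drbdadm dstate r0   # prints e.g. Diskless/UpToDate
    drbdadm attach r0   # reattach the backing device once the RAID is back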
Hi folks,
I am still on Debian Buster and kernel 4.19.146 (using drbd-utils 9.5.0-1),
but before running a backports kernel (5.8.10) I wonder whether the upgrade
of drbd-utils to version 9.15 is optional?
Regards
Harri
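One way to see which module/userland combination is actually in use
(both commands are standard; the versions shown will differ per system):

    head -1 /proc/drbd    # version of the in-kernel module
    drbdadm --version     # version of the drbd-utils userland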
Hi Lars,
On 12/14/18 1:27 PM, Lars Ellenberg wrote:
There was nothing dirty (~ 7 MB; nothing worth to mention).
So nothing to sync.
But it takes some time to invalidate and shrink 20 million dentries
and inodes and 13 million buffer heads and associated caches.
Also, cpu bound now makes more
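If the dentry/inode caches are the suspect, one standard diagnostic
(not advice from the thread, just a stock kernel knob) is to drop them
before the umount and compare timings:

    sync
    echo 2 > /proc/sys/vm/drop_caches   # drop dentries and inodes
    time umount /data1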
On 12/14/18 9:32 AM, Harald Dunkel wrote:
Is sync broken for drbd?
PS: When writing to a (much smaller) local partition instead of a drbd
partition, sync takes several seconds and the following umount takes
just a few milliseconds, as expected.
???
Regards
Harri
Hi folks,
On 12/13/18 11:49 PM, Igor Cicimov wrote:
On Fri, Dec 14, 2018 at 2:57 AM Lars Ellenberg <lars.ellenb...@linbit.com> wrote:
Unlikely to have anything to do with DRBD.
since you apparently can reproduce, monitor
grep -e Dirty -e Writeback /proc/meminfo
and
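For continuous monitoring while reproducing, e.g.:

    # refresh the Dirty/Writeback counters once per second
    watch -n1 'grep -e Dirty -e Writeback /proc/meminfo'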
Hi folks,
using drbd, umounting /data1 takes >50 seconds, even though the file
system (ext4, noatime, default) wasn't accessed for more than 2h.
umount ran at 100% CPU load.
# sync
# time umount /data1
real    0m52.772s
user    0m0.000s
sys     0m52.740s
This appears to be a pretty long
Hi Lars,
On 12/04/2015 05:04 PM, Lars Ellenberg wrote:
>
> You are not supposed to disable the resync controller,
> you are supposed to correctly use it.
>
> https://blogs.linbit.com/p/443/drbd-sync-rate-controller-2/
>
> ... you should:
>
> * set c-plan-ahead to 20 (default with 8.4),
>
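Put together as a drbd.conf disk section, with illustrative values only
(tune the fill target and rates for your own hardware, per the blog post
above):

    disk {
        c-plan-ahead   20;    # enable the dynamic resync controller
        c-fill-target  1M;
        c-min-rate     10M;
        c-max-rate     100M;
    }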