[DRBD-user] drbd-9.0.9rc1

2017-08-04 Thread Philipp Reisner
Hi,

good news for the weekend, the next drbd-9 release will be ready
in one week.
Get your test equipment ready next week, and let us know if you 
find anything suspicious.

  This release will be very important for users that have diskless
  primary nodes in their setups.


drbd
9.0.9rc1-1 (api:genl2/proto:86-112/transport:14)

 * fix occasionally forgotten resyncs in installations where
   diskless primaries are present. The bug triggers when a storage
   node is re-integrated and happens to connect to the diskless
   primary first. This bug is severe, since it might cause inconsistent
   data to be read back on the diskless primary!
 * fix a possible OOPS when printing a debug message regarding bitmap
   locking
 * fix discards bigger than 1MiB; the bug caused a disconnect on
   bigger discard requests
 * fix an issue that caused unexpected split-brain situations upon
   connect. This issue triggers only when one of the nodes has a
   node_id bigger than 3.
 * fix left-over bits in the bitmap on the SyncSource after resync; the
   issue was triggered by write requests that come in while the
   resync starts
 * fix peers becoming unexpectedly displayed as D_OUTDATED at the
   end of a resync, while the disk state on the node stays D_UP_TO_DATE
 * fix a race between auto-promote and auto-demote of multiple volumes
   in a single resource; the symptom was that a process opening
   /dev/drbdX read-write got an -EROFS errno

http://git.drbd.org/drbd-9.0.git/tag/refs/tags/drbd-9.0.9rc1
http://www.linbit.com/downloads/drbd/9.0/drbd-9.0.9rc1-1.tar.gz
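
For those who want to give the release candidate a spin, a minimal sketch of
building it from the tarball above (assuming kernel headers for the running
kernel are installed; the extracted directory name is an assumption based on
the tarball name, so adjust paths to your environment):

    curl -O http://www.linbit.com/downloads/drbd/9.0/drbd-9.0.9rc1-1.tar.gz
    tar -xzf drbd-9.0.9rc1-1.tar.gz && cd drbd-9.0.9rc1-1
    make && make install
    # load the new module and confirm the version string
    modprobe drbd && modinfo drbd | grep ^version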

best regards,
 Phil
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] Imported vm to kvm does not show interface

2017-08-04 Thread Veit Wahlich
Hi Dirk,

this is most likely caused by a NIC model not supported by the guest OS'
drivers, but this is also totally off-topic given that this is the DRBD
ML. I suggest you consult the KVM/qemu/libvirt MLs on this topic
instead.
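
(For illustration only, since this is indeed off-topic here: a minimal sketch
of checking and changing the emulated NIC model with libvirt; the domain name
"vm1" is hypothetical, and e1000 is just one example of a widely supported
model.)

    # show which NIC model the imported domain currently uses
    virsh dumpxml vm1 | grep -A3 '<interface'
    # edit the domain and set, e.g., <model type='e1000'/> on the interface
    virsh edit vm1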

Best regards,
// Veit

On Friday, 04.08.2017, at 13:59, Dirk Lehmann wrote:
> Hello,
> 
> 
> I exported one of my virtual machines from Oracle VirtualBox to OVA
> 2.0 and converted this file to KVM qcow2 as described, for example, in
> this tutorial:
> 
> 
> https://utappia.org/2016/04/20/how-to-migrate-virtual-box-machines-to-the-kvm-virtmanager/
> 
> 
> Unfortunately this VM does not bring up eth0 when started by KVM.
> 
> 
> Any hint how to fix this so I can get migrated to KVM with DRBD and Pacemaker HA?
> 
> 
> Best regards,
> 
> 
> Dirk
> 
> 
> 
> ---
> 
> 
> Dirk Lehmann
> 
> Informatikkaufmann (IHK)
> 
> Groppstraße 11
> 
> 97688 Bad Kissingen
> 
> Telefon (0971) 121 922 56
> 
> Telefax (0971) 121 922 58
> 
> Webseite www.so-geht-es.org
> 
> 
> This message was sent with Outlook for Android and is intended only
> for the recipient.
> 
> 
> "KEEP THE CLICK IN YOUR CITY! Shop where you live" - and visit my
> online shop now at
> http://www.badkissingen.computer
> 
> 
> ___
> drbd-user mailing list
> drbd-user@lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user


___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


[DRBD-user] Imported vm to kvm does not show interface

2017-08-04 Thread Dirk Lehmann
Hello,

I exported one of my virtual machines from Oracle VirtualBox to OVA 2.0 and 
converted this file to KVM qcow2 as described, for example, in this tutorial:

https://utappia.org/2016/04/20/how-to-migrate-virtual-box-machines-to-the-kvm-virtmanager/
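
For illustration, a minimal sketch of the conversion step that tutorial
describes (file names are hypothetical; an OVA is a tar archive containing
the exported VMDK disk image):

    tar -xf myvm.ova
    qemu-img convert -f vmdk -O qcow2 myvm-disk001.vmdk myvm.qcow2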

Unfortunately this VM does not bring up eth0 when started by KVM.

Any hint how to fix this so I can get migrated to KVM with DRBD and Pacemaker HA?

Best regards,

Dirk

---

Dirk Lehmann
Informatikkaufmann (IHK)
Groppstraße 11
97688 Bad Kissingen
Telefon (0971) 121 922 56
Telefax (0971) 121 922 58
Webseite www.so-geht-es.org

This message was sent with Outlook for Android and is intended only for the 
recipient.

"KEEP THE CLICK IN YOUR CITY! Shop where you live" - and visit my online shop 
now at http://www.badkissingen.computer

___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


[DRBD-user] DRBD performance

2017-08-04 Thread ArekW
I set up a 2-node NFS cluster on DRBD 8.4 across two sites connected with a
10 Gb link. I ran an MQ benchmark and got a result of 100 units, which is poor.
When I disable DRBD synchronization I get 200 units, which is good; that is
twice as fast. Is it normal for DRBD to slow things down that much?
Thank you
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] Out-of-sync woes

2017-08-04 Thread Roland Kammerer
On Fri, Aug 04, 2017 at 10:31:02AM +0200, Jan Schermer wrote:
> I think it’s more likely he’s hitting a number of bugs that are
> getting fixed in DRBD, where it would simply not resync data while
> appearing Consistent/UpToDate etc. 

No. This is a drbd8.4 setup and you are talking about drbd9.

Regards, rck
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] Out-of-sync woes

2017-08-04 Thread Jan Schermer
AFAIK this should not affect data integrity at rest (related to "verify-alg") 
but only in flight (csum-alg), and even then at most a few blocks (those that 
are in flight) should be affected? (btw, shouldn't stable_pages_required be enabled?)

I think it's more likely he's hitting a number of bugs that are getting fixed 
in DRBD, where it would simply not resync data while appearing 
Consistent/UpToDate etc. I urge you to run drbdsetup status --verbose 
--statistics $resource and look for an out-of-sync counter > 0.
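
For illustration, a minimal sketch of checking this (assuming the drbd9-style
drbdsetup from drbd-utils; on a drbd 8.4 setup the equivalent is the "oos:"
field in /proc/drbd; "r0" is a hypothetical resource name):

    # drbd9-style: a non-zero out-of-sync value means blocks known to differ
    drbdsetup status --verbose --statistics r0 | grep out-of-sync
    # drbd 8.4: the per-device oos: counter serves the same purpose
    grep oos: /proc/drbd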

We used cache=none with qemu and switched to cache=writeback with no corruption 
- you just need to take care to have it primary on only one node then (it works 
with live migrations if you know what you're doing, though).

Jan


> On 4 Aug 2017, at 09:55, Veit Wahlich  wrote:
> 
> Hi Luke,
> 
> I assume you are experiencing the results of data inconsistency by
> in-flight writes. This means that a process (here your VM's qemu) can
> change a block that already waits to be written to disk.
> Whether this happens (undetected) or not depends on how the data is
> accessed for writing and synced to disk.
> 
> For qemu, you have to consider two factors: the guest OS' file systems'
> configuration and qemu's disk caching configuration:
> On Linux guests, this usually only happens for guests with file systems
> that are NOT mounted either sync or with barriers, and with block-backed
> swap.
> On Windows guests it always happens.
> For qemu it depends on how the disk caching strategy is configured and
> thus whether it allows in-flight writes or not.
> 
> The common position is to configure qemu for writethrough caching for
> all disks and leave your guests' OS unchanged. You will also have to
> ignore/override libvirt's warning about unsafe migration with this cache
> setting, as it only applies to file-backed VM disks, not
> blockdev-backed.
> I use this for hundreds of both Linux and Windows VMs backed by DRBD
> block devices and have no inconsistency problems at all since this
> change.
> 
> Changing qemu's caching strategy might affect performance.
> For performance reasons you are advised to use a hardware RAID
> controller with battery-backed write-back cache.
> 
> For consistency reasons you are advised to use real hardware RAID, too,
> as the in-flight block changing problem described above might also
> affect mdraid, dmraid/FakeRAID, LVM mirroring, etc. (depending on
> configuration).
> 
> Best regards,
> // Veit
> 
> 
On Friday, 04.08.2017, at 11:11 +1200, Luke Pascoe wrote:
>> Hello everyone.
>> 
>> I have a fairly simple 2-node CentOS 7 setup running KVM virtual
>> machines, with DRBD 8.4.9 between them.
>> 
>> There is one DRBD resource per VM, with at least 1 volume each,
>> totalling 47 volumes.
>> 
>> There's no clustering or heartbeat or other complexity. DRBD has its
>> own Gig-E interface to sync over.
>> 
>> I recently migrated a host between nodes and it crashed. During
>> diagnostics I did a verification on the drbd volume for the host and
>> found that it had _a lot_ of out of sync blocks.
>> 
>> This led me to run a verification on all volumes, and while I didn't
>> find any other volumes with large numbers of out of sync blocks, there
>> were several with a few. I have disconnected and reconnected all these
>> volumes, to force them to resync.
>> 
>> I have now set up a nightly cron which will verify as many volumes as
>> it can in a 2 hour window; this means I get through the whole lot in
>> about a week.
>> 
>> Almost every night, it reports at least 1 volume which is out-of-sync,
>> and I'm trying to understand why that would be.
>> 
>> I did some research and the only likely candidate I could find was
>> related to TCP checksum offloading on the NICs, which I have now
>> disabled, but it has made no difference.
>> 
>> Any suggestions what might be going on here?
>> 
>> Thanks.
>> 
>> Luke Pascoe
>> ___
>> drbd-user mailing list
>> drbd-user@lists.linbit.com
>> http://lists.linbit.com/mailman/listinfo/drbd-user
> 
> 
> ___
> drbd-user mailing list
> drbd-user@lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user

___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] Out-of-sync woes

2017-08-04 Thread Veit Wahlich
Hi Luke,

I assume you are experiencing the results of data inconsistency by
in-flight writes. This means that a process (here your VM's qemu) can
change a block that already waits to be written to disk.
Whether this happens (undetected) or not depends on how the data is
accessed for writing and synced to disk.

For qemu, you have to consider two factors: the guest OS' file systems'
configuration and qemu's disk caching configuration:
On Linux guests, this usually only happens for guests with file systems
that are NOT mounted either sync or with barriers, and with block-backed
swap.
On Windows guests it always happens.
For qemu it depends on how the disk caching strategy is configured and
thus whether it allows in-flight writes or not.

The common position is to configure qemu for writethrough caching for
all disks and leave your guests' OS unchanged. You will also have to
ignore/override libvirt's warning about unsafe migration with this cache
setting, as it only applies to file-backed VM disks, not
blockdev-backed.
I use this for hundreds of both Linux and Windows VMs backed by DRBD
block devices and have no inconsistency problems at all since this
change.
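
For illustration, a minimal sketch of applying this with libvirt (the domain
name "vm1" is hypothetical; the attribute shown is the standard libvirt disk
driver cache setting):

    # open the domain definition and set cache='writethrough' on the
    # <driver> element of each DRBD-backed <disk>, e.g.:
    #   <driver name='qemu' type='raw' cache='writethrough'/>
    virsh edit vm1
    # after restarting the guest, confirm the setting took effect
    virsh dumpxml vm1 | grep cache=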

Changing qemu's caching strategy might affect performance.
For performance reasons you are advised to use a hardware RAID
controller with battery-backed write-back cache.

For consistency reasons you are advised to use real hardware RAID, too,
as the in-flight block changing problem described above might also
affect mdraid, dmraid/FakeRAID, LVM mirroring, etc. (depending on
configuration).

Best regards,
// Veit


On Friday, 04.08.2017, at 11:11 +1200, Luke Pascoe wrote:
> Hello everyone.
> 
> I have a fairly simple 2-node CentOS 7 setup running KVM virtual
> machines, with DRBD 8.4.9 between them.
> 
> There is one DRBD resource per VM, with at least 1 volume each,
> totalling 47 volumes.
> 
> There's no clustering or heartbeat or other complexity. DRBD has its
> own Gig-E interface to sync over.
> 
> I recently migrated a host between nodes and it crashed. During
> diagnostics I did a verification on the drbd volume for the host and
> found that it had _a lot_ of out of sync blocks.
> 
> This led me to run a verification on all volumes, and while I didn't
> find any other volumes with large numbers of out of sync blocks, there
> were several with a few. I have disconnected and reconnected all these
> volumes, to force them to resync.
> 
> I have now set up a nightly cron which will verify as many volumes as
> it can in a 2 hour window; this means I get through the whole lot in
> about a week.
> 
> Almost every night, it reports at least 1 volume which is out-of-sync,
> and I'm trying to understand why that would be.
> 
> I did some research and the only likely candidate I could find was
> related to TCP checksum offloading on the NICs, which I have now
> disabled, but it has made no difference.
> 
> Any suggestions what might be going on here?
> 
> Thanks.
> 
> Luke Pascoe
> ___
> drbd-user mailing list
> drbd-user@lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user


___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] drbd 8.4.10-1 doesn't compile on RHEL 7.4 beta

2017-08-04 Thread Trevor Hemsley
On 03/08/17 17:38, Simon Ironside wrote:
> On 07/07/17 14:41, Lafaille Christophe wrote:
>
>> Today, I can't compile drbd 8.4.10-1 on my RHEL 7.4 Beta platform, an
>> error is returned...
>
> Same happens on RHEL 7.4 GA, released a couple of days ago, with their
> stock kernel package version 3.10.0-693.

I'm told that this affects 9.0.8 as well. Probably expected but...

Trevor
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


[DRBD-user] DRBD Won't Compile in RHEL7.4

2017-08-04 Thread Jay Smith - NOAA Federal
I upgraded one of my servers to RHEL7.4 yesterday. Because the kernel
changed, I needed to recompile DRBD. This failed.

After the first failure, I deleted my /usr/src/drbd-8.4 directory and
followed the "git download" instructions in the DRBD User's Guide to obtain
a fresh copy of drbd-8.4.10-1. Then, as user root in the new
/usr/src/drbd-8.4 directory, I ran the following command:
make clean all 2>&1 | tee make.log
which failed.
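
For illustration, a minimal sketch of the same build invoked with an explicit
kernel source tree (assuming the RHEL kernel-devel package for the running
kernel is installed; KDIR is the variable the DRBD makefile uses, as seen in
the attached make.log):

    yum install -y "kernel-devel-$(uname -r)"
    make clean all KDIR="/lib/modules/$(uname -r)/build" 2>&1 | tee make.log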

Not knowing how much has changed between DRBD 8.x and 9.x, I checked out a
clean copy of DRBD 9.0.8 and tried to compile it. Unfortunately, that
failed in exactly the same way as the DRBD 8.x compile attempt did.

Some system info:
The output of "cat /etc/redhat-release" is: Red Hat Enterprise Linux Server
release 7.4 (Maipo)
The output of "uname -srvmpio" is: Linux 3.10.0-693.el7.x86_64 #1 SMP Thu
Jul 6 19:56:57 EDT 2017 x86_64 x86_64 x86_64 GNU/Linux
The version of gcc is: 4.8.5-16.el7
The version of the kernel is: 3.10.0-693.el7

I am able to use DRBD on this newly upgraded server by booting with the
previous kernel (version 3.10.0-514.26.2.el7), but this is not a viable
long-term solution.

The log file I created when attempting to compile, make.log, is attached.

If there is any other information you need from me, please email. Thanks in
advance for any help you can provide.

-- 
Jay Smith
Information Technology Officer
National Weather Service Fairbanks, AK
make[1]: Entering directory `/usr/src/drbd-8.4/drbd'
rm -rf .tmp_versions Module.markers Module.symvers modules.order
rm -f *.[oas] *.ko .*.cmd .*.d .*.tmp *.mod.c .*.flags .depend .kernel*
rm -f compat/*.[oas] compat/.*.cmd
rm -f .compat_test.*
make[1]: Leaving directory `/usr/src/drbd-8.4/drbd'
rm -f *~
===
  With DRBD module version 8.4.5, we split out the management tools
  into their own repository at http://git.linbit.com/drbd-utils.git
  (tarball at http://oss.linbit.com/drbd)

  That started out as "drbd-utils version 8.9.0",
  and provides compatible drbdadm, drbdsetup and drbdmeta tools
  for DRBD module versions 8.3, 8.4 and 9 (prereleases as of now).

  Again: to manage DRBD 8.4.5 kernel modules and above,
  you want drbd-utils >= 8.9.0 from above url.
===
make[1]: Entering directory `/usr/src/drbd-8.4/drbd'

Calling toplevel makefile of kernel source tree, which I believe is in
KDIR=/lib/modules/3.10.0-693.el7.x86_64/build

make -C /lib/modules/3.10.0-693.el7.x86_64/build   SUBDIRS=/usr/src/drbd-8.4/drbd  modules
  COMPAT  have_pointer_backing_dev_info
  COMPAT  use_blk_queue_max_sectors_anyways
  COMPAT  have_inode_lock
  COMPAT  have_kvfree
  COMPAT  have_cpumask_empty
  COMPAT  drbd_release_returns_void
  COMPAT  have_blk_queue_split
  COMPAT  have_genl_register_family_with_ops
  COMPAT  have_blk_queue_merge_bvec
  COMPAT  have_dst_groups
  COMPAT  have_f_path_dentry
  COMPAT  blkdev_issue_zeroout_blkdev_ifl_wait
  COMPAT  have_ctrl_attr_mcast_groups
  COMPAT  have_vzalloc
  COMPAT  have_blk_set_stacking_limits
  COMPAT  have_clear_bit_unlock
  COMPAT  have_prandom_u32
  COMPAT  hlist_for_each_entry_has_three_parameters
  COMPAT  have_proc_pde_data
  COMPAT  have_fmode_t
  COMPAT  have_struct_queue_limits
  COMPAT  have_bd_claim_by_disk
  COMPAT  have_linux_byteorder_swabb_h
  COMPAT  queue_limits_has_discard_granularity
  COMPAT  have_idr_alloc
  COMPAT  have_umh_wait_proc
  COMPAT  queue_limits_has_discard_zeroes_data
  COMPAT  have_SHASH_DESC_ON_STACK
  COMPAT  have_nlmsg_hdr
  COMPAT  have_struct_bvec_iter
  COMPAT  have_blk_queue_max_hw_sectors
  COMPAT  have_blk_plug_cb_data
  COMPAT  sock_create_kern_has_five_parameters
  COMPAT  have_AHASH_REQUEST_ON_STACK
  COMPAT  have_blk_qc_t_make_request
  COMPAT  have_bd_unlink_disk_holder
  COMPAT  have_nla_put_64bit
  COMPAT  init_work_has_three_arguments
  COMPAT  bio_split_has_bio_split_pool_parameter
  COMPAT  have_bio_set_op_attrs
  COMPAT  have_IS_ERR_OR_NULL
  COMPAT  have_genlmsg_put_reply
  COMPAT  have_WB_congested_enum
  COMPAT  have_signed_nla_put
  COMPAT  bioset_create_has_three_parameters
  COMPAT  have_is_vmalloc_addr
  COMPAT  have_proc_create_data
  COMPAT  have_open_bdev_exclusive
  COMPAT  kmap_atomic_page_only
  COMPAT  have_list_splice_tail_init
  COMPAT  have_bio_bi_error
  COMPAT  have_shash_desc_zero
  COMPAT  have_blk_queue_write_cache
  COMPAT  have_task_pid_nr
  COMPAT  have_genlmsg_reply
  COMPAT  have_genlmsg_msg_size
  COMPAT  have_blkdev_get_by_path
  COMPAT  have_bool_type
  COMPAT  have_bio_bi_destructor
  COMPAT  have_genl_id_generate
  COMPAT  need_d_inode
  COMPAT  have_sock_shutdown
  COMPAT  have_idr_for_each_entry
  COMPAT  have_blk_queue_max_segments
  COMPAT  have_refcount_inc
  COMPAT  have_void_make_request
  COMPAT  have_genl_family_in_genlmsg_multicast
  COMPAT  have_genl_lock
  COMPAT  have_cn_netlink_skb_parms
  COMPAT  have_atomic_in_flight
  COMPAT  

[DRBD-user] Out-of-sync woes

2017-08-04 Thread Luke Pascoe
Hello everyone.

I have a fairly simple 2-node CentOS 7 setup running KVM virtual
machines, with DRBD 8.4.9 between them.

There is one DRBD resource per VM, with at least 1 volume each,
totalling 47 volumes.

There's no clustering or heartbeat or other complexity. DRBD has its
own Gig-E interface to sync over.

I recently migrated a host between nodes and it crashed. During
diagnostics I did a verification on the drbd volume for the host and
found that it had _a lot_ of out of sync blocks.

This led me to run a verification on all volumes, and while I didn't
find any other volumes with large numbers of out of sync blocks, there
were several with a few. I have disconnected and reconnected all these
volumes, to force them to resync.

I have now set up a nightly cron which will verify as many volumes as
it can in a 2 hour window; this means I get through the whole lot in
about a week.
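
For illustration, a minimal sketch of what such a time-boxed verify pass might
look like (resource discovery via drbdadm sh-resources; the 2-hour cut-off and
the completion check are simplified compared to a real script):

    #!/bin/bash
    # stop starting new verifies once the 2-hour window has elapsed
    END=$(( $(date +%s) + 7200 ))
    for res in $(drbdadm sh-resources); do
        [ "$(date +%s)" -ge "$END" ] && break
        drbdadm verify "$res"
        # wait until this resource's online verify has finished
        while grep -q 'VerifyS' /proc/drbd; do sleep 60; done
    done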

Almost every night, it reports at least 1 volume which is out-of-sync,
and I'm trying to understand why that would be.

I did some research and the only likely candidate I could find was
related to TCP checksum offloading on the NICs, which I have now
disabled, but it has made no difference.
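
For reference, a minimal sketch of disabling the checksum offloads (the
interface name "eth1" is a hypothetical stand-in for the dedicated replication
NIC):

    ethtool -K eth1 tx off rx off
    ethtool -k eth1 | grep checksumming   # confirm rx/tx checksumming are off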

Any suggestions what might be going on here?

Thanks.

Luke Pascoe
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


[DRBD-user] Out-of-sync woes

2017-08-04 Thread Luke Pascoe
Hello everyone.

I have a fairly simple 2-node CentOS 7 setup running KVM virtual machines,
with DRBD 8.4.9 between them.

There is one DRBD resource per VM, with at least 1 volume each, totalling
47 volumes.

There's no clustering or heartbeat or other complexity. DRBD has its own
Gig-E interface to sync over.

I recently migrated a host between nodes and it crashed. During diagnostics
I did a verification on the drbd volume for the host and found that it had
_a lot_ of out of sync blocks.

This led me to run a verification on all volumes, and while I didn't find
any other volumes with large numbers of out of sync blocks, there were
several with a few. I have disconnected and reconnected all these volumes,
to force them to resync.

I have now set up a nightly cron which will verify as many volumes as it
can in a 2 hour window; this means I get through the whole lot in about a
week.

Almost every night, it reports at least 1 volume which is out-of-sync, and
I'm trying to understand why that would be.

I did some research and the only likely candidate I could find was related
to TCP checksum offloading on the NICs, which I have now disabled, but it
has made no difference.

Any suggestions what might be going on here?

Thanks.

Luke Pascoe



E: l...@osnz.co.nz
P: +64 (9) 296 2961
M: +64 (27) 426 6649
W: www.osnz.co.nz

24 Wellington St
Papakura
Auckland, 2110
New Zealand
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] drbd 8.4.10-1 doesn't compile on RHEL 7.4 beta

2017-08-04 Thread Roland Kammerer
On Fri, Jul 07, 2017 at 03:41:32PM +0200, Lafaille Christophe wrote:
>Hi,
> 
>I've downloaded RHEL7.4 Beta in order to prepare my future platform for
>drbd+corosync+pacemaker.
> 
>Today, I can't compile drbd 8.4.10-1 on my RHEL 7.4 Beta platform, an
>error is returned...
> 
>    Now, I don't know if it's a problem on the DRBD side or the Red Hat
>    side... could you help me?

That is our "fault" by definition. For in-kernel DRBD it is easy: it
uses the data structures the kernel provides at that time and moves
along.

We want the out-of-tree code to be as close as possible to what current
upstream Linux looks like, which helps us merge the oot code back upstream.

The oot code supports a lot of kernel versions (think of all the
different kernel versions all the distributions have) and does that with
a compat layer that detects what features your kernel has and for which
it has to provide compat code.

This is what failed, and usually we are quick to update the compat
code. It is especially fun when old kernels backport things from new
kernels.

So yes, that will be fixed; currently I can't give you an exact date
for when this will happen in this case. "Soon".

Regards, rck
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user