Hi Bob,

> I've created a cluster on a ~6TB multipath device, but it only came out to 4TB.
> blockdev --getsz on the LV backing device shows 12883853312, while blockdev
> --getsz on /dev/drbd0 only returns 8587575296.
> 
> Pretty much a vanilla config - it just uses external metadata (512MB LV).
> 
> I couldn't find anything in the docs, but have seen old (years ago) 
> references to a 4TB disk limit when using external metadata.

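Those getsz numbers are in 512-byte sectors, so that works out to roughly 6 TiB on the
backing LV versus roughly 4 TiB on drbd0. A quick sanity check (assuming coreutils
numfmt is installed):

  $ numfmt --to=iec $((12883853312 * 512))   # backing LV  -> ~6.0T
  $ numfmt --to=iec $((8587575296 * 512))    # /dev/drbd0  -> ~4.0T

For comparison, here is what I see on a 32T setup here: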
[root@lvo-shr-store10 ~]# lvs  | grep replicate
  replicate_data VGDATA -wi-ao---- 32.00t
  replicate_meta VGDATA -wi-ao----  4.00g

[root@lvo-shr-store10 ~]# dmesg | grep -i drbd | grep Version
[   58.143926] drbd: initialized. Version: 8.4.11-1 (api:1/proto:86-101)

[root@lvo-shr-store10 ~]# blockdev --getsz /dev/VGDATA/replicate_data
68719476736

[root@lvo-shr-store10 ~]# blockdev --getsz /dev/drbd10
68719476736

[root@lvo-shr-store10 ~]# zpool get all zpool | grep -i size
zpool  size                           31.8T                          -
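(68719476736 sectors x 512 bytes is exactly 32 TiB, so DRBD passes the full LV size
through; the 31.8T that zpool reports is presumably just ZFS's own label/alignment
overhead.) Same check as above:

  $ numfmt --to=iec $((68719476736 * 512))   # -> 32T for both the LV and drbd10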

So I could export the full (DRBD) block device to the zpool …

The relevant part of the config:

  device                /dev/drbd_nfsdata minor 10;
  disk                  /dev/VGDATA/replicate_data;
  flexible-meta-disk    /dev/VGDATA/replicate_meta;
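If I'm reading the 8.4 user guide's external metadata sizing right (Ms = ceil(Cs / 2^18) * 8 + 72,
everything in 512-byte sectors), a 32 TiB backing device only needs on the order of 1 GiB of
metadata, so the 4 GB replicate_meta LV has plenty of headroom. Rough check, assuming that
formula:

  $ echo $(( (68719476736 / 262144) * 8 + 72 ))
  # -> 2097224 sectors, i.e. about 1 GiB (round the division up for non-power-of-two sizes)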

I used drbd84-utils-9.3.1-1.el7.elrepo.x86_64 to configure it all.

So I don't think 8.4.11 should be limited to 4TB for you! :)

bye,
Chris

