On 15-09-2017 17:12, David Bruzos wrote:
Hi Danti,
The behavior you are experiencing is normal. As I pointed out previously,
I've been using this setup for many years and I've seen the same thing. You
will encounter it whenever the filesystem is written to the DRBD device
without a partition table.
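A rough illustration of the effect (the device, pool, and volume names here
are hypothetical): without a partition table, the filesystem superblock
written through the DRBD device sits at offset 0 of the backing ZVOL too,
so signature scanners find it on both devices:

    blkid /dev/drbd0                 # reports the filesystem, e.g. TYPE="xfs"
    blkid /dev/zvol/tank/vm-disk     # the same signature is visible on the ZVOL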
As a side note,
On 06-09-2017 16:22, David Bruzos wrote:
I've used DRBD devices on top of ZFS zvols for years now and have been
very satisfied with the performance and possibilities that configuration
allows for. I use DRBD 8.x on the latest ZFS, mainly on Xen hypervisors
running a mix of Linux and Windows VMs with both SSD and mechanical drives.
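As a concrete sketch of that layering (hostnames, IP addresses, pool and
volume names are all invented, and DRBD 8.4-style syntax is assumed), a
per-VM resource backed by a ZVOL might look like:

    # a minimal, hypothetical /etc/drbd.d/vm-disk.res
    cat > /etc/drbd.d/vm-disk.res <<'EOF'
    resource vm-disk {
        device    /dev/drbd0;
        disk      /dev/zvol/tank/vm-disk;   # the ZVOL is the backing device
        meta-disk internal;                 # DRBD metadata at the end of the ZVOL
        on xen-a { address 192.168.10.1:7789; }
        on xen-b { address 192.168.10.2:7789; }
    }
    EOF
    drbdadm create-md vm-disk && drbdadm up vm-disk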
On 06-09-2017 16:03, Yannis Milios wrote:
> ...I mean by cloning it first, since a snapshot does not appear as a
> blockdev to the system but the clone does.
Hi, this is incorrect: ZVOL snapshots can indeed appear as regular block
devices. You simply need to set the "snapdev=visible" property.
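For example (pool, volume, and snapshot names are hypothetical):

    zfs set snapdev=visible tank/vm-disk
    ls -l /dev/zvol/tank/vm-disk@before-upgrade   # snapshot now has a device node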
On Wed, Sep 6, 2017 at 2:58 PM, Yannis Milios wrote:
> Even in that case I would prefer to assemble a new DRBD device on top of
> the ZVOL snapshot and then mount the DRBD device instead :)
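A possible sketch of that approach (resource, pool, and mountpoint names are
invented, and a writable clone stands in for the read-only snapshot, since
DRBD must be able to write its metadata): define a throwaway resource file
such as /etc/drbd.d/inspect.res pointing at the clone, then mount the DRBD
device rather than the ZVOL:

    zfs clone tank/vm-disk@before-upgrade tank/inspect
    drbdadm up inspect        # resource whose disk is /dev/zvol/tank/inspect
    drbdadm primary inspect
    mount /dev/drbd9 /mnt/inspect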
On 06/09/2017 15:31, Yannis Milios wrote:
> If your topology is like the following: HDD -> ZFS (ZVOL) -> DRBD ->
> XFS, then I believe it makes sense to always mount at the DRBD level
> and not at the ZVOL level, which happens to be the underlying blockdev
> for DRBD.
Sure! Directly mounting the
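In command form (names hypothetical), mounting at the DRBD level means:

    mount /dev/drbd0 /mnt/vm    # writes go through DRBD and replicate
    # mounting /dev/zvol/tank/vm-disk directly would bypass DRBD, leaving
    # the peer's copy inconsistent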
Hi,
On 06/09/2017 13:28, Jan Schermer wrote:
> Not sure you can mount a snapshot (I always create a clone).
The only difference is that snapshots are read-only, while clones are
read-write. This is why I used the "-o ro,norecovery" option while
mounting XFS.
However I never saw anything
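Concretely, the mount in question looks like this (names hypothetical, and
snapdev=visible assumed so the snapshot has a device node; norecovery tells
XFS to skip log replay, which a read-only snapshot could not accept anyway):

    mount -t xfs -o ro,norecovery /dev/zvol/tank/vm-disk@snap /mnt/restore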
If your topology is like the following: HDD -> ZFS (ZVOL) -> DRBD -> XFS
then I believe it should make sense to always mount at the DRBD level and
not at the ZVOL level which happens to be the underlying blockdev for DRBD.
On Wed, Sep 6, 2017 at 12:28 PM, Jan Schermer wrote:
Not sure you can mount a snapshot (I always create a clone).
However, I never saw anything about a "drbd" filesystem - what distribution
is this? Apparently it tries to be too clever...
Try creating a clone and mounting it instead; it's safer anyway (saw a bug
in the issue tracker that ZFS panics if you try
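The clone route, sketched with hypothetical names (note that if the original
XFS filesystem is mounted on the same host, the clone needs -o nouuid,
because XFS refuses to mount two filesystems with the same UUID):

    zfs clone tank/vm-disk@snap tank/vm-disk-clone
    mount -t xfs -o nouuid /dev/zvol/tank/vm-disk-clone /mnt/clone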
On 19/08/2017 10:24, Yannis Milios wrote:
> Option (b) seems more suitable for a 2-node drbd8 cluster in a
> primary/secondary setup. Haven't tried it, so I cannot tell if there are
> any pitfalls. My only concern in such a setup would be if drbd silently
> corrupts the data on the lower level and zfs is
On 18-08-2017 17:22, Veit Wahlich wrote:
Yes, I regard qemu -> DRBD -> volume management [-> RAID] -> disk as the
most recommendable solution for this scenario.
I personally go with LVM thinp for volume management, but ZVOLs should
do the trick, too.
With named resources (named after VMs)
On 18-08-2017 17:09, Yannis Milios wrote:
Hello,
Personally I'm using option (a) on a 3-node proxmox cluster and drbd9.
Replica count per VM is 2 and all 3 nodes act as both drbd control volume
and satellite nodes. I can live migrate VMs between all nodes and snapshot
them by using the drbdmanage utility (which uses zfs snapshots + clones).
On Friday, 18.08.2017, at 15:46 +0200, Gionatan Danti wrote:
> On 18-08-2017 14:40, Veit Wahlich wrote:
> > VM live migration requires primary/primary configuration of the DRBD
> > resource accessed by the VM, but only during migration. The resource
> > can be reconfigured for
To clarify:
On Friday, 18.08.2017, at 14:34 +0200, Veit Wahlich wrote:
> hosts simultaneously, enables VM live migration and your hosts may even
VM live migration requires primary/primary configuration of the DRBD
resource accessed by the VM, but only during migration. The resource
can be reconfigured for
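A sketch of what that temporary reconfiguration might look like with DRBD
8.4 tooling (the resource name is invented; treat the exact flags as an
assumption, not a verified recipe):

    drbdadm net-options --allow-two-primaries=yes vm-disk   # before migration
    # ... live-migrate the VM ...
    drbdadm net-options --allow-two-primaries=no vm-disk    # back to single primary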
Hi,
On Friday, 18.08.2017, at 14:16 +0200, Gionatan Danti wrote:
> Hi, I plan to use a primary/secondary setup, with manual failover.
> In other words, split brain should not be possible at all.
>
> Thanks.
having one DRBD resource per VM also allows you to run VMs on both
hosts simultaneously, enables VM live migration and your hosts may even
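For illustration (resource names invented): with one resource per VM, each
host can be Primary for a different resource at the same time, so the VMs
can be spread across both nodes:

    drbdadm primary vm-web    # on host A, before starting the vm-web guest
    drbdadm primary vm-db     # on host B, before starting the vm-db guest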
On 18-08-2017 12:58, Julien Escario wrote:
If you design with a single big resource, one split brain and
you're screwed.
Julien
Hi, I plan to use a primary/secondary setup, with manual failover.
In other words, split brain should not be possible at all.
Thanks.
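For reference, a minimal manual-failover sketch under that model (names
hypothetical); the resource is demoted on one node before being promoted
on the other, so two primaries never coexist:

    # on the current primary
    umount /mnt/vm && drbdadm secondary vm-disk
    # then, on the node taking over
    drbdadm primary vm-disk && mount /dev/drbd0 /mnt/vm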
--
Danti Gionatan
On 17/08/2017 at 16:48, Gionatan Danti wrote:
Hi list,
I am discussing how to have a replicated ZFS setup on the ZoL mailing
list, and DRBD is obviously on the radar ;)
It seems that three possibilities exist:
a) DRBD over ZVOLs (with one DRBD resource per ZVOL);
b) ZFS over DRBD over the raw disks (with one DRBD resource per disk);
c) ZFS