Anyone using DRBD (8 or 9) with ZFS?
Any suggestion/howto?
I know that ZFS would like to have direct access to the disks, but with
DRBD underneath this won't be possible.
Any drawbacks?
Additionally, is DRBDv9 with 3-way replication considered stable for
production use?
DRBD 8 with only 2 servers is too subject to split-brain.
2017-02-26 10:33 GMT+01:00 Rabin Yasharzadehe :
> what about putting DRBD over ZVOL ?
If possible, I have no issue in doing this.
Anyone using DRBD over ZFS?
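For concreteness, a minimal sketch of the ZVOL-backed layout (pool name, hostnames, size and addresses are made-up placeholders, not a tested configuration):

```
# Fixed-size ZVOL as the DRBD backing device
zfs create -V 100G tank/drbd-r0

# /etc/drbd.d/r0.res -- DRBD 8.4-style resource on top of the ZVOL
resource r0 {
    device    /dev/drbd0;
    disk      /dev/zvol/tank/drbd-r0;
    meta-disk internal;
    on node-a { address 10.0.0.1:7789; }
    on node-b { address 10.0.0.2:7789; }
}
```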
___
drbd-user mailing list
drbd-user@lists.linbit.com
On 18 Oct 2016 8:20 PM, "Lars Ellenberg" wrote:
>
> There is no "write quorum" yet, but I'm working on that.
>
Any ETA about this?
> Data divergence is still very much possible.
>
> The DRBD 8.4 integration with pacemaker and fencing mechanisms
> is proven to
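The DRBD 8.4 fencing integration Lars mentions is configured in the resource file; a minimal sketch using the helper scripts shipped with DRBD's pacemaker integration:

```
resource r0 {
    disk {
        fencing resource-only;   # or resource-and-stonith if STONITH is configured
    }
    handlers {
        fence-peer          "/usr/lib/drbd/crm-fence-peer.sh";
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
    }
    # ... device/disk/on sections as usual
}
```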
2016-10-20 15:14 GMT+02:00 Lars Ellenberg :
> You end up with a system that will NOT experience data divergence,
> unless you force it to.
Ok.
> But you may run into (multiple failure, mind you!) situations
> where you are offline, rather than risk to go online
> with
Hi to all,
I would like to spin up a new shared storage.
Should I use DRBD 8 or 9?
Additionally, and more importantly: are there any ways to totally avoid
split-brains? Obviously, the network used for sync is fully redundant (at
least 2 or 3 bonded interfaces).
Is this enough?
If I understood properly,
2016-10-15 13:56 GMT+02:00 Dennis Jacobfeuerborn :
> Keep in mind that it is never possible to be 100% sure that no
> split-brain can occur. You might always run into freak accidents.
Yes, I know. 100% availability is a myth.
> When you say you use bonded interfaces for
On 17 Oct 2016 09:01, "Jan Schermer" wrote:
>
> 3 storages, many more hypervisors, data triplicated... that's the usual
> scenario
Are you using DRBD 9 with 3-node replication?
Could you please share the DRBD config?
> We use ZFS on the storages, ZVOLs on top of that,
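For comparison, a DRBD 9 resource replicated across three storage nodes might look like this (hostnames, node-ids and addresses are placeholders, not Jan's actual config):

```
resource r0 {
    device    /dev/drbd0;
    disk      /dev/zvol/tank/r0;   # ZVOL backing, per Jan's layout
    meta-disk internal;

    on store-a { node-id 0; address 10.0.0.1:7789; }
    on store-b { node-id 1; address 10.0.0.2:7789; }
    on store-c { node-id 2; address 10.0.0.3:7789; }

    connection-mesh {
        hosts store-a store-b store-c;
    }
}
```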
On 16 Oct 2016 19:19, "Jan Schermer" wrote:
>
> That would be us :)
Really? Can you describe your infrastructure?
> There seems to be some confusion.
> Do you want to assemble ZFS on top of DRBD devices or do you want to use
> ZFS instead of LVM?
I would like to use zfs on
2017-10-12 10:17 GMT+02:00 Robert Altnoeder :
> While it is not "bad", it limits the system to an active-passive cluster
> configuration, because all logical volumes must be active on the same node.
> The standard setup that we teach in our trainings, that we commonly
Hi to all,
Some questions about DRBDv9 (I'm really new to DRBD, and DRBDv9 seems
to be a major refactor):
Let's assume a 3-node cluster
a) should I use RAID, or would creating resources on raw disks be OK?
b) can I use ZFS on top of DRBD ?
c) what if I have to aggregate multiple resources to
Just trying to figure out if DRBD 9 can do the job.
Requirement: a scale-out storage for VM image hosting (and other
services, but those would be made by creating, for example, an NFS VM on
top of DRBD).
Let's assume a 3-node DRBDv9 cluster.
I would like to share this cluster by using iSCSI (or
> VM (thin or thick) -> DRBD, or
> HDD -> ZFS (thin or thick) -> DRBD.
> Complicated management...
>
> On Wed, 11 Oct 2017 at 20:52, Gandalf Corvotempesta <gandalf.corvotempe...@gmail.com> wrote:
>
>> 2017-10-11 21:22 GMT+02:00 Adam Goryachev <mail
How to prevent split-brains? Would it be enough to bond the cluster
network? Any qdevice or fencing to configure?
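With three nodes, newer DRBD 9 releases also offer a quorum option in addition to fencing; a hedged sketch (see the drbd.conf-9.0 man page for the exact semantics):

```
resource r0 {
    options {
        quorum majority;         # writes require a majority of nodes
        on-no-quorum io-error;   # alternatively: suspend-io
    }
    # ... on/connection sections as usual
}
```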
2017-10-11 21:07 GMT+02:00 Adam Goryachev <mailingli...@websitemanagers.com.au>:
>
>
> On 12/10/17 05:10, Gandalf Corvotempesta wrote:
>>
>> Pre
Previously I've asked about DRBDv9+ZFS.
Let's assume a more "standard" setup with DRBDv8 + mdadm.
What I would like to achieve is a simple redundant SAN. (Is anything
preconfigured for this?)
Which is best, raid1+drbd+lvm or drbd+raid1+lvm?
Any advantage in creating multiple DRBD resources? I
2017-10-11 21:22 GMT+02:00 Adam Goryachev :
> You can also do that with raid + lvm + drbd... you just need to create a new
> drbd as you add a new LV, and also resize the drbd after you resize the LV.
I prefer to keep DRBD to a minimum. I'm much more familiar
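The per-LV workflow Adam describes can be sketched as follows (VG name, LV name and sizes are placeholders; each step must be mirrored on both nodes):

```
# New logical volume on the RAID-backed volume group
lvcreate -L 50G -n vm42 vg0

# After adding a matching resource file on both nodes:
drbdadm create-md vm42
drbdadm up vm42

# Growing it later:
lvextend -L +10G /dev/vg0/vm42   # run on both nodes
drbdadm resize vm42              # once both backing LVs are grown
```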
On Fri, 11 May 2018 at 11:58, Robert Altnoeder <robert.altnoe...@linbit.com> wrote:
> If it is supposed to become a storage system (e.g., one that is used by
> the Hypervisors via NFS), then the whole thing is a different story, and
> we may be talking about an active/passive NFS
On Wed, 2 May 2018 at 08:33, Paul O'Rorke <p...@tracker-software.com> wrote:
> I create a large data drive out of a bunch of small SSDs using RAID and
> make that RAID drive an LVM PV.
>
> I can then create LVM volume groups and volumes for each use (in my case
> virtual drives
Hi to all,
Let's assume 3 servers with 12 disks each.
Would you create one resource per disk and then manage them with something
like LVM, or a single resource from a huge volume over all disks?
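One way to do the per-disk variant is to turn the DRBD devices into LVM physical volumes and aggregate them (device and VG names are placeholders):

```
# One DRBD resource per disk (/dev/drbd1 ... /dev/drbd12),
# aggregated into a single volume group on the active node
pvcreate /dev/drbd1 /dev/drbd2 /dev/drbd3
vgcreate vg_drbd /dev/drbd1 /dev/drbd2 /dev/drbd3
lvcreate -L 200G -n images vg_drbd
```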