Re: [Users] Will this two node concept scale and work?
On Thu, Feb 6, 2014 at 9:05 AM, Sven Kieske wrote:
> Hi,
>
> this one led me to the question of which DRBD version is or will
> be available in EL 6/7 (upcoming).
>
> My search so far just revealed that there is no officially supported
> version for EL6 and, maybe even worse, as far as I can see it will
> not even be supported in EL7:
>
> https://access.redhat.com/site/discussions/669243
>
> But I didn't check whether the kernel modules are disabled or
> just unsupported.

If you want full support for DRBD on RHEL (5.x and 6.x at the moment, as 7 is still in beta), see here (the main part is accessible even without a Red Hat portal login):

https://access.redhat.com/site/solutions/32085

That page also has ELRepo links to DRBD 8.3 and 8.4 packages that you could use to start evaluating it for your needs (so: not supported). You can also download packages from Linbit after a free registration, or download the source code. The partnership has been in place since 2011:

http://www.linbit.com/en/company/news/12-linbit-enhances-red-hat-enterprise-linux-with-full-drbd-support

HIH,
Gianluca
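For anyone who wants to try the ELRepo route mentioned above, a minimal, unsupported sketch could look like the following on EL6; the release-RPM version number is an assumption, so check elrepo.org for the current one first:

    # Enable ELRepo and install the unsupported DRBD 8.4 packages (sketch only).
    rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
    rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm   # release RPM version may differ
    yum install -y drbd84-utils kmod-drbd84    # userland tools + kernel module
    modprobe drbd && cat /proc/drbd            # confirm the module loads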
Re: [Users] Will this two node concept scale and work?
Hi,

this one led me to the question of which DRBD version is or will be available in EL 6/7 (upcoming).

My search so far just revealed that there is no officially supported version for EL6 and, maybe even worse, as far as I can see it will not even be supported in EL7:

https://access.redhat.com/site/discussions/669243

But I didn't check whether the kernel modules are disabled or just unsupported.

On 05.02.2014 15:13, Jorick Astrego wrote:
> You could use drbd9 but I haven't tested it extensively yet.

--
Regards

Sven Kieske
Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Managing Director: Robert Meyer
Tax No.: 331/5721/1033, VAT ID: DE814773217, HRA 6640, AG Bad Oeynhausen
General Partner: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
Re: [Users] Will this two node concept scale and work?
On Wed, 2014-02-05 at 08:49 -0500, Yedidyah Bar David wrote:
> > From: "ml ml"
> > To: Users@ovirt.org
> > Sent: Wednesday, February 5, 2014 12:45:55 PM
> > Subject: [Users] Will this two node concept scale and work?
> >
> > Hello List,
> >
> > my aim is to host multiple VMs which are redundant and highly available.
> > It should also scale well.
> >
> > I think usually people just buy a fat iSCSI storage and attach it. In my
> > case it should scale well from very small nodes to big ones. Therefore an
> > iSCSI target would bring a lot of overhead (10 GBit links and two paths,
> > and really I should have a second hot-standby SAN, too). This makes
> > scalability very hard.
> >
> > This post is also not meant to be an iSCSI discussion.
> >
> > Since oVirt does not support DRBD out of the box I came up with my own
> > concept:
> >
> > http://oi62.tinypic.com/2550xg5.jpg
> >
> > As far as I can tell I have the following advantages:
> >
> > - I can start with two simple, cheap nodes.
> > - I could add more disks to my nodes, maybe even an SSD as a dedicated
> >   DRBD resource.
> > - I can connect the two nodes directly to each other with bonding or
> >   InfiniBand; I don't need a switch or anything between them.
> >
> > Downside:
> > ---------
> > - I always need two nodes (as a couple).
> >
> > Will this setup work for me? So far I think I will be quite happy with it.
> > Since the DRBD resources are shared in dual-primary mode I am not sure if
> > oVirt can handle it. It is not allowed to write to a VM disk at the same
> > time.
>
> I don't know ovirt enough to comment on that.
>
> I did play in the past with drbd and libvirt (virsh).
> Note that having both nodes primary all the time for all resources is
> calling for a disaster. In any case of split brain, for any reason, drbd
> will not know what to do.

I second that; I had many problems without proper fencing and even with fencing.

> What I did was to allow both to be primary, but had only one primary
> most of the time (per resource). I wrote a script to do migration, which
> made both primary for the duration of the migration (required by qemu)
> and then moved the source to secondary when the migration finished. This
> way you still have a chance for a disaster, if there is a problem (split
> brain, node failure) during a migration. So if you decide to go this way,
> carefully plan and test to see that it works well for you. One source of
> a split brain, for me, at the time, was buggy nic drivers and bad bonding
> configuration. So test that well too, if applicable.
>
> The approach I took seems similar to "DRBD on LV level" in [1], but
> with custom scripts and without ovirt.
>
> You might be able to make ovirt do this for you with hooks. Didn't try
> that.

You could use drbd9, but I haven't tested it extensively yet. DRBD 9 has automatic promotion on write, so both sides stay secondary (passive) until one of the nodes wants to write; that node then becomes primary automatically. Linbit did this to reduce split brain and to allow expanding to more than two nodes.

http://www.drbd.org/users-guide-9.0/s-automatic-promotion.html

But I don't know why it shouldn't work; maybe not with the node image, but you can make a node out of a normal RHEL/CentOS/Fedora install. One problem I always have with DRBD on RHEL/CentOS is that if you don't pay for Linbit support you don't get access to their repo, and DRBD is an additional option on RHEL.
On CentOS and Fedora the packaged version is always lagging behind, so I have to compile the kernel module every time there is a new DRBD version or a kernel update.

> An obvious downside to this approach is that if one node in a pair is
> down, the other has no backup now. If you have multiple nodes and
> external shared storage, multiple nodes can be down with no disruption
> to service if the remaining nodes are capable enough.
>
> [1] http://www.ovirt.org/Features/DRBD
>
> Best regards,
> --
> Didi
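When the packaged module lags behind the running kernel, rebuilding it from the Linbit source tarball is roughly the following; this is only a sketch, the tarball name and version are placeholders, and the exact build steps differ somewhat between DRBD releases:

    # Rough, untested sketch of an out-of-tree DRBD module build on CentOS.
    yum install -y gcc make kernel-devel-$(uname -r)
    tar xzf drbd-8.4.4.tar.gz && cd drbd-8.4.4     # placeholder version
    make KDIR=/lib/modules/$(uname -r)/build       # build drbd.ko against the running kernel
    make install
    depmod -a && modprobe drbd
    cat /proc/drbd                                 # confirm the new module version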
Re: [Users] Will this two node concept scale and work?
> From: "ml ml"
> To: Users@ovirt.org
> Sent: Wednesday, February 5, 2014 12:45:55 PM
> Subject: [Users] Will this two node concept scale and work?
>
> Hello List,
>
> my aim is to host multiple VMs which are redundant and highly available.
> It should also scale well.
>
> I think usually people just buy a fat iSCSI storage and attach it. In my
> case it should scale well from very small nodes to big ones. Therefore an
> iSCSI target would bring a lot of overhead (10 GBit links and two paths,
> and really I should have a second hot-standby SAN, too). This makes
> scalability very hard.
>
> This post is also not meant to be an iSCSI discussion.
>
> Since oVirt does not support DRBD out of the box I came up with my own
> concept:
>
> http://oi62.tinypic.com/2550xg5.jpg
>
> As far as I can tell I have the following advantages:
>
> - I can start with two simple, cheap nodes.
> - I could add more disks to my nodes, maybe even an SSD as a dedicated
>   DRBD resource.
> - I can connect the two nodes directly to each other with bonding or
>   InfiniBand; I don't need a switch or anything between them.
>
> Downside:
> ---------
> - I always need two nodes (as a couple).
>
> Will this setup work for me? So far I think I will be quite happy with it.
> Since the DRBD resources are shared in dual-primary mode I am not sure if
> oVirt can handle it. It is not allowed to write to a VM disk at the same
> time.

I don't know ovirt enough to comment on that.

I did play in the past with drbd and libvirt (virsh). Note that having both nodes primary all the time for all resources is calling for a disaster. In any case of split brain, for any reason, drbd will not know what to do.

What I did was to allow both to be primary, but had only one primary most of the time (per resource). I wrote a script to do migration, which made both primary for the duration of the migration (required by qemu) and then moved the source to secondary when the migration finished. This way you still have a chance for a disaster if there is a problem (split brain, node failure) during a migration. So if you decide to go this way, carefully plan and test to see that it works well for you. One source of a split brain, for me, at the time, was buggy NIC drivers and bad bonding configuration. So test that well too, if applicable.

The approach I took seems similar to "DRBD on LV level" in [1], but with custom scripts and without ovirt.

You might be able to make ovirt do this for you with hooks. Didn't try that.

An obvious downside to this approach is that if one node in a pair is down, the other has no backup now. If you have multiple nodes and external shared storage, multiple nodes can be down with no disruption to service if the remaining nodes are capable enough.

[1] http://www.ovirt.org/Features/DRBD

Best regards,
--
Didi
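To make the migration flow described above a bit more concrete, here is a hedged sketch of that kind of wrapper script; the resource, VM and host names are invented placeholders, the DRBD resource needs allow-two-primaries enabled in its net section for the migration window, and this is tried only against plain libvirt, not oVirt/VDSM:

    #!/bin/bash
    # Sketch: promote the destination, live-migrate with libvirt, demote the source.
    set -e
    RES=r0                  # DRBD resource backing the VM disk (placeholder)
    VM=testvm               # libvirt domain name (placeholder)
    DEST=node2              # migration target host (placeholder)

    ssh "$DEST" drbdadm primary "$RES"                    # both nodes primary for the migration window
    virsh migrate --live "$VM" "qemu+ssh://$DEST/system"  # live migration over ssh
    drbdadm secondary "$RES"                              # back to a single primary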
Re: [Users] Will this two node concept scale and work?
Hello Ron,

thanks for your reply.

> > This post is also not meant to be an iSCSI discussion.
> >
> > Since oVirt does not support DRBD out of the box I came up with my own
> > concept:
>
> check out posix storage domain.
> If it supports gluster you might be able to use it for DRBD.

Sorry, I don't quite understand how POSIX will work with gluster here. What would the architecture look like, and on which layer would it replicate? With "gluster", GlusterFS comes to mind. How will DRBD come into place here?

Thanks a lot!

Mario
Re: [Users] Will this two node concept scale and work?
see inline

On 02/05/2014 10:45 AM, ml ml wrote:
> Hello List,
>
> my aim is to host multiple VMs which are redundant and highly available.
> It should also scale well.

I'm assuming you are talking about an HA cluster, since redundant VMs and HA VMs are a contradiction :)

> I think usually people just buy a fat iSCSI storage and attach it. In my
> case it should scale well from very small nodes to big ones. Therefore an
> iSCSI target would bring a lot of overhead (10 GBit links and two paths,
> and really I should have a second hot-standby SAN, too). This makes
> scalability very hard.
>
> This post is also not meant to be an iSCSI discussion.
>
> Since oVirt does not support DRBD out of the box I came up with my own
> concept:

check out posix storage domain.
If it supports gluster you might be able to use it for DRBD.

> http://oi62.tinypic.com/2550xg5.jpg
>
> As far as I can tell I have the following advantages:
>
> - I can start with two simple, cheap nodes.
> - I could add more disks to my nodes, maybe even an SSD as a dedicated
>   DRBD resource.
> - I can connect the two nodes directly to each other with bonding or
>   InfiniBand; I don't need a switch or anything between them.
>
> Downside:
> ---------
> - I always need two nodes (as a couple).
>
> Will this setup work for me? So far I think I will be quite happy with it.
> Since the DRBD resources are shared in dual-primary mode I am not sure if
> oVirt can handle it. It is not allowed to write to a VM disk at the same
> time.

not true that you cannot write to the same vm disk at the same time - you have a shared disk option

> The concept of Linbit
> (http://www.linbit.com/en/company/news/333-high-available-virtualization-at-a-most-reasonable-price)
> seems too much of an overhead with the iSCSI layer and pacemaker setup.
> It's just too much for such a simple task.
>
> Please tell me that this concept is great and will work and scale well.
> Otherwise I am also thankful for any hints or critical ideas.
>
> Thanks a lot,
> Mario

--
Dafna Ron
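One possible reading of the POSIX storage domain suggestion, as a sketch only and not something verified with oVirt: put a cluster filesystem on top of the dual-primary DRBD device and point a "POSIX compliant FS" storage domain at that filesystem. The cluster name, filesystem choice and mount point below are placeholders, and GFS2 additionally needs a working cluster/DLM stack on both nodes:

    # Assumes an already configured dual-primary /dev/drbd0 (placeholder device).
    mkfs.gfs2 -p lock_dlm -t mycluster:vmstore -j 2 /dev/drbd0   # two journals, one per node
    mkdir -p /srv/vmstore
    mount -t gfs2 /dev/drbd0 /srv/vmstore
    # A POSIX compliant FS storage domain could then be pointed at this
    # filesystem; the oVirt-side details are left out here as untested.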
[Users] Will this two node concept scale and work?
Hello List,

my aim is to host multiple VMs which are redundant and highly available. It should also scale well.

I think usually people just buy a fat iSCSI storage and attach it. In my case it should scale well from very small nodes to big ones. Therefore an iSCSI target would bring a lot of overhead (10 GBit links and two paths, and really I should have a second hot-standby SAN, too). This makes scalability very hard.

This post is also not meant to be an iSCSI discussion.

Since oVirt does not support DRBD out of the box I came up with my own concept:

http://oi62.tinypic.com/2550xg5.jpg

As far as I can tell I have the following advantages:

- I can start with two simple, cheap nodes.
- I could add more disks to my nodes, maybe even an SSD as a dedicated DRBD resource.
- I can connect the two nodes directly to each other with bonding or InfiniBand; I don't need a switch or anything between them.

Downside:
---------
- I always need two nodes (as a couple).

Will this setup work for me? So far I think I will be quite happy with it. Since the DRBD resources are shared in dual-primary mode I am not sure if oVirt can handle it. It is not allowed to write to a VM disk at the same time.

The concept of Linbit (http://www.linbit.com/en/company/news/333-high-available-virtualization-at-a-most-reasonable-price) seems too much of an overhead with the iSCSI layer and pacemaker setup. It's just too much for such a simple task.

Please tell me that this concept is great and will work and scale well. Otherwise I am also thankful for any hints or critical ideas.

Thanks a lot,
Mario
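For anyone who wants to picture the building block behind the diagram, a minimal sketch of a dual-primary DRBD 8.4 resource follows; every name, device, backing disk and address in it is an invented placeholder, and dual-primary really does require proper fencing and careful split-brain policies, as the replies above point out:

    # Sketch only: one resource per VM disk, replicated between two nodes.
    cat > /etc/drbd.d/r0.res <<'EOF'
    resource r0 {
        device    /dev/drbd0;
        disk      /dev/vg0/vm1;        # backing LV (placeholder)
        meta-disk internal;
        net {
            protocol C;
            allow-two-primaries yes;   # needed for dual-primary / live migration
            after-sb-0pri discard-zero-changes;
            after-sb-1pri discard-secondary;
            after-sb-2pri disconnect;
        }
        on nodeA { address 10.0.0.1:7789; }   # direct bond/InfiniBand link (placeholders)
        on nodeB { address 10.0.0.2:7789; }
    }
    EOF
    drbdadm create-md r0 && drbdadm up r0     # then force primary once on one node for the initial sync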