[ovirt-users] Re: HA Storage options

2020-08-19 Thread Vojtech Juranek
On Tuesday, 18 August 2020, 1:37:09 CEST, David White via Users wrote:
> Hi,
> I started an email thread a couple months ago, and felt like I got some
> great feedback and suggestions on how to best setup an oVirt cluster.
> Thanks for your responses thus far. My goal is to take a total of 3-4
> servers that I can use for both the storage and the virtualization, and I
> want both to be highly available. 
> 
> You guys told me about oVirt Hyperconverged with Gluster, and that seemed
> like a great option. However, I'm concerned that this may not actually be
> the best approach. I've spoken with multiple people at Red Hat who I have a
> relationship with (outside of the context of the project I'm working on
> here), and all of them have indicated to me that Gluster is being
> deprecated, and that most of the engineering focus these days is on Ceph. I
> was also told by a Solutions Architect who has extensive experience with
> RHV that the hyperconverged clusters he used to build would always give him
> problems.
> 
> Does oVirt support DRBD or Ceph storage? From what I can find, I think that
> the answer to both of those is, sadly, no.

Ceph (and anything else supported by cinderlib) is available via Managed Block
Storage:

https://www.ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
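
For illustration only (a sketch, not something from this thread): such a domain
is created in the Administration Portal as a "Managed Block Storage" domain
whose driver options are ordinary Cinder driver settings. A minimal Python
sketch of an RBD option set follows; the pool name, cephx user and keyring path
are placeholders that have to match your own Ceph cluster.

    # Illustrative Cinder RBD driver options for an oVirt Managed Block
    # Storage domain backed by Ceph. Option names come from the Cinder RBD
    # driver; all values below are placeholders.
    ceph_driver_options = {
        "volume_driver": "cinder.volume.drivers.rbd.RBDDriver",
        "rbd_pool": "ovirt-volumes",              # placeholder pool name
        "rbd_ceph_conf": "/etc/ceph/ceph.conf",
        "rbd_user": "ovirt",                      # placeholder cephx user
        "rbd_keyring_conf": "/etc/ceph/ceph.client.ovirt.keyring",
    }

    # In practice these are entered as name/value driver options when the
    # Managed Block Storage domain is created in the Administration Portal.
    for name, value in ceph_driver_options.items():
        print(f"{name} = {value}")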


> So now I'm thinking about switching gears, and going with iSCSI instead. 
> But I'm still trying to think about the best way to replicate the storage,
> and possibly use multipathing so that it will be HA for the VMs that rely
> on it.
> 
> Has anyone else experienced problems with the Gluster hyperconverged
> solution? Am I overthinking this whole thing, and am I being too paranoid?
> Is it possible to setup some sort of software-RAID with multiple iSCSI
> targets? 
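
Regarding multipathing: for iSCSI storage domains the oVirt hosts already use
dm-multipath, and the Administration Portal lets you define iSCSI multipath
bonds, so you normally don't manage sessions by hand. Purely to illustrate the
mechanism, a rough Python sketch (the portal addresses and IQN are
placeholders; assumes iscsi-initiator-utils and device-mapper-multipath on the
host) of logging in to one target through two portals so multipath can fail
over between paths:

    # Rough sketch: establish one iSCSI session per portal to the same target
    # so dm-multipath can aggregate them into a single redundant device.
    import subprocess

    TARGET_IQN = "iqn.2020-08.example.com:storage.lun1"   # placeholder IQN
    PORTALS = ["192.0.2.10:3260", "192.0.2.11:3260"]      # placeholder portals

    for portal in PORTALS:
        # Discover the targets offered by this portal.
        subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets",
                        "-p", portal], check=True)
        # Log in to the target through this portal (one session per path).
        subprocess.run(["iscsiadm", "-m", "node", "-T", TARGET_IQN,
                        "-p", portal, "--login"], check=True)

    # With multipathd running, both sessions appear as paths of one device.
    subprocess.run(["multipath", "-ll"], check=True)
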
> 
> As an aside, I now have a machine that I was planning to begin doing some
> testing and practicing with. Previous to my conversations with the folks at
> Red Hat, I was planning on doing some initial testing and config with this
> server before purchasing another 2-3 servers to build the hyperconverged
> cluster.
> 
> Sent with ProtonMail Secure Email.



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DFLKXVLMPN4ZTPVQJXDB33XZCABD3JR2/


[ovirt-users] Re: HA Storage options

2020-08-18 Thread Strahil Nikolov via Users
Hi David,

I admit that I have had some issues with Gluster hyperconverged, but keeping in
mind that I'm always on the bleeding edge (I'm a little bit behind on my oVirt
right now), running a Gluster version one ahead of the oVirt default, I think
that's acceptable.

Whichever storage solution you choose, you have to keep in mind that every
upstream project will have bugs (for example, the latest Secure Boot issue in
RHEL 7/8 and in many Linux and Windows versions), and so will the downstream
ones. In order to prevent serious outages you need two things (no matter
whether it's oVirt or the downstream projects from Red Hat and Oracle):
- Good patch management. You need locked repos, so that once you have tested
repoX you can use exactly the same repo for prod.
- Proper testing !!! You need to test at least: VM power off/on/live migrate,
snapshot create/restore/delete, node maintenance/power-down/power-up/activate,
VM consoles, and anything else that comes to your mind (see the sketch below).
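
A minimal sketch of scripting the first check in that list (assumes the
ovirtsdk4 Python SDK and a test VM named "testvm"; the engine URL and
credentials are placeholders), so exactly the same test run can be repeated
against the locked test repos and later against prod:

    # Power a test VM off and back on via the oVirt Python SDK (ovirtsdk4).
    import time
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url="https://engine.example.com/ovirt-engine/api",  # placeholder
        username="admin@internal",
        password="secret",                                  # placeholder
        ca_file="ca.pem",
    )

    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search="name=testvm")[0]          # placeholder VM
    vm_service = vms_service.vm_service(vm.id)

    def wait_for(status):
        # Poll until the VM reaches the expected status.
        while vm_service.get().status != status:
            time.sleep(5)

    vm_service.stop()                  # hard power-off
    wait_for(types.VmStatus.DOWN)
    vm_service.start()                 # power back on
    wait_for(types.VmStatus.UP)

    connection.close()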

Without proper testing, or without the same packages in test and prod,
everything else is useless.

P.S.: The developers are using CentOS Stream for development, so if you are
wondering about CentOS 8 vs. Stream, Stream will most probably be the better
choice.

Best Regards,
Strahil Nikolov

On 18 August 2020 at 3:02:11 GMT+03:00, Jayme wrote:
>I think you are perhaps overthinking this a tad. GlusterFS is a fine solution,
>but it has had a rocky road. It would not be my first suggestion if you are
>seeking high write performance, although that has been improving and can be
>fine-tuned. Instability, at least in the past, was mostly centered around
>cluster upgrades. Untouched Gluster is solid and practically takes care of
>itself. There are definitely more eggs in one basket when dealing with
>hyperconverged in general.
>
>AFAIK oVirt does not support DRBD storage nor Ceph, although I think Ceph
>may be planned for the future. I'm not aware of any plans to abandon
>GlusterFS.
>
>The best piece of advice I could offer from experience running HCI over the
>past few years is not to rush to update to the latest release right away.
>
>On Mon, Aug 17, 2020 at 8:39 PM David White via Users wrote:
>
>> Hi,
>> I started an email thread a couple months ago, and felt like I got some
>> great feedback and suggestions on how to best setup an oVirt cluster.
>> Thanks for your responses thus far.
>> My goal is to take a total of 3-4 servers that I can use for *both* the
>> storage *and* the virtualization, and I want both to be highly available.
>>
>> You guys told me about oVirt Hyperconverged with Gluster, and that seemed
>> like a great option. However, I'm concerned that this may not actually be
>> the best approach. I've spoken with multiple people at Red Hat who I have
>> a relationship with (outside of the context of the project I'm working on
>> here), and all of them have indicated to me that Gluster is being
>> deprecated, and that most of the engineering focus these days is on Ceph.
>> I was also told by a Solutions Architect who has extensive experience
>> with RHV that the hyperconverged clusters he used to build would always
>> give him problems.
>>
>> Does oVirt support DRBD or Ceph storage? From what I can find, I think
>> that the answer to both of those is, sadly, no.
>>
>> So now I'm thinking about switching gears, and going with iSCSI instead.
>> But I'm still trying to think about the best way to replicate the
>> storage, and possibly use multipathing so that it will be HA for the VMs
>> that rely on it.
>>
>> Has anyone else experienced problems with the Gluster hyperconverged
>> solution?
>> Am I overthinking this whole thing, and am I being too paranoid?
>> Is it possible to setup some sort of software-RAID with multiple iSCSI
>> targets?
>>
>> As an aside, I now have a machine that I was planning to begin doing some
>> testing and practicing with.
>> Previous to my conversations with the folks at Red Hat, I was planning on
>> doing some initial testing and config with this server before purchasing
>> another 2-3 servers to build the hyperconverged cluster.
>>
>> Sent with ProtonMail Secure Email.
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6KJHMQEC5XR437K65B6GUAF7J3J2GJVN/


[ovirt-users] Re: HA Storage options

2020-08-17 Thread Jayme
I think you are perhaps overthinking this a tad. GlusterFS is a fine solution,
but it has had a rocky road. It would not be my first suggestion if you are
seeking high write performance, although that has been improving and can be
fine-tuned. Instability, at least in the past, was mostly centered around
cluster upgrades. Untouched Gluster is solid and practically takes care of
itself. There are definitely more eggs in one basket when dealing with
hyperconverged in general.

AFAIK oVirt does not support DRBD storage nor Ceph, although I think Ceph
may be planned for the future. I'm not aware of any plans to abandon
GlusterFS.

The best piece of advice I could offer from experience running HCI over the
past few years is not to rush to update to the latest release right away.

On Mon, Aug 17, 2020 at 8:39 PM David White via Users 
wrote:

> Hi,
> I started an email thread a couple months ago, and felt like I got some
> great feedback and suggestions on how to best setup an oVirt cluster.
> Thanks for your responses thus far.
> My goal is to take a total of 3-4 servers that I can use for *both* the
> storage *and* the virtualization, and I want both to be highly available.
>
> You guys told me about oVirt Hyperconverged with Gluster, and that seemed
> like a great option. However, I'm concerned that this may not actually be
> the best approach. I've spoken with multiple people at Red Hat who I have a
> relationship with (outside of the context of the project I'm working on
> here), and all of them have indicated to me that Gluster is being
> deprecated, and that most of the engineering focus these days is on Ceph. I
> was also told by a Solutions Architect who has extensive experience with
> RHV that the hyperconverged clusters he used to build would always give him
> problems.
>
> Does oVirt support DRBD or Ceph storage? From what I can find, I think
> that the answer to both of those is, sadly, no.
>
> So now I'm thinking about switching gears, and going with iSCSI instead.
> But I'm still trying to think about the best way to replicate the storage,
> and possibly use multipathing so that it will be HA for the VMs that rely
> on it.
>
> Has anyone else experienced problems with the Gluster hyperconverged
> solution?
> Am I overthinking this whole thing, and am I being too paranoid?
> Is it possible to setup some sort of software-RAID with multiple iSCSI
> targets?
>
> As an aside, I now have a machine that I was planning to begin doing some
> testing and practicing with.
> Previous to my conversations with the folks at Red Hat, I was planning on
> doing some initial testing and config with this server before purchasing
> another 2-3 servers to build the hyperconverged cluster.
>
>
> Sent with ProtonMail Secure Email.
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SBTHXSUX7GOWK5NYPCCEUW2BFYZIYVBX/