[ovirt-users] Re: ovirt 4.3.7 + Gluster in hyperconverged (production design)

2020-01-02 Thread Sahina Bose
On Tue, Dec 24, 2019 at 3:26 AM  wrote:

> Hi,
> After playing a bit with oVirt and Gluster in our pre-production
> environment for the last year, we have decided to move forward with our
> production design using oVirt 4.3.7 + Gluster in a hyperconverged setup.
>
> For this we are looking to get answers to a few questions that will help out
> with our design and eventually lead to our production deployment phase:
>
> Current HW specs (total servers = 18):
> 1.- Server type: DL380 GEN 9
> 2.- Memory: 256GB
> 3.-Disk QTY per hypervisor:
> - 2x600GB SAS (RAID 0) for the OS
> - 9x1.2TB SSD (RAID[0, 6..]..? ) for GLUSTER.
> 4.-Network:
> - Bond1: 2x10Gbps
> - Bond2: 2x10Gbps (for migration and gluster)
>
> Our plan is to build 2x9 node clusters, however the following questions
> come up:
>
> 1.-Should we build 2 separate environments each with its own engine? or
> should we do 1 engine that will manage both clusters?
>

If you want the environments independent of each other, having separate
environments, each with its own engine, makes sense. Also, the example
Ansible playbooks deploy the environment with an engine, so if you choose
otherwise you may need to tweak the deployment scripts.

> 2.-What would be the best gluster volume layout for #1 above with regards
> to RAID configuration:
> - JBOD or RAID6 or…?.
> - what is the benefit or downside of using JBOD vs RAID 6 for this
> particular scenario?
>
Preferably RAID 6. With JBOD, you would have to create a 9x3
distributed-replicate volume and leave the healing of data to Gluster, while
with RAID, a lost disk is rebuilt at the RAID controller level.


> 3.-Would you recommend Ansible-based deployment (if supported)? If yes
> where would I find the documentation for it? or should we just deploy using
> the UI?.
> - I have reviewed the following and in Chapter 5 it only mentions Web UI
> https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.6/html-single/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/index#deployment_workflow
> - Also looked at
> https://github.com/gluster/gluster-ansible/tree/master/playbooks/hc-ansible-deployment
> but could not get it to work properly.
>

If you want to deploy a 12-node setup, an Ansible playbook is available.
+Gobinda Das, in case you have questions on this.
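
For reference, a hedged sketch of how such a playbook-based deployment is
typically driven; the inventory and playbook file names here are placeholders,
not confirmed - check the hc-ansible-deployment README in the gluster-ansible
repo linked below for the actual files:

# Hedged sketch: run the hyperconverged deployment playbook via ansible-playbook.
import subprocess

inventory = "gluster_inventory.yml"   # placeholder: inventory describing hosts and bricks
playbook = "hc_deployment.yml"        # placeholder: the HC deployment playbook

subprocess.run(["ansible-playbook", "-i", inventory, playbook], check=True)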

>
> 4.-what is the recommended max server qty in a hyperconverged setup with
> gluster, 12, 15, 18...?
>
>
12 is the tested configuration, but there's no technical limitation against
expanding to 15 or 18.

> Thanks,
>
> Adrian


[ovirt-users] Re: ovirt 4.3.7 + Gluster in hyperconverged (production design)

2019-12-24 Thread Strahil

On Dec 24, 2019 02:18, Jayme  wrote:
>
> If you can afford it I would definitely do RAID. Being able to monitor and
> replace disks at the RAID level is much easier than at the brick level. With
> RAID I'd do a Gluster arbiter setup so you aren't losing too much space.
>
> Keep an eye on libgfapi. It's not the default setting due to a few bugs, but
> I've been testing it in my SSD HCI cluster and have been seeing up to 5x I/O
> improvement. Also enable jumbo frames on those 10Gbps switches.
>
> Someone else will probably chime in re the other questions. I believe the GUI
> can only deploy a three-server cluster; then you have to add the remaining
> hosts afterward.
>
> On Mon, Dec 23, 2019 at 5:56 PM  wrote:
>>
>> Hi,
>> After playing a bit with oVirt and Gluster in our pre-production environment
>> for the last year, we have decided to move forward with our production
>> design using oVirt 4.3.7 + Gluster in a hyperconverged setup.
>>
>> For this we are looking to get answers to a few questions that will help out
>> with our design and eventually lead to our production deployment phase:
>>
>> Current HW specs (total servers = 18):
>> 1.- Server type: DL380 GEN 9
>> 2.- Memory: 256GB
>> 3.-Disk QTY per hypervisor:
>>     - 2x600GB SAS (RAID 0) for the OS
I would pick RAID 1 and split that between the OS and a brick for the engine's
Gluster volume.

>>     - 9x1.2TB SSD (RAID[0, 6..]..? ) for GLUSTER.
If you had bigger NICs I would prefer RAID 0. Have you considered a JBOD
approach (each disk presented as a standalone LUN from the RAID array)?
RAID 6 is a waste for your SSDs (when you already have replica 3 volumes). Ask
HPE to provide SSDs from different manufacturing batches. I think that with
SSDs the best compromise will be RAID 5 with replica 3 for faster reads.

As the SSDs will be overwritten with the same amount of data, there could be a
situation where all 3 disks from a replica set are in predictive failure
(which is not nice) - that's why RAID 5 is a good option.
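
To put rough numbers on that trade-off, a small back-of-the-envelope
calculation (illustrative only, ignoring filesystem/LVM overhead and arbiter
layouts) for 9 x 1.2 TB SSDs per node under replica 3:

# Usable capacity per replica-3 set for the layouts being debated above.
DISKS, SIZE_TB = 9, 1.2

layouts = {
    "JBOD":   DISKS * SIZE_TB,        # all 9 disks hold data
    "RAID 5": (DISKS - 1) * SIZE_TB,  # one disk's worth of parity per node
    "RAID 6": (DISKS - 2) * SIZE_TB,  # two disks' worth of parity per node
}

for name, per_node_tb in layouts.items():
    # With replica 3, three nodes store the same data, so usable space for
    # the replica set equals one node's worth.
    print(f"{name:7}: {per_node_tb:4.1f} TB usable per 3-node replica set")

That works out to roughly 10.8 TB for JBOD, 9.6 TB for RAID 5 and 8.4 TB for
RAID 6 of usable space per 3-node replica set, before any overhead.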

>> 4.-Network:
>>     - Bond1: 2x10Gbps 
>>     - Bond2: 2x10Gbps (for migration and gluster)

Do not use an active-backup bond, as you will lose bandwidth. LACP (with
hashing on layer 3 + layer 4) is a good option, but it depends on whether your
machines will be in the same LAN segment (communicating over layer 2 with no
gateway in between).
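
As a concrete example of the suggested bond settings (these are standard Linux
bonding-driver options, not something taken from this thread, and switch-side
LACP/port-channel configuration is still required):

# LACP bond options with layer 3+4 transmit hashing, rendered the way custom
# bond options are usually written (e.g. an ifcfg BONDING_OPTS line or the
# oVirt host network dialog).
bond_opts = {
    "mode": "802.3ad",               # LACP; the switch ports must form a port-channel
    "xmit_hash_policy": "layer3+4",  # hash on IP + port so flows spread across links
    "miimon": "100",                 # link monitoring interval in milliseconds
}

print(" ".join(f"{k}={v}" for k, v in bond_opts.items()))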


>> Our plan is to build 2x9 node clusters, however the following questions come 
>> up:
>>
>> 1.-Should we build 2 separate environments each with its own engine? or 
>> should we do 1 engine that will manage both clusters?

If you decide to split, you have less fault tolerance - for example, losing 3
out of 18 nodes is better than losing 3 out of 9 :)

>> 2.-What would be the best gluster volume layout for #1 above with regards to 
>> RAID configuration:
>> - JBOD or RAID6 or…?.
RAID 6 wastes a lot of space.
>> - what is the benefit or downside of using JBOD vs RAID 6 for this 
>> particular scenario?
>> 3.-Would you recommend Ansible-based deployment (if supported)? If yes where 
>> would I find the documentation for it? or should we just deploy using the 
>> UI?.
There is a topic on the mailing list about issues with Ansible. It's up to you.
>> - I have reviewed the following and in Chapter 5 it only mentions Web UI 
>> https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.6/html-single/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/index#deployment_workflow
>> - Also looked at 
>> https://github.com/gluster/gluster-ansible/tree/master/playbooks/hc-ansible-deployment
>>  but could not get it to work properly.
>>
>> 4.-what is the recommended max server qty in a hyperconverged setup with 
>> gluster, 12, 15, 18...?
I don't remember, but I think it was 9 nodes for a hyperconverged setup in RHV.

>> Thanks,
>>
>> Adrian

Best Regards,
Strahil Nikolov


[ovirt-users] Re: ovirt 4.3.7 + Gluster in hyperconverged (production design)

2019-12-23 Thread Jayme
If you can afford it I would definitely do RAID. Being able to monitor and
replace disks at the RAID level is much easier than at the brick level. With
RAID I'd do a Gluster arbiter setup so you aren't losing too much space.

Keep an eye on libgfapi. It's not the default setting due to a few bugs, but
I've been testing it in my SSD HCI cluster and have been seeing up to 5x I/O
improvement. Also enable jumbo frames on those 10Gbps switches.
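
A hedged sketch of the two tweaks mentioned above; verify the exact option
names against your oVirt version before relying on them, and treat "bond2" as
a placeholder for the gluster-facing bond:

# Commands corresponding to the two suggestions, for reference only.
commands = [
    # libgfapi disk access is toggled engine-side; LibgfApiSupported is the
    # engine-config key used for this in oVirt 4.x (run on the engine host;
    # a --cver=<cluster version> argument may also be required).
    "engine-config -s LibgfApiSupported=true",
    # Jumbo frames: raise the MTU on the gluster/migration network and on the
    # switches; 9000 is the usual value for 10Gbps links.
    "ip link set dev bond2 mtu 9000",
]
print("\n".join(commands))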

Someone else will probably chime in re the other questions. I believe the GUI
can only deploy a three-server cluster; then you have to add the remaining
hosts afterward.
