[ovirt-users] Re: Gluster setup for oVirt

2022-09-18 Thread Strahil Nikolov via Users
It's not outdated, but it's rarely necessary, because:

- The HostedEngine volume must be dedicated to the HE VM, so giving the
bricks enough space is sufficient - for example: physical disk 100GB,
thinpool 100GB, thin LV 84GB (16GB should be set aside for the thin pool
metadata). With such a configuration you can't run out of space.
- I/O overhead can be minimized by setting the full stripe size (and the
thinpool chunk size) between 1MiB and 2MiB (preferably at the lower end).

One big advantage of thinpool-based bricks is that you can create snapshots
of the volume. Usually I shut down the engine, snapshot the volume, then
power it back up and do the necessary actions (for example an Engine upgrade).

Best Regards,
Strahil Nikolov
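Strahil's sizing rule translates directly into the inventory variables used by the cockpit wizard later in this thread. A minimal sketch, assuming the backend_setup role accepts a `thinpoolsize` key next to `poolmetadatasize` (check the role's defaults before relying on the key names):

```yaml
# Hypothetical sizing for a 100GB physical disk, following the rule above:
# 16GB set aside for thin pool metadata, and the thin LV capped at 84GB,
# so the pool is never overcommitted and cannot run out of space.
gluster_infra_thinpools:
  - vgname: vg_tier1_01
    thinpoolname: lv_tier1_ovirt_data_01_tp
    thinpoolsize: 84G          # assumed key name
    poolmetadatasize: 16G
gluster_infra_lv_logicalvols:
  - vgname: vg_tier1_01
    thinpool: lv_tier1_ovirt_data_01_tp
    lvname: lv_tier1_ovirt_data_01
    lvsize: 84G                # no larger than the pool's data area
```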
 
 
  On Sun, Sep 18, 2022 at 22:50, Jonas wrote:   

https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/rhhi-requirements#rhhi-req-lv

> The logical volumes that comprise the engine gluster volume must be thick 
> provisioned. This protects the Hosted Engine from out of space conditions, 
> disruptive volume configuration changes, I/O overhead, and migration activity.

Or is that extremely dated information?

On 18 September 2022 21:08:45 CEST, Strahil Nikolov via Users wrote:
>Can you share the RH recommendation to use thick LVs?
>Best Regards,
>Strahil Nikolov
> 
>    I don't have that anymore, but I assume that gluster_infra_lv_logicalvols 
>requires a thin pool: 
>https://github.com/gluster/gluster-ansible-infra/blob/master/roles/backend_setup/tasks/thin_volume_create.yml
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/I3Z7QEY4CMZD5QFGFGMNABHVAGHK5IWU/
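Strahil's snapshot routine above (shut the engine down, snapshot the volume, power it back up) could be sketched as a small Ansible play against one of the hosts in this thread. The play layout, snapshot name, and ordering are assumptions; note that `gluster snapshot create` only works because the bricks sit on thin-provisioned LVs:

```yaml
# Hypothetical maintenance play: stop the HostedEngine, snapshot its
# volume, then start the VM again before doing the engine upgrade.
- hosts: server-005.storage.int.rabe.ch
  become: true
  tasks:
    - name: Put the cluster into global maintenance
      ansible.builtin.command: hosted-engine --set-maintenance --mode=global
    - name: Shut down the HostedEngine VM
      ansible.builtin.command: hosted-engine --vm-shutdown
    - name: Snapshot the engine volume (requires thin-provisioned bricks)
      ansible.builtin.command: gluster snapshot create engine-pre-upgrade tier1-ovirt-engine-01
    - name: Start the HostedEngine VM again
      ansible.builtin.command: hosted-engine --vm-start
```

In a real run you would poll `hosted-engine --vm-status` until the VM is actually down before taking the snapshot, and leave global maintenance afterwards with `--mode=none`.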
  


[ovirt-users] Re: Gluster setup for oVirt

2022-09-18 Thread Jonas


https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/rhhi-requirements#rhhi-req-lv

> The logical volumes that comprise the engine gluster volume must be thick 
> provisioned. This protects the Hosted Engine from out of space conditions, 
> disruptive volume configuration changes, I/O overhead, and migration activity.

Or is that extremely dated information?

On 18 September 2022 21:08:45 CEST, Strahil Nikolov via Users wrote:
>Can you share the RH recommendation to use thick LVs?
>Best Regards,
>Strahil Nikolov
> 
>I don't have that anymore, but I assume that gluster_infra_lv_logicalvols 
> requires a thin pool: 
> https://github.com/gluster/gluster-ansible-infra/blob/master/roles/backend_setup/tasks/thin_volume_create.yml


[ovirt-users] Re: Gluster setup for oVirt

2022-09-18 Thread Strahil Nikolov via Users
Can you share the RH recommendation to use thick LVs?
Best Regards,
Strahil Nikolov
 
I don't have that anymore, but I assume that gluster_infra_lv_logicalvols 
requires a thin pool: 
https://github.com/gluster/gluster-ansible-infra/blob/master/roles/backend_setup/tasks/thin_volume_create.yml
  


[ovirt-users] Re: Gluster setup for oVirt

2022-09-17 Thread jonas
I don't have that anymore, but I assume that gluster_infra_lv_logicalvols 
requires a thin pool: 
https://github.com/gluster/gluster-ansible-infra/blob/master/roles/backend_setup/tasks/thin_volume_create.yml


[ovirt-users] Re: Gluster setup for oVirt

2022-09-13 Thread Ritesh Chikatwar
Can you share the error you get when you run it?

On Tue, Sep 13, 2022 at 9:41 PM Jonas  wrote:

> Nevermind, I found this here:
>
> https://github.com/gluster/gluster-ansible-infra/blob/master/roles/backend_setup/tasks/thick_lv_create.yml
>
> On 9/12/22 21:02, Jonas wrote:
> > Hello all
> >
> > I tried to set up Gluster volumes in cockpit using the wizard. Based on
> > Red Hat's recommendations I wanted to put the volume for the oVirt
> > Engine on a thick-provisioned logical volume [1] and therefore removed
> > the thinpoolname line and the corresponding configuration from the yml
> > file (see below). Unfortunately, this approach was not successful. My
> > solution for now is to create only a data volume via the wizard and then
> > create a thick-provisioned gluster volume manually. What would you
> > recommend doing?
> >
> > Thanks for any input :)
> >
> > Regards,
> > Jonas
> >
> > [1]:
> >
> https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/rhhi-requirements#rhhi-req-lv
> >
> > [yml file snipped; it is quoted in full in the message below]

[ovirt-users] Re: Gluster setup for oVirt

2022-09-13 Thread Jonas
Nevermind, I found this here: 
https://github.com/gluster/gluster-ansible-infra/blob/master/roles/backend_setup/tasks/thick_lv_create.yml
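The task file linked above iterates over its own variable, separate from gluster_infra_lv_logicalvols. A minimal sketch of a thick-provisioned engine LV, assuming the role's `gluster_infra_thick_lvs` list with `vgname`/`lvname`/`size` keys (verify the key names against the task file itself):

```yaml
# Hypothetical replacement for the engine entry that previously needed
# a thin pool: a plain thick LV for the HostedEngine brick instead.
gluster_infra_thick_lvs:
  - vgname: vg_tier1_01
    lvname: lv_tier1_ovirt_engine_01
    size: 100G
```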


On 9/12/22 21:02, Jonas wrote:

Hello all

I tried to set up Gluster volumes in cockpit using the wizard. Based on
Red Hat's recommendations I wanted to put the volume for the oVirt
Engine on a thick-provisioned logical volume [1] and therefore removed
the thinpoolname line and the corresponding configuration from the yml
file (see below). Unfortunately, this approach was not successful. My
solution for now is to create only a data volume via the wizard and then
create a thick-provisioned gluster volume manually. What would you
recommend doing?

Thanks for any input :)

Regards,
Jonas

[1]: 
https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/rhhi-requirements#rhhi-req-lv


hc_nodes:
  hosts:
    server-005.storage.int.rabe.ch:
      gluster_infra_volume_groups:
        - vgname: vg_tier1_01
          pvname: /dev/md/raid_tier1_gluster
      gluster_infra_mount_devices:
        - path: /gluster_bricks/tier1-ovirt-engine-01/gb-01
          lvname: lv_tier1_ovirt_engine_01
          vgname: vg_tier1_01
        - path: /gluster_bricks/tier1-ovirt-data-01/gb-01
          lvname: lv_tier1_ovirt_data_01
          vgname: vg_tier1_01
      blacklist_mpath_devices:
        - raid_tier1_gluster
      gluster_infra_thinpools:
        - vgname: vg_tier1_01
          thinpoolname: lv_tier1_ovirt_data_01_tp
          poolmetadatasize: 16G
      gluster_infra_lv_logicalvols:
        - vgname: vg_tier1_01
          lvname: lv_tier1_ovirt_engine_01
          lvsize: 100G
        - vgname: vg_tier1_01
          thinpool: lv_tier1_ovirt_data_01_tp
          lvname: lv_tier1_ovirt_data_01
          lvsize: 16000G
    server-006.storage.int.rabe.ch:
      gluster_infra_volume_groups:
        - vgname: vg_tier1_01
          pvname: /dev/md/raid_tier1_gluster
      gluster_infra_mount_devices:
        - path: /gluster_bricks/tier1-ovirt-engine-01/gb-01
          lvname: lv_tier1_ovirt_engine_01
          vgname: vg_tier1_01
        - path: /gluster_bricks/tier1-ovirt-data-01/gb-01
          lvname: lv_tier1_ovirt_data_01
          vgname: vg_tier1_01
      blacklist_mpath_devices:
        - raid_tier1_gluster
      gluster_infra_thinpools:
        - vgname: vg_tier1_01
          thinpoolname: lv_tier1_ovirt_data_01_tp
          poolmetadatasize: 16G
      gluster_infra_lv_logicalvols:
        - vgname: vg_tier1_01
          lvname: lv_tier1_ovirt_engine_01
          lvsize: 100G
        - vgname: vg_tier1_01
          thinpool: lv_tier1_ovirt_data_01_tp
          lvname: lv_tier1_ovirt_data_01
          lvsize: 16000G
    server-007.storage.int.rabe.ch:
      gluster_infra_volume_groups:
        - vgname: vg_tier0_01
          pvname: /dev/md/raid_tier0_gluster
      gluster_infra_mount_devices:
        - path: /gluster_bricks/tier1-ovirt-engine-01/gb-01
          lvname: lv_tier1_ovirt_engine_01
          vgname: vg_tier0_01
        - path: /gluster_bricks/tier1-ovirt-data-01/gb-01
          lvname: lv_tier1_ovirt_data_01
          vgname: vg_tier0_01
      blacklist_mpath_devices:
        - raid_tier0_gluster
      gluster_infra_thinpools:
        - vgname: vg_tier0_01
          thinpoolname: lv_tier1_ovirt_data_01_tp
          poolmetadatasize: 1G
      gluster_infra_lv_logicalvols:
        - vgname: vg_tier0_01
          lvname: lv_tier1_ovirt_engine_01
          lvsize: 20G
        - vgname: vg_tier0_01
          thinpool: lv_tier1_ovirt_data_01_tp
          lvname: lv_tier1_ovirt_data_01
          lvsize: 32G
  vars:
    gluster_infra_disktype: JBOD
    gluster_infra_dalign: 1024K
    gluster_set_selinux_labels: true
    gluster_infra_fw_ports:
      - 2049/tcp
      - 54321/tcp
      - 5900/tcp
      - 5900-6923/tcp
      - 5666/tcp
      - 16514/tcp
    gluster_infra_fw_permanent: true
    gluster_infra_fw_state: enabled
    gluster_infra_fw_zone: public
    gluster_infra_fw_services:
      - glusterfs
    gluster_features_force_varlogsizecheck: false
    cluster_nodes:
      - server-005.storage.int.rabe.ch
      - server-006.storage.int.rabe.ch
      - server-007.storage.int.rabe.ch
    gluster_features_hci_cluster: '{{ cluster_nodes }}'
    gluster_features_hci_volumes:
      - volname: tier1-ovirt-engine-01
        brick: /gluster_bricks/tier1-ovirt-engine-01/gb-01
        arbiter: 1
      - volname: tier1-ovirt-data-01
        brick: /gluster_bricks/tier1-ovirt-data-01/gb-01
        arbiter: 1