[ovirt-users] Re: gluster on iSCSI devices in ovirt environment

2024-01-18 Thread Strahil Nikolov via Users
In that case, use iscsiadm to log in to the LUN (man iscsiadm has good examples) 
and then follow the standard Gluster setup. Keep in mind that you might get reduced 
performance, as your bandwidth will be shared between iSCSI and Gluster (in case 
you use the same bond).
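
A minimal sketch of that flow on one node; the portal address, IQN, device name 
and brick path below are hypothetical placeholders, not values from this thread:

# discover and log in to the iSCSI target
iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260
iscsiadm -m node -T iqn.2024-01.lab.example:storage.lun1 -p 192.0.2.10:3260 --login

# prepare the LUN as a Gluster brick (assuming it shows up as /dev/sdb;
# add an fstab entry with the _netdev option for a persistent mount)
mkfs.xfs -i size=512 /dev/sdb
mkdir -p /gluster_bricks/test
mount /dev/sdb /gluster_bricks/test
mkdir -p /gluster_bricks/test/brick

From there the usual gluster peer probe / gluster volume create steps apply.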
Best Regards,
Strahil Nikolov
 
 
  On Thu, Jan 18, 2024 at 10:43, p...@email.cz wrote:   
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/C2HOVCHXS2IWYFYTOPCE2EMKFUKOC4F6/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SGOODYV4JS2LKGZEUHIM7RSICGZET5HM/


[ovirt-users] Re: gluster on iSCSI devices in ovirt environment

2024-01-18 Thread p...@email.cz

Hello,
yes, you're right, but only as a separate storage domain (with the mirror 
realized by some clever storage in the background). But what if the mirror 
needs to span two locations?
My idea was to realize the mirror over two locations via Gluster (with 
iSCSI bricks).

Pa.

On 1/18/24 09:21, Strahil Nikolov wrote:

Hi,

Why would you do that?
Ovirt already supports iSCSI.

Best Regards,
Strahil Nikolov

On Thu, Jan 18, 2024 at 10:20, p...@email.cz
 wrote:
hello dears,
can anybody explain to me how to realize a 2-node + arbiter Gluster setup 
from two (three) locations on block iSCSI devices?

Something like this:
gluster volume create TEST replica 3 arbiter 1
   < location-three-host3 - /dev/sda5 e.g. >  - all applied
on a multinode oVirt cluster
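
(For reference, a complete command of that shape would look roughly like this 
sketch; the hostnames and brick paths are hypothetical placeholders for the 
three locations, each brick sitting on that node's iSCSI-backed filesystem, 
and the third brick is the arbiter, which stores only metadata:

gluster volume create TEST replica 3 arbiter 1 \
    location-one-host1:/gluster_bricks/test/brick \
    location-two-host2:/gluster_bricks/test/brick \
    location-three-host3:/gluster_bricks/test/brick
gluster volume start TEST
)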

thx a lot for any help

regs.
Pa.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/VBP7TKZNWLOCY7IAQNEAHWBQXRSQBPE5/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/C2HOVCHXS2IWYFYTOPCE2EMKFUKOC4F6/


[ovirt-users] Re: gluster on iSCSI devices in ovirt environment

2024-01-18 Thread Strahil Nikolov via Users
Hi,
Why would you do that? oVirt already supports iSCSI.
Best Regards,
Strahil Nikolov
 
  On Thu, Jan 18, 2024 at 10:20, p...@email.cz wrote:
hello dears,
can anybody explain to me how to realize a 2-node + arbiter Gluster setup from two 
(three) locations on block iSCSI devices?

Something like this:
gluster volume create TEST replica 3 arbiter 1      < location-three-host3 - 
/dev/sda5 e.g. >  - all applied on a multinode oVirt cluster

thx a lot for any help

regs.
Pa.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VBP7TKZNWLOCY7IAQNEAHWBQXRSQBPE5/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XSPREMXGSYXQIYC4VT6JEVWF4TQIAASS/


[ovirt-users] Re: Gluster: Ideas for migration

2023-11-20 Thread arlo . hawthorne
Hi Jonas, when migrating Gluster volumes from one cluster to another, there are 
a few approaches you can consider. While Red Hat primarily recommends replacing 
old bricks, there are alternative methods you can explore. One such approach 
involves recreating the volumes on the new cluster and using rsync to 
synchronize the data between the old and new clusters. Here's a step-by-step 
outline of this migration strategy:

1. Set up the new oVirt cluster and ensure it is properly configured and 
operational.
2. Create empty Gluster volumes on the new cluster with the same configuration 
(stripe count, replica count, etc.) as the volumes on the old cluster. You can 
use the Gluster command-line tools or the oVirt management interface to create 
the volumes.
3. Install rsync on both the old and new clusters if it's not already available. 
You can use your distribution's package manager to install it.
4. Stop any read/write operations on the volume you wish to migrate on the old 
cluster. This will ensure data consistency during the migration process.
5. Initiate an initial rsync operation from the old cluster to the new cluster 
to copy the data from the old volume to the newly created volume on the new 
cluster. The command would look something like this:

```
rsync -avhP /path/to/old/volume/ user@new_cluster:/path/to/new/volume/
```
This command will synchronize the data between the old and new volumes. You may 
need to adjust the paths and SSH details to match your setup.


6. Once the initial rsync is complete, you can perform incremental rsync 
operations to synchronize any changes that occurred on the old volume during 
the initial copy process. You can schedule these incremental rsyncs at regular 
intervals or use tools like cron to automate the process (see the sketch after 
this list).
7. At a predetermined point in time when you're ready to cut over to the new 
cluster, stop the application or any processes that are accessing the old 
volume to ensure data integrity during the final synchronization.
8. Perform a final rsync operation to copy any remaining changes from the old 
volume to the new volume. This ensures that both volumes are in sync before the 
cutover. The command would be similar to the initial rsync operation.
9. Once the final rsync is complete, unmount the old volume from the application 
servers and mount the new volume at the same path.
10. Start the application or processes that were using the old volume, and 
ensure they are now accessing the new volume on the new cluster.
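
For step 6, a minimal sketch of automating the incremental syncs with cron; the 
paths, schedule and log file are hypothetical placeholders rather than part of 
the original procedure:

```
# /etc/cron.d/gluster-migration: re-sync changes from the old volume every night at 02:00
0 2 * * * root rsync -avhP --delete /path/to/old/volume/ user@new_cluster:/path/to/new/volume/ >> /var/log/gluster-migration-rsync.log 2>&1
```

The --delete flag removes files on the destination that were deleted on the 
source, keeping both volumes identical; omit it if that is not the behaviour you want.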

By following this approach, you can migrate Gluster volumes from one cluster to 
another while minimizing downtime and ensuring data consistency. Vinchin Backup 
& Recovery is a powerful data protection solution that supports a wide range of 
virtual environments, including oVirt. It can be a very useful tool when 
migrating Gluster volumes from one oVirt cluster to another. Remember to test 
this process in a non-production environment and have proper backups of your 
data before proceeding with the migration.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5AYL45FP63TKRBLL27DOBGFX766EK7DY/


[ovirt-users] Re: Gluster Geo-Replication session not visible in oVirt Manager UI

2023-03-20 Thread simon
I tested geo-replication DR to another site and was able to attach the Storage 
Domain and import the VMs to the other Data Center. With a stretched VLAN the 
VMs were instantly accessible.

It would still be good to see the Geo-Rep session in the WebUI.
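
For anyone looking for the CLI equivalent in the meantime, the session can be 
inspected with gluster's geo-replication status command (the volume and remote 
host/volume names below are hypothetical):

gluster volume geo-replication data geoaccount@dr-site.example.lab::data-dr status detail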
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VH2JKWBBRHE7Q4S6JC5ECSKQQDD6TBF4/


[ovirt-users] Re: Gluster setup for oVirt

2022-09-18 Thread Strahil Nikolov via Users
It's not outdated, but it's rarely necessary, because:
- The HostedEngine volume must be dedicated to the HE VM. Giving the bricks enough 
space is sufficient - for example, physical disk 100GB, thin pool 100GB, thin LV 
84GB (16GB should be set aside for the metadata). In such a configuration you 
can't run out of space.
- I/O overhead can be minimized by setting the full stripe size (and the 
thinpool chunk size) between 1MiB and 2MiB (preferably at the lower end).
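
A rough LVM sketch of that sizing (the VG/LV names are made up, and 1280K is 
just an illustrative chunk size in the recommended range; it should really 
match your array's full stripe size):

lvcreate -L 84G --chunksize 1280K --poolmetadatasize 16G --thinpool tp_gluster vg_gluster
lvcreate -V 84G --thin -n lv_data vg_gluster/tp_gluster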
One big advantage of the thinpool-based bricks is that you can create snapshots 
of the volume. Usually I shut down the engine, snapshot the volume and then 
power up and do the necessary actions (for example Engine upgrade).
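
A sketch of that routine, assuming the engine volume is simply called engine and 
that global maintenance is set first; the snapshot name is arbitrary:

hosted-engine --set-maintenance --mode=global
hosted-engine --vm-shutdown
gluster snapshot create engine-pre-upgrade engine no-timestamp
hosted-engine --vm-start
hosted-engine --set-maintenance --mode=none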
Best Regards,
Strahil Nikolov
 
 
  On Sun, Sep 18, 2022 at 22:50, Jonas wrote:   

https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/rhhi-requirements#rhhi-req-lv

> The logical volumes that comprise the engine gluster volume must be thick 
> provisioned. This protects the Hosted Engine from out of space conditions, 
> disruptive volume configuration changes, I/O overhead, and migration activity.

Or is that extremely dated information?

Am 18. September 2022 21:08:45 MESZ schrieb Strahil Nikolov via Users 
:
>Can you share the RH recommendation to use Thick LV ?
>Best Regards,Strahil Nikolov  
> 
>    I don't have that anymore, but I assume that gluster_infra_lv_logicalvols 
>requires a thin pool: 
>https://github.com/gluster/gluster-ansible-infra/blob/master/roles/backend_setup/tasks/thin_volume_create.yml
>___
>Users mailing list -- users@ovirt.org
>To unsubscribe send an email to users-le...@ovirt.org
>Privacy Statement: https://www.ovirt.org/privacy-policy.html
>oVirt Code of Conduct: 
>https://www.ovirt.org/community/about/community-guidelines/
>List Archives: 
>https://lists.ovirt.org/archives/list/users@ovirt.org/message/XQ4Q4SELENO6EMF4WUQKM27G55RPEM3O/
>  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/I3Z7QEY4CMZD5QFGFGMNABHVAGHK5IWU/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RBDBPEVGJLKB4XLYWCN4BJTPW4WYG43J/


[ovirt-users] Re: Gluster setup for oVirt

2022-09-18 Thread Jonas


https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/rhhi-requirements#rhhi-req-lv

> The logical volumes that comprise the engine gluster volume must be thick 
> provisioned. This protects the Hosted Engine from out of space conditions, 
> disruptive volume configuration changes, I/O overhead, and migration activity.

Or is that extremely dated information?

Am 18. September 2022 21:08:45 MESZ schrieb Strahil Nikolov via Users 
:
>Can you share the RH recommendation to use Thick LV ?
>Best Regards,Strahil Nikolov  
> 
>I don't have that anymore, but I assume that gluster_infra_lv_logicalvols 
> requires a thin pool: 
> https://github.com/gluster/gluster-ansible-infra/blob/master/roles/backend_setup/tasks/thin_volume_create.yml
>___
>Users mailing list -- users@ovirt.org
>To unsubscribe send an email to users-le...@ovirt.org
>Privacy Statement: https://www.ovirt.org/privacy-policy.html
>oVirt Code of Conduct: 
>https://www.ovirt.org/community/about/community-guidelines/
>List Archives: 
>https://lists.ovirt.org/archives/list/users@ovirt.org/message/XQ4Q4SELENO6EMF4WUQKM27G55RPEM3O/
>  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/I3Z7QEY4CMZD5QFGFGMNABHVAGHK5IWU/


[ovirt-users] Re: Gluster setup for oVirt

2022-09-18 Thread Strahil Nikolov via Users
Can you share the RH recommendation to use Thick LV?
Best Regards,
Strahil Nikolov
 
I don't have that anymore, but I assume that gluster_infra_lv_logicalvols 
requires a thin pool: 
https://github.com/gluster/gluster-ansible-infra/blob/master/roles/backend_setup/tasks/thin_volume_create.yml
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XQ4Q4SELENO6EMF4WUQKM27G55RPEM3O/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NT2TXZHHVYRMBC5NGYYHJAPGLQWTZCHK/


[ovirt-users] Re: Gluster setup for oVirt

2022-09-17 Thread jonas
I don't have that anymore, but I assume that gluster_infra_lv_logicalvols 
requires a thin pool: 
https://github.com/gluster/gluster-ansible-infra/blob/master/roles/backend_setup/tasks/thin_volume_create.yml
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XQ4Q4SELENO6EMF4WUQKM27G55RPEM3O/


[ovirt-users] Re: Gluster setup for oVirt

2022-09-13 Thread Ritesh Chikatwar
Can you share the error you get when you run it?

On Tue, Sep 13, 2022 at 9:41 PM Jonas  wrote:

> Nevermind, I found this here:
>
> https://github.com/gluster/gluster-ansible-infra/blob/master/roles/backend_setup/tasks/thick_lv_create.yml
>
> On 9/12/22 21:02, Jonas wrote:
> > Hello all
> >
> > I tried to setup Gluster volumes in cockpit using the wizard. Based on
> > Red Hat's recommendations I wanted to put the Volume for the oVirt
> > Engine on a thick provisioned logical volume [1] and therefore removed
> > the line thinpoolname and corresponding configuration from the yml
> > file (see below). Unfortunately, this approach was not successful. My
> > solution is now to only create a data volume and manually create a
> > thick provisioned gluster volume manually. What would you recommend
> > doing?
> >
> > Thanks your any input :)
> >
> > Regards,
> > Jonas
> >
> > [1]:
> >
> https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/rhhi-requirements#rhhi-req-lv
> >
> > hc_nodes:
> >   hosts:
> > server-005.storage.int.rabe.ch:
> >   gluster_infra_volume_groups:
> > - vgname: vg_tier1_01
> >   pvname: /dev/md/raid_tier1_gluster
> >   gluster_infra_mount_devices:
> > - path: /gluster_bricks/tier1-ovirt-engine-01/gb-01
> >   lvname: lv_tier1_ovirt_engine_01
> >   vgname: vg_tier1_01
> > - path: /gluster_bricks/tier1-ovirt-data-01/gb-01
> >   lvname: lv_tier1_ovirt_data_01
> >   vgname: vg_tier1_01
> >   blacklist_mpath_devices:
> > - raid_tier1_gluster
> >   gluster_infra_thinpools:
> > - vgname: vg_tier1_01
> >   thinpoolname: lv_tier1_ovirt_data_01_tp
> >   poolmetadatasize: 16G
> >   gluster_infra_lv_logicalvols:
> > - vgname: vg_tier1_01
> >   lvname: lv_tier1_ovirt_engine_01
> >   lvsize: 100G
> > - vgname: vg_tier1_01
> >   thinpool: lv_tier1_ovirt_data_01_tp
> >   lvname: lv_tier1_ovirt_data_01
> >   lvsize: 16000G
> > server-006.storage.int.rabe.ch:
> >   gluster_infra_volume_groups:
> > - vgname: vg_tier1_01
> >   pvname: /dev/md/raid_tier1_gluster
> >   gluster_infra_mount_devices:
> > - path: /gluster_bricks/tier1-ovirt-engine-01/gb-01
> >   lvname: lv_tier1_ovirt_engine_01
> >   vgname: vg_tier1_01
> > - path: /gluster_bricks/tier1-ovirt-data-01/gb-01
> >   lvname: lv_tier1_ovirt_data_01
> >   vgname: vg_tier1_01
> >   blacklist_mpath_devices:
> > - raid_tier1_gluster
> >   gluster_infra_thinpools:
> > - vgname: vg_tier1_01
> >   thinpoolname: lv_tier1_ovirt_data_01_tp
> >   poolmetadatasize: 16G
> >   gluster_infra_lv_logicalvols:
> > - vgname: vg_tier1_01
> >   lvname: lv_tier1_ovirt_engine_01
> >   lvsize: 100G
> > - vgname: vg_tier1_01
> >   thinpool: lv_tier1_ovirt_data_01_tp
> >   lvname: lv_tier1_ovirt_data_01
> >   lvsize: 16000G
> > server-007.storage.int.rabe.ch:
> >   gluster_infra_volume_groups:
> > - vgname: vg_tier0_01
> >   pvname: /dev/md/raid_tier0_gluster
> >   gluster_infra_mount_devices:
> > - path: /gluster_bricks/tier1-ovirt-engine-01/gb-01
> >   lvname: lv_tier1_ovirt_engine_01
> >   vgname: vg_tier0_01
> > - path: /gluster_bricks/tier1-ovirt-data-01/gb-01
> >   lvname: lv_tier1_ovirt_data_01
> >   vgname: vg_tier0_01
> >   blacklist_mpath_devices:
> > - raid_tier0_gluster
> >   gluster_infra_thinpools:
> > - vgname: vg_tier0_01
> >   thinpoolname: lv_tier1_ovirt_data_01_tp
> >   poolmetadatasize: 1G
> >   gluster_infra_lv_logicalvols:
> > - vgname: vg_tier0_01
> >   lvname: lv_tier1_ovirt_engine_01
> >   lvsize: 20G
> > - vgname: vg_tier0_01
> >   thinpool: lv_tier1_ovirt_data_01_tp
> >   lvname: lv_tier1_ovirt_data_01
> >   lvsize: 32G
> >   vars:
> > gluster_infra_disktype: JBOD
> > gluster_infra_daling: 1024K
> > gluster_set_selinux_labels: true
> > gluster_infra_fw_ports:
> >   - 2049/tcp
> >   - 54321/tcp
> >   - 5900/tcp
> >   - 5900-6923/tcp
> >   - 5666/tcp
> >   - 16514/tcp
> > gluster_infra_fw_permanent: true
> > gluster_infra_fw_state: enabled
> > gluster_infra_fw_zone: public
> > gluster_infra_fw_services:
> >   - glusterfs
> > gluster_features_force_varlogsizecheck: false
> > cluster_nodes:
> >   - server-005.storage.int.rabe.ch
> >   - server-006.storage.int.rabe.ch
> >   - server-007.storage.int.rabe.ch
> > gluster_features_hci_cluster: '{{ cluster_nodes }}'
> > gluster_features_hci_volumes:
> >   - volname: 

[ovirt-users] Re: Gluster setup for oVirt

2022-09-13 Thread Jonas
Nevermind, I found this here: 
https://github.com/gluster/gluster-ansible-infra/blob/master/roles/backend_setup/tasks/thick_lv_create.yml


On 9/12/22 21:02, Jonas wrote:

Hello all

I tried to set up Gluster volumes in cockpit using the wizard. Based on 
Red Hat's recommendations I wanted to put the volume for the oVirt 
Engine on a thick-provisioned logical volume [1] and therefore removed 
the thinpoolname line and the corresponding configuration from the yml 
file (see below). Unfortunately, this approach was not successful. My 
solution is now to only create a data volume with the wizard and to 
create a thick-provisioned gluster volume for the engine manually. What 
would you recommend doing?
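
In case it helps, a rough sketch of preparing such a thick-provisioned engine 
brick by hand on one host, reusing the names from the inventory below (36:36 is 
the vdsm:kvm ownership oVirt expects on bricks):

lvcreate -L 100G -n lv_tier1_ovirt_engine_01 vg_tier1_01
mkfs.xfs -i size=512 /dev/vg_tier1_01/lv_tier1_ovirt_engine_01
mkdir -p /gluster_bricks/tier1-ovirt-engine-01
mount /dev/vg_tier1_01/lv_tier1_ovirt_engine_01 /gluster_bricks/tier1-ovirt-engine-01
mkdir -p /gluster_bricks/tier1-ovirt-engine-01/gb-01
chown -R 36:36 /gluster_bricks/tier1-ovirt-engine-01/gb-01

After repeating this on the other two hosts, the replica 3 arbiter 1 engine 
volume can be created across them with the usual gluster volume create.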


Thanks for any input :)

Regards,
Jonas

[1]: 
https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/rhhi-requirements#rhhi-req-lv


hc_nodes:
  hosts:
    server-005.storage.int.rabe.ch:
  gluster_infra_volume_groups:
    - vgname: vg_tier1_01
  pvname: /dev/md/raid_tier1_gluster
  gluster_infra_mount_devices:
    - path: /gluster_bricks/tier1-ovirt-engine-01/gb-01
  lvname: lv_tier1_ovirt_engine_01
  vgname: vg_tier1_01
    - path: /gluster_bricks/tier1-ovirt-data-01/gb-01
  lvname: lv_tier1_ovirt_data_01
  vgname: vg_tier1_01
  blacklist_mpath_devices:
    - raid_tier1_gluster
  gluster_infra_thinpools:
    - vgname: vg_tier1_01
  thinpoolname: lv_tier1_ovirt_data_01_tp
  poolmetadatasize: 16G
  gluster_infra_lv_logicalvols:
    - vgname: vg_tier1_01
  lvname: lv_tier1_ovirt_engine_01
  lvsize: 100G
    - vgname: vg_tier1_01
  thinpool: lv_tier1_ovirt_data_01_tp
  lvname: lv_tier1_ovirt_data_01
  lvsize: 16000G
    server-006.storage.int.rabe.ch:
  gluster_infra_volume_groups:
    - vgname: vg_tier1_01
  pvname: /dev/md/raid_tier1_gluster
  gluster_infra_mount_devices:
    - path: /gluster_bricks/tier1-ovirt-engine-01/gb-01
  lvname: lv_tier1_ovirt_engine_01
  vgname: vg_tier1_01
    - path: /gluster_bricks/tier1-ovirt-data-01/gb-01
  lvname: lv_tier1_ovirt_data_01
  vgname: vg_tier1_01
  blacklist_mpath_devices:
    - raid_tier1_gluster
  gluster_infra_thinpools:
    - vgname: vg_tier1_01
  thinpoolname: lv_tier1_ovirt_data_01_tp
  poolmetadatasize: 16G
  gluster_infra_lv_logicalvols:
    - vgname: vg_tier1_01
  lvname: lv_tier1_ovirt_engine_01
  lvsize: 100G
    - vgname: vg_tier1_01
  thinpool: lv_tier1_ovirt_data_01_tp
  lvname: lv_tier1_ovirt_data_01
  lvsize: 16000G
    server-007.storage.int.rabe.ch:
  gluster_infra_volume_groups:
    - vgname: vg_tier0_01
  pvname: /dev/md/raid_tier0_gluster
  gluster_infra_mount_devices:
    - path: /gluster_bricks/tier1-ovirt-engine-01/gb-01
  lvname: lv_tier1_ovirt_engine_01
  vgname: vg_tier0_01
    - path: /gluster_bricks/tier1-ovirt-data-01/gb-01
  lvname: lv_tier1_ovirt_data_01
  vgname: vg_tier0_01
  blacklist_mpath_devices:
    - raid_tier0_gluster
  gluster_infra_thinpools:
    - vgname: vg_tier0_01
  thinpoolname: lv_tier1_ovirt_data_01_tp
  poolmetadatasize: 1G
  gluster_infra_lv_logicalvols:
    - vgname: vg_tier0_01
  lvname: lv_tier1_ovirt_engine_01
  lvsize: 20G
    - vgname: vg_tier0_01
  thinpool: lv_tier1_ovirt_data_01_tp
  lvname: lv_tier1_ovirt_data_01
  lvsize: 32G
  vars:
    gluster_infra_disktype: JBOD
    gluster_infra_daling: 1024K
    gluster_set_selinux_labels: true
    gluster_infra_fw_ports:
  - 2049/tcp
  - 54321/tcp
  - 5900/tcp
  - 5900-6923/tcp
  - 5666/tcp
  - 16514/tcp
    gluster_infra_fw_permanent: true
    gluster_infra_fw_state: enabled
    gluster_infra_fw_zone: public
    gluster_infra_fw_services:
  - glusterfs
    gluster_features_force_varlogsizecheck: false
    cluster_nodes:
  - server-005.storage.int.rabe.ch
  - server-006.storage.int.rabe.ch
  - server-007.storage.int.rabe.ch
    gluster_features_hci_cluster: '{{ cluster_nodes }}'
    gluster_features_hci_volumes:
  - volname: tier1-ovirt-engine-01
    brick: /gluster_bricks/tier1-ovirt-engine-01/gb-01
    arbiter: 1
  - volname: tier1-ovirt-data-01
    brick: /gluster_bricks/tier1-ovirt-data-01/gb-01
    arbiter: 1
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 

[ovirt-users] Re: gluster service on the cluster is unchecked on hci cluster

2022-08-15 Thread Jiří Sléžka

Hi,

finally this post helped

https://lists.ovirt.org/archives/list/users@ovirt.org/message/CL4MI3IJH6MPDXS3B23FQ3BDJXHHSKAG/

the invisible locked entry turned out to be a missing time_zone in the HostedEngine configuration...

/usr/share/ovirt-engine/dbscripts/engine-psql.sh -c "update vm_static 
set time_zone='Etc/GMT' where vm_name='HostedEngine'"
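
(For reference, the same helper can be used to check the field before and after 
the update; this query is an addition for illustration, not part of the original post:)

/usr/share/ovirt-engine/dbscripts/engine-psql.sh -c "select vm_name, time_zone 
from vm_static where vm_name='HostedEngine'"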


After this I can change the CPU type in the cluster and the Gluster services 
checkbox stays checked.


Thanks for support,

Jiri


On 8/4/22 20:38, Strahil Nikolov wrote:

Go to the host running the HostedEngine VM and dump the xml via virsh.
Then power cycle the engine and check if it fixed the issue with the CPU.

Best Regards,
Strahil Nikolov

On Wed, Aug 3, 2022 at 23:58, Jiří Sléžka
 wrote:
Dne 8/3/22 v 03:06 Strahil Nikolov napsal(a):
 > I think it's related to Compute -> Clusters -> Cluster Name ->
Gluster Hooks
 >
 > I think https://access.redhat.com/solutions/6644151
should solve the
 > problem (you can use a developer subscription to access it).

thanks, I really had 5 hook conflicts

/usr/share/ovirt-engine/dbscripts/engine-psql.sh -c "select
id,name,hook_status,content_type,conflict_status from gluster_hooks
where conflict_status != 0";

                   id                  |        name        |
hook_status | content_type | conflict_status

--++-+--+-
   517462b4-104d-40d1-ac94-3f8baea8e80b | 30samba-start.sh  | ENABLED
   | TEXT        |              4
   d428056d-f6fd-4e56-a48a-ccbdd273b774 | 30samba-set.sh    | ENABLED
   | TEXT        |              4
   a1d8857a-9378-42af-81a8-89a4c75eb52e | 30samba-stop.sh    | ENABLED
   | TEXT        |              4
   af362bbf-d1ea-4d5e-ae07-492c7ce0966f | 29CTDBsetup.sh    | ENABLED
   | TEXT        |              4
   d3bdf3df-13f1-48d8-92d9-03d09989516f | 29CTDB-teardown.sh | ENABLED
   | TEXT        |              4
(5 rows)

I removed them and then synced gluster hooks in cluster

Also diagnostic step

rpm -qV glusterfs-server

revealed that on one of hosts are some hooks missing

[root@ovirt-hci01  ~]# rpm -qV glusterfs-server
.M...  c /var/lib/glusterd/glusterd.info
missing    /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh
missing    /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
missing    /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh
missing    /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
missing    /var/lib/glusterd/hooks/1/stop/pre/S30samba-stop.sh

I reinstalled glusterfs-server package there

Well, I do this only to change CPU Type in cluster but now when Gluster
services are checked and I try to change CPU Type I got

"Error while executing action: Cannot update cluster because the update
triggered update of the VMs/Templates and it failed for the following:
HostedEngine. To fix the issue, please go to each of them, edit, change
the Custom Compatibility Version (or other fields changed previously in
the cluster dialog) and press OK. If the save does not pass, fix the
dialog validation. After successful cluster update, you can revert your
Custom Compatibility Version change (or other changes). If the problem
still persists, you may refer to the engine.log file for further
details."

Strange thing and probably bug - this action disables Gluster services
checkbox in cluster!!! Will try to report it...

Also I have no idea what is wrong with HostedEngine as there is (as I
can see) no custom settings on it... but I cannot change for example
memory on it because "There was an attempt to change Hosted Engine VM
values that are locked."

2022-08-03 22:32:01,436+02 WARN
[org.ovirt.engine.core.bll.UpdateVmCommand] (default task-3193)
[b93958d9-b27d-4f1b-97f0-d78312c2d346] Validation of action 'UpdateVm'
failed for user admin@internal-authz. 
Reasons:
VAR__ACTION__UPDATE,VAR__TYPE__VM,VM_CANNOT_UPDATE_HOSTED_ENGINE_FIELD


Cheers,

Jiri

 >
 > Best Regards,
 > Strahil Nikolov
 >
 >    On Wed, Aug 3, 2022 at 1:51, Jiří Sléžka
 >    mailto:jiri.sle...@slu.cz>> wrote:

[ovirt-users] Re: Gluster network - associate brick

2022-08-06 Thread Strahil Nikolov via Users
If you wish the Gluster traffic to go over 172.16.20.X/24, you will have to 
change the bricks in the volume to 172.16.20.X:/gluster_bricks/vmstore/vmstore.

The simplest way is to:

gluster volume remove-brick VOLUMENAME replica 2 node3.mydomain.lab:/gluster_bricks/data/data force

# on node3
umount /gluster_bricks/data
mkfs.xfs -f -i size=512 /dev/GLUSTER_VG/GLUSTER_LV
mount /gluster_bricks/data
mkdir /gluster_bricks/data/data
chown 36:36 -R /gluster_bricks/data/data
restorecon -RFvv /gluster_bricks/data

# If you have entries in /etc/hosts or in the DNS, you can swap the IP for the name
gluster volume add-brick VOLUMENAME replica 3 172.16.20.X:/gluster_bricks/data/data
gluster volume heal VOLUMENAME full

# Wait until the volume heals, then repeat with the other 2 bricks.

Of course, if it's a brand new setup it's easier to wipe the disks and then 
reinstall the nodes to start fresh.

Best Regards,
Strahil Nikolov
 
  On Fri, Aug 5, 2022 at 18:56, r greg wrote:   hi all,

*** new to oVirt and still learning ***

Sorry for the long thread...

I have a 3x node hyperconverged setup on v4.5.1. 

4x 1G NICS

NIC0 
> ovirtmgmt (Hosted-Engine VM)
> vmnetwork vlan102 (all VMs are placed on this network)
NIC1
> migration
NIC2 - NIC3 > bond0
> storage

Logical Networks:
ovirtmgmt - role: VM network | management | display | default route
vmnetwork - role: VM network
migrate - role: migration network
storage - role: gluster network

During deployment, I overlooked a setting and on node2 the host was deployed 
with Name: node2.mydomain.lab --- Hostname/IP: 172.16.20.X/24 (WebUI > Compute 
> Hosts)

I suspect because of this I see the following entries on 
/var/log/ovirt-engine/engine.log (only for node2)

2022-08-04 12:00:15,460Z WARN 
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-16) [] 
Could not associate brick 'node2.mydomain.lab:/gluster_bricks/vmstore/vmstore' 
of volume '1ca6a01a-9230-4bb1-844e-8064f3eadb53' with correct network as no 
gluster network found in cluster '1770ade4-0f6f-11ed-b8f6-00163e6faae8'

Is this something I need to be worried about or correct somehow?

From node1:

gluster> peer status
Number of Peers: 2

Hostname: node2.mydomain.lab
Uuid: a4468bb0-a3b3-42bc-9070-769da5a13427
State: Peer in Cluster (Connected)
Other names:
172.16.20.X

Hostname: node3.mydomain.lab
Uuid: 2b1273a4-667e-4925-af5e-00904988595a
State: Peer in Cluster (Connected)
Other names:
172.16.20.Z


volume status (same output Online Y --- for volumes vmstore and engine )
Status of volume: data
Gluster process                            TCP Port  RDMA Port  Online  Pid
--
Brick node1.mydomain.lab:/gluster_brick
s/data/data                                58734    0          Y      31586
Brick node2.mydomain.lab:/gluster_brick
s/data/data                                55148    0          Y      4317 
Brick node3.mydomain.lab:/gluster_brick
s/data/data                                57021    0          Y      5242 
Self-heal Daemon on localhost              N/A      N/A        Y      63170
Self-heal Daemon on node2.mydomain.lab  N/A      N/A        Y      4365 
Self-heal Daemon on node3.mydomain.lab  N/A      N/A        Y      5385
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z5FXYQR5FDMICJTHP7FQ5X4MO4VNND4A/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PEWOZ5GSF45EHYVW6D5C3J26X5J5GBF3/


[ovirt-users] Re: gluster service on the cluster is unchecked on hci cluster

2022-08-04 Thread Strahil Nikolov via Users
Go to the host running the HostedEngine VM and dump the XML via virsh. Then 
power cycle the engine and check if it fixed the issue with the CPU.
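
A minimal sketch of that (the dump file path is arbitrary; -r opens a read-only 
libvirt connection so no SASL credentials are needed):

virsh -r dumpxml HostedEngine > /root/HostedEngine-$(date +%F).xml
hosted-engine --vm-shutdown
# wait for the VM to power off, then:
hosted-engine --vm-start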
Best Regards,
Strahil Nikolov
 
 
  On Wed, Aug 3, 2022 at 23:58, Jiří Sléžka wrote:   Dne 
8/3/22 v 03:06 Strahil Nikolov napsal(a):
> I think it's related to Compute -> Clusters -> Cluster Name -> Gluster Hooks
> 
> I think https://access.redhat.com/solutions/6644151 should solve the 
> problem (you can use a developer subscription to access it).

thanks, I really had 5 hook conflicts

/usr/share/ovirt-engine/dbscripts/engine-psql.sh -c "select 
id,name,hook_status,content_type,conflict_status from gluster_hooks 
where conflict_status != 0";

                  id                  |        name        | 
hook_status | content_type | conflict_status
--++-+--+-
  517462b4-104d-40d1-ac94-3f8baea8e80b | 30samba-start.sh  | ENABLED 
  | TEXT        |              4
  d428056d-f6fd-4e56-a48a-ccbdd273b774 | 30samba-set.sh    | ENABLED 
  | TEXT        |              4
  a1d8857a-9378-42af-81a8-89a4c75eb52e | 30samba-stop.sh    | ENABLED 
  | TEXT        |              4
  af362bbf-d1ea-4d5e-ae07-492c7ce0966f | 29CTDBsetup.sh    | ENABLED 
  | TEXT        |              4
  d3bdf3df-13f1-48d8-92d9-03d09989516f | 29CTDB-teardown.sh | ENABLED 
  | TEXT        |              4
(5 rows)

I removed them and then synced gluster hooks in cluster

Also diagnostic step

rpm -qV glusterfs-server

revealed that on one of hosts are some hooks missing

[root@ovirt-hci01 ~]# rpm -qV glusterfs-server
.M...  c /var/lib/glusterd/glusterd.info
missing    /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh
missing    /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
missing    /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh
missing    /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
missing    /var/lib/glusterd/hooks/1/stop/pre/S30samba-stop.sh

I reinstalled glusterfs-server package there

Well, I do this only to change CPU Type in cluster but now when Gluster 
services are checked and I try to change CPU Type I got

"Error while executing action: Cannot update cluster because the update 
triggered update of the VMs/Templates and it failed for the following: 
HostedEngine. To fix the issue, please go to each of them, edit, change 
the Custom Compatibility Version (or other fields changed previously in 
the cluster dialog) and press OK. If the save does not pass, fix the 
dialog validation. After successful cluster update, you can revert your 
Custom Compatibility Version change (or other changes). If the problem 
still persists, you may refer to the engine.log file for further details."

Strange thing and probably bug - this action disables Gluster services 
checkbox in cluster!!! Will try to report it...

Also I have no idea what is wrong with HostedEngine as there is (as I 
can see) no custom settings on it... but I cannot change for example 
memory on it because "There was an attempt to change Hosted Engine VM 
values that are locked."

2022-08-03 22:32:01,436+02 WARN 
[org.ovirt.engine.core.bll.UpdateVmCommand] (default task-3193) 
[b93958d9-b27d-4f1b-97f0-d78312c2d346] Validation of action 'UpdateVm' 
failed for user admin@internal-authz. Reasons: 
VAR__ACTION__UPDATE,VAR__TYPE__VM,VM_CANNOT_UPDATE_HOSTED_ENGINE_FIELD


Cheers,

Jiri

> 
> Best Regards,
> Strahil Nikolov
> 
>    On Wed, Aug 3, 2022 at 1:51, Jiří Sléžka
>     wrote:
>    ___
>    Users mailing list -- users@ovirt.org 
>    To unsubscribe send an email to users-le...@ovirt.org
>    
>    Privacy Statement: https://www.ovirt.org/privacy-policy.html
>    
>    oVirt Code of Conduct:
>    https://www.ovirt.org/community/about/community-guidelines/
>    
>    List Archives:
>    
>https://lists.ovirt.org/archives/list/users@ovirt.org/message/HNGTNEBDBB2GWBYGHSIGNVIUGL4EFWT5/
>    
>
> 

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MUD4TF6MT33PNGVCQPLHSABJ3VKX64YG/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 

[ovirt-users] Re: gluster service on the cluster is unchecked on hci cluster

2022-08-03 Thread Jiří Sléžka

Dne 8/3/22 v 03:06 Strahil Nikolov napsal(a):

I think it's related to Compute -> Clusters -> Cluster Name -> Gluster Hooks

I think https://access.redhat.com/solutions/6644151 should solve the 
problem (you can use a developer subscription to access it).


thanks, I really had 5 hook conflicts

/usr/share/ovirt-engine/dbscripts/engine-psql.sh -c "select 
id,name,hook_status,content_type,conflict_status from gluster_hooks 
where conflict_status != 0";


                  id                  |        name        | hook_status | content_type | conflict_status
--------------------------------------+--------------------+-------------+--------------+-----------------
 517462b4-104d-40d1-ac94-3f8baea8e80b | 30samba-start.sh   | ENABLED     | TEXT         |               4
 d428056d-f6fd-4e56-a48a-ccbdd273b774 | 30samba-set.sh     | ENABLED     | TEXT         |               4
 a1d8857a-9378-42af-81a8-89a4c75eb52e | 30samba-stop.sh    | ENABLED     | TEXT         |               4
 af362bbf-d1ea-4d5e-ae07-492c7ce0966f | 29CTDBsetup.sh     | ENABLED     | TEXT         |               4
 d3bdf3df-13f1-48d8-92d9-03d09989516f | 29CTDB-teardown.sh | ENABLED     | TEXT         |               4
(5 rows)

I removed them and then synced gluster hooks in cluster

Also diagnostic step

rpm -qV glusterfs-server

revealed that some hooks are missing on one of the hosts

[root@ovirt-hci01 ~]# rpm -qV glusterfs-server
.M...  c /var/lib/glusterd/glusterd.info
missing /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh
missing /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
missing /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh
missing /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
missing /var/lib/glusterd/hooks/1/stop/pre/S30samba-stop.sh

I reinstalled glusterfs-server package there
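
(a quick sketch of that step on an EL-based node, assuming dnf; restarting 
glusterd afterwards is an extra precaution, not something stated above:)

dnf reinstall -y glusterfs-server
systemctl restart glusterd
rpm -qV glusterfs-server    # should no longer report missing hook scripts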

Well, I did all this only to change the CPU Type in the cluster, but now, when 
Gluster services are checked and I try to change the CPU Type, I get:


"Error while executing action: Cannot update cluster because the update 
triggered update of the VMs/Templates and it failed for the following: 
HostedEngine. To fix the issue, please go to each of them, edit, change 
the Custom Compatibility Version (or other fields changed previously in 
the cluster dialog) and press OK. If the save does not pass, fix the 
dialog validation. After successful cluster update, you can revert your 
Custom Compatibility Version change (or other changes). If the problem 
still persists, you may refer to the engine.log file for further details."


Strange thing, and probably a bug - this action disables the Gluster services 
checkbox in the cluster!!! Will try to report it...


Also, I have no idea what is wrong with the HostedEngine, as there are (as far 
as I can see) no custom settings on it... but I cannot change, for example, its 
memory, because "There was an attempt to change Hosted Engine VM 
values that are locked."


2022-08-03 22:32:01,436+02 WARN 
[org.ovirt.engine.core.bll.UpdateVmCommand] (default task-3193) 
[b93958d9-b27d-4f1b-97f0-d78312c2d346] Validation of action 'UpdateVm' 
failed for user admin@internal-authz. Reasons: 
VAR__ACTION__UPDATE,VAR__TYPE__VM,VM_CANNOT_UPDATE_HOSTED_ENGINE_FIELD



Cheers,

Jiri



Best Regards,
Strahil Nikolov

On Wed, Aug 3, 2022 at 1:51, Jiří Sléžka
 wrote:
___
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org

Privacy Statement: https://www.ovirt.org/privacy-policy.html

oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/

List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/HNGTNEBDBB2GWBYGHSIGNVIUGL4EFWT5/







___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MUD4TF6MT33PNGVCQPLHSABJ3VKX64YG/


[ovirt-users] Re: gluster service on the cluster is unchecked on hci cluster

2022-08-02 Thread Strahil Nikolov via Users
I think it's related to Compute -> Clusters -> Cluster Name -> Gluster Hooks
I think https://access.redhat.com/solutions/6644151 should solve the problem 
(you can use a developer subscription to access it).
Best Regards,
Strahil Nikolov
 
 
  On Wed, Aug 3, 2022 at 1:51, Jiří Sléžka wrote:   
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HNGTNEBDBB2GWBYGHSIGNVIUGL4EFWT5/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/B3O3E6PGNXL75TCUMOSSLM6LGQTZQHCK/


[ovirt-users] Re: gluster service on the cluster is unchecked on hci cluster

2022-08-02 Thread Jiří Sléžka
but some webhook is registered on host... ovirt-hci.mch.local is 
resolvable (through /etc/hosts)


[root@ovirt-hci01 ~]# gluster-eventsapi status
Webhooks:
http://ovirt-hci.mch.local:80/ovirt-engine/services/glusterevents

+---+-+---+
|NODE   | NODE STATUS | GLUSTEREVENTSD STATUS |
+---+-+---+
| 10.0.4.12 |  UP |OK |
| 10.0.4.13 |  UP |OK |
| localhost |  UP |OK |
+---+-+---+
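
(If it helps anyone hitting the "Could not sync webhooks" errors discussed 
below, the registration can also be tested and re-synced by hand from a gluster 
node; a sketch using the webhook URL shown above, assuming these 
glusterfs-events subcommands are available in your version:)

gluster-eventsapi webhook-test http://ovirt-hci.mch.local:80/ovirt-engine/services/glusterevents
gluster-eventsapi sync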

Jiri


Dne 8/3/22 v 00:38 Jiří Sléžka napsal(a):

Dne 7/23/22 v 23:53 Strahil Nikolov napsal(a):
Did you identify any errors in the Engine log that could provide any 
clue ?


unfortunately no.

but funny thing... today I looked into html source of cluster settings 
page (via Firefox's web developer console). Gluster checkbox has this 
html code


id="ClusterPopupView_enableGlusterService" tabindex="17" 
style="vertical-align: top;" disabled="">


when I edited it and removed the disabled="" part, I was able to check that 
checkbox. After pressing OK everything seemed to be set, but in the end there 
are three relevant errors in the engine.log


2022-08-03 00:22:36,795+02 ERROR 
[org.ovirt.engine.core.bll.InitGlusterCommandHelper] (default task-2701) 
[6f4a736] Could not sync webhooks to gluster server 
'ovirt-hci03.mch.local': null
2022-08-03 00:22:37,842+02 ERROR 
[org.ovirt.engine.core.bll.InitGlusterCommandHelper] (default task-2701) 
[6471654f] Could not sync webhooks to gluster server 
'ovirt-hci01.mch.local': null
2022-08-03 00:22:39,051+02 ERROR 
[org.ovirt.engine.core.bll.InitGlusterCommandHelper] (default task-2701) 
[6970bf5a] Could not sync webhooks to gluster server 
'ovirt-hci02.mch.local': null


Any idea why?

few lines before first error

2022-08-03 00:22:36,501+02 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] 
(default task-2701) [7078c5b6] START, 
GlusterServersListVDSCommand(HostName = ovirt-hci01.mch.local, 
VdsIdVDSCommandParametersBase:{hostId='41722608-413e--a8bb-08ad783ec186'}), 
log id: 6083708e
2022-08-03 00:22:36,616+02 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] 
(default task-2701) [7078c5b6] FINISH, GlusterServersListVDSCommand, 
return: [10.0.3.51/24:CONNECTED, 10.0.4.12:CONNECTED, 
10.0.4.13:CONNECTED], log id: 6083708e
2022-08-03 00:22:36,619+02 INFO 
[org.ovirt.engine.core.bll.gluster.AddGlusterWebhookInternalCommand] 
(default task-2701) [6f4a736] Running command: 
AddGlusterWebhookInternalCommand internal: true. Entities affected : ID: 
d03909e7-aca1-496c-9ff6-4a513c961ae3 Type: Cluster
2022-08-03 00:22:36,624+02 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.AddGlusterWebhookVDSCommand] 
(default task-2701) [6f4a736] START, 
AddGlusterWebhookVDSCommand(HostName = ovirt-hci01.mch.local, 
GlusterWebhookVDSParameters:{hostId='41722608-413e--a8bb-08ad783ec186'}), 
log id: 1dfd0a44
2022-08-03 00:22:36,793+02 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.AddGlusterWebhookVDSCommand] 
(default task-2701) [6f4a736] FINISH, AddGlusterWebhookVDSCommand, 
return: , log id: 1dfd0a44
2022-08-03 00:22:36,795+02 ERROR 
[org.ovirt.engine.core.bll.InitGlusterCommandHelper] (default task-2701) 
[6f4a736] Could not sync webhooks to gluster server 
'ovirt-hci03.mch.local': null


Cheers,

Jiri




Best Regards,
Strahil Nikolov

    On Wed, Jul 20, 2022 at 16:15, Jiří Sléžka
     wrote:
    On 7/19/22 22:40, Strahil Nikolov wrote:
 > Then, just ensure that the glusterd.service is enabled on all
    hosts and
 > leave it as it is.
 >
 > If it worries you, you will have to move one of the hosts in 
another

 > cluster (probably a new one) and slowly migrate the VMs from the
    old to
 > the new one.
 > Yet, if you use only 3 hosts that can put your VMs in risk (new
    cluster
 > having a single host could lead to downtimes).

    well, it blocks me from any changes on cluster so it is serious
    problem... but personally I don't like this "new cluster and 
migration"

    approach :-(

 > To be honest, I wouldn't change DB if it's a productive cluster.
    If you
 > decide to go that one -> make an engine backup before that.

    Would anyone from oVirt/gluster developers have a look?

    Thanks in advance,

    Jiri

 >
 > Best Regards,
 > Strahil Nikolov
 >
 >
 >
 >
 >
 >    On Tue, Jul 19, 2022 at 12:25, Jiří Sléžka
 >    mailto:jiri.sle...@slu.cz>> wrote:
 >    On 7/16/22 07:53, Strahil Nikolov wrote:
 >      > Try first with a single host. Set it into maintenance and
    check
 >    if the
 >      > checkmark is available.
 >
 >    setting single host to maintenance didn't change state of the
    gluster
 >    services checkbox in cluster settings.
 >
 >      > If not, try to 'reinstall' (UI, Hosts, Installation,
    Reinstall) the
  

[ovirt-users] Re: gluster service on the cluster is unchecked on hci cluster

2022-08-02 Thread Jiří Sléžka

Dne 7/23/22 v 23:53 Strahil Nikolov napsal(a):

Did you identify any errors in the Engine log that could provide any clue ?


unfortunately no.

but funny thing... today I looked into html source of cluster settings 
page (via Firefox's web developer console). Gluster checkbox has this 
html code


id="ClusterPopupView_enableGlusterService" tabindex="17" 
style="vertical-align: top;" disabled="">


when I edited it and removed the disabled="" part, I was able to check that 
checkbox. After pressing OK everything seemed to be set, but in the end there 
are three relevant errors in the engine.log


2022-08-03 00:22:36,795+02 ERROR 
[org.ovirt.engine.core.bll.InitGlusterCommandHelper] (default task-2701) 
[6f4a736] Could not sync webhooks to gluster server 
'ovirt-hci03.mch.local': null
2022-08-03 00:22:37,842+02 ERROR 
[org.ovirt.engine.core.bll.InitGlusterCommandHelper] (default task-2701) 
[6471654f] Could not sync webhooks to gluster server 
'ovirt-hci01.mch.local': null
2022-08-03 00:22:39,051+02 ERROR 
[org.ovirt.engine.core.bll.InitGlusterCommandHelper] (default task-2701) 
[6970bf5a] Could not sync webhooks to gluster server 
'ovirt-hci02.mch.local': null


Any idea why?

few lines before first error

2022-08-03 00:22:36,501+02 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] 
(default task-2701) [7078c5b6] START, 
GlusterServersListVDSCommand(HostName = ovirt-hci01.mch.local, 
VdsIdVDSCommandParametersBase:{hostId='41722608-413e--a8bb-08ad783ec186'}), 
log id: 6083708e
2022-08-03 00:22:36,616+02 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] 
(default task-2701) [7078c5b6] FINISH, GlusterServersListVDSCommand, 
return: [10.0.3.51/24:CONNECTED, 10.0.4.12:CONNECTED, 
10.0.4.13:CONNECTED], log id: 6083708e
2022-08-03 00:22:36,619+02 INFO 
[org.ovirt.engine.core.bll.gluster.AddGlusterWebhookInternalCommand] 
(default task-2701) [6f4a736] Running command: 
AddGlusterWebhookInternalCommand internal: true. Entities affected : 
ID: d03909e7-aca1-496c-9ff6-4a513c961ae3 Type: Cluster
2022-08-03 00:22:36,624+02 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.AddGlusterWebhookVDSCommand] 
(default task-2701) [6f4a736] START, 
AddGlusterWebhookVDSCommand(HostName = ovirt-hci01.mch.local, 
GlusterWebhookVDSParameters:{hostId='41722608-413e--a8bb-08ad783ec186'}), 
log id: 1dfd0a44
2022-08-03 00:22:36,793+02 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.AddGlusterWebhookVDSCommand] 
(default task-2701) [6f4a736] FINISH, AddGlusterWebhookVDSCommand, 
return: , log id: 1dfd0a44
2022-08-03 00:22:36,795+02 ERROR 
[org.ovirt.engine.core.bll.InitGlusterCommandHelper] (default task-2701) 
[6f4a736] Could not sync webhooks to gluster server 
'ovirt-hci03.mch.local': null


Cheers,

Jiri




Best Regards,
Strahil Nikolov

On Wed, Jul 20, 2022 at 16:15, Jiří Sléžka
 wrote:
On 7/19/22 22:40, Strahil Nikolov wrote:
 > Then, just ensure that the glusterd.service is enabled on all
hosts and
 > leave it as it is.
 >
 > If it worries you, you will have to move one of the hosts in another
 > cluster (probably a new one) and slowly migrate the VMs from the
old to
 > the new one.
 > Yet, if you use only 3 hosts that can put your VMs in risk (new
cluster
 > having a single host could lead to downtimes).

well, it blocks me from any changes on cluster so it is serious
problem... but personally I don't like this "new cluster and migration"
approach :-(

 > To be honest, I wouldn't change DB if it's a productive cluster.
If you
 > decide to go that one -> make an engine backup before that.

Would anyone from oVirt/gluster developers have a look?

Thanks in advance,

Jiri

 >
 > Best Regards,
 > Strahil Nikolov
 >
 >
 >
 >
 >
 >    On Tue, Jul 19, 2022 at 12:25, Jiří Sléžka
 >    mailto:jiri.sle...@slu.cz>> wrote:
 >    On 7/16/22 07:53, Strahil Nikolov wrote:
 >      > Try first with a single host. Set it into maintenance and
check
 >    if the
 >      > checkmark is available.
 >
 >    setting single host to maintenance didn't change state of the
gluster
 >    services checkbox in cluster settings.
 >
 >      > If not, try to 'reinstall' (UI, Hosts, Installation,
Reinstall) the
 >      > host. During the setup, it should give you to update if
the host
 >    can run
 >      > the HE and it should allow you to select the checkmark for
Gluster.
 >
 >    well, in my oVirt install there is no way to setup glusterfs
services
 >    during host reinstall. There are only choices to configure
firewall,
 >    activate host after install, reboot host after install and
 >    deploy/undeploy hosted engine...
 >
 >    I think that gluster related stuff is installed automatically
as it is
 >    configured on cluster level (where in my case are gluster services
  

[ovirt-users] Re: Gluster volume "deleted" by accident --- Is it possible to recover?

2022-08-01 Thread itforums51
Thanks for the reply Strahil ... In the end I decided to run a full clean 
install of 4.5.1 instead ...

I will see if I can replicate the problem in a lab and follow your 
recommendations for learning purposes.

Thanks once again for the reply, much appreciated.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IDPMOJKK3TJ7LXRVKNYL7DRM2THUHRMJ/


[ovirt-users] Re: gluster service on the cluster is unchecked on hci cluster

2022-07-23 Thread Strahil Nikolov via Users
By the way, have you tried to set each host into maintenance and then 
'Reinstall' from the Admin Portal?
Best Regards,
Strahil Nikolov
 
 
  On Sun, Jul 24, 2022 at 0:53, Strahil Nikolov wrote:   
Did you identify any errors in the Engine log that could provide any clue?
Best Regards,
Strahil Nikolov
 
 
  On Wed, Jul 20, 2022 at 16:15, Jiří Sléžka wrote:   On 
7/19/22 22:40, Strahil Nikolov wrote:
> Then, just ensure that the glusterd.service is enabled on all hosts and 
> leave it as it is.
> 
> If it worries you, you will have to move one of the hosts in another 
> cluster (probably a new one) and slowly migrate the VMs from the old to 
> the new one.
> Yet, if you use only 3 hosts that can put your VMs in risk (new cluster 
> having a single host could lead to downtimes).

well, it blocks me from any changes on cluster so it is serious 
problem... but personally I don't like this "new cluster and migration" 
approach :-(

> To be honest, I wouldn't change DB if it's a productive cluster. If you 
> decide to go that one -> make an engine backup before that.

Would anyone from oVirt/gluster developers have a look?

Thanks in advance,

Jiri

> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
>    On Tue, Jul 19, 2022 at 12:25, Jiří Sléžka
>     wrote:
>    On 7/16/22 07:53, Strahil Nikolov wrote:
>      > Try first with a single host. Set it into maintenance and check
>    if the
>      > checkmark is available.
> 
>    setting single host to maintenance didn't change state of the gluster
>    services checkbox in cluster settings.
> 
>      > If not, try to 'reinstall' (UI, Hosts, Installation, Reinstall) the
>      > host. During the setup, it should give you to update if the host
>    can run
>      > the HE and it should allow you to select the checkmark for Gluster.
> 
>    well, in my oVirt install there is no way to setup glusterfs services
>    during host reinstall. There are only choices to configure firewall,
>    activate host after install, reboot host after install and
>    deploy/undeploy hosted engine...
> 
>    I think that gluster related stuff is installed automatically as it is
>    configured on cluster level (where in my case are gluster services
>    disabled).
> 
>      > Let's work with a single node before being so drastic and
>    outage-ing a
>      > cluster.
> 
> 
>    Cheers,
> 
>    Jiri
> 
>      >
>      > Best Regards,
>      > Strahil Nikolov
>      >
>      >    On Thu, Jul 14, 2022 at 23:03, Jiří Sléžka
>      >    mailto:jiri.sle...@slu.cz>> wrote:
>      >    Dne 7/14/22 v 21:21 Strahil Nikolov napsal(a):
>      >      > Go to the UI, select the volume , pres 'Start' and mark the
>      >    checkbox for
>      >      > 'Force'-fully start .
>      >
>      >    well, it worked :-) Now all bricks are in UP state. In fact from
>      >    commandline point of view all volumes were active and all
>    bricks up all
>      >    the time.
>      >
>      >      > At least it should update the engine that everything is
>    running .
>      >      > Have you checked if the checkmark for the Gluster service is
>      >    available
>      >      > if you set the Host into maintenance?
>      >
>      >    which host do you mean? If all hosts in the cluster I have to
>    plan an
>      >    outage... will try...
>      >
>      >    Thanks,
>      >
>      >    Jiri
>      >
>      >      >
>      >      > Best Regards,
>      >      > Strahil Nikolov
>      >      >
>      >      >    On Thu, Jul 14, 2022 at 16:08, Jiří Sléžka
>      >      >    mailto:jiri.sle...@slu.cz>
>    >> wrote:

[ovirt-users] Re: gluster service on the cluster is unchecked on hci cluster

2022-07-23 Thread Strahil Nikolov via Users
Did you identify any errors in the Engine log that could provide any clue?
Best Regards,
Strahil Nikolov
 
 
  On Wed, Jul 20, 2022 at 16:15, Jiří Sléžka wrote:   On 
7/19/22 22:40, Strahil Nikolov wrote:
> Then, just ensure that the glusterd.service is enabled on all hosts and 
> leave it as it is.
> 
> If it worries you, you will have to move one of the hosts in another 
> cluster (probably a new one) and slowly migrate the VMs from the old to 
> the new one.
> Yet, if you use only 3 hosts that can put your VMs in risk (new cluster 
> having a single host could lead to downtimes).

well, it blocks me from any changes on cluster so it is serious 
problem... but personally I don't like this "new cluster and migration" 
approach :-(

> To be honest, I wouldn't change DB if it's a productive cluster. If you 
> decide to go that one -> make an engine backup before that.

Would anyone from oVirt/gluster developers have a look?

Thanks in advance,

Jiri

> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
>    On Tue, Jul 19, 2022 at 12:25, Jiří Sléžka
>     wrote:
>    On 7/16/22 07:53, Strahil Nikolov wrote:
>      > Try first with a single host. Set it into maintenance and check
>    if the
>      > checkmark is available.
> 
>    setting single host to maintenance didn't change state of the gluster
>    services checkbox in cluster settings.
> 
>      > If not, try to 'reinstall' (UI, Hosts, Installation, Reinstall) the
>      > host. During the setup, it should give you to update if the host
>    can run
>      > the HE and it should allow you to select the checkmark for Gluster.
> 
>    well, in my oVirt install there is no way to setup glusterfs services
>    during host reinstall. There are only choices to configure firewall,
>    activate host after install, reboot host after install and
>    deploy/undeploy hosted engine...
> 
>    I think that gluster related stuff is installed automatically as it is
>    configured on cluster level (where in my case are gluster services
>    disabled).
> 
>      > Let's work with a single node before being so drastic and
>    outage-ing a
>      > cluster.
> 
> 
>    Cheers,
> 
>    Jiri
> 
>      >
>      > Best Regards,
>      > Strahil Nikolov
>      >
>      >    On Thu, Jul 14, 2022 at 23:03, Jiří Sléžka
>      >    mailto:jiri.sle...@slu.cz>> wrote:
>      >    Dne 7/14/22 v 21:21 Strahil Nikolov napsal(a):
>      >      > Go to the UI, select the volume , pres 'Start' and mark the
>      >    checkbox for
>      >      > 'Force'-fully start .
>      >
>      >    well, it worked :-) Now all bricks are in UP state. In fact from
>      >    commandline point of view all volumes were active and all
>    bricks up all
>      >    the time.
>      >
>      >      > At least it should update the engine that everything is
>    running .
>      >      > Have you checked if the checkmark for the Gluster service is
>      >    available
>      >      > if you set the Host into maintenance?
>      >
>      >    which host do you mean? If all hosts in the cluster I have to
>    plan an
>      >    outage... will try...
>      >
>      >    Thanks,
>      >
>      >    Jiri
>      >
>      >      >
>      >      > Best Regards,
>      >      > Strahil Nikolov
>      >      >
>      >      >    On Thu, Jul 14, 2022 at 16:08, Jiří Sléžka <jiri.sle...@slu.cz> wrote:

[ovirt-users] Re: gluster service on the cluster is unchecked on hci cluster

2022-07-20 Thread Jiří Sléžka

On 7/19/22 22:40, Strahil Nikolov wrote:
Then, just ensure that the glusterd.service is enabled on all hosts and 
leave it as it is.


If it worries you, you will have to move one of the hosts in another 
cluster (probably a new one) and slowly migrate the VMs from the old to 
the new one.
Yet, if you use only 3 hosts that can put your VMs in risk (new cluster 
having a single host could lead to downtimes).


well, it blocks me from any changes on cluster so it is serious 
problem... but personally I don't like this "new cluster and migration" 
approach :-(


To be honest, I wouldn't change DB if it's a productive cluster. If you 
decide to go that one -> make an engine backup before that.


Would anyone from oVirt/gluster developers have a look?

Thanks in advance,

Jiri



Best Regards,
Strahil Nikolov





On Tue, Jul 19, 2022 at 12:25, Jiří Sléžka
 wrote:
On 7/16/22 07:53, Strahil Nikolov wrote:
 > Try first with a single host. Set it into maintenance and check
if the
 > checkmark is available.

setting single host to maintenance didn't change state of the gluster
services checkbox in cluster settings.

 > If not, try to 'reinstall' (UI, Hosts, Installation, Reinstall) the
 > host. During the setup, it should give you to update if the host
can run
 > the HE and it should allow you to select the checkmark for Gluster.

well, in my oVirt install there is no way to setup glusterfs services
during host reinstall. There are only choices to configure firewall,
activate host after install, reboot host after install and
deploy/undeploy hosted engine...

I think that gluster related stuff is installed automatically as it is
configured on cluster level (where in my case are gluster services
disabled).

 > Let's work with a single node before being so drastic and
outage-ing a
 > cluster.


Cheers,

Jiri

 >
 > Best Regards,
 > Strahil Nikolov
 >
 >    On Thu, Jul 14, 2022 at 23:03, Jiří Sléžka
 >    mailto:jiri.sle...@slu.cz>> wrote:
 >    Dne 7/14/22 v 21:21 Strahil Nikolov napsal(a):
 >      > Go to the UI, select the volume , pres 'Start' and mark the
 >    checkbox for
 >      > 'Force'-fully start .
 >
 >    well, it worked :-) Now all bricks are in UP state. In fact from
 >    commandline point of view all volumes were active and all
bricks up all
 >    the time.
 >
 >      > At least it should update the engine that everything is
running .
 >      > Have you checked if the checkmark for the Gluster service is
 >    available
 >      > if you set the Host into maintenance?
 >
 >    which host do you mean? If all hosts in the cluster I have to
plan an
 >    outage... will try...
 >
 >    Thanks,
 >
 >    Jiri
 >
 >      >
 >      > Best Regards,
 >      > Strahil Nikolov
 >      >
>      >    On Thu, Jul 14, 2022 at 16:08, Jiří Sléžka <jiri.sle...@slu.cz> wrote:

[ovirt-users] Re: gluster service on the cluster is unchecked on hci cluster

2022-07-19 Thread Strahil Nikolov via Users
Then, just ensure that the glusterd.service is enabled on all hosts and leave 
it as it is.
If it worries you, you will have to move one of the hosts to another 
cluster (probably a new one) and slowly migrate the VMs from the old one to 
the new one.
Yet, if you use only 3 hosts, that can put your VMs at risk (a new cluster 
with a single host could lead to downtime).
To be honest, I wouldn't touch the DB if it's a production cluster. If you decide 
to go that way -> make an engine backup before that.
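For reference, taking that backup on the engine machine could look roughly like this (the file names below are just examples):

# on the engine machine, before touching the database
engine-backup --mode=backup --scope=all \
  --file=/root/engine-backup-before-db-change.tar.gz \
  --log=/root/engine-backup-before-db-change.log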
Best Regards,Strahil Nikolov 



 
 
  On Tue, Jul 19, 2022 at 12:25, Jiří Sléžka wrote:   On 
7/16/22 07:53, Strahil Nikolov wrote:
> Try first with a single host. Set it into maintenance and check if the 
> checkmark is available.

setting single host to maintenance didn't change state of the gluster 
services checkbox in cluster settings.

> If not, try to 'reinstall' (UI, Hosts, Installation, Reinstall) the 
> host. During the setup, it should give you to update if the host can run 
> the HE and it should allow you to select the checkmark for Gluster.

well, in my oVirt install there is no way to setup glusterfs services 
during host reinstall. There are only choices to configure firewall, 
activate host after install, reboot host after install and 
deploy/undeploy hosted engine...

I think that gluster related stuff is installed automatically as it is 
configured on cluster level (where in my case are gluster services 
disabled).

> Let's work with a single node before being so drastic and outage-ing a 
> cluster.


Cheers,

Jiri

> 
> Best Regards,
> Strahil Nikolov
> 
>    On Thu, Jul 14, 2022 at 23:03, Jiří Sléžka
>     wrote:
>    Dne 7/14/22 v 21:21 Strahil Nikolov napsal(a):
>      > Go to the UI, select the volume , pres 'Start' and mark the
>    checkbox for
>      > 'Force'-fully start .
> 
>    well, it worked :-) Now all bricks are in UP state. In fact from
>    commandline point of view all volumes were active and all bricks up all
>    the time.
> 
>      > At least it should update the engine that everything is running .
>      > Have you checked if the checkmark for the Gluster service is
>    available
>      > if you set the Host into maintenance?
> 
>    which host do you mean? If all hosts in the cluster I have to plan an
>    outage... will try...
> 
>    Thanks,
> 
>    Jiri
> 
>      >
>      > Best Regards,
>      > Strahil Nikolov
>      >
>      >    On Thu, Jul 14, 2022 at 16:08, Jiří Sléžka <jiri.sle...@slu.cz> wrote:

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BPORMHV6GU3VVKK4LTFJEYR27B76ZEWB/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3XDWDFPLU3TNKROXIH6DLDR7QY2IQRPU/


[ovirt-users] Re: gluster service on the cluster is unchecked on hci cluster

2022-07-19 Thread Jiří Sléžka

On 7/16/22 07:53, Strahil Nikolov wrote:
Try first with a single host. Set it into maintenance and check if the 
checkmark is available.


setting single host to maintenance didn't change state of the gluster 
services checkbox in cluster settings.


If not, try to 'reinstall' (UI, Hosts, Installation, Reinstall) the 
host. During the setup, it should give you to update if the host can run 
the HE and it should allow you to select the checkmark for Gluster.


well, in my oVirt install there is no way to setup glusterfs services 
during host reinstall. There are only choices to configure firewall, 
activate host after install, reboot host after install and 
deploy/undeploy hosted engine...


I think that gluster related stuff is installed automatically as it is 
configured on cluster level (where in my case are gluster services 
disabled).


Let's work with a single node before being so drastic and outage-ing a 
cluster.



Cheers,

Jiri



Best Regards,
Strahil Nikolov

On Thu, Jul 14, 2022 at 23:03, Jiří Sléžka
 wrote:
Dne 7/14/22 v 21:21 Strahil Nikolov napsal(a):
 > Go to the UI, select the volume , pres 'Start' and mark the
checkbox for
 > 'Force'-fully start .

well, it worked :-) Now all bricks are in UP state. In fact from
commandline point of view all volumes were active and all bricks up all
the time.

 > At least it should update the engine that everything is running .
 > Have you checked if the checkmark for the Gluster service is
available
 > if you set the Host into maintenance?

which host do you mean? If all hosts in the cluster I have to plan an
outage... will try...

Thanks,

Jiri

 >
 > Best Regards,
 > Strahil Nikolov
 >
>    On Thu, Jul 14, 2022 at 16:08, Jiří Sléžka <jiri.sle...@slu.cz> wrote:





smime.p7s
Description: S/MIME Cryptographic Signature
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BPORMHV6GU3VVKK4LTFJEYR27B76ZEWB/


[ovirt-users] Re: Gluster volume "deleted" by accident --- Is it possible to recover?

2022-07-17 Thread Strahil Nikolov via Users
Check if the cleanup has unmounted the volume bricks. If they are still mounted, 
you can use a backup of the system to retrieve the definition of the gluster 
volumes (/var/lib/glusterd). Once you copy the volume dir, stop glusterd (this is 
just the management layer) on all nodes and then start them one by one. Keep in 
mind that the nodes have to sync with each other, so restarting the nodes 1 by 1 
without stopping them all first is useless.
You can also try to create the volume (keep the name) on the same bricks via 
the force flag and just hope it will pick up the data in the bricks (never done 
that).
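As a rough sketch only (the volume name, host names and brick paths below are placeholders, and this assumes the brick data is still intact):

# option 1: restore the saved volume definition from a system backup
# (the definitions normally live under /var/lib/glusterd/vols/<volname>)
systemctl stop glusterd                                   # on ALL nodes first
cp -a /backup/var/lib/glusterd/vols/vmstore /var/lib/glusterd/vols/
systemctl start glusterd                                  # then on each node

# option 2: recreate the volume on the same bricks with the force flag and
# hope the existing data is picked up (untested, as noted above)
gluster volume create vmstore replica 3 \
  host1:/gluster_bricks/vmstore/vmstore \
  host2:/gluster_bricks/vmstore/vmstore \
  host3:/gluster_bricks/vmstore/vmstore force
gluster volume start vmstore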
Best Regards,Strahil Nikolov 
 
 
hi everyone,

I have a 3x node ovirt 4.4.6 cluster in HC setup.

Today I was intending to extend the data and vmstore volumes by adding another 
brick each; then by accident I pressed the "cleanup" button. Basically it looks 
like the volumes were deleted.

I am wondering whether there is a process of trying to recover these volumes 
and therefore all VMs (including the Hosted-Engine).

```
lvs
  LV                               VG              Attr       LSize   Pool                             Origin Data%  Meta%  Move Log Cpy%Sync Convert
  gluster_lv_data                  gluster_vg_sda4 Vwi---t--- 500.00g gluster_thinpool_gluster_vg_sda4
  gluster_lv_data-brick1           gluster_vg_sda4 Vwi-aot--- 500.00g gluster_thinpool_gluster_vg_sda4        0.45
  gluster_lv_engine                gluster_vg_sda4 -wi-a----- 100.00g
  gluster_lv_vmstore               gluster_vg_sda4 Vwi---t--- 500.00g gluster_thinpool_gluster_vg_sda4
  gluster_lv_vmstore-brick1        gluster_vg_sda4 Vwi-aot--- 500.00g gluster_thinpool_gluster_vg_sda4        0.33
  gluster_thinpool_gluster_vg_sda4 gluster_vg_sda4 twi-aot---  <7.07t                                         11.46  0.89
```  
I would appreciate any advice. 

TIA
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ITF5IYLWGG2MPAPG2JBD2GWA5QZDPVSA/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4OJOPZARTB7F3BCZ3NRSQMGHEJRJXDTC/


[ovirt-users] Re: gluster service on the cluster is unchecked on hci cluster

2022-07-15 Thread Strahil Nikolov via Users
Try first with a single host. Set it into maintenance and check if the 
checkmark is available. If not, try to 'reinstall' (UI, Hosts, Installation, 
Reinstall) the host. During the setup it should let you choose whether the host 
can run the HE, and it should allow you to select the checkmark for Gluster.
Let's work with a single node before doing something as drastic as taking the whole cluster into an outage.
Best Regards,Strahil Nikolov 
 
 
  On Thu, Jul 14, 2022 at 23:03, Jiří Sléžka wrote:   Dne 
7/14/22 v 21:21 Strahil Nikolov napsal(a):
> Go to the UI, select the volume , pres 'Start' and mark the checkbox for 
> 'Force'-fully start .

well, it worked :-) Now all bricks are in UP state. In fact from 
commandline point of view all volumes were active and all bricks up all 
the time.

> At least it should update the engine that everything is running .
> Have you checked if the checkmark for the Gluster service is available 
> if you set the Host into maintenance?

which host do you mean? If all hosts in the cluster I have to plan an 
outage... will try...

Thanks,

Jiri

> 
> Best Regards,
> Strahil Nikolov
> 
>    On Thu, Jul 14, 2022 at 16:08, Jiří Sléžka
>     wrote:
>    ___
>    Users mailing list -- users@ovirt.org 
>    To unsubscribe send an email to users-le...@ovirt.org
>    
>    Privacy Statement: https://www.ovirt.org/privacy-policy.html
>    
>    oVirt Code of Conduct:
>    https://www.ovirt.org/community/about/community-guidelines/
>    
>    List Archives:
>    
>https://lists.ovirt.org/archives/list/users@ovirt.org/message/624NH3C5REFDV55K4NPKF6IU4IHG6FPK/
>    
>
> 

  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TYJCDLL64JF4WT3W76J2ICD6SKGCVB7A/


[ovirt-users] Re: gluster service on the cluster is unchecked on hci cluster

2022-07-14 Thread Jiří Sléžka

Dne 7/14/22 v 21:21 Strahil Nikolov napsal(a):
Go to the UI, select the volume , pres 'Start' and mark the checkbox for 
'Force'-fully start .


well, it worked :-) Now all bricks are in UP state. In fact from 
commandline point of view all volumes were active and all bricks up all 
the time.



At least it should update the engine that everything is running .
Have you checked if the checkmark for the Gluster service is available 
if you set the Host into maintenance?


which host do you mean? If all hosts in the cluster I have to plan an 
outage... will try...


Thanks,

Jiri



Best Regards,
Strahil Nikolov

On Thu, Jul 14, 2022 at 16:08, Jiří Sléžka
 wrote:
___
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org

Privacy Statement: https://www.ovirt.org/privacy-policy.html

oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/

List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/624NH3C5REFDV55K4NPKF6IU4IHG6FPK/







smime.p7s
Description: Elektronicky podpis S/MIME
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YEM4HECQBEMRZK2UGVZYVUAQBHYR4I2C/


[ovirt-users] Re: gluster service on the cluster is unchecked on hci cluster

2022-07-14 Thread Strahil Nikolov via Users
Go to the UI, select the volume, press 'Start' and mark the checkbox to 
'Force'-fully start it.
At least it should update the engine that everything is running. Have you 
checked if the checkmark for the Gluster service is available if you set the 
Host into maintenance?
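If the UI keeps misbehaving, the same thing can be done from the command line; a rough equivalent (the volume name is only an example):

gluster volume start engine force     # force-start the volume
gluster volume status engine          # all bricks should now show as Online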
Best Regards,Strahil Nikolov 
 
 
  On Thu, Jul 14, 2022 at 16:08, Jiří Sléžka wrote:   
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/624NH3C5REFDV55K4NPKF6IU4IHG6FPK/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AYPNCACIG5F6QSINQPY7YHKI2I45N5WW/


[ovirt-users] Re: gluster service on the cluster is unchecked on hci cluster

2022-07-14 Thread Jiří Sléžka

On 7/14/22 14:30, Jiří Sléžka wrote:

On 7/14/22 00:34, Strahil Nikolov wrote:

Well... not yet.
Check if the engine detects the volumes and verify again that all 
glustereventsd work.


I would even consider restarting the engine, just to be on the safe side.


engine restarted (I also yum updated it before), glustereventsd is 
running on all hosts, selinux's port label is set. Still only one brick 
is up and two are in unknown state in manager. See the screenshot.



What is your oVirt version ? Maybe an update could solve your problem.


latest in 4.4 version -> 4.4.10.7-1.el8. Engine and hosts are Rocky 
Linux 8.6 based.


ale here is output of select * from gluster_volume_bricks; I dont know 
if it is relevant but it shows old (2022-06-12 14:32:46.476558+02) dates 
in _update_date filed in part of UNKNOWN state bricks. Also UP state 
bricks has timestamp i past (around 1 day), is it normal?


https://pastebin.com/hQnj1en3

Could you suggest some relevant strings in engine.log | vdsm.log I could 
looks for? My blind grepping just reveleated


grep "GLUSTER" /var/log/ovirt-engine/engine.log

2022-07-13 13:10:19,263+02 WARN 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(default task-3) [] EVENT_ID: GLUSTER_BRICK_STATUS_CHANGED(4,086), 
Detected change in status of brick 
10.0.4.11:/gluster_bricks/engine/engine of volume engine of cluster 
McHosting from UNKNOWN to UP via gluster event.
2022-07-13 13:10:19,492+02 WARN 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(default task-3) [] EVENT_ID: GLUSTER_BRICK_STATUS_CHANGED(4,086), 
Detected change in status of brick 10.0.4.11:/gluster_bricks/vms/vms of 
volume vms of cluster McHosting from UNKNOWN to UP via gluster event.
2022-07-13 13:10:21,185+02 WARN 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(default task-3) [] EVENT_ID: GLUSTER_BRICK_STATUS_CHANGED(4,086), 
Detected change in status of brick 10.0.4.11:/gluster_bricks/vms2/vms2 
of volume vms of cluster McHosting from UNKNOWN to UP via gluster event.


it match the engine log and also time I run glustereventsd service on 
host 10.0.4.11


but I see also repeatedly

2022-07-13 14:14:14,440+02 WARN 
[org.ovirt.engine.core.bll.UpdateClusterCommand] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-88) 
[5fb948bc] Validation of action 'UpdateCluster' failed for user SYSTEM. 
Reasons: 
VAR__TYPE__CLUSTER,VAR__ACTION__UPDATE,CLUSTER_CANNOT_DISABLE_GLUSTER_WHEN_CLUSTER_CONTAINS_VOLUMES 



so it looks to me like there is some action running which wants disable 
gluster services on cluster and it cannot (and it is right!). But it 
probably blocks that gluster services checkbox in cluster settings in 
manager. What do you think?


sorry, date implies it is my yesterday's testing invoking of cluster 
edit window... there are no running jobs or tasks in db (as I can see)...


so still dont know what is wrong :-(

Cheers,

Jiri




Cheers, Jiri



Best Regards,
Strahil Nikolov

    On Wed, Jul 13, 2022 at 17:05, Jiří Sléžka
     wrote:
    On 7/13/22 14:53, Jiří Sléžka wrote:
 > On 7/12/22 22:28, Strahil Nikolov wrote:
 >> glustereventad will notify the engine when something changes -
    like a
 >> new volume is created from the cli (or bad things happened ;) ),
    so it
 >> should be running. >
 >> You can use the workaround from the github issue and reatart the
 >> glustereventsd service.
 >
 > ok, workaround applied, glustereventsd service enabled and
    started on
 > all hosts.
 >
 > I can see this log entry in volume Events
 >
 > Detected change in status of brick
 > 10.0.4.11:/gluster_bricks/engine/engine of volume engine of 
cluster

 > McHosting from UNKNOWN to UP via gluster event.
 >
 > but Bricks tab shows still two (.12 and .13) of three bricks in
    Unknown
 > state. From command line point of view all bricks are up and 
healthy.

 >
 > it looks like engine thinks that gluster service is disabled in
    cluster
 > but I cannot enable it because checkbox is disabled. In my 
other (FC

 > based) oVirt instance Gluster Service checkbox is not selected
    but not
 > disabled. So I am interested what could make that checkbox
    inactive...

    well, on db side it looks like cluster has gluster_service 
disabled...


    engine=# select virt_service, gluster_service from cluster;
   virt_service | gluster_service
    --+-
   t            | f
    (1 row)

    still don't know why the checkbox is disabled. Would it be safe to
    enabled gluster_service directly in db? I suppose no... :-)

    Cheers,

    Jiri


 >
 >> For the vdsm, you can always run
 >> '/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start' which is
    executed
 >> by the vdsmd.service before every start (ExecStartPre stanza)
    and see
 >> if it complains about something.
 >
 

[ovirt-users] Re: gluster service on the cluster is unchecked on hci cluster

2022-07-14 Thread Jiří Sléžka

On 7/14/22 00:34, Strahil Nikolov wrote:

Well... not yet.
Check if the engine detects the volumes and verify again that all 
glustereventsd work.


I would even consider restarting the engine, just to be on the safe side.


engine restarted (I also yum updated it before), glustereventsd is 
running on all hosts, selinux's port label is set. Still only one brick 
is up and two are in unknown state in manager. See the screenshot.



What is your oVirt version ? Maybe an update could solve your problem.


latest in 4.4 version -> 4.4.10.7-1.el8. Engine and hosts are Rocky 
Linux 8.6 based.


ale here is output of select * from gluster_volume_bricks; I dont know 
if it is relevant but it shows old (2022-06-12 14:32:46.476558+02) dates 
in _update_date filed in part of UNKNOWN state bricks. Also UP state 
bricks has timestamp i past (around 1 day), is it normal?


https://pastebin.com/hQnj1en3

Could you suggest some relevant strings in engine.log | vdsm.log I could 
looks for? My blind grepping just reveleated


grep "GLUSTER" /var/log/ovirt-engine/engine.log

2022-07-13 13:10:19,263+02 WARN 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(default task-3) [] EVENT_ID: GLUSTER_BRICK_STATUS_CHANGED(4,086), 
Detected change in status of brick 
10.0.4.11:/gluster_bricks/engine/engine of volume engine of cluster 
McHosting from UNKNOWN to UP via gluster event.
2022-07-13 13:10:19,492+02 WARN 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(default task-3) [] EVENT_ID: GLUSTER_BRICK_STATUS_CHANGED(4,086), 
Detected change in status of brick 10.0.4.11:/gluster_bricks/vms/vms of 
volume vms of cluster McHosting from UNKNOWN to UP via gluster event.
2022-07-13 13:10:21,185+02 WARN 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(default task-3) [] EVENT_ID: GLUSTER_BRICK_STATUS_CHANGED(4,086), 
Detected change in status of brick 10.0.4.11:/gluster_bricks/vms2/vms2 
of volume vms of cluster McHosting from UNKNOWN to UP via gluster event.


it match the engine log and also time I run glustereventsd service on 
host 10.0.4.11


but I see also repeatedly

2022-07-13 14:14:14,440+02 WARN 
[org.ovirt.engine.core.bll.UpdateClusterCommand] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-88) 
[5fb948bc] Validation of action 'UpdateCluster' failed for user SYSTEM. 
Reasons: 
VAR__TYPE__CLUSTER,VAR__ACTION__UPDATE,CLUSTER_CANNOT_DISABLE_GLUSTER_WHEN_CLUSTER_CONTAINS_VOLUMES


so it looks to me like there is some action running which wants disable 
gluster services on cluster and it cannot (and it is right!). But it 
probably blocks that gluster services checkbox in cluster settings in 
manager. What do you think?



Cheers, Jiri



Best Regards,
Strahil Nikolov

On Wed, Jul 13, 2022 at 17:05, Jiří Sléžka
 wrote:
On 7/13/22 14:53, Jiří Sléžka wrote:
 > On 7/12/22 22:28, Strahil Nikolov wrote:
 >> glustereventad will notify the engine when something changes -
like a
 >> new volume is created from the cli (or bad things happened ;) ),
so it
 >> should be running. >
 >> You can use the workaround from the github issue and reatart the
 >> glustereventsd service.
 >
 > ok, workaround applied, glustereventsd service enabled and
started on
 > all hosts.
 >
 > I can see this log entry in volume Events
 >
 > Detected change in status of brick
 > 10.0.4.11:/gluster_bricks/engine/engine of volume engine of cluster
 > McHosting from UNKNOWN to UP via gluster event.
 >
 > but Bricks tab shows still two (.12 and .13) of three bricks in
Unknown
 > state. From command line point of view all bricks are up and healthy.
 >
 > it looks like engine thinks that gluster service is disabled in
cluster
 > but I cannot enable it because checkbox is disabled. In my other (FC
 > based) oVirt instance Gluster Service checkbox is not selected
but not
 > disabled. So I am interested what could make that checkbox
inactive...

well, on db side it looks like cluster has gluster_service disabled...

engine=# select virt_service, gluster_service from cluster;
   virt_service | gluster_service
--+-
   t            | f
(1 row)

still don't know why the checkbox is disabled. Would it be safe to
enabled gluster_service directly in db? I suppose no... :-)

Cheers,

Jiri


 >
 >> For the vdsm, you can always run
 >> '/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start' which is
executed
 >> by the vdsmd.service before every start (ExecStartPre stanza)
and see
 >> if it complains about something.
 >
 > [root@ovirt-hci03  ~]#
/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start
 > vdsm: Running mkdirs
 > vdsm: Running configure_vdsm_logs
 > vdsm: Running run_init_hooks
 > vdsm: Running 

[ovirt-users] Re: gluster service on the cluster is unchecked on hci cluster

2022-07-13 Thread Strahil Nikolov via Users
Well... not yet. Check if the engine detects the volumes and verify again that 
glustereventsd works on all hosts.
I would even consider restarting the engine, just to be on the safe side.
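Restarting the engine service, as suggested above, is simply (on the engine machine):

systemctl restart ovirt-engine
tail -f /var/log/ovirt-engine/engine.log   # watch for gluster-related events after the restart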
What is your oVirt version? Maybe an update could solve your problem.
Best Regards,Strahil Nikolov 
 
 
  On Wed, Jul 13, 2022 at 17:05, Jiří Sléžka wrote:   On 
7/13/22 14:53, Jiří Sléžka wrote:
> On 7/12/22 22:28, Strahil Nikolov wrote:
>> glustereventad will notify the engine when something changes - like a 
>> new volume is created from the cli (or bad things happened ;) ), so it 
>> should be running. >
>> You can use the workaround from the github issue and reatart the 
>> glustereventsd service.
> 
> ok, workaround applied, glustereventsd service enabled and started on 
> all hosts.
> 
> I can see this log entry in volume Events
> 
> Detected change in status of brick 
> 10.0.4.11:/gluster_bricks/engine/engine of volume engine of cluster 
> McHosting from UNKNOWN to UP via gluster event.
> 
> but Bricks tab shows still two (.12 and .13) of three bricks in Unknown 
> state. From command line point of view all bricks are up and healthy.
> 
> it looks like engine thinks that gluster service is disabled in cluster 
> but I cannot enable it because checkbox is disabled. In my other (FC 
> based) oVirt instance Gluster Service checkbox is not selected but not 
> disabled. So I am interested what could make that checkbox inactive...

well, on db side it looks like cluster has gluster_service disabled...

engine=# select virt_service, gluster_service from cluster;
  virt_service | gluster_service
--------------+-----------------
  t            | f
(1 row)

still don't know why the checkbox is disabled. Would it be safe to 
enabled gluster_service directly in db? I suppose no... :-)
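Purely as an illustration of what that would involve (unsupported, only after an engine backup, and assuming the default local engine database; the cluster name in the WHERE clause is just the one mentioned earlier in this thread):

# read-only check on the engine machine
su - postgres -c "psql engine -c \"select name, virt_service, gluster_service from cluster;\""
# the direct (unsupported!) change would then be something like:
# su - postgres -c "psql engine -c \"update cluster set gluster_service = 't' where name = 'McHosting';\""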

Cheers,

Jiri

> 
>> For the vdsm, you can always run 
>> '/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start' which is executed 
>> by the vdsmd.service before every start (ExecStartPre stanza) and see 
>> if it complains about something.
> 
> [root@ovirt-hci03 ~]# /usr/libexec/vdsm/vdsmd_init_common.sh --pre-start
> vdsm: Running mkdirs
> vdsm: Running configure_vdsm_logs
> vdsm: Running run_init_hooks
> vdsm: Running check_is_configured
> sanlock is configured for vdsm
> lvm is configured for vdsm
> abrt is already configured for vdsm
> Managed volume database is already configured
> Current revision of multipath.conf detected, preserving
> libvirt is already configured for vdsm
> vdsm: Running validate_configuration
> SUCCESS: ssl configured to true. No conflicts
> vdsm: Running prepare_transient_repository
> vdsm: Running syslog_available
> vdsm: Running nwfilter
> vdsm: Running dummybr
> vdsm: Running tune_system
> vdsm: Running test_space
> vdsm: Running test_lo
> 
> retcode 0, all looks ok...
> 
> Cheers,
> 
> Jiri
> 
>>
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>     On Tue, Jul 12, 2022 at 11:12, Jiří Sléžka
>>      wrote:
>>     On 7/11/22 16:22, Jiří Sléžka wrote:
>>  > On 7/11/22 15:57, Strahil Nikolov wrote:
>>  >> Can you check for AVC denials and the error message like the
>>     described
>>  >> in
>>  >>
>>    
>> https://github.com/gluster/glusterfs-selinux/issues/27#issue-1097225183
>>    
>> >  >?
>>  >
>>  > thanks for reply, there are two unrelated (qemu-kvm) avc denials
>>     logged
>>  > (related probably to sanlock recovery)
>>  >
>>  > also I cannot find glustereventsd in any related log... is it 
>> really
>>  > used by vdsm-gluster?
>>  >
>>  > this service runs on no hosts
>>  >
>>  > systemctl status glustereventsd
>>  > ● glustereventsd.service - Gluster Events Notifier
>>  >     Loaded: loaded 
>> (/usr/lib/systemd/system/glustereventsd.service;
>>  > disabled; vendor preset: disabled)
>>  >     Active: inactive (dead)
>>
>>     it looks like root of the problem is that Gluster service is
>>     disabled in
>>     cluster settings and cannot be enabled. But it was enabled before...
>>     also I have to manually install vdsm-gluster when I (re)install new
>>     host, but bricks from this host are in unknown state in admin. Maybe
>>     vdsm-gluster is not correctly configured? Maybe glustereventsd is not
>>     running? I am just guessing...
>>
>>     I have no access to other HCI installation so I cannot compare
>>     differences.
>>
>>     I would be really happy if someone could tell me what circumstances
>>     could disable Gluster service checkbox in admin and how to enable it
>>     again...
>>
>>     Cheers,
>>
>>     Jiri
>>
>>
>>  >
>>  > Cheers,
>>  >
>>  > Jiri
>>  >
>>  >
>>  >>
>>  >>
>>  >> Best Regards,
>>  >> Strahil Nikolov
>>  >>
>>  >>     On Mon, Jul 11, 2022 at 16:44, Jiří Sléžka
>>  >>     mailto:jiri.sle...@slu.cz>> wrote:
>>  >>     Hello,
>>  >>
>>  >>     On 7/11/22 14:34, 

[ovirt-users] Re: gluster service on the cluster is unchecked on hci cluster

2022-07-13 Thread Jiří Sléžka

On 7/13/22 14:53, Jiří Sléžka wrote:

On 7/12/22 22:28, Strahil Nikolov wrote:
glustereventad will notify the engine when something changes - like a 
new volume is created from the cli (or bad things happened ;) ), so it 
should be running. >
You can use the workaround from the github issue and reatart the 
glustereventsd service.


ok, workaround applied, glustereventsd service enabled and started on 
all hosts.


I can see this log entry in volume Events

Detected change in status of brick 
10.0.4.11:/gluster_bricks/engine/engine of volume engine of cluster 
McHosting from UNKNOWN to UP via gluster event.


but Bricks tab shows still two (.12 and .13) of three bricks in Unknown 
state. From command line point of view all bricks are up and healthy.


it looks like engine thinks that gluster service is disabled in cluster 
but I cannot enable it because checkbox is disabled. In my other (FC 
based) oVirt instance Gluster Service checkbox is not selected but not 
disabled. So I am interested what could make that checkbox inactive...


well, on db side it looks like cluster has gluster_service disabled...

engine=# select virt_service, gluster_service from cluster;
 virt_service | gluster_service
--+-
 t| f
(1 row)

still don't know why the checkbox is disabled. Would it be safe to 
enabled gluster_service directly in db? I suppose no... :-)


Cheers,

Jiri



For the vdsm, you can always run 
'/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start' which is executed 
by the vdsmd.service before every start (ExecStartPre stanza) and see 
if it complains about something.


[root@ovirt-hci03 ~]# /usr/libexec/vdsm/vdsmd_init_common.sh --pre-start
vdsm: Running mkdirs
vdsm: Running configure_vdsm_logs
vdsm: Running run_init_hooks
vdsm: Running check_is_configured
sanlock is configured for vdsm
lvm is configured for vdsm
abrt is already configured for vdsm
Managed volume database is already configured
Current revision of multipath.conf detected, preserving
libvirt is already configured for vdsm
vdsm: Running validate_configuration
SUCCESS: ssl configured to true. No conflicts
vdsm: Running prepare_transient_repository
vdsm: Running syslog_available
vdsm: Running nwfilter
vdsm: Running dummybr
vdsm: Running tune_system
vdsm: Running test_space
vdsm: Running test_lo

retcode 0, all looks ok...

Cheers,

Jiri




Best Regards,
Strahil Nikolov

    On Tue, Jul 12, 2022 at 11:12, Jiří Sléžka
     wrote:
    On 7/11/22 16:22, Jiří Sléžka wrote:
 > On 7/11/22 15:57, Strahil Nikolov wrote:
 >> Can you check for AVC denials and the error message like the
    described
 >> in
 >>

https://github.com/gluster/glusterfs-selinux/issues/27#issue-1097225183

?
 >
 > thanks for reply, there are two unrelated (qemu-kvm) avc denials
    logged
 > (related probably to sanlock recovery)
 >
 > also I cannot find glustereventsd in any related log... is it 
really

 > used by vdsm-gluster?
 >
 > this service runs on no hosts
 >
 > systemctl status glustereventsd
 > ● glustereventsd.service - Gluster Events Notifier
 >     Loaded: loaded 
(/usr/lib/systemd/system/glustereventsd.service;

 > disabled; vendor preset: disabled)
 >     Active: inactive (dead)

    it looks like root of the problem is that Gluster service is
    disabled in
    cluster settings and cannot be enabled. But it was enabled before...
    also I have to manually install vdsm-gluster when I (re)install new
    host, but bricks from this host are in unknown state in admin. Maybe
    vdsm-gluster is not correctly configured? Maybe glustereventsd is not
    running? I am just guessing...

    I have no access to other HCI installation so I cannot compare
    differences.

    I would be really happy if someone could tell me what circumstances
    could disable Gluster service checkbox in admin and how to enable it
    again...

    Cheers,

    Jiri


 >
 > Cheers,
 >
 > Jiri
 >
 >
 >>
 >>
 >> Best Regards,
 >> Strahil Nikolov
 >>
 >>     On Mon, Jul 11, 2022 at 16:44, Jiří Sléžka
 >>     mailto:jiri.sle...@slu.cz>> wrote:
 >>     Hello,
 >>
 >>     On 7/11/22 14:34, Strahil Nikolov wrote:
 >>  > Can you check something on the host:
 >>  > cat /etc/glusterfs/eventsconfig.json
 >>
 >>     cat /etc/glusterfs/eventsconfig.json
 >>     {
 >>      "log-level": "INFO",
 >>      "port": 24009,
 >>      "disable-events-log": false
 >>     }
 >>
 >>
 >>  > semanage port -l | grep $(awk -F ':' '/port/
    {gsub(",","",$2);
 >> print
 >>  > $2}' /etc/glusterfs/eventsconfig.json)
 >>
 >>     semanage port -l | grep 24009
 >>
 >>     returns empty set, it looks like this port is not labeled
 >>
 >>     Cheers,
 >>
 

[ovirt-users] Re: gluster service on the cluster is unchecked on hci cluster

2022-07-13 Thread Jiří Sléžka

On 7/12/22 22:28, Strahil Nikolov wrote:
glustereventad will notify the engine when something changes - like a 
new volume is created from the cli (or bad things happened ;) ), so it 
should be running. >
You can use the workaround from the github issue and reatart the 
glustereventsd service.


ok, workaround applied, glustereventsd service enabled and started on 
all hosts.


I can see this log entry in volume Events

Detected change in status of brick 
10.0.4.11:/gluster_bricks/engine/engine of volume engine of cluster 
McHosting from UNKNOWN to UP via gluster event.


but Bricks tab shows still two (.12 and .13) of three bricks in Unknown 
state. From command line point of view all bricks are up and healthy.


it looks like engine thinks that gluster service is disabled in cluster 
but I cannot enable it because checkbox is disabled. In my other (FC 
based) oVirt instance Gluster Service checkbox is not selected but not 
disabled. So I am interested what could make that checkbox inactive...


For the vdsm, you can always run '/usr/libexec/vdsm/vdsmd_init_common.sh 
--pre-start' which is executed by the vdsmd.service before every start 
(ExecStartPre stanza) and see if it complains about something.


[root@ovirt-hci03 ~]# /usr/libexec/vdsm/vdsmd_init_common.sh --pre-start
vdsm: Running mkdirs
vdsm: Running configure_vdsm_logs
vdsm: Running run_init_hooks
vdsm: Running check_is_configured
sanlock is configured for vdsm
lvm is configured for vdsm
abrt is already configured for vdsm
Managed volume database is already configured
Current revision of multipath.conf detected, preserving
libvirt is already configured for vdsm
vdsm: Running validate_configuration
SUCCESS: ssl configured to true. No conflicts
vdsm: Running prepare_transient_repository
vdsm: Running syslog_available
vdsm: Running nwfilter
vdsm: Running dummybr
vdsm: Running tune_system
vdsm: Running test_space
vdsm: Running test_lo

retcode 0, all looks ok...

Cheers,

Jiri




Best Regards,
Strahil Nikolov

On Tue, Jul 12, 2022 at 11:12, Jiří Sléžka
 wrote:
On 7/11/22 16:22, Jiří Sléžka wrote:
 > On 7/11/22 15:57, Strahil Nikolov wrote:
 >> Can you check for AVC denials and the error message like the
described
 >> in
 >>
https://github.com/gluster/glusterfs-selinux/issues/27#issue-1097225183
?
 >
 > thanks for reply, there are two unrelated (qemu-kvm) avc denials
logged
 > (related probably to sanlock recovery)
 >
 > also I cannot find glustereventsd in any related log... is it really
 > used by vdsm-gluster?
 >
 > this service runs on no hosts
 >
 > systemctl status glustereventsd
 > ● glustereventsd.service - Gluster Events Notifier
 >     Loaded: loaded (/usr/lib/systemd/system/glustereventsd.service;
 > disabled; vendor preset: disabled)
 >     Active: inactive (dead)

it looks like root of the problem is that Gluster service is
disabled in
cluster settings and cannot be enabled. But it was enabled before...
also I have to manually install vdsm-gluster when I (re)install new
host, but bricks from this host are in unknown state in admin. Maybe
vdsm-gluster is not correctly configured? Maybe glustereventsd is not
running? I am just guessing...

I have no access to other HCI installation so I cannot compare
differences.

I would be really happy if someone could tell me what circumstances
could disable Gluster service checkbox in admin and how to enable it
again...

Cheers,

Jiri


 >
 > Cheers,
 >
 > Jiri
 >
 >
 >>
 >>
 >> Best Regards,
 >> Strahil Nikolov
 >>
 >>     On Mon, Jul 11, 2022 at 16:44, Jiří Sléžka
 >>     mailto:jiri.sle...@slu.cz>> wrote:
 >>     Hello,
 >>
 >>     On 7/11/22 14:34, Strahil Nikolov wrote:
 >>  > Can you check something on the host:
 >>  > cat /etc/glusterfs/eventsconfig.json
 >>
 >>     cat /etc/glusterfs/eventsconfig.json
 >>     {
 >>      "log-level": "INFO",
 >>      "port": 24009,
 >>      "disable-events-log": false
 >>     }
 >>
 >>
 >>  > semanage port -l | grep $(awk -F ':' '/port/
{gsub(",","",$2);
 >> print
 >>  > $2}' /etc/glusterfs/eventsconfig.json)
 >>
 >>     semanage port -l | grep 24009
 >>
 >>     returns empty set, it looks like this port is not labeled
 >>
 >>     Cheers,
 >>
 >>     Jiri
 >>
 >>  >
 >>  > Best Regards,
 >>  > Strahil Nikolov
 >>  > В понеделник, 11 юли 2022 г., 02:18:57 ч. Гринуич+3, Jiří
Sléžka
 >>  > mailto:jiri.sle...@slu.cz>
>> написа:
 >>  >
 >>  >
 >>  > Hi,
 >>  >
 >>  > I would like to change CPU Type in my oVirt 

[ovirt-users] Re: gluster service on the cluster is unchecked on hci cluster

2022-07-12 Thread Strahil Nikolov via Users
glustereventsd will notify the engine when something changes - like a new 
volume is created from the cli (or bad things happened ;) ), so it should be 
running.
You can use the workaround from the github issue and restart the 
glustereventsd service.
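The restart part of that workaround boils down to (on every gluster host):

systemctl enable --now glustereventsd
systemctl status glustereventsd --no-pager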
For the vdsm, you can always run '/usr/libexec/vdsm/vdsmd_init_common.sh 
--pre-start' which is executed by the vdsmd.service before every start 
(ExecStartPre stanza) and see if it complains about something.

Best Regards,Strahil Nikolov 
 
  On Tue, Jul 12, 2022 at 11:12, Jiří Sléžka wrote:   On 
7/11/22 16:22, Jiří Sléžka wrote:
> On 7/11/22 15:57, Strahil Nikolov wrote:
>> Can you check for AVC denials and the error message like the described 
>> in 
>> https://github.com/gluster/glusterfs-selinux/issues/27#issue-1097225183 ?
> 
> thanks for reply, there are two unrelated (qemu-kvm) avc denials logged 
> (related probably to sanlock recovery)
> 
> also I cannot find glustereventsd in any related log... is it really 
> used by vdsm-gluster?
> 
> this service runs on no hosts
> 
> systemctl status glustereventsd
> ● glustereventsd.service - Gluster Events Notifier
>     Loaded: loaded (/usr/lib/systemd/system/glustereventsd.service; 
> disabled; vendor preset: disabled)
>     Active: inactive (dead)

it looks like root of the problem is that Gluster service is disabled in 
cluster settings and cannot be enabled. But it was enabled before... 
also I have to manually install vdsm-gluster when I (re)install new 
host, but bricks from this host are in unknown state in admin. Maybe 
vdsm-gluster is not correctly configured? Maybe glustereventsd is not 
running? I am just guessing...

I have no access to other HCI installation so I cannot compare differences.

I would be really happy if someone could tell me what circumstances 
could disable Gluster service checkbox in admin and how to enable it 
again...

Cheers,

Jiri

> 
> Cheers,
> 
> Jiri
> 
> 
>>
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>     On Mon, Jul 11, 2022 at 16:44, Jiří Sléžka
>>      wrote:
>>     Hello,
>>
>>     On 7/11/22 14:34, Strahil Nikolov wrote:
>>  > Can you check something on the host:
>>  > cat /etc/glusterfs/eventsconfig.json
>>
>>     cat /etc/glusterfs/eventsconfig.json
>>     {
>>      "log-level": "INFO",
>>      "port": 24009,
>>      "disable-events-log": false
>>     }
>>
>>
>>  > semanage port -l | grep $(awk -F ':' '/port/ {gsub(",","",$2); 
>> print
>>  > $2}' /etc/glusterfs/eventsconfig.json)
>>
>>     semanage port -l | grep 24009
>>
>>     returns empty set, it looks like this port is not labeled
>>
>>     Cheers,
>>
>>     Jiri
>>
>>  >
>>  > Best Regards,
>>  > Strahil Nikolov
>>  > В понеделник, 11 юли 2022 г., 02:18:57 ч. Гринуич+3, Jiří Sléžka
>>  > mailto:jiri.sle...@slu.cz>> написа:
>>  >
>>  >
>>  > Hi,
>>  >
>>  > I would like to change CPU Type in my oVirt 4.4.10 HCI cluster
>>     (based on
>>  > 3 glusterfs/virt hosts). When I try to I got this error
>>  >
>>  > Error while executing action: Cannot disable gluster service on 
>> the
>>  > cluster as it contains volumes.
>>  >
>>  > As I remember I had Gluster Service enabled on this cluster but
>>     now both
>>  > (Enable Virt Services and Enable Gluster Service) checkboxes are
>>     grayed
>>  > out and Gluster Service is unchecked.
>>  >
>>  > Also Storage / Volumes displays my volumes... well, displays one
>>     brick
>>  > on particular host in unknown state (? mark) which is new
>>     situation. As
>>  > I can see from command line all bricks are online, no healing in
>>  > progress, all looks good...
>>  >
>>  > I am not sure if the second issue is relevant to first one so main
>>  > question is how can I (re)enable gluster service in my cluster?
>>  >
>>  > Thanks in advance,
>>  >
>>  > Jiri

[ovirt-users] Re: gluster service on the cluster is unchecked on hci cluster

2022-07-12 Thread Jiří Sléžka

On 7/11/22 16:22, Jiří Sléžka wrote:

On 7/11/22 15:57, Strahil Nikolov wrote:
Can you check for AVC denials and the error message like the described 
in 
https://github.com/gluster/glusterfs-selinux/issues/27#issue-1097225183 ?


thanks for reply, there are two unrelated (qemu-kvm) avc denials logged 
(related probably to sanlock recovery)


also I cannot find glustereventsd in any related log... is it really 
used by vdsm-gluster?


this service runs on no hosts

systemctl status glustereventsd
● glustereventsd.service - Gluster Events Notifier
    Loaded: loaded (/usr/lib/systemd/system/glustereventsd.service; 
disabled; vendor preset: disabled)

    Active: inactive (dead)


it looks like root of the problem is that Gluster service is disabled in 
cluster settings and cannot be enabled. But it was enabled before... 
also I have to manually install vdsm-gluster when I (re)install new 
host, but bricks from this host are in unknown state in admin. Maybe 
vdsm-gluster is not correctly configured? Maybe glustereventsd is not 
running? I am just guessing...


I have no access to other HCI installation so I cannot compare differences.

I would be really happy if someone could tell me what circumstances 
could disable Gluster service checkbox in admin and how to enable it 
again...


Cheers,

Jiri



Cheers,

Jiri





Best Regards,
Strahil Nikolov

    On Mon, Jul 11, 2022 at 16:44, Jiří Sléžka
     wrote:
    Hello,

    On 7/11/22 14:34, Strahil Nikolov wrote:
 > Can you check something on the host:
 > cat /etc/glusterfs/eventsconfig.json

    cat /etc/glusterfs/eventsconfig.json
    {
     "log-level": "INFO",
     "port": 24009,
     "disable-events-log": false
    }


 > semanage port -l | grep $(awk -F ':' '/port/ {gsub(",","",$2); 
print

 > $2}' /etc/glusterfs/eventsconfig.json)

    semanage port -l | grep 24009

    returns empty set, it looks like this port is not labeled

    Cheers,

    Jiri

 >
 > Best Regards,
 > Strahil Nikolov
 > В понеделник, 11 юли 2022 г., 02:18:57 ч. Гринуич+3, Jiří Sléžka
 > mailto:jiri.sle...@slu.cz>> написа:
 >
 >
 > Hi,
 >
 > I would like to change CPU Type in my oVirt 4.4.10 HCI cluster
    (based on
 > 3 glusterfs/virt hosts). When I try to I got this error
 >
 > Error while executing action: Cannot disable gluster service on 
the

 > cluster as it contains volumes.
 >
 > As I remember I had Gluster Service enabled on this cluster but
    now both
 > (Enable Virt Services and Enable Gluster Service) checkboxes are
    grayed
 > out and Gluster Service is unchecked.
 >
 > Also Storage / Volumes displays my volumes... well, displays one
    brick
 > on particular host in unknown state (? mark) which is new
    situation. As
 > I can see from command line all bricks are online, no healing in
 > progress, all looks good...
 >
 > I am not sure if the second issue is relevant to first one so main
 > question is how can I (re)enable gluster service in my cluster?
 >
 > Thanks in advance,
 >
 > Jiri





___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KJZMKJCPGC2E2PEBYEZWLPX5YUDHD76A/




smime.p7s
Description: S/MIME Cryptographic Signature
___
Users mailing list -- 

[ovirt-users] Re: gluster service on the cluster is unchecked on hci cluster

2022-07-11 Thread Jiří Sléžka

On 7/11/22 15:57, Strahil Nikolov wrote:
Can you check for AVC denials and the error message like the described 
in https://github.com/gluster/glusterfs-selinux/issues/27#issue-1097225183 ?


thanks for reply, there are two unrelated (qemu-kvm) avc denials logged 
(related probably to sanlock recovery)


also I cannot find glustereventsd in any related log... is it really 
used by vdsm-gluster?


this service runs on no hosts

systemctl status glustereventsd
● glustereventsd.service - Gluster Events Notifier
   Loaded: loaded (/usr/lib/systemd/system/glustereventsd.service; 
disabled; vendor preset: disabled)

   Active: inactive (dead)

Cheers,

Jiri





Best Regards,
Strahil Nikolov

On Mon, Jul 11, 2022 at 16:44, Jiří Sléžka
 wrote:
Hello,

On 7/11/22 14:34, Strahil Nikolov wrote:
 > Can you check something on the host:
 > cat /etc/glusterfs/eventsconfig.json

cat /etc/glusterfs/eventsconfig.json
{
     "log-level": "INFO",
     "port": 24009,
     "disable-events-log": false
}


 > semanage port -l | grep $(awk -F ':' '/port/ {gsub(",","",$2); print
 > $2}' /etc/glusterfs/eventsconfig.json)

semanage port -l | grep 24009

returns empty set, it looks like this port is not labeled

Cheers,

Jiri

 >
 > Best Regards,
 > Strahil Nikolov
 > В понеделник, 11 юли 2022 г., 02:18:57 ч. Гринуич+3, Jiří Sléžka
 > mailto:jiri.sle...@slu.cz>> написа:
 >
 >
 > Hi,
 >
 > I would like to change CPU Type in my oVirt 4.4.10 HCI cluster
(based on
 > 3 glusterfs/virt hosts). When I try to I got this error
 >
 > Error while executing action: Cannot disable gluster service on the
 > cluster as it contains volumes.
 >
 > As I remember I had Gluster Service enabled on this cluster but
now both
 > (Enable Virt Services and Enable Gluster Service) checkboxes are
grayed
 > out and Gluster Service is unchecked.
 >
 > Also Storage / Volumes displays my volumes... well, displays one
brick
 > on particular host in unknown state (? mark) which is new
situation. As
 > I can see from command line all bricks are online, no healing in
 > progress, all looks good...
 >
 > I am not sure if the second issue is relevant to first one so main
 > question is how can I (re)enable gluster service in my cluster?
 >
 > Thanks in advance,
 >
 > Jiri





smime.p7s
Description: S/MIME Cryptographic Signature
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KJZMKJCPGC2E2PEBYEZWLPX5YUDHD76A/


[ovirt-users] Re: gluster service on the cluster is unchecked on hci cluster

2022-07-11 Thread Strahil Nikolov via Users
Can you check for AVC denials and an error message like the one described in 
https://github.com/gluster/glusterfs-selinux/issues/27#issue-1097225183 ?

 Best Regards,Strahil Nikolov
 
  On Mon, Jul 11, 2022 at 16:44, Jiří Sléžka wrote:   Hello,

On 7/11/22 14:34, Strahil Nikolov wrote:
> Can you check something on the host:
> cat /etc/glusterfs/eventsconfig.json

cat /etc/glusterfs/eventsconfig.json
{
    "log-level": "INFO",
    "port": 24009,
    "disable-events-log": false
}


> semanage port -l | grep $(awk -F ':' '/port/ {gsub(",","",$2); print 
> $2}' /etc/glusterfs/eventsconfig.json)

semanage port -l | grep 24009

returns an empty set; it looks like this port is not labeled

Cheers,

Jiri

> 
> Best Regards,
> Strahil Nikolov
> On Monday, 11 July 2022 at 02:18:57 GMT+3, Jiří Sléžka 
>  wrote:
> 
> 
> Hi,
> 
> I would like to change CPU Type in my oVirt 4.4.10 HCI cluster (based on
> 3 glusterfs/virt hosts). When I try to, I get this error:
> 
> Error while executing action: Cannot disable gluster service on the
> cluster as it contains volumes.
> 
> As I remember I had Gluster Service enabled on this cluster, but now both
> (Enable Virt Services and Enable Gluster Service) checkboxes are grayed
> out and Gluster Service is unchecked.
> 
> Also Storage / Volumes displays my volumes... well, it displays one brick
> on a particular host in an unknown state (? mark), which is a new situation.
> As far as I can see from the command line, all bricks are online, no healing
> is in progress, all looks good...
> 
> I am not sure if the second issue is related to the first one, so the main
> question is: how can I (re)enable the gluster service in my cluster?
> 
> Thanks in advance,
> 
> Jiri
> ___
> Users mailing list -- users@ovirt.org 
> To unsubscribe send an email to users-le...@ovirt.org 
> 
> Privacy Statement: https://www.ovirt.org/privacy-policy.html 
> 
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/ 
> 
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/S4NVCQ33ZSJSHR7P7K7OICSA5F253BVA/
>  
> 

  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SO7QW75J3OAKTF4JDBBRFU3ZN6NQI7BP/


[ovirt-users] Re: gluster heal success but a directory doesn't heal

2022-06-29 Thread Diego Ercolani
Cross posted here: 
https://lists.gluster.org/pipermail/gluster-users/2022-June/039957.html
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HGBGRBAPKCQ5XSATL2JMBVFPTKGCLYOQ/


[ovirt-users] Re: gluster heal success but a directory doesn't heal

2022-06-28 Thread Gilboa Davara
Have you tried using the gluster ML?
https://lists.gluster.org/mailman/listinfo/gluster-users

- Gilboa

On Tue, Jun 28, 2022 at 11:20 AM Diego Ercolani 
wrote:

> I've done something but the problem remain:
> [root@ovirt-node2 ~]# gluster volume heal glen info
> Brick ovirt-node2.ovirt:/brickhe/glen
> /3577c21e-f757-4405-97d1-0f827c9b4e22/master/tasks
> Status: Connected
> Number of entries: 1
>
> Brick ovirt-node3.ovirt:/brickhe/glen
> /3577c21e-f757-4405-97d1-0f827c9b4e22/images
> Status: Connected
> Number of entries: 1
>
> Brick ovirt-node4.ovirt:/dati/glen
> /3577c21e-f757-4405-97d1-0f827c9b4e22/master/tasks
> /3577c21e-f757-4405-97d1-0f827c9b4e22/images
> Status: Connected
> Number of entries: 2
>
> And I cannot invoke healing:
> [root@ovirt-node2 ~]# gluster volume heal glen full
> Launching heal operation to perform full self heal on volume glen has been
> successful
> Use heal info commands to check status.
> [root@ovirt-node2 ~]# gluster volume heal glen split-brain source-brick
> ovirt-node3.ovirt:/brickhe/glen
> 'source-brick' option used on a directory
> (gfid:95e5075e-720b-4bc0-affe-81d1792e09a6). Performing conservative merge.
> Healing gfid:95e5075e-720b-4bc0-affe-81d1792e09a6 failed:Is a directory.
> Lookup failed on gfid:75441538-fc18-4da3-9da7-e1c59a84d950:No such file or
> directory.
> Status: Connected
> Number of healed entries: 0
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/YN3TOB45KTKXAMZNGNEAMPUM7ML2I6W3/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7K5QQLWIPOBQTP5C7UA75D5TTHAWDGTD/


[ovirt-users] Re: gluster heal success but a directory doesn't heal

2022-06-28 Thread Diego Ercolani
I've done something but the problem remain:
[root@ovirt-node2 ~]# gluster volume heal glen info
Brick ovirt-node2.ovirt:/brickhe/glen
/3577c21e-f757-4405-97d1-0f827c9b4e22/master/tasks 
Status: Connected
Number of entries: 1

Brick ovirt-node3.ovirt:/brickhe/glen
/3577c21e-f757-4405-97d1-0f827c9b4e22/images 
Status: Connected
Number of entries: 1

Brick ovirt-node4.ovirt:/dati/glen
/3577c21e-f757-4405-97d1-0f827c9b4e22/master/tasks 
/3577c21e-f757-4405-97d1-0f827c9b4e22/images 
Status: Connected
Number of entries: 2

And I cannot invoke healing:
[root@ovirt-node2 ~]# gluster volume heal glen full
Launching heal operation to perform full self heal on volume glen has been 
successful 
Use heal info commands to check status.
[root@ovirt-node2 ~]# gluster volume heal glen split-brain source-brick 
ovirt-node3.ovirt:/brickhe/glen
'source-brick' option used on a directory 
(gfid:95e5075e-720b-4bc0-affe-81d1792e09a6). Performing conservative merge.
Healing gfid:95e5075e-720b-4bc0-affe-81d1792e09a6 failed:Is a directory.
Lookup failed on gfid:75441538-fc18-4da3-9da7-e1c59a84d950:No such file or 
directory.
Status: Connected
Number of healed entries: 0
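
(For reference, a minimal sketch of the usual next steps when an entry stays in heal
info but a full heal does not clear it; the brick path and volume name are taken from
the output above, the client mount point is made up:)

# inspect the AFR changelog xattrs of the entry on each brick
getfattr -d -m . -e hex /brickhe/glen/3577c21e-f757-4405-97d1-0f827c9b4e22/images
# trigger a named lookup through a client mount so the missing gfid link is recreated
mount -t glusterfs ovirt-node2.ovirt:/glen /mnt/glen
stat /mnt/glen/3577c21e-f757-4405-97d1-0f827c9b4e22/images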

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YN3TOB45KTKXAMZNGNEAMPUM7ML2I6W3/


[ovirt-users] Re: gluster heal success but a directory doesn't heal

2022-06-24 Thread Diego Ercolani
Can anyone point me to somewhere I can read some "in depth" troubleshooting 
material for GlusterFS? I cannot find a "quick" manual
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DVKNVA7ITKFWHDG2BZAK6PLNTMZODPKZ/


[ovirt-users] Re: Gluster Volume cannot be activated Ovirt 4.5 Centos 8 Stream

2022-06-21 Thread m . rohweder
> see this if it's the case:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RL73Z7MEKEN...
Hi,

yes, this fixed my problem.

Quick and dirty :-)
Thank you
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ADAAOICOKYWW5DL4VRSC7SAJTAM7TU2S/


[ovirt-users] Re: Gluster Volume cannot be activated Ovirt 4.5 Centos 8 Stream

2022-06-21 Thread Diego Ercolani
see this if it's the case: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RL73Z7MEKENSEON5F7PKQL5KJYAWO3LS/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KUVDT6DUAO7RGQJ2AO5P3HIJUUHSAYXY/


[ovirt-users] Re: gluster heal success but a directory doesn't heal

2022-06-20 Thread Diego Ercolani
My Environment is ovirt-host-4.5.0-3.el8.x86_64 and 
glusterfs-server-10.2-1.el8s.x86_64
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XDLSOWEW4F7IGP46LEK6Q54L74HBJ5ZN/


[ovirt-users] Re: Gluster storage and TRIM VDO

2022-04-08 Thread Oleh Horbachov
I have 6 hosts in the oVirt cluster with the same install and only one is affected

Best regards,
Oleh Horbachov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LRHWE2YTMYVMTIPWP6AWFIGTPYWZSXRU/


[ovirt-users] Re: Gluster storage and TRIM VDO

2022-04-08 Thread Oleh Horbachov
Hello. The issue is at the same time difficult and easy to reproduce. 
Easy, because I can reproduce it roughly 1 time out of 3-4 runs.
Difficult, because I can't catch the start and stop of the issue. 
Answers to your questions:
1. In general the disks are responsive during the fstrim
2. I run the trigger every week, but on a different day of the week for each storage (see the staggering sketch below)
3. I use nvme disks.
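
(As an aside, a hedged way to stagger the weekly trim per host is a systemd drop-in
for fstrim.timer; the calendar values below are only an illustration:)

# /etc/systemd/system/fstrim.timer.d/stagger.conf  (use a different OnCalendar on each host)
[Timer]
OnCalendar=
OnCalendar=Sat 03:00
RandomizedDelaySec=2h

# apply it
systemctl daemon-reload
systemctl restart fstrim.timer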

I tried to reproduce the issue manually and found the following:
1. Only one host of the storage cluster is affected (I think because of the main 
mount point) 
2. In the oVirt events I found
"Host ovirt-host-03 cannot access the Storage Domain(s) glusterfs-data attached 
to the Data Center Computing. Setting Host state to Non-Operational." and after 
that event oVirt started migrating VMs off host-03

I think the issue is in VDSM.
Additionally, on host ovirt-host-03 I found a few errors in vdsm.log at the time 
of the fstrim, like 
"ERROR (monitor/2ac7658) [storage.Monitor] Error checking domain 
2ac76580-2182-470d-b886-d3d2e28d05b3 (monitor:453)"
(2ac... is the UUID of the gluster storage domain)

I have 6 hosts in the cluster with the same install and only one is affected
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YQPM6245G7QLCUJIDAEUT2QNZT2M5J6D/


[ovirt-users] Re: Gluster storage and TRIM VDO

2022-04-03 Thread Strahil Nikolov via Users
This is quite odd. Are your raw disks responsive during the fstrim? What happens 
if you trigger it more often?
What disks do you use?
Best Regards,Strahil Nikolov
 
 
  On Tue, Mar 29, 2022 at 10:06, Oleh Horbachov wrote:   
Hello everyone. I have a Gluster distributed-replicate cluster deployed. The 
cluster is the storage for oVirt. The bricks are VDO over a raw disk. When 
discarding via 'fstrim -av' the storage hangs for a few seconds and the 
connection is lost. Does anyone know the best practices for using TRIM with 
VDO in the context of oVirt?
ovirt - v4.4.10
gluster - v8.6
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UCTN2ZIG3EDVUU5COPXLMOH2T6WHTPBB/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WJHNNMEN463YTA3LAPYXQIN6COBY4AVG/


[ovirt-users] Re: gluster in ovirt-node in 4.5

2022-03-24 Thread Gobinda Das
Adding @Ritesh Chikatwar 


On Thu, Mar 24, 2022 at 7:36 PM Yedidyah Bar David  wrote:

> Hi all,
>
> In relation to a recent question here (thread "[ovirt-devel] [ANN]
> Schedule for oVirt 4.5.0"), we are now blocked with the following
> chain of changes/dependencies:
>
> 1. ovirt-ansible-collection recently moved from ansible-2.9 to
> ansible-core 2.12.
> 2. ovirt-hosted-engine-setup followed it.
> 3. ovirt-release-host-node (the package including dependencies for
> ovirt-node) requires gluster-ansible-roles.
> 4. gluster-ansible-roles right now requires 'ansible >= 2.9' (not
> core), and I only checked one of its dependencies,
> gluster-ansible-infra, and this one requires 'ansible >= 2.5'.
>
We will check and evaluate how much effort is required.

> 5. ansible-core does not 'Provide: ansible', IIUC intentionally.
>
> So we should do one of:
>
> 1. Fix gluster-ansible* packages to work with ansible-core 2.12.
>
> 2. Only patch gluster-ansible* packages to require ansible-core,
> without making sure they actually work with it. This will satisfy all
> deps (I guess), make the thing installable, but will likely break when
> actually used. Not sure it's such a good option, but nonetheless
> relevant. Might make sense if someone is going to work on (1.) soon
> but not immediately. This is what would have happened in practice, if
> ansible-core would have 'Provide:'-ed ansible.
>
> 3. Patch ovirt-release-host-node to not require gluster-ansible*
> anymore. This means it will not be included in ovirt-node. Users that
> will want to use it will have to install the dependencies manually,
> somehow, presumably after (1.) is done independently.
>
> Our team (RHV integration) does not have capacity for (1.). I intend
> to do (3.) very soon, unless we get volunteers for doing (1.) or
> strong voices for (2.).
>
> Best regards,
> --
> Didi
>
>

-- 


Thanks,
Gobinda
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GHESTEPWJSPPFBIC52ZEOCDKYP24DHC5/


[ovirt-users] Re: Gluster issue with brick going down

2022-03-22 Thread Jiří Sléžka

Hi,

On 3/21/22 14:12, Chris Adams wrote:

I have a hyper-converged cluster running oVirt 4.4.10 and Gluster 8.6.
Periodically, one brick of one volume will drop out, but it's seemingly
random as to which volume and brick is affected.  All I see in the brick
log is:

[2022-03-19 13:27:36.360727] W [MSGID: 113075] 
[posix-helpers.c:2135:posix_fs_health_check] 0-vmstore-posix: 
aio_read_cmp_buf() on /gluster_bricks/vmstore/vmstore/.glusterfs/health_check 
returned ret is -1 error is Structure needs cleaning
[2022-03-19 13:27:36.361160] M [MSGID: 113075] 
[posix-helpers.c:2214:posix_health_check_thread_proc] 0-vmstore-posix: 
health-check failed, going down
[2022-03-19 13:27:36.361395] M [MSGID: 113075] 
[posix-helpers.c:2232:posix_health_check_thread_proc] 0-vmstore-posix: still 
alive! -> SIGTERM

Searching around, I see references to similar issues, but no real
solutions.  I see a suggestion that changing the health-check-interval
from 10 to 30 seconds helps, but it looks like 30 seconds is the default
with this version of Gluster (and I don't see it explicitly set for any
of my volumes).

While "Structure needs cleaning" appears to be an XFS filesystem error,
I don't see any XFS errors from the kernel.

This is a low I/O cluster - the storage network is on two 10 gig
switches with a two-port LAG to each server, but typically is only
seeing a few tens of megabits per second.


I experience the same behavior. A workaround could be disabling the 
health-check, like


gluster volume set  storage.health-check-interval 0

In my case it helped with bricks randomly going offline.
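
(Side note: once a brick has been killed by the health-check like above, it can usually
be restarted without touching the rest of the volume; the volume name here is assumed:)

gluster volume start vmstore force   # respawns only the offline brick processes
gluster volume status vmstore        # confirm the brick is back online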

There is something else broken in my hci cluster, because I also have a 
problem with sanlock, which from time to time cannot renew its lock, and wdmd 
reboots one or two hosts. I still cannot find the root cause of this behavior, 
but it is probably hw related.


Cheers,

Jiri







___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/G4KBSP3Y7WEBTB6XV7L2P2RJPMD2E5ZA/


[ovirt-users] Re: Gluster Performance issues

2022-02-25 Thread Sunil Kumar Heggodu Gopala Acharya
Regards,
Sunil


On Wed, Feb 23, 2022 at 7:34 PM Derek Atkins  wrote:

> Have you verified that you're actually getting 10Gbps between the hosts?
>
> -derek
>
> On Wed, February 23, 2022 9:02 am, Alex Morrison wrote:
> > Hello Derek,
> >
> > We have a 10Gig connection dedicated to the storage network, nothing else
> > is on that switch.
> >
> > On Wed, Feb 23, 2022 at 9:49 AM Derek Atkins  wrote:
> >
> >> Hi,
> >>
> >> Another question which I don't see answered:   What is the underlying
> >> connectivity between the Gluster hosts?
> >>
> >> -derek
> >>
> >> On Wed, February 23, 2022 8:39 am, Alex Morrison wrote:
> >> > Hello Sunil,
> >> >
> >> > [root@ovirt1 ~]# gluster --version
> >> > glusterfs 8.6
> >> >
> >> > same on all hosts
>
Latest Release-10.1(
https://lists.gluster.org/pipermail/gluster-users/2022-February/039761.html)
has some performance fixes which should help in this situation compared to
the older gluster bits.

> >> >
> >> > On Wed, Feb 23, 2022 at 5:24 AM Sunil Kumar Heggodu Gopala Acharya <
> >> > shegg...@redhat.com> wrote:
> >> >
> >> >> Hi,
> >> >>
> >> >> Which version of gluster is in use?
> >> >>
> >> >> Regards,
> >> >>
> >> >> Sunil kumar Acharya
> >> >>
> >> >> Red Hat
> >> >>
> >> >> 
> >> >>
> >> >> T: +91-8067935170
> >> >> 
> >> >>
> >> >> 
> >> >> TRIED. TESTED. TRUSTED. 
> >> >>
> >> >>
> >> >>
> >> >> On Wed, Feb 23, 2022 at 2:17 PM Alex Morrison
> >> 
> >> >> wrote:
> >> >>
> >> >>> Hello All,
> >> >>>
> >> >>> We have 3 servers with a raid 50 array each, we are having extreme
> >> >>> performance issues with our gluster, writes on gluster seem to take
> >> at
> >> >>> least 3 times longer than on the raid directly. Can this be
> >> improved?
> >> >>> I've
> >> >>> read through several other performance issues threads but have been
> >> >>> unable
> >> >>> to make any improvements
> >> >>>
> >> >>> "gluster volume info" and "gluster volume profile vmstore info" is
> >> >>> below
> >> >>>
> >> >>>
> >> >>>
> >>
> =
> >> >>>
> >> >>> -Inside Gluster - test took 35+ hours:
> >> >>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# bonnie++ -d .
> -s
> >> >>> 600G -n 0 -m TEST -f -b -u root
> >> >>> Using uid:0, gid:0.
> >> >>> Writing intelligently...done
> >> >>> Rewriting...done
> >> >>> Reading intelligently...done
> >> >>> start 'em...done...done...done...done...done...
> >> >>> Version  1.98   --Sequential Output-- --Sequential
> >> Input-
> >> >>> --Random-
> >> >>> -Per Chr- --Block-- -Rewrite- -Per Chr-
> >> --Block--
> >> >>> --Seeks--
> >> >>> Name:Size etc/sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec
> >> %CP
> >> >>>  /sec %CP
> >> >>> TEST   600G   35.7m  17 5824k   7112m
> >> 13
> >> >>> 182.7   6
> >> >>> Latency5466ms   12754ms  3499ms
> >> >>>  1589ms
> >> >>>
> >> >>>
> >> >>>
> >>
> 1.98,1.98,TEST,1,1644359706,600G,,8192,5,,,36598,17,5824,7,,,114950,13,182.7,6,,,5466ms,12754ms,,3499ms,1589ms,,
> >> >>>
> >> >>>
> >> >>>
> >>
> =
> >> >>>
> >> >>> -Outside Gluster - test took 18 minutes:
> >> >>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# bonnie++ -d .
> -s
> >> >>> 600G -n 0 -m TEST -f -b -u root
> >> >>> Using uid:0, gid:0.
> >> >>> Writing intelligently...done
> >> >>> Rewriting...done
> >> >>> Reading intelligently...done
> >> >>> start 'em...done...done...done...done...done...
> >> >>> Version  1.98   --Sequential Output-- --Sequential
> >> Input-
> >> >>> --Random-
> >> >>> -Per Chr- --Block-- -Rewrite- -Per Chr-
> >> --Block--
> >> >>> --Seeks--
> >> >>> Name:Size etc/sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec
> >> %CP
> >> >>>  /sec %CP
> >> >>> TEST   600G567m  78  149m  30307m
> >> 37
> >> >>>  83.0  57
> >> >>> Latency 205ms4630ms  1450ms
> >> >>> 679ms
> >> >>>
> >> >>>
> >> >>>
> >>
> 1.98,1.98,TEST,1,1648288012,600G,,8192,5,,,580384,78,152597,30,,,314533,37,83.0,57,,,205ms,4630ms,,1450ms,679ms,,
> >> >>>
> >> >>>
> >> >>>
> >>
> =
> >> >>>
> >> >>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# gluster volume
> >> info
> >> >>> Volume Name: engine
> >> >>> Type: Replicate
> >> >>> Volume ID: 7ed15c5a-f054-450c-bac9-3ad1b4e5931b
> >> >>> Status: Started
> >> >>> Snapshot Count: 0
> >> >>> Number of Bricks: 1 x 3 = 3
> >> >>> Transport-type: tcp
> >> >>> Bricks:
> >> >>> Brick1: 

[ovirt-users] Re: Gluster Performance issues

2022-02-24 Thread Dhanaraj Ramesh via Users
Since the RAID config adds additional I/O penalties, the RAID should be removed 
and each of the disks added directly to the Gluster configs.

We had similar issues with RAID; after eliminating it we are good with 
performance.
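
(A rough sketch of the layout being suggested, i.e. one brick per raw disk instead of
a brick on top of the RAID; the device names, mount points and volume name are made up:)

mkfs.xfs -f -i size=512 /dev/sdb
mkdir -p /gluster_bricks/vmstore_sdb
mount /dev/sdb /gluster_bricks/vmstore_sdb
gluster volume create vmstore2 replica 3 \
  ovirt1-storage.dgi:/gluster_bricks/vmstore_sdb/brick \
  ovirt2-storage.dgi:/gluster_bricks/vmstore_sdb/brick \
  ovirt3-storage.dgi:/gluster_bricks/vmstore_sdb/brick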


From: Strahil Nikolov via Users 
Sent: Thursday, February 24, 2022 3:43:39 AM
To: Alex Morrison ; Sunil Kumar Heggodu Gopala Acharya 

Cc: users@ovirt.org 
Subject: [ovirt-users] Re: Gluster Performance issues

You can try to play a little bit with the I/O threads (but don't jump too fast).

What are your I/O scheduler and mount options?
You can reduce I/O lookups if you specify 'noatime' and the SELinux context 
in the mount options.

A real killer of performance is latency. What is the latency between all 
nodes?


Best Regards,
Strahil Nikolov

On Wed, Feb 23, 2022 at 20:26, Alex Morrison
 wrote:
Hello All,

I believe the network is performing as expected, I did an iperf test:

[root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# iperf3 -c 10.10.1.2
Connecting to host 10.10.1.2, port 5201
[  5] local 10.10.1.1 port 38422 connected to 10.10.1.2 port 5201
[ ID] Interval   Transfer Bitrate Retr  Cwnd
[  5]   0.00-1.00   sec  1.08 GBytes  9.24 Gbits/sec0   2.96 MBytes
[  5]   1.00-2.00   sec  1.03 GBytes  8.81 Gbits/sec0   2.96 MBytes
[  5]   2.00-3.00   sec  1006 MBytes  8.44 Gbits/sec  101   1.45 MBytes
[  5]   3.00-4.00   sec  1.04 GBytes  8.92 Gbits/sec5901 KBytes
[  5]   4.00-5.00   sec  1.05 GBytes  9.01 Gbits/sec0957 KBytes
[  5]   5.00-6.00   sec  1.08 GBytes  9.23 Gbits/sec0990 KBytes
[  5]   6.00-7.00   sec  1008 MBytes  8.46 Gbits/sec  159655 KBytes
[  5]   7.00-8.00   sec  1.06 GBytes  9.11 Gbits/sec0970 KBytes
[  5]   8.00-9.00   sec  1.03 GBytes  8.85 Gbits/sec2829 KBytes
[  5]   9.00-10.00  sec  1.04 GBytes  8.96 Gbits/sec0947 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval   Transfer Bitrate Retr
[  5]   0.00-10.00  sec  10.4 GBytes  8.90 Gbits/sec  267 sender
[  5]   0.00-10.04  sec  10.4 GBytes  8.87 Gbits/sec  receiver

iperf Done.

On Wed, Feb 23, 2022 at 11:45 AM Sunil Kumar Heggodu Gopala Acharya 
mailto:shegg...@redhat.com>> wrote:

Regards,
Sunil


On Wed, Feb 23, 2022 at 7:34 PM Derek Atkins 
mailto:de...@ihtfp.com>> wrote:
Have you verified that you're actually getting 10Gbps between the hosts?

-derek

On Wed, February 23, 2022 9:02 am, Alex Morrison wrote:
> Hello Derek,
>
> We have a 10Gig connection dedicated to the storage network, nothing else
> is on that switch.
>
> On Wed, Feb 23, 2022 at 9:49 AM Derek Atkins 
> mailto:de...@ihtfp.com>> wrote:
>
>> Hi,
>>
>> Another question which I don't see answered:   What is the underlying
>> connectivity between the Gluster hosts?
>>
>> -derek
>>
>> On Wed, February 23, 2022 8:39 am, Alex Morrison wrote:
>> > Hello Sunil,
>> >
>> > [root@ovirt1 ~]# gluster --version
>> > glusterfs 8.6
>> >
>> > same on all hosts
Latest 
Release-10.1(https://lists.gluster.org/pipermail/gluster-users/2022-February/039761.html)
 has some performance fixes which should help in this situation compared to the 
older gluster bits.
>> >
>> > On Wed, Feb 23, 2022 at 5:24 AM Sunil Kumar Heggodu Gopala Acharya <
>> > shegg...@redhat.com<mailto:shegg...@redhat.com>> wrote:
>> >
>> >> Hi,
>> >>
>> >> Which version of gluster is in use?
>> >>
>> >> Regards,
>> >>
>> >> Sunil kumar Acharya
>> >>
>> >> Red Hat
>> >>
>> >> <https://www.redhat.com>
>> >>
>> >> T: +91-8067935170
>> >> <http://redhatemailsignature-marketing.itos.redhat.com/>
>> >>
>> >> <https://red.ht/sig>
>> >> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
>> >>
>> >>
>> >>
>> >> On Wed, Feb 23, 2022 at 2:17 PM Alex Morrison
>> mailto:a...@discoverygarden.ca>>
>> >> wrote:
>> >>
>> >>> Hello All,
>> >>>
>> >>> We have 3 servers with a raid 50 array each, we are having extreme
>> >>> performance issues with our gluster, writes on gluster seem to take
>> at
>> >>> least 3 times longer than on the raid directly. Can this be
>> improved?
>> >>> I've
>> >>> read through several other performance issues threads but have been
>> >>> unable
>> >>> to make any improvements
>> >>>

[ovirt-users] Re: Gluster Performance issues

2022-02-23 Thread Strahil Nikolov via Users
You can try to play a little bit with the I/O threads (but don't jump too fast).
What are your I/O scheduler and mount options? You can reduce I/O lookups if you 
specify 'noatime' and the SELinux context in the mount options.
A real killer of performance is latency. What is the latency between all 
nodes?
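
(For illustration, hedged examples of the checks above; the device, brick path and
SELinux type are assumptions:)

# brick mount with noatime and a fixed SELinux context (one /etc/fstab line)
/dev/mapper/gluster_vg-vmstore /gluster_bricks/vmstore xfs inode64,noatime,context="system_u:object_r:glusterd_brick_t:s0" 0 0
# current I/O scheduler of the backing disk
cat /sys/block/sdb/queue/scheduler
# round-trip latency between the storage nodes
ping -c 20 -i 0.2 ovirt2-storage.dgi | tail -1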

Best Regards,Strahil Nikolov
 
 
  On Wed, Feb 23, 2022 at 20:26, Alex Morrison wrote:  
 Hello All,
I believe the network is performing as expected, I did an iperf test:

[root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# iperf3 -c 10.10.1.2
Connecting to host 10.10.1.2, port 5201
[  5] local 10.10.1.1 port 38422 connected to 10.10.1.2 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.08 GBytes  9.24 Gbits/sec    0   2.96 MBytes
[  5]   1.00-2.00   sec  1.03 GBytes  8.81 Gbits/sec    0   2.96 MBytes
[  5]   2.00-3.00   sec  1006 MBytes  8.44 Gbits/sec  101   1.45 MBytes
[  5]   3.00-4.00   sec  1.04 GBytes  8.92 Gbits/sec    5    901 KBytes
[  5]   4.00-5.00   sec  1.05 GBytes  9.01 Gbits/sec    0    957 KBytes
[  5]   5.00-6.00   sec  1.08 GBytes  9.23 Gbits/sec    0    990 KBytes
[  5]   6.00-7.00   sec  1008 MBytes  8.46 Gbits/sec  159    655 KBytes
[  5]   7.00-8.00   sec  1.06 GBytes  9.11 Gbits/sec    0    970 KBytes
[  5]   8.00-9.00   sec  1.03 GBytes  8.85 Gbits/sec    2    829 KBytes
[  5]   9.00-10.00  sec  1.04 GBytes  8.96 Gbits/sec    0    947 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  10.4 GBytes  8.90 Gbits/sec  267             sender
[  5]   0.00-10.04  sec  10.4 GBytes  8.87 Gbits/sec                  receiver

iperf Done.

On Wed, Feb 23, 2022 at 11:45 AM Sunil Kumar Heggodu Gopala Acharya 
 wrote:


Regards,
Sunil


On Wed, Feb 23, 2022 at 7:34 PM Derek Atkins  wrote:

Have you verified that you're actually getting 10Gbps between the hosts?

-derek

On Wed, February 23, 2022 9:02 am, Alex Morrison wrote:
> Hello Derek,
>
> We have a 10Gig connection dedicated to the storage network, nothing else
> is on that switch.
>
> On Wed, Feb 23, 2022 at 9:49 AM Derek Atkins  wrote:
>
>> Hi,
>>
>> Another question which I don't see answered:   What is the underlying
>> connectivity between the Gluster hosts?
>>
>> -derek
>>
>> On Wed, February 23, 2022 8:39 am, Alex Morrison wrote:
>> > Hello Sunil,
>> >
>> > [root@ovirt1 ~]# gluster --version
>> > glusterfs 8.6
>> >
>> > same on all hosts

Latest 
Release-10.1(https://lists.gluster.org/pipermail/gluster-users/2022-February/039761.html)
 has some performance fixes which should help in this situation compared to the 
older gluster bits. 
>> >
>> > On Wed, Feb 23, 2022 at 5:24 AM Sunil Kumar Heggodu Gopala Acharya <
>> > shegg...@redhat.com> wrote:
>> >
>> >> Hi,
>> >>
>> >> Which version of gluster is in use?
>> >>
>> >> Regards,
>> >>
>> >> Sunil kumar Acharya
>> >>
>> >> Red Hat
>> >>
>> >> 
>> >>
>> >> T: +91-8067935170
>> >> 
>> >>
>> >> 
>> >> TRIED. TESTED. TRUSTED. 
>> >>
>> >>
>> >>
>> >> On Wed, Feb 23, 2022 at 2:17 PM Alex Morrison
>> 
>> >> wrote:
>> >>
>> >>> Hello All,
>> >>>
>> >>> We have 3 servers with a raid 50 array each, we are having extreme
>> >>> performance issues with our gluster, writes on gluster seem to take
>> at
>> >>> least 3 times longer than on the raid directly. Can this be
>> improved?
>> >>> I've
>> >>> read through several other performance issues threads but have been
>> >>> unable
>> >>> to make any improvements
>> >>>
>> >>> "gluster volume info" and "gluster volume profile vmstore info" is
>> >>> below
>> >>>
>> >>>
>> >>>
>> =
>> >>>
>> >>> -Inside Gluster - test took 35+ hours:
>> >>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# bonnie++ -d . -s
>> >>> 600G -n 0 -m TEST -f -b -u root
>> >>> Using uid:0, gid:0.
>> >>> Writing intelligently...done
>> >>> Rewriting...done
>> >>> Reading intelligently...done
>> >>> start 'em...done...done...done...done...done...
>> >>> Version  1.98       --Sequential Output-- --Sequential
>> Input-
>> >>> --Random-
>> >>>                     -Per Chr- --Block-- -Rewrite- -Per Chr-
>> --Block--
>> >>> --Seeks--
>> >>> Name:Size etc        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec
>> %CP
>> >>>  /sec %CP
>> >>> TEST           600G           35.7m  17 5824k   7            112m
>> 13
>> >>> 182.7   6
>> >>> Latency                        5466ms   12754ms              3499ms
>> >>>  1589ms
>> >>>
>> >>>
>> >>>
>> 1.98,1.98,TEST,1,1644359706,600G,,8192,5,,,36598,17,5824,7,,,114950,13,182.7,6,,,5466ms,12754ms,,3499ms,1589ms,,
>> >>>
>> >>>
>> >>>
>> 

[ovirt-users] Re: Gluster Performance issues

2022-02-23 Thread Alex Morrison
Hello All,

I believe the network is performing as expected, I did an iperf test:

[root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# iperf3 -c 10.10.1.2
Connecting to host 10.10.1.2, port 5201
[  5] local 10.10.1.1 port 38422 connected to 10.10.1.2 port 5201
[ ID] Interval   Transfer Bitrate Retr  Cwnd
[  5]   0.00-1.00   sec  1.08 GBytes  9.24 Gbits/sec0   2.96 MBytes
[  5]   1.00-2.00   sec  1.03 GBytes  8.81 Gbits/sec0   2.96 MBytes
[  5]   2.00-3.00   sec  1006 MBytes  8.44 Gbits/sec  101   1.45 MBytes
[  5]   3.00-4.00   sec  1.04 GBytes  8.92 Gbits/sec5901 KBytes
[  5]   4.00-5.00   sec  1.05 GBytes  9.01 Gbits/sec0957 KBytes
[  5]   5.00-6.00   sec  1.08 GBytes  9.23 Gbits/sec0990 KBytes
[  5]   6.00-7.00   sec  1008 MBytes  8.46 Gbits/sec  159655 KBytes
[  5]   7.00-8.00   sec  1.06 GBytes  9.11 Gbits/sec0970 KBytes
[  5]   8.00-9.00   sec  1.03 GBytes  8.85 Gbits/sec2829 KBytes
[  5]   9.00-10.00  sec  1.04 GBytes  8.96 Gbits/sec0947 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval   Transfer Bitrate Retr
[  5]   0.00-10.00  sec  10.4 GBytes  8.90 Gbits/sec  267 sender
[  5]   0.00-10.04  sec  10.4 GBytes  8.87 Gbits/sec
 receiver

iperf Done.

On Wed, Feb 23, 2022 at 11:45 AM Sunil Kumar Heggodu Gopala Acharya <
shegg...@redhat.com> wrote:

>
> Regards,
> Sunil
>
>
> On Wed, Feb 23, 2022 at 7:34 PM Derek Atkins  wrote:
>
>> Have you verified that you're actually getting 10Gbps between the hosts?
>>
>> -derek
>>
>> On Wed, February 23, 2022 9:02 am, Alex Morrison wrote:
>> > Hello Derek,
>> >
>> > We have a 10Gig connection dedicated to the storage network, nothing
>> else
>> > is on that switch.
>> >
>> > On Wed, Feb 23, 2022 at 9:49 AM Derek Atkins  wrote:
>> >
>> >> Hi,
>> >>
>> >> Another question which I don't see answered:   What is the underlying
>> >> connectivity between the Gluster hosts?
>> >>
>> >> -derek
>> >>
>> >> On Wed, February 23, 2022 8:39 am, Alex Morrison wrote:
>> >> > Hello Sunil,
>> >> >
>> >> > [root@ovirt1 ~]# gluster --version
>> >> > glusterfs 8.6
>> >> >
>> >> > same on all hosts
>>
> Latest Release-10.1(
> https://lists.gluster.org/pipermail/gluster-users/2022-February/039761.html)
> has some performance fixes which should help in this situation compared to
> the older gluster bits.
>
>> >> >
>> >> > On Wed, Feb 23, 2022 at 5:24 AM Sunil Kumar Heggodu Gopala Acharya <
>> >> > shegg...@redhat.com> wrote:
>> >> >
>> >> >> Hi,
>> >> >>
>> >> >> Which version of gluster is in use?
>> >> >>
>> >> >> Regards,
>> >> >>
>> >> >> Sunil kumar Acharya
>> >> >>
>> >> >> Red Hat
>> >> >>
>> >> >> 
>> >> >>
>> >> >> T: +91-8067935170
>> >> >> 
>> >> >>
>> >> >> 
>> >> >> TRIED. TESTED. TRUSTED. 
>> >> >>
>> >> >>
>> >> >>
>> >> >> On Wed, Feb 23, 2022 at 2:17 PM Alex Morrison
>> >> 
>> >> >> wrote:
>> >> >>
>> >> >>> Hello All,
>> >> >>>
>> >> >>> We have 3 servers with a raid 50 array each, we are having extreme
>> >> >>> performance issues with our gluster, writes on gluster seem to take
>> >> at
>> >> >>> least 3 times longer than on the raid directly. Can this be
>> >> improved?
>> >> >>> I've
>> >> >>> read through several other performance issues threads but have been
>> >> >>> unable
>> >> >>> to make any improvements
>> >> >>>
>> >> >>> "gluster volume info" and "gluster volume profile vmstore info" is
>> >> >>> below
>> >> >>>
>> >> >>>
>> >> >>>
>> >>
>> =
>> >> >>>
>> >> >>> -Inside Gluster - test took 35+ hours:
>> >> >>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# bonnie++ -d .
>> -s
>> >> >>> 600G -n 0 -m TEST -f -b -u root
>> >> >>> Using uid:0, gid:0.
>> >> >>> Writing intelligently...done
>> >> >>> Rewriting...done
>> >> >>> Reading intelligently...done
>> >> >>> start 'em...done...done...done...done...done...
>> >> >>> Version  1.98   --Sequential Output-- --Sequential
>> >> Input-
>> >> >>> --Random-
>> >> >>> -Per Chr- --Block-- -Rewrite- -Per Chr-
>> >> --Block--
>> >> >>> --Seeks--
>> >> >>> Name:Size etc/sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec
>> >> %CP
>> >> >>>  /sec %CP
>> >> >>> TEST   600G   35.7m  17 5824k   7112m
>> >> 13
>> >> >>> 182.7   6
>> >> >>> Latency5466ms   12754ms  3499ms
>> >> >>>  1589ms
>> >> >>>
>> >> >>>
>> >> >>>
>> >>
>> 1.98,1.98,TEST,1,1644359706,600G,,8192,5,,,36598,17,5824,7,,,114950,13,182.7,6,,,5466ms,12754ms,,3499ms,1589ms,,
>> >> >>>
>> >> >>>
>> >> >>>
>> >>
>> =
>> >> >>>
>> >> >>> -Outside 

[ovirt-users] Re: Gluster Performance issues

2022-02-23 Thread Derek Atkins
Have you verified that you're actually getting 10Gbps between the hosts?

-derek

On Wed, February 23, 2022 9:02 am, Alex Morrison wrote:
> Hello Derek,
>
> We have a 10Gig connection dedicated to the storage network, nothing else
> is on that switch.
>
> On Wed, Feb 23, 2022 at 9:49 AM Derek Atkins  wrote:
>
>> Hi,
>>
>> Another question which I don't see answered:   What is the underlying
>> connectivity between the Gluster hosts?
>>
>> -derek
>>
>> On Wed, February 23, 2022 8:39 am, Alex Morrison wrote:
>> > Hello Sunil,
>> >
>> > [root@ovirt1 ~]# gluster --version
>> > glusterfs 8.6
>> >
>> > same on all hosts
>> >
>> > On Wed, Feb 23, 2022 at 5:24 AM Sunil Kumar Heggodu Gopala Acharya <
>> > shegg...@redhat.com> wrote:
>> >
>> >> Hi,
>> >>
>> >> Which version of gluster is in use?
>> >>
>> >> Regards,
>> >>
>> >> Sunil kumar Acharya
>> >>
>> >> Red Hat
>> >>
>> >> 
>> >>
>> >> T: +91-8067935170
>> >> 
>> >>
>> >> 
>> >> TRIED. TESTED. TRUSTED. 
>> >>
>> >>
>> >>
>> >> On Wed, Feb 23, 2022 at 2:17 PM Alex Morrison
>> 
>> >> wrote:
>> >>
>> >>> Hello All,
>> >>>
>> >>> We have 3 servers with a raid 50 array each, we are having extreme
>> >>> performance issues with our gluster, writes on gluster seem to take
>> at
>> >>> least 3 times longer than on the raid directly. Can this be
>> improved?
>> >>> I've
>> >>> read through several other performance issues threads but have been
>> >>> unable
>> >>> to make any improvements
>> >>>
>> >>> "gluster volume info" and "gluster volume profile vmstore info" is
>> >>> below
>> >>>
>> >>>
>> >>>
>> =
>> >>>
>> >>> -Inside Gluster - test took 35+ hours:
>> >>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# bonnie++ -d . -s
>> >>> 600G -n 0 -m TEST -f -b -u root
>> >>> Using uid:0, gid:0.
>> >>> Writing intelligently...done
>> >>> Rewriting...done
>> >>> Reading intelligently...done
>> >>> start 'em...done...done...done...done...done...
>> >>> Version  1.98   --Sequential Output-- --Sequential
>> Input-
>> >>> --Random-
>> >>> -Per Chr- --Block-- -Rewrite- -Per Chr-
>> --Block--
>> >>> --Seeks--
>> >>> Name:Size etc/sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec
>> %CP
>> >>>  /sec %CP
>> >>> TEST   600G   35.7m  17 5824k   7112m
>> 13
>> >>> 182.7   6
>> >>> Latency5466ms   12754ms  3499ms
>> >>>  1589ms
>> >>>
>> >>>
>> >>>
>> 1.98,1.98,TEST,1,1644359706,600G,,8192,5,,,36598,17,5824,7,,,114950,13,182.7,6,,,5466ms,12754ms,,3499ms,1589ms,,
>> >>>
>> >>>
>> >>>
>> =
>> >>>
>> >>> -Outside Gluster - test took 18 minutes:
>> >>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# bonnie++ -d . -s
>> >>> 600G -n 0 -m TEST -f -b -u root
>> >>> Using uid:0, gid:0.
>> >>> Writing intelligently...done
>> >>> Rewriting...done
>> >>> Reading intelligently...done
>> >>> start 'em...done...done...done...done...done...
>> >>> Version  1.98   --Sequential Output-- --Sequential
>> Input-
>> >>> --Random-
>> >>> -Per Chr- --Block-- -Rewrite- -Per Chr-
>> --Block--
>> >>> --Seeks--
>> >>> Name:Size etc/sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec
>> %CP
>> >>>  /sec %CP
>> >>> TEST   600G567m  78  149m  30307m
>> 37
>> >>>  83.0  57
>> >>> Latency 205ms4630ms  1450ms
>> >>> 679ms
>> >>>
>> >>>
>> >>>
>> 1.98,1.98,TEST,1,1648288012,600G,,8192,5,,,580384,78,152597,30,,,314533,37,83.0,57,,,205ms,4630ms,,1450ms,679ms,,
>> >>>
>> >>>
>> >>>
>> =
>> >>>
>> >>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# gluster volume
>> info
>> >>> Volume Name: engine
>> >>> Type: Replicate
>> >>> Volume ID: 7ed15c5a-f054-450c-bac9-3ad1b4e5931b
>> >>> Status: Started
>> >>> Snapshot Count: 0
>> >>> Number of Bricks: 1 x 3 = 3
>> >>> Transport-type: tcp
>> >>> Bricks:
>> >>> Brick1: ovirt1-storage.dgi:/gluster_bricks/engine/engine
>> >>> Brick2: ovirt2-storage.dgi:/gluster_bricks/engine/engine
>> >>> Brick3: ovirt3-storage.dgi:/gluster_bricks/engine/engine
>> >>> Options Reconfigured:
>> >>> cluster.granular-entry-heal: enable
>> >>> performance.strict-o-direct: on
>> >>> network.ping-timeout: 30
>> >>> storage.owner-gid: 36
>> >>> storage.owner-uid: 36
>> >>> server.event-threads: 4
>> >>> client.event-threads: 4
>> >>> cluster.choose-local: off
>> >>> user.cifs: off
>> >>> features.shard: on
>> >>> cluster.shd-wait-qlength: 1
>> >>> cluster.shd-max-threads: 8

[ovirt-users] Re: Gluster Performance issues

2022-02-23 Thread Alex Morrison
Hello Derek,

We have a 10Gig connection dedicated to the storage network, nothing else
is on that switch.

On Wed, Feb 23, 2022 at 9:49 AM Derek Atkins  wrote:

> Hi,
>
> Another question which I don't see answered:   What is the underlying
> connectivity between the Gluster hosts?
>
> -derek
>
> On Wed, February 23, 2022 8:39 am, Alex Morrison wrote:
> > Hello Sunil,
> >
> > [root@ovirt1 ~]# gluster --version
> > glusterfs 8.6
> >
> > same on all hosts
> >
> > On Wed, Feb 23, 2022 at 5:24 AM Sunil Kumar Heggodu Gopala Acharya <
> > shegg...@redhat.com> wrote:
> >
> >> Hi,
> >>
> >> Which version of gluster is in use?
> >>
> >> Regards,
> >>
> >> Sunil kumar Acharya
> >>
> >> Red Hat
> >>
> >> 
> >>
> >> T: +91-8067935170
> >> 
> >>
> >> 
> >> TRIED. TESTED. TRUSTED. 
> >>
> >>
> >>
> >> On Wed, Feb 23, 2022 at 2:17 PM Alex Morrison 
> >> wrote:
> >>
> >>> Hello All,
> >>>
> >>> We have 3 servers with a raid 50 array each, we are having extreme
> >>> performance issues with our gluster, writes on gluster seem to take at
> >>> least 3 times longer than on the raid directly. Can this be improved?
> >>> I've
> >>> read through several other performance issues threads but have been
> >>> unable
> >>> to make any improvements
> >>>
> >>> "gluster volume info" and "gluster volume profile vmstore info" is
> >>> below
> >>>
> >>>
> >>>
> =
> >>>
> >>> -Inside Gluster - test took 35+ hours:
> >>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# bonnie++ -d . -s
> >>> 600G -n 0 -m TEST -f -b -u root
> >>> Using uid:0, gid:0.
> >>> Writing intelligently...done
> >>> Rewriting...done
> >>> Reading intelligently...done
> >>> start 'em...done...done...done...done...done...
> >>> Version  1.98   --Sequential Output-- --Sequential Input-
> >>> --Random-
> >>> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--
> >>> --Seeks--
> >>> Name:Size etc/sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
> >>>  /sec %CP
> >>> TEST   600G   35.7m  17 5824k   7112m  13
> >>> 182.7   6
> >>> Latency5466ms   12754ms  3499ms
> >>>  1589ms
> >>>
> >>>
> >>>
> 1.98,1.98,TEST,1,1644359706,600G,,8192,5,,,36598,17,5824,7,,,114950,13,182.7,6,,,5466ms,12754ms,,3499ms,1589ms,,
> >>>
> >>>
> >>>
> =
> >>>
> >>> -Outside Gluster - test took 18 minutes:
> >>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# bonnie++ -d . -s
> >>> 600G -n 0 -m TEST -f -b -u root
> >>> Using uid:0, gid:0.
> >>> Writing intelligently...done
> >>> Rewriting...done
> >>> Reading intelligently...done
> >>> start 'em...done...done...done...done...done...
> >>> Version  1.98   --Sequential Output-- --Sequential Input-
> >>> --Random-
> >>> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--
> >>> --Seeks--
> >>> Name:Size etc/sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
> >>>  /sec %CP
> >>> TEST   600G567m  78  149m  30307m  37
> >>>  83.0  57
> >>> Latency 205ms4630ms  1450ms
> >>> 679ms
> >>>
> >>>
> >>>
> 1.98,1.98,TEST,1,1648288012,600G,,8192,5,,,580384,78,152597,30,,,314533,37,83.0,57,,,205ms,4630ms,,1450ms,679ms,,
> >>>
> >>>
> >>>
> =
> >>>
> >>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# gluster volume
> info
> >>> Volume Name: engine
> >>> Type: Replicate
> >>> Volume ID: 7ed15c5a-f054-450c-bac9-3ad1b4e5931b
> >>> Status: Started
> >>> Snapshot Count: 0
> >>> Number of Bricks: 1 x 3 = 3
> >>> Transport-type: tcp
> >>> Bricks:
> >>> Brick1: ovirt1-storage.dgi:/gluster_bricks/engine/engine
> >>> Brick2: ovirt2-storage.dgi:/gluster_bricks/engine/engine
> >>> Brick3: ovirt3-storage.dgi:/gluster_bricks/engine/engine
> >>> Options Reconfigured:
> >>> cluster.granular-entry-heal: enable
> >>> performance.strict-o-direct: on
> >>> network.ping-timeout: 30
> >>> storage.owner-gid: 36
> >>> storage.owner-uid: 36
> >>> server.event-threads: 4
> >>> client.event-threads: 4
> >>> cluster.choose-local: off
> >>> user.cifs: off
> >>> features.shard: on
> >>> cluster.shd-wait-qlength: 1
> >>> cluster.shd-max-threads: 8
> >>> cluster.locking-scheme: granular
> >>> cluster.data-self-heal-algorithm: full
> >>> cluster.server-quorum-type: server
> >>> cluster.quorum-type: auto
> >>> cluster.eager-lock: enable
> >>> network.remote-dio: off
> >>> performance.low-prio-threads: 32
> >>> performance.io-cache: off
> >>> 

[ovirt-users] Re: Gluster Performance issues

2022-02-23 Thread Derek Atkins
Hi,

Another question which I don't see answered:   What is the underlying
connectivity between the Gluster hosts?

-derek

On Wed, February 23, 2022 8:39 am, Alex Morrison wrote:
> Hello Sunil,
>
> [root@ovirt1 ~]# gluster --version
> glusterfs 8.6
>
> same on all hosts
>
> On Wed, Feb 23, 2022 at 5:24 AM Sunil Kumar Heggodu Gopala Acharya <
> shegg...@redhat.com> wrote:
>
>> Hi,
>>
>> Which version of gluster is in use?
>>
>> Regards,
>>
>> Sunil kumar Acharya
>>
>> Red Hat
>>
>> 
>>
>> T: +91-8067935170
>> 
>>
>> 
>> TRIED. TESTED. TRUSTED. 
>>
>>
>>
>> On Wed, Feb 23, 2022 at 2:17 PM Alex Morrison 
>> wrote:
>>
>>> Hello All,
>>>
>>> We have 3 servers with a raid 50 array each, we are having extreme
>>> performance issues with our gluster, writes on gluster seem to take at
>>> least 3 times longer than on the raid directly. Can this be improved?
>>> I've
>>> read through several other performance issues threads but have been
>>> unable
>>> to make any improvements
>>>
>>> "gluster volume info" and "gluster volume profile vmstore info" is
>>> below
>>>
>>>
>>> =
>>>
>>> -Inside Gluster - test took 35+ hours:
>>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# bonnie++ -d . -s
>>> 600G -n 0 -m TEST -f -b -u root
>>> Using uid:0, gid:0.
>>> Writing intelligently...done
>>> Rewriting...done
>>> Reading intelligently...done
>>> start 'em...done...done...done...done...done...
>>> Version  1.98   --Sequential Output-- --Sequential Input-
>>> --Random-
>>> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--
>>> --Seeks--
>>> Name:Size etc/sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>>>  /sec %CP
>>> TEST   600G   35.7m  17 5824k   7112m  13
>>> 182.7   6
>>> Latency5466ms   12754ms  3499ms
>>>  1589ms
>>>
>>>
>>> 1.98,1.98,TEST,1,1644359706,600G,,8192,5,,,36598,17,5824,7,,,114950,13,182.7,6,,,5466ms,12754ms,,3499ms,1589ms,,
>>>
>>>
>>> =
>>>
>>> -Outside Gluster - test took 18 minutes:
>>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# bonnie++ -d . -s
>>> 600G -n 0 -m TEST -f -b -u root
>>> Using uid:0, gid:0.
>>> Writing intelligently...done
>>> Rewriting...done
>>> Reading intelligently...done
>>> start 'em...done...done...done...done...done...
>>> Version  1.98   --Sequential Output-- --Sequential Input-
>>> --Random-
>>> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--
>>> --Seeks--
>>> Name:Size etc/sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>>>  /sec %CP
>>> TEST   600G567m  78  149m  30307m  37
>>>  83.0  57
>>> Latency 205ms4630ms  1450ms
>>> 679ms
>>>
>>>
>>> 1.98,1.98,TEST,1,1648288012,600G,,8192,5,,,580384,78,152597,30,,,314533,37,83.0,57,,,205ms,4630ms,,1450ms,679ms,,
>>>
>>>
>>> =
>>>
>>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# gluster volume info
>>> Volume Name: engine
>>> Type: Replicate
>>> Volume ID: 7ed15c5a-f054-450c-bac9-3ad1b4e5931b
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 1 x 3 = 3
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: ovirt1-storage.dgi:/gluster_bricks/engine/engine
>>> Brick2: ovirt2-storage.dgi:/gluster_bricks/engine/engine
>>> Brick3: ovirt3-storage.dgi:/gluster_bricks/engine/engine
>>> Options Reconfigured:
>>> cluster.granular-entry-heal: enable
>>> performance.strict-o-direct: on
>>> network.ping-timeout: 30
>>> storage.owner-gid: 36
>>> storage.owner-uid: 36
>>> server.event-threads: 4
>>> client.event-threads: 4
>>> cluster.choose-local: off
>>> user.cifs: off
>>> features.shard: on
>>> cluster.shd-wait-qlength: 1
>>> cluster.shd-max-threads: 8
>>> cluster.locking-scheme: granular
>>> cluster.data-self-heal-algorithm: full
>>> cluster.server-quorum-type: server
>>> cluster.quorum-type: auto
>>> cluster.eager-lock: enable
>>> network.remote-dio: off
>>> performance.low-prio-threads: 32
>>> performance.io-cache: off
>>> performance.read-ahead: off
>>> performance.quick-read: off
>>> transport.address-family: inet
>>> storage.fips-mode-rchecksum: on
>>> nfs.disable: on
>>> performance.client-io-threads: on
>>> diagnostics.latency-measurement: on
>>> diagnostics.count-fop-hits: on
>>>
>>> Volume Name: vmstore
>>> Type: Replicate
>>> Volume ID: 2670ff29-8d43-4610-a437-c6ec2c235753
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 1 x 3 = 3
>>> Transport-type: tcp
>>> 

[ovirt-users] Re: Gluster Performance issues

2022-02-23 Thread Alex Morrison
Hello Sunil,

[root@ovirt1 ~]# gluster --version
glusterfs 8.6

same on all hosts

On Wed, Feb 23, 2022 at 5:24 AM Sunil Kumar Heggodu Gopala Acharya <
shegg...@redhat.com> wrote:

> Hi,
>
> Which version of gluster is in use?
>
> Regards,
>
> Sunil kumar Acharya
>
> Red Hat
>
> 
>
> T: +91-8067935170 
>
> 
> TRIED. TESTED. TRUSTED. 
>
>
>
> On Wed, Feb 23, 2022 at 2:17 PM Alex Morrison 
> wrote:
>
>> Hello All,
>>
>> We have 3 servers with a raid 50 array each, we are having extreme
>> performance issues with our gluster, writes on gluster seem to take at
>> least 3 times longer than on the raid directly. Can this be improved? I've
>> read through several other performance issues threads but have been unable
>> to make any improvements
>>
>> "gluster volume info" and "gluster volume profile vmstore info" is below
>>
>>
>> =
>>
>> -Inside Gluster - test took 35+ hours:
>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# bonnie++ -d . -s
>> 600G -n 0 -m TEST -f -b -u root
>> Using uid:0, gid:0.
>> Writing intelligently...done
>> Rewriting...done
>> Reading intelligently...done
>> start 'em...done...done...done...done...done...
>> Version  1.98   --Sequential Output-- --Sequential Input-
>> --Random-
>> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--
>> --Seeks--
>> Name:Size etc/sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>>  /sec %CP
>> TEST   600G   35.7m  17 5824k   7112m  13
>> 182.7   6
>> Latency5466ms   12754ms  3499ms
>>  1589ms
>>
>>
>> 1.98,1.98,TEST,1,1644359706,600G,,8192,5,,,36598,17,5824,7,,,114950,13,182.7,6,,,5466ms,12754ms,,3499ms,1589ms,,
>>
>>
>> =
>>
>> -Outside Gluster - test took 18 minutes:
>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# bonnie++ -d . -s
>> 600G -n 0 -m TEST -f -b -u root
>> Using uid:0, gid:0.
>> Writing intelligently...done
>> Rewriting...done
>> Reading intelligently...done
>> start 'em...done...done...done...done...done...
>> Version  1.98   --Sequential Output-- --Sequential Input-
>> --Random-
>> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--
>> --Seeks--
>> Name:Size etc/sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>>  /sec %CP
>> TEST   600G567m  78  149m  30307m  37
>>  83.0  57
>> Latency 205ms4630ms  1450ms
>> 679ms
>>
>>
>> 1.98,1.98,TEST,1,1648288012,600G,,8192,5,,,580384,78,152597,30,,,314533,37,83.0,57,,,205ms,4630ms,,1450ms,679ms,,
>>
>>
>> =
>>
>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# gluster volume info
>> Volume Name: engine
>> Type: Replicate
>> Volume ID: 7ed15c5a-f054-450c-bac9-3ad1b4e5931b
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 3 = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: ovirt1-storage.dgi:/gluster_bricks/engine/engine
>> Brick2: ovirt2-storage.dgi:/gluster_bricks/engine/engine
>> Brick3: ovirt3-storage.dgi:/gluster_bricks/engine/engine
>> Options Reconfigured:
>> cluster.granular-entry-heal: enable
>> performance.strict-o-direct: on
>> network.ping-timeout: 30
>> storage.owner-gid: 36
>> storage.owner-uid: 36
>> server.event-threads: 4
>> client.event-threads: 4
>> cluster.choose-local: off
>> user.cifs: off
>> features.shard: on
>> cluster.shd-wait-qlength: 1
>> cluster.shd-max-threads: 8
>> cluster.locking-scheme: granular
>> cluster.data-self-heal-algorithm: full
>> cluster.server-quorum-type: server
>> cluster.quorum-type: auto
>> cluster.eager-lock: enable
>> network.remote-dio: off
>> performance.low-prio-threads: 32
>> performance.io-cache: off
>> performance.read-ahead: off
>> performance.quick-read: off
>> transport.address-family: inet
>> storage.fips-mode-rchecksum: on
>> nfs.disable: on
>> performance.client-io-threads: on
>> diagnostics.latency-measurement: on
>> diagnostics.count-fop-hits: on
>>
>> Volume Name: vmstore
>> Type: Replicate
>> Volume ID: 2670ff29-8d43-4610-a437-c6ec2c235753
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 3 = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: ovirt1-storage.dgi:/gluster_bricks/vmstore/vmstore
>> Brick2: ovirt2-storage.dgi:/gluster_bricks/vmstore/vmstore
>> Brick3: ovirt3-storage.dgi:/gluster_bricks/vmstore/vmstore
>> Options Reconfigured:
>> cluster.granular-entry-heal: enable
>> performance.strict-o-direct: on
>> network.ping-timeout: 20
>> storage.owner-gid: 36
>> 

[ovirt-users] Re: gluster and virtualization

2022-02-02 Thread Patrick Hibbs
You're getting multiple DMAR errors. That's related to your IOMMU
setup, which would be affected if you're turning VT on and off in the
BIOS. 

That's not really LVM so much as it is something trying to remap your
storage device's PCI link after the filesystem was mounted. (Whether by
LVM, systemd, mount cmd from the terminal, etc.)
Which will cause the underlying block device to become unresponsive.
Even worse, it can make the FS get stuck unmounting and prevent a
reboot from succeeding after all of the consoles have been killed.
Requiring someone to power cycle the machine manually if it cannot be
fenced via some power distribution unit. (Speaking from experience
here...)

As for the issue itself, there's a couple of things you can try:

Try booting the machine in question with "intel_iommu=on iommu=pt" on
the kernel command line. That will put the IOMMU in passthrough mode
which may help.
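
(On an EL8-based oVirt host that would typically be done with grubby; a reboot is
needed afterwards:)

grubby --update-kernel=ALL --args="intel_iommu=on iommu=pt"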

Try moving the physical drives to a different port on the motherboard.
Some boards have different IOMMU groups for different ports even if
they are of the same kind. Regardless if it's AHCI / M.2 / etc.
If you have a real PCI RAID expansion card or something similar, you
could try checking the PCI link id it's using and moving it to another
link that does work. (Plug it into another PCI slot so it gets a
different IOMMU group assignment.)
If you're willing to spend money, maybe try getting a PCI AHCI / RAID
expansion card if you don't have one. That would at least give you more
options if you cannot move the drives to a different port.

Long term, the best option would be to move those gluster bricks to
another host that isn't acting as a VM hypervisor. These kinds of bugs
can crop up with kernel updates, and as the kernel's IOMMU support is
still kinda iffy, production-wise it's better to avoid the issue
entirely.

-Patrick Hibbs

On Wed, 2022-02-02 at 12:51 +, Strahil Nikolov via Users wrote:
> Most probably when virtualization is enabled vdsm services can start
> and they create a lvm filter for your Gluster bricks.
> 
> Boot the system (most probably with virtualization disabled), move
> your entry from /etc/fstab to a dedicated '.mount' unit and boot with
> the virt enabled.
> 
> Once booted with the flag enabled -> check the situation (for example
> blacklist local disks in /etc/multipath/conf.d/blacklist.conf, check
> and adjust the LVM filter, etc).
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> > On Wed, Feb 2, 2022 at 11:52, eev...@digitaldatatechs.com
> >  wrote:
> > My setup is 3 ovirt nodes that run gluster independently of the
> > engine server, even though the engine still controls it. So 4
> > nodes, one engine and 3 clustered nodes.
> > This has been up and running with no issues except this:
> > But now my arbiter node will not load the gluster drive when
> > virtualization is enabled in the BIOS. I've been scratching my head
> > on this and need some direction.
> > I am attaching the error.
> > 
> > https://1drv.ms/u/s!AvgvEzKKSZHbhMRQmUHDvv_Xv7dkhw?e=QGdfYR
> > 
> > Keep in mind, this error does not occur if VT is turned off... it
> > boots normally. 
> > 
> > Thanks in advance.
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
> > oVirt Code of Conduct:
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/HXRDM6W3IRTSUK46FYZZR4JRR766B2AX/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/2EK2SJK3VTQZ4C626N4RVFT3XIXUA3WW/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PGSRI4KLNRV2L6Y4W4YW2JELVAYXICLL/


[ovirt-users] Re: gluster and virtualization

2022-02-02 Thread Strahil Nikolov via Users
Most probably when virtualization is enabled the vdsm services can start, and they 
create an LVM filter for your Gluster bricks.
Boot the system (most probably with virtualization disabled), move your entry 
from /etc/fstab to a dedicated '.mount' unit and boot with the virt enabled.
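
(A minimal sketch of such a unit, assuming an XFS brick on /dev/mapper/gluster_vg-data
mounted at /gluster_bricks/data; note that the unit file name must match the mount path:)

# /etc/systemd/system/gluster_bricks-data.mount
[Unit]
Description=Gluster brick for the data volume
Before=glusterd.service

[Mount]
What=/dev/mapper/gluster_vg-data
Where=/gluster_bricks/data
Type=xfs
Options=inode64,noatime

[Install]
WantedBy=multi-user.target

# then: systemctl daemon-reload && systemctl enable --now gluster_bricks-data.mount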
Once booted with the flag enabled -> check the situation (for example blacklist 
local disks in /etc/multipath/conf.d/blacklist.conf, check and adjust the LVM 
filter, etc).
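For illustration, a dedicated .mount unit and a multipath blacklist entry could
look roughly like this (the device path, brick mount point and WWID below are
made-up example values, adjust them to your own layout):

# /etc/systemd/system/gluster_bricks-data.mount  (unit name must match the path)
[Unit]
Description=Gluster brick for the data volume
Before=glusterd.service
[Mount]
What=/dev/mapper/gluster_vg-gluster_lv_data
Where=/gluster_bricks/data
Type=xfs
Options=inode64,noatime
[Install]
WantedBy=multi-user.target

# then, instead of the fstab entry:
systemctl daemon-reload
systemctl enable --now gluster_bricks-data.mount

# /etc/multipath/conf.d/blacklist.conf  (example WWID of the local disk)
blacklist {
    wwid "3600508b1001c5e8d0d4f0a2b3c4d5e6f"
}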
Best Regards,
Strahil Nikolov
 
On Wed, Feb 2, 2022 at 11:52, eev...@digitaldatatechs.com wrote:
My setup is 3 ovirt nodes that run gluster independently of the engine server, even though 
the engine still controls it. So 4 nodes: one engine and 3 clustered nodes.
This has been up and running with no issues except this:
But now my arbiter node will not load the gluster drive when virtualization is 
enabled in the BIOS. I've been scratching my head on this and need some 
direction.
I am attaching the error.

https://1drv.ms/u/s!AvgvEzKKSZHbhMRQmUHDvv_Xv7dkhw?e=QGdfYR

Keep in mind, this error does not occur if VT is turned off; it boots normally. 

Thanks in advance.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HXRDM6W3IRTSUK46FYZZR4JRR766B2AX/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2EK2SJK3VTQZ4C626N4RVFT3XIXUA3WW/


[ovirt-users] Re: Gluster Hook differences between fresh and old clusters

2022-01-11 Thread Strahil Nikolov via Users
Even with the symbolic link removed, it fails to detect the current hook status.
I hope I don't have to poke in the DB.
Best Regards,
Strahil Nikolov
 
 
  On Tue, Jan 11, 2022 at 12:25, Ritesh Chikatwar wrote:   
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BWKYDRHVQ4MDGOJTL23AXMJB2PBH7TNU/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YKU3GWAHS6CSB2AZQ25OOW4VUZJA2FUW/


[ovirt-users] Re: Gluster Hook differences between fresh and old clusters

2022-01-11 Thread Ritesh Chikatwar
This gluster hook will not be used by oVirt with gluster storage. I am not
sure how it got enabled; I think it shows as enabled because this hook
is pointing to a symbolic link.

On Mon, Jan 10, 2022 at 3:55 PM Strahil Nikolov 
wrote:

> Hi Ritesh ,
>
> I'm 90% confident it is a problem from the latest (4.3 to 4.4 ) or older
> migrations (4.2 to 4.3).
>
> [root@ovirt2 ~]# ll
> /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post
> lrwxrwxrwx. 1 root root 64 9 яну 23,55
> /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post ->
> /usr/libexec/glusterfs/glusterfind/S57glusterfind-delete-post.py
> [root@ovirt2 ~]# ll -Z
> /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post
> lrwxrwxrwx. 1 root root unconfined_u:object_r:glusterd_var_lib_t:s0 64 9
> яну 23,55 /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post
> -> /usr/libexec/glusterfs/glusterfind/S57glusterfind-delete-post.py
> [root@ovirt2 ~]# ls -lZ
> /usr/libexec/glusterfs/glusterfind/S57glusterfind-delete-post.py
> -rwxr-xr-x. 1 root root system_u:object_r:bin_t:s0 1883 12 окт 13,09
> /usr/libexec/glusterfs/glusterfind/S57glusterfind-delete-post.py
> [root@ovirt2 ~]# file
> /usr/libexec/glusterfs/glusterfind/S57glusterfind-delete-post.py
> /usr/libexec/glusterfs/glusterfind/S57glusterfind-delete-post.py: Python
> script, ASCII text executable
>
>
> I've tried with SELINUX in permissive mode, so it's something not related
> to SELINUX. Also the sync works on the new cluster.
>
> Any idea how to debug it and find what is the reason it doesn't like it ?
>
> Best Regards,
> Strahil Nikolov
> On Monday, 10 January 2022, 08:12:17 GMT+2, Ritesh Chikatwar <
> rchik...@redhat.com> wrote:
>
>
> Hello Strahil,
>
> I have a setup with version (4.4.9.3) but I don't see an issue, Maybe
> after migrating/upgrading. We are seeing this issue, can you share the
> content from this hook (delete-POST-57glusterfind-delete-post).
>
> On Mon, Jan 10, 2022 at 3:55 AM Strahil Nikolov via Users 
> wrote:
>
> Hi All,
>
> recently I have migrated from 4.3.10 to 4.4.9 and it seems something odd
> is happening.
>
> Symptoms:
> - A lot of warnings for Gluster hook discrepancies
> - Trying to refresh the hooks via the sync button fails (engine error:
> https://justpaste.it/827zo )
> - Existing "Default" cluster tracks more hooks than a fresh new cluster
> New cluster hooks: http://i.imgur.com/FEL2Z1D.png
> Migrated cluster: https://i.imgur.com/L8dWYZY.png
>
> What can I do to resolve the issue ? I've tried to resync the hooks, move
> away /var/lib/glusterd/hooks/1/ and reinstall gluster packages, try to
> resolve via the "Resolve Conflicts" in the UI and nothing helped so far.
>
>
> Best Regards,
> Strahil Nikolov
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RYSNQTAGXEAX2O677ELEAYRXDAUX52IQ/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BWKYDRHVQ4MDGOJTL23AXMJB2PBH7TNU/


[ovirt-users] Re: Gluster Hook differences between fresh and old clusters

2022-01-10 Thread Strahil Nikolov via Users
 Hi Ritesh ,

I'm 90% confident it is a problem from the latest (4.3 to 4.4 ) or older 
migrations (4.2 to 4.3).

[root@ovirt2 ~]# ll 
/var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post 
lrwxrwxrwx. 1 root root 64 9 яну 23,55 
/var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post -> 
/usr/libexec/glusterfs/glusterfind/S57glusterfind-delete-post.py
[root@ovirt2 ~]# ll -Z 
/var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post 
lrwxrwxrwx. 1 root root unconfined_u:object_r:glusterd_var_lib_t:s0 64 9 яну 
23,55 /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post -> 
/usr/libexec/glusterfs/glusterfind/S57glusterfind-delete-post.py
[root@ovirt2 ~]# ls -lZ 
/usr/libexec/glusterfs/glusterfind/S57glusterfind-delete-post.py
-rwxr-xr-x. 1 root root system_u:object_r:bin_t:s0 1883 12 окт 13,09 
/usr/libexec/glusterfs/glusterfind/S57glusterfind-delete-post.py
[root@ovirt2 ~]# file 
/usr/libexec/glusterfs/glusterfind/S57glusterfind-delete-post.py
/usr/libexec/glusterfs/glusterfind/S57glusterfind-delete-post.py: Python 
script, ASCII text executable


I've tried with SELINUX in permissive mode, so it's something not related to 
SELINUX. Also the sync works on the new cluster.

Any idea how to debug it and find what is the reason it doesn't like it ?

Best Regards,
Strahil Nikolov
On Monday, 10 January 2022, 08:12:17 GMT+2, Ritesh Chikatwar 
 wrote:  
 
 Hello Strahil,
I have a setup with version (4.4.9.3) but I don't see an issue, Maybe after 
migrating/upgrading. We are seeing this issue, can you share the content from 
this hook (delete-POST-57glusterfind-delete-post).
On Mon, Jan 10, 2022 at 3:55 AM Strahil Nikolov via Users  
wrote:

Hi All,

recently I have migrated from 4.3.10 to 4.4.9 and it seems something odd is 
happening.

Symptoms:
- A lot of warnings for Gluster hook discrepancies
- Trying to refresh the hooks via the sync button fails (engine error: 
https://justpaste.it/827zo )
- Existing "Default" cluster tracks more hooks than a fresh new cluster 
New cluster hooks: http://i.imgur.com/FEL2Z1D.png
Migrated cluster: https://i.imgur.com/L8dWYZY.png

What can I do to resolve the issue ? I've tried to resync the hooks, move away 
/var/lib/glusterd/hooks/1/ and reinstall gluster packages, try to resolve via 
the "Resolve Conflicts" in the UI and nothing helped so far.


Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RYSNQTAGXEAX2O677ELEAYRXDAUX52IQ/

  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WDH6EQFSPHVMZLHXBVIQ3DCCZZDXIL23/


[ovirt-users] Re: Gluster Hook differences between fresh and old clusters

2022-01-09 Thread Ritesh Chikatwar
Hello Strahil,

I have a setup with version (4.4.9.3) but I don't see an issue, Maybe after
migrating/upgrading. We are seeing this issue, can you share the content
from this hook (delete-POST-57glusterfind-delete-post).

On Mon, Jan 10, 2022 at 3:55 AM Strahil Nikolov via Users 
wrote:

> Hi All,
>
> recently I have migrated from 4.3.10 to 4.4.9 and it seems something odd
> is happening.
>
> Symptoms:
> - A lot of warnings for Gluster hook discrepancies
> - Trying to refresh the hooks via the sync button fails (engine error:
> https://justpaste.it/827zo )
> - Existing "Default" cluster tracks more hooks than a fresh new cluster
> New cluster hooks: http://i.imgur.com/FEL2Z1D.png
> Migrated cluster: https://i.imgur.com/L8dWYZY.png
>
> What can I do to resolve the issue ? I've tried to resync the hooks, move
> away /var/lib/glusterd/hooks/1/ and reinstall gluster packages, try to
> resolve via the "Resolve Conflicts" in the UI and nothing helped so far.
>
>
> Best Regards,
> Strahil Nikolov
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RYSNQTAGXEAX2O677ELEAYRXDAUX52IQ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4D4UERJWHR3LP5LHYTJYWZMHJTQQFUHN/


[ovirt-users] Re: Gluster Install Fail again :(

2021-10-30 Thread Strahil Nikolov via Users
OK, that's odd .
Can you check the following:
On all nodes:
grep storage[1-3].private /etc/hosts
for i in {1..3}; do host storage${i}.private.net; done
On the first node:
gluster peer probe storage1.private.net
gluster peer probe storage2.private.net
gluster peer probe storage3.private.net
gluster pool list
gluster peer status
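If the probes succeed but the peers still show up disconnected, it may also be
worth confirming that glusterd is running and that its management port (24007)
is reachable between the nodes before re-running the deployment, e.g.:

systemctl status glusterd
firewall-cmd --list-services    # the 'glusterfs' service should normally be allowed
for h in storage1.private.net storage2.private.net storage3.private.net; do
    nc -zv "$h" 24007
done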
Best Regards,
Strahil Nikolov
 
 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RL452O2NMF4ULZI2FIV2Y5LGRL2QYW2L/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IARG3H7ZIQDSOMJIY7VC3AAFKAOT3MCH/


[ovirt-users] Re: Gluster Install Fail again :(

2021-10-30 Thread admin
gluster pool list

UUID                                    Hostname        State

1d17652f-f567-4a6d-9953-e0908ef5e361    localhost       Connected

 

gluster pool list

UUID                                    Hostname        State

612be7ce-6673-433e-ac86-bcca93636d64    localhost       Connected

 

gluster pool list

UUID                                    Hostname        State

772faa4f-44d4-45a7-8524-a7963798757b    localhost       Connected

 

gluster peer status

Number of Peers: 0

 

cat cmd_history.log

[2021-10-29 17:33:22.934750]  : peer probe storage1.private.net : SUCCESS : 
Probe on localhost not needed

[2021-10-29 17:33:23.162993]  : peer probe storage2.private.net : SUCCESS

[2021-10-29 17:33:23.498094]  : peer probe storage3.private.net : SUCCESS

[2021-10-29 17:33:24.918421]  : volume create engine replica 3 transport tcp 
storage1.private.net:/gluster_bricks/engine/engine 
storage2.private.net:/gluster_bricks/engine/engine 
storage3.private.net:/gluster_bricks/engine/engine force : FAILED : Staging 
failed on storage3.private.net. Error: Host storage1.private.net not connected

Staging failed on storage2.private.net. Error: Host storage1.private.net not 
connected

[2021-10-29 17:33:28.226387]  : peer probe storage1.private.net : SUCCESS : 
Probe on localhost not needed

[2021-10-29 17:33:30.618435]  : volume create data replica 3 transport tcp 
storage1.private.net:/gluster_bricks/data/data 
storage2.private.net:/gluster_bricks/data/data 
storage3.private.net:/gluster_bricks/data/data force : FAILED : Staging failed 
on storage2.private.net. Error: Host storage1.private.net not connected

Staging failed on storage3.private.net. Error: Host storage1.private.net not 
connected

[2021-10-29 17:33:33.923032]  : peer probe storage1.private.net : SUCCESS : 
Probe on localhost not needed

[2021-10-29 17:33:38.656356]  : volume create vmstore replica 3 transport tcp 
storage1.private.net:/gluster_bricks/vmstore/vmstore 
storage2.private.net:/gluster_bricks/vmstore/vmstore 
storage3.private.net:/gluster_bricks/vmstore/vmstore force : FAILED : Staging 
failed on storage3.private.net. Error: Host storage1.private.net not connected

Staging failed on storage2.private.net. Error: Host storage1.private.net is not 
in 'Peer in Cluster' state

[2021-10-29 17:49:40.696944]  : peer detach storage2.private.net : SUCCESS

[2021-10-29 17:49:43.787922]  : peer detach storage3.private.net : SUCCESS

 

OK this is what I have so far, still looking for the complete ansible log.

 

Brad

 

From: Strahil Nikolov  
Sent: October 30, 2021 10:27 AM
To: ad...@foundryserver.com; users@ovirt.org
Subject: Re: [ovirt-users] Gluster Install Fail again :(

 

What is the output of :

gluster peer list (from all nodes)

 

Output from the ansible will be useful.

 

 

Best Regards,

Strahil Nikolov

I have been working on getting this up and running for about a week now and I 
am totally frustrated.  I am not sure even where to begin.  Here is the error I 
get when it fails,

 

TASK [gluster.features/roles/gluster_hci : Create the GlusterFS volumes] ***

 

An exception occurred during task execution. To see the full traceback, use 
-vvv. The error was: NoneType: None

failed: [storage1.private.net] (item={'volname': 'engine', 'brick': 
'/gluster_bricks/engine/engine', 'arbiter': 0}) => {"ansible_loop_var": "item", 
"changed": false, "item": {"arbiter": 0, "brick": 
"/gluster_bricks/engine/engine", "volname": "engine"}, "msg": "error running 
gluster (/usr/sbin/gluster --mode=script volume create engine replica 3 
transport tcp storage1.private.net:/gluster_bricks/engine/engine 
storage2.private.net:/gluster_bricks/engine/engine 
storage3.private.net:/gluster_bricks/engine/engine force) command (rc=1): 
volume create: engine: failed: Staging failed on storage3.private.net. Error: 
Host storage1.private.net not connected\nStaging failed on 
storage2.private.net. Error: Host storage1.private.net not connected\n"}

 

An exception occurred during task execution. To see the full traceback, use 
-vvv. The error was: NoneType: None

failed: [storage1.private.net] (item={'volname': 'data', 'brick': 
'/gluster_bricks/data/data', 'arbiter': 0}) => {"ansible_loop_var": "item", 
"changed": false, "item": {"arbiter": 0, "brick": "/gluster_bricks/data/data", 
"volname": "data"}, "msg": "error running gluster (/usr/sbin/gluster 
--mode=script volume create data replica 3 transport tcp 
storage1.private.net:/gluster_bricks/data/data 
storage2.private.net:/gluster_bricks/data/data 
storage3.private.net:/gluster_bricks/data/data force) command (rc=1): volume 
create: data: failed: Staging failed on storage2.private.net. Error: Host 
storage1.private.net not connected\nStaging failed on storage3.private.net. 
Error: Host storage1.private.net not connected\n"}

 

An exception occurred during task execution. To see the full traceback, use 
-vvv. The error was: NoneType: None

failed: [storage1.private.net] (item={'volname': 

[ovirt-users] Re: Gluster Install Fail again :(

2021-10-30 Thread Strahil Nikolov via Users
What is the output of:
gluster peer list (from all nodes)
Output from the ansible run will be useful.

Best Regards,
Strahil Nikolov
 
 
I have been working on getting this up and running for about a week now and 
I am totally frustrated.  I am not sure even where to begin.  Here is the error 
I get when it fails,

TASK [gluster.features/roles/gluster_hci : Create the GlusterFS volumes] ***

An exception occurred during task execution. To see the full traceback, use 
-vvv. The error was: NoneType: None
failed: [storage1.private.net] (item={'volname': 'engine', 'brick': 
'/gluster_bricks/engine/engine', 'arbiter': 0}) => {"ansible_loop_var": "item", 
"changed": false, "item": {"arbiter": 0, "brick": 
"/gluster_bricks/engine/engine", "volname": "engine"}, "msg": "error running 
gluster (/usr/sbin/gluster --mode=script volume create engine replica 3 
transport tcp storage1.private.net:/gluster_bricks/engine/engine 
storage2.private.net:/gluster_bricks/engine/engine 
storage3.private.net:/gluster_bricks/engine/engine force) command (rc=1): 
volume create: engine: failed: Staging failed on storage3.private.net. Error: 
Host storage1.private.net not connected\nStaging failed on 
storage2.private.net. Error: Host storage1.private.net not connected\n"}

An exception occurred during task execution. To see the full traceback, use 
-vvv. The error was: NoneType: None
failed: [storage1.private.net] (item={'volname': 'data', 'brick': 
'/gluster_bricks/data/data', 'arbiter': 0}) => {"ansible_loop_var": "item", 
"changed": false, "item": {"arbiter": 0, "brick": "/gluster_bricks/data/data", 
"volname": "data"}, "msg": "error running gluster (/usr/sbin/gluster 
--mode=script volume create data replica 3 transport tcp 
storage1.private.net:/gluster_bricks/data/data 
storage2.private.net:/gluster_bricks/data/data 
storage3.private.net:/gluster_bricks/data/data force) command (rc=1): volume 
create: data: failed: Staging failed on storage2.private.net. Error: Host 
storage1.private.net not connected\nStaging failed on storage3.private.net. 
Error: Host storage1.private.net not connected\n"}

An exception occurred during task execution. To see the full traceback, use 
-vvv. The error was: NoneType: None
failed: [storage1.private.net] (item={'volname': 'vmstore', 'brick': 
'/gluster_bricks/vmstore/vmstore', 'arbiter': 0}) => {"ansible_loop_var": 
"item", "changed": false, "item": {"arbiter": 0, "brick": 
"/gluster_bricks/vmstore/vmstore", "volname": "vmstore"}, "msg": "error running 
gluster (/usr/sbin/gluster --mode=script volume create vmstore replica 3 
transport tcp storage1.private.net:/gluster_bricks/vmstore/vmstore 
storage2.private.net:/gluster_bricks/vmstore/vmstore 
storage3.private.net:/gluster_bricks/vmstore/vmstore force) command (rc=1): 
volume create: vmstore: failed: Staging failed on storage3.private.net. Error: 
Host storage1.private.net not connected\nStaging failed on 
storage2.private.net. Error: Host storage1.private.net is not in 'Peer in 
Cluster' state\n"}

Here are the facts.

using 4.4.9 of ovirt.
using oVirt Node OS 
partition for gluster: /dev/vda4 > 4T in unformatted space.

able to ssh into each host on the private.net and known hosts and fqdn passes 
fine.

On the volume  page:
all default settings.

On the bricks page:
JBOD / Blacklist true / storage host  storage1.private.net / default lvm except 
the device is /dev/sda4 

I really need to get this set up. The first failure was the filter error, so I 
edited /etc/lvm/lvm.conf to comment out the filter line. Then, without 
doing a cleanup, I reran the deployment and got the above error.  
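For reference, the filter that vdsm writes into /etc/lvm/lvm.conf normally looks
something like the line below (the PV id is only an illustrative value); instead
of commenting it out, it can usually be regenerated once the gluster VGs exist:

# example of a vdsm-style LVM filter line
filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-EXAMPLEUUID$|", "r|.*|" ]

# regenerate the filter so the local gluster PVs are whitelisted
vdsm-tool config-lvm-filter -y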

Thanks in advance
Brad

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SAYZ3STV3ILDE42T6JUXLKVHSIX7LRI5/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ONOEUHHYL6YVIAREMQQLDWSUY3PR2RWO/


[ovirt-users] Re: gluster 5834 Unsynced entries present

2021-10-01 Thread Dominique Deschênes

Thank you very much, it took a few minutes but now I don't have any more 
Unsynced entries.


[root@ovnode2 glusterfs]# gluster volume heal datassd info | grep entries | 
sort | uniq -c
3 Number of entries: 0




Dominique D

- Message reçu -
De: Strahil Nikolov via Users (users@ovirt.org)
Date: 01/10/21 11:05
À: Dominique D (dominique.desche...@gcgenicom.com), users@ovirt.org
Objet: [ovirt-users] Re: gluster 5834 Unsynced entries present

Put ovnode2 in maintenance (put a tick for stopping gluster), wait till all VMs 
evacuate and the host is really in maintenance and activate it back.


Restarting glusterd also should do the trick, but it's always better to 
ensure no gluster processes have been left running (including the mount points).

Best Regards,
Strahil Nikolov


On Fri, Oct 1, 2021 at 17:06, Dominique D
 wrote:
yesterday I had a glitch and my second ovnode2 server restarted
here are some errors in the events :
VDSM ovnode3.telecom.lan command SpmStatusVDS failed: Connection timeout for 
host 'ovnode3.telecom.lan', last response arrived 2455 ms ago.
Host ovnode3.telecom.lan is not responding. It will stay in Connecting state 
for a grace period of 86 seconds and after that an attempt to fence the host 
will be issued.
Invalid status on Data Center Default. Setting Data Center status to Non 
Responsive (On host ovnode3.telecom.lan, Error: Network error during 
communication with the Host.).
Executing power management status on Host ovnode3.telecom.lan using Proxy Host 
ovnode1.telecom.lan and Fence Agent ipmilan:10.5.1.16.
Now my 3 bricks have errors from my gluster volume

[root@ovnode2 ~]# gluster volume status
Status of volume: datassd
Gluster process                            TCP Port  RDMA Port  Online  Pid
--
Brick ovnode1s.telecom.lan:/gluster_bricks/
datassd/datassd                            49152    0          Y      4027
Brick ovnode2s.telecom.lan:/gluster_bricks/
datassd/datassd                            49153    0          Y      2393
Brick ovnode3s.telecom.lan:/gluster_bricks/
datassd/datassd                            49152    0          Y      2347
Self-heal Daemon on localhost              N/A      N/A        Y      2405
Self-heal Daemon on ovnode3s.telecom.lan    N/A      N/A        Y      2366
Self-heal Daemon on 172.16.70.91            N/A      N/A        Y      4043
Task Status of Volume datassd
--
There are no active volume tasks

gluster volume heal datassd info | grep -i "Number of entries:" | grep -v 
"entries: 0"
Number of entries: 5759
in the webadmin all the bricks are green with comments for two :
ovnode1 Up, 5834 Unsynced entries present
ovnode2 Up,
ovnode3 Up, 5820 Unsynced entries present
I tried this without success
gluster volume heal datassd
Launching heal operation to perform index self heal on volume datassd has been 
unsuccessful:
Glusterd Syncop Mgmt brick op 'Heal' failed. Please check glustershd log file 
for details.
What are the next steps ?
Thank you
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QRI2K34O2X3NEEYLWTZJYG26EYH6CJQU/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/L4SWBS2VMHJC6JCWARQI5SHIQQJVJ6GQ/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OW7Z24SU3F3GFIWCD75SQJJS62ITIZAM/


[ovirt-users] Re: gluster 5834 Unsynced entries present

2021-10-01 Thread Staniforth, Paul
Hi Dominique,

what's the output of


gluster volume heal datassd info summary



Regards,
 Paul S.





From: Dominique D 
Sent: 01 October 2021 15:05
To: users@ovirt.org 
Subject: [ovirt-users] gluster 5834 Unsynced entries present

Caution External Mail: Do not click any links or open any attachments unless 
you trust the sender and know that the content is safe.

yesterday I had a glitch and my second ovnode2 server restarted

here are some errors in the events :

VDSM ovnode3.telecom.lan command SpmStatusVDS failed: Connection timeout for 
host 'ovnode3.telecom.lan', last response arrived 2455 ms ago.
Host ovnode3.telecom.lan is not responding. It will stay in Connecting state 
for a grace period of 86 seconds and after that an attempt to fence the host 
will be issued.
Invalid status on Data Center Default. Setting Data Center status to Non 
Responsive (On host ovnode3.telecom.lan, Error: Network error during 
communication with the Host.).
Executing power management status on Host ovnode3.telecom.lan using Proxy Host 
ovnode1.telecom.lan and Fence Agent ipmilan:10.5.1.16.

Now my 3 bricks have errors from my gluster volume


[root@ovnode2 ~]# gluster volume status
Status of volume: datassd
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick ovnode1s.telecom.lan:/gluster_bricks/
datassd/datassd 49152 0  Y   4027
Brick ovnode2s.telecom.lan:/gluster_bricks/
datassd/datassd 49153 0  Y   2393
Brick ovnode3s.telecom.lan:/gluster_bricks/
datassd/datassd 49152 0  Y   2347
Self-heal Daemon on localhost   N/A   N/AY   2405
Self-heal Daemon on ovnode3s.telecom.lanN/A   N/AY   2366
Self-heal Daemon on 172.16.70.91N/A   N/AY   4043

Task Status of Volume datassd
--
There are no active volume tasks


gluster volume heal datassd info | grep -i "Number of entries:" | grep -v 
"entries: 0"
Number of entries: 5759

in the webadmin all the bricks are green with comments for two :

ovnode1 Up, 5834 Unsynced entries present
ovnode2 Up,
ovnode3 Up, 5820 Unsynced entries present

I tried this without success

gluster volume heal datassd
Launching heal operation to perform index self heal on volume datassd has been 
unsuccessful:
Glusterd Syncop Mgmt brick op 'Heal' failed. Please check glustershd log file 
for details.

What are the next steps ?

Thank you
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QRI2K34O2X3NEEYLWTZJYG26EYH6CJQU/
To view the terms under which this email is distributed, please go to:-
https://leedsbeckett.ac.uk/disclaimer/email
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BSSS4YHEPN2BTCK7U6XKIT777JUGLQGF/


[ovirt-users] Re: gluster 5834 Unsynced entries present

2021-10-01 Thread Strahil Nikolov via Users
Put ovnode2 in maintenance (put a tick for stopping gluster), wait till all VMs 
evacuate and the host is really in maintenance and activate it back.
Restarting glusterd also should do the trick, but it's always better to 
ensure no gluster processes have been left running (including the mount points).
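If the maintenance/activate cycle is not practical, the manual equivalent is
roughly the following (a sketch only; run it on the affected node with no VMs
depending on its bricks, and double-check before killing anything):

systemctl stop glusterd
pgrep -af gluster            # look for leftover brick / shd / fuse mount processes
pkill glusterfs; pkill glusterfsd
systemctl start glusterd
gluster volume heal datassd info summary    # watch the pending entries drain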
Best Regards,
Strahil Nikolov
 
 
On Fri, Oct 1, 2021 at 17:06, Dominique D wrote:
yesterday I had a glitch and my second ovnode2 server restarted 

here are some errors in the events :

VDSM ovnode3.telecom.lan command SpmStatusVDS failed: Connection timeout for 
host 'ovnode3.telecom.lan', last response arrived 2455 ms ago.
Host ovnode3.telecom.lan is not responding. It will stay in Connecting state 
for a grace period of 86 seconds and after that an attempt to fence the host 
will be issued.
Invalid status on Data Center Default. Setting Data Center status to Non 
Responsive (On host ovnode3.telecom.lan, Error: Network error during 
communication with the Host.).
Executing power management status on Host ovnode3.telecom.lan using Proxy Host 
ovnode1.telecom.lan and Fence Agent ipmilan:10.5.1.16.

Now my 3 bricks have errors from my gluster volume 


[root@ovnode2 ~]# gluster volume status
Status of volume: datassd
Gluster process                            TCP Port  RDMA Port  Online  Pid
--
Brick ovnode1s.telecom.lan:/gluster_bricks/
datassd/datassd                            49152    0          Y      4027
Brick ovnode2s.telecom.lan:/gluster_bricks/
datassd/datassd                            49153    0          Y      2393
Brick ovnode3s.telecom.lan:/gluster_bricks/
datassd/datassd                            49152    0          Y      2347
Self-heal Daemon on localhost              N/A      N/A        Y      2405
Self-heal Daemon on ovnode3s.telecom.lan    N/A      N/A        Y      2366
Self-heal Daemon on 172.16.70.91            N/A      N/A        Y      4043

Task Status of Volume datassd
--
There are no active volume tasks


gluster volume heal datassd info | grep -i "Number of entries:" | grep -v 
"entries: 0"
Number of entries: 5759

in the webadmin all the bricks are green with comments for two : 

ovnode1 Up, 5834 Unsynced entries present
ovnode2 Up,
ovnode3 Up, 5820 Unsynced entries present

I tried this without success 

gluster volume heal datassd 
Launching heal operation to perform index self heal on volume datassd has been 
unsuccessful:
Glusterd Syncop Mgmt brick op 'Heal' failed. Please check glustershd log file 
for details.

What are the next steps ? 

Thank you
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QRI2K34O2X3NEEYLWTZJYG26EYH6CJQU/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/L4SWBS2VMHJC6JCWARQI5SHIQQJVJ6GQ/


[ovirt-users] Re: Gluster bricks error

2021-07-24 Thread Strahil Nikolov via Users
Until you invest some time and identify which part has failed, you will never 
resolve the issue. I'm pretty convinced that another part of the puzzle has 
failed and that it is your "gluster" problem.
Next time check whether the bricks are properly mounted, the gluster brick logs for errors, 
etc.

Best Regards,
Strahil Nikolov
 
 
On Fri, Jul 23, 2021 at 9:03, Patrick Lomakin wrote:
I am interested in why the brick does not come up after a host reboot. I 
have not focused on fault tolerance yet. After a reboot I have to turn the volume 
off and on, and then everything starts working. In my opinion this should happen 
automatically, but it doesn't. This also happens if you use a volume with 
replication.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6VNORJJWYME5TMOE2TTO4GQXAFMKTIV7/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3HJBMYUUQL6UQORDS2M4AHRDGF2GTOE7/


[ovirt-users] Re: Gluster bricks error

2021-07-23 Thread Patrick Lomakin
I am interested in why the brick does not come up after a host reboot. I have not 
focused on fault tolerance yet. After a reboot I have to turn the volume off and on, 
and then everything starts working. In my opinion this should happen automatically, 
but it doesn't. This also happens if you use a volume with replication.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6VNORJJWYME5TMOE2TTO4GQXAFMKTIV7/


[ovirt-users] Re: Gluster bricks error

2021-07-22 Thread Strahil Nikolov via Users
And how do you recover from such situation ? What was the root cause ?

Best Regards,
Strahil Nikolov




On Thursday, 22 July 2021, 08:01:07 GMT+3, Patrick Lomakin 
 wrote: 





I deployed Ovirt on a single node to test the operation of glusterfs (Single 
node HCI). After deploying, I got a single volume consisting of one brick in 
"distributed" mode

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MAIJ5BBZBFLYQF47PZUGGZCJIJBCPBOD/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6FXDN5RRX3MIRDG7BE3OVHLAMGA4RAHJ/


[ovirt-users] Re: Gluster bricks error

2021-07-22 Thread Strahil Nikolov via Users
Did your host crash?
When the host crashes the VG might not get activated, or the VDO service might 
still be recovering when LVM is trying to start your LVs.

It rarely happens that the brick doesn't start but the Filesystem is mounted 
and Glusterd has been started.


Best Regards,
Strahil Nikolov






On Wednesday, 21 July 2021, 10:31:59 GMT+3, Patrick Lomakin 
 wrote: 





For a long time I have been seeing the same error, which cannot be 
corrected. After restarting a host which has a volume of one or more bricks, 
the volume starts with the status "Online", but the bricks remain "Offline". 
This leads to having to manually restart the volume, restart the ovirt-ha and ovirt-broker 
services, and run the hosted-engine --connect-storage command. Only after 
that can I start the hosted engine back to normal. I tried this on different 
server hardware and different operating systems for the host, but the result is 
the same. This is a very serious flaw that nullifies the high availability in 
HCI using GlusterFS.
Regards!
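For reference, the manual recovery sequence described above boils down to
something like this (the volume name is a placeholder; the service names are the
ones mentioned in the thread):

gluster volume start <volname> force     # respawn any brick processes that stayed offline
systemctl restart ovirt-ha-broker ovirt-ha-agent
hosted-engine --connect-storage
hosted-engine --vm-start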
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/O6LA5WAZPQDVDVKBQM4EYA7MHBEZDGZ6/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6RGGDBSDVXAWFUPQMRDJZLOZ7VOUWJUK/


[ovirt-users] Re: Gluster bricks error

2021-07-21 Thread Patrick Lomakin
I deployed Ovirt on a single node to test the operation of glusterfs (Single 
node HCI). After deploying, I got a single volume consisting of one brick in 
"distributed" mode
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MAIJ5BBZBFLYQF47PZUGGZCJIJBCPBOD/


[ovirt-users] Re: Gluster bricks error

2021-07-21 Thread tbural
Hello. What replica count are you using on that gluster volume?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/H42PBPXV24BD6KS3S2ILN34FTV6WD5SY/


[ovirt-users] Re: Gluster deploy error!

2021-07-10 Thread Strahil Nikolov via Users
The engine volume is not supposed to be that big. I think 100GB is enough.
Don't forget to always separate engine from vmstore volumes.
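The error itself says the VG is one extent short of the requested 1970G, so a
quick way to see what actually fits (and what a smaller engine LV would look
like) is, for example:

vgs gluster_vg_sda4 -o vg_size,vg_free,vg_extent_count,vg_free_count
# a 100G engine LV, as suggested above, would roughly correspond to:
lvcreate -L 100G -n gluster_lv_engine gluster_vg_sda4   # normally done by the deployment wizard, shown only for scale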

Best Regards,
Strahil Nikolov
 
 
On Fri, Jul 9, 2021 at 23:08, Patrick Lomakin wrote:
Hello! I have tried to deploy a single node with gluster, but if I select 
"Compression and deduplication" I get an error:

TASK [gluster.infra/roles/backend_setup : Create thick logical volume] *
failed: [host01] (item={'vgname': 'gluster_vg_sda4', 'lvname': 
'gluster_lv_engine', 'size': '1970G'}) => {"ansible_index_var": "index", 
"ansible_loop_var": "item", "changed": false, "err": "  Volume group 
\"gluster_vg_sda4\" has insufficient free space (504319 extents): 504320 
required.\n", "index": 0, "item": {"lvname": "gluster_lv_engine", "size": 
"1970G", "vgname": "gluster_vg_sda4"}, "msg": "Creating logical volume 
'gluster_lv_engine' failed", "rc": 5}
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4WV4AYMIY53LFO7WRBM6LA3TVMTZW25C/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/G3LX354ZPBPDLYBSGPWWDGC3E7QB3FK3/


[ovirt-users] Re: Gluster volumes not healing (perhaps after host maintenance?)

2021-05-18 Thread Marco Fais
Hi David,

just spotted this post from a couple of weeks ago -- I have the same
problem (Gluster volume not healing) since the upgrade from 7.x to 8.4.
Same exact errors on glustershd.log -- and same errors if I try to heal
manually.

Typically I can get the volume healed by killing the specific brick
processes manually and forcing a volume start (to restart the failed
bricks).
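In practice that workaround looks roughly like this (the volume name and brick
PID below are placeholders):

gluster volume status <volname>          # note the Pid column of the stuck brick
kill <brick-pid>
gluster volume start <volname> force     # respawns the killed brick process
gluster volume heal <volname>            # retrigger the heal afterwards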

Just wondering if you've got any progress on your side?

I have also tried to upgrade to 9.1 in one of the clusters (I have three
different ones affected) but didn't solve the issue.

Regards.
Marco

On Mon, 26 Apr 2021 at 21:55, David White via Users  wrote:

> I did have my /etc/hosts setup on all 3 of the oVirt Hosts in the format
> you described, with the exception of the trailing "host1" and "host2". I
> only had the FQDN in there.
>
> I had an outage of almost an hour this morning that may or may not be
> related to this. An "ETL Service" started, at which point a lot of things
> broke down, and I saw a lot of storage-related errors. Everything came back
> on its own, though.
>
> See my other thread that I just started on that topic.
> As of now, there are NOT indications that any of the volumes or disks are
> out of sync.
>
>
> Sent with ProtonMail  Secure Email.
>
> ‐‐‐ Original Message ‐‐‐
> On Sunday, April 25, 2021 1:43 AM, Strahil Nikolov via Users <
> users@ovirt.org> wrote:
>
> A/AAAA & PTR records are pretty important.
> As long as you set up your /etc/hosts in a format like this you will be
> OK:
>
> 10.10.10.10 host1.anysubdomain.domain host1
> 10.10.10.11 host2.anysubdomain.domain host2
>
> Usually the hostname is defined for each peer in the
> /var/lib/glusterd/peers. Can you check the contents on all nodes ?
>
> Best Regards,
> Strahil Nikolov
>
> On Sat, Apr 24, 2021 at 21:57, David White via Users
>  wrote:
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/CYPYALTFM7ITZZENSI6R5E6ZNT7TRY5Y/
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NU6PXEUVVSCHVUIYTJRFOO72ZCJBWGVG/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4YLOE6ZX4W4XXEY72Q5ZJIZDKMNPEDO2/


[ovirt-users] Re: Gluster Geo-Replication Fails

2021-05-18 Thread Strahil Nikolov via Users
Now, to make it perfect, leave it running and analyze the AVCs with semanage.
In the end SELinux will remain enabled and geo-rep should be running as well.

I've previously tried the rpm generated from 
https://github.com/gluster/glusterfs-selinux but it didn't help at that time. 
If possible, give it a try.
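A sketch of that AVC analysis on the geo-rep nodes could look like this (the
package names are the usual EL8 ones and the policy module name is arbitrary):

dnf install setroubleshoot-server policycoreutils-python-utils   # sealert, semanage, audit2allow
sealert -a /var/log/audit/audit.log                              # readable explanation of each AVC
ausearch -m avc -ts recent | audit2allow -M georep-local         # build a local policy module from the denials
semodule -i georep-local.pp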
Best Regards,
Strahil Nikolov
 
  On Tue, May 18, 2021 at 16:09, Simon Scott wrote:   
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GB23MBKEQKZNKLHBUN2EC5VFLXVZKDCV/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Y2DUGZNZMMHZ6AM3VH2DY4VZDCC7WF3T/


[ovirt-users] Re: Gluster Geo-Replication Fails

2021-05-18 Thread Simon Scott
Perfect, worked a treat - thanks Strahil 


From: Strahil Nikolov 
Sent: Tuesday 18 May 2021 04:10
To: Simon Scott ; users@ovirt.org 
Subject: Re: [ovirt-users] Re: Gluster Geo-Replication Fails

If you are running on EL8 -> It's the SELINUX.
To verify that,  stop the session and use 'setenforce 0' on both source and 
destination.

To make it work with SELINUX , you will need to use 'sealert -a' extensively 
(yum whatprovides '*/sealert').

Best Regards,
Strahil Nikolov

Typo - That's TWO sites...

___
Users mailing list -- users@ovirt.org<mailto:users@ovirt.org>
To unsubscribe send an email to 
users-le...@ovirt.org<mailto:users-le...@ovirt.org>
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZHPGXOENFSY6XILYSSXAX6CAQ6WFJVQ7/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GB23MBKEQKZNKLHBUN2EC5VFLXVZKDCV/


[ovirt-users] Re: Gluster Geo-Replication Fails

2021-05-17 Thread Strahil Nikolov via Users
If you are running on EL8 -> it's SELinux. To verify that, stop the session 
and use 'setenforce 0' on both source and destination.
To make it work with SELinux, you will need to use 'sealert -a' extensively 
(yum whatprovides '*/sealert').
Best Regards,
Strahil Nikolov
 
 
Typo - That's TWO sites...
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZHPGXOENFSY6XILYSSXAX6CAQ6WFJVQ7/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UXHQFKYCXWOTFM3Q2PKPCZD4CG3ZJITT/


[ovirt-users] Re: Gluster Geo-Replication Fails

2021-05-17 Thread simon
Typo - That's TWO sites...
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZHPGXOENFSY6XILYSSXAX6CAQ6WFJVQ7/


[ovirt-users] Re: Gluster volumes not healing (perhaps after host maintenance?)

2021-04-26 Thread David White via Users
I did have my /etc/hosts setup on all 3 of the oVirt Hosts in the format you 
described, with the exception of the trailing "host1" and "host2". I only had 
the FQDN in there.

I had an outage of almost an hour this morning that may or may not be related 
to this. An "ETL Service" started, at which point a lot of things broke down, 
and I saw a lot of storage-related errors. Everything came back on its own, 
though.

See my other thread that I just started on that topic.
As of now, there are NOT indications that any of the volumes or disks are out 
of sync.

Sent with ProtonMail Secure Email.

‐‐‐ Original Message ‐‐‐
On Sunday, April 25, 2021 1:43 AM, Strahil Nikolov via Users  
wrote:

> A/AAAA & PTR records are pretty important.
> As long as you set up your /etc/hosts in a format like this you will be OK:
> 

> 10.10.10.10 host1.anysubdomain.domain host1
> 10.10.10.11 host2.anysubdomain.domain host2
> 

> Usually the hostname is defined for each peer in the /var/lib/glusterd/peers. 
> Can you check the contents on all nodes ?
> 

> Best Regards,
> Strahil Nikolov
> 

> > On Sat, Apr 24, 2021 at 21:57, David White via Users
> >  wrote:
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
> > oVirt Code of Conduct: 
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives: 
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/CYPYALTFM7ITZZENSI6R5E6ZNT7TRY5Y/

publickey - dmwhite823@protonmail.com - 0x320CD582.asc
Description: application/pgp-keys


signature.asc
Description: OpenPGP digital signature
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NU6PXEUVVSCHVUIYTJRFOO72ZCJBWGVG/


[ovirt-users] Re: Gluster volumes not healing (perhaps after host maintenance?)

2021-04-24 Thread Strahil Nikolov via Users
A/AAAA & PTR records are pretty important. As long as you set up your /etc/hosts 
in a format like this you will be OK:
10.10.10.10 host1.anysubdomain.domain host1
10.10.10.11 host2.anysubdomain.domain host2
Usually the hostname is defined for each peer in the /var/lib/glusterd/peers. 
Can you check the contents on all nodes ?
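For illustration, each file under /var/lib/glusterd/peers/ describes one peer
and should carry the same hostname on every node; the values below are only an
example of the usual layout:

cat /var/lib/glusterd/peers/*
# uuid=9b1a2c3d-4e5f-6789-abcd-0123456789ab
# state=3
# hostname1=host2.anysubdomain.domain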
Best Regards,
Strahil Nikolov
 
  On Sat, Apr 24, 2021 at 21:57, David White via Users wrote:  
 ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CYPYALTFM7ITZZENSI6R5E6ZNT7TRY5Y/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WPTIX725OH43KE5FIK2I2G3H2FCMWEDH/


[ovirt-users] Re: Gluster volumes not healing (perhaps after host maintenance?)

2021-04-24 Thread David White via Users
As part of my troubleshooting earlier this morning, I gracefully shut down the 
ovirt-engine so that it would come up on a different host (can't remember if I 
mentioned that or not).

I just verified forward DNS on all 3 of the hosts.
All 3 resolve each other just fine, and are able to ping each other. The 
hostnames look good, too.

I'm fairly certain that this problem didn't exist prior to me shutting the host 
down and replacing the network card.

That said, I don't think I ever setup rdns / ptr records to begin with. I don't 
recall reading that rdns was a requirement, nor do I remember setting it up 
when I built the cluster a couple weeks ago. Is this a requirement?

I did setup forward dns entries into /etc/hosts on each server, though.

Sent with ProtonMail Secure Email.

‐‐‐ Original Message ‐‐‐
On Saturday, April 24, 2021 11:03 AM, Strahil Nikolov  
wrote:

> Hi David,
> 

> let's start with the DNS.
> Check that both nodes resolve each other (both A/AAAA & PTR records).
> 

> If you set entries in /etc/hosts, check them out.
> 

> Also , check the output of 'hostname -s' & 'hostname -f' on both hosts.
> 

> Best Regards,
> Strahil Nikolov

publickey - dmwhite823@protonmail.com - 0x320CD582.asc
Description: application/pgp-keys


signature.asc
Description: OpenPGP digital signature
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CYPYALTFM7ITZZENSI6R5E6ZNT7TRY5Y/


[ovirt-users] Re: Gluster volumes not healing (perhaps after host maintenance?)

2021-04-24 Thread Strahil Nikolov via Users
Hi David,

let's start with the DNS. Check that both nodes resolve each other (both A/AAAA 
& PTR records).
If you set entries in /etc/hosts, check them out.
Also, check the output of 'hostname -s' & 'hostname -f' on both hosts.

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6S4SBF42ABLYDWJNBZEBBGNB3FLSL53W/


[ovirt-users] Re: Gluster volume engine stuck in healing with 1 unsynched entry & HostedEngine paused

2021-03-11 Thread Strahil Nikolov via Users
Just move it away (to be on the safe side) and trigger a full heal.

Best Regards,
Strahil Nikolov






On Wednesday, 10 March 2021, 13:01:21 GMT+2, Maria Souvalioti 
 wrote: 






Should I delete the file and restart glusterd on the ov-no1 server?




Thank you very much




On 3/10/21 10:21 AM, Strahil Nikolov via Users wrote:


>  
It seems to me that ov-no1 didn't update the file properly. 



What was the output of the gluster volume heal command ?




Best Regards,

Strahil Nikolov


>  
>  
> 
>  The output of the getfattr command on the nodes was the following:
> 
> Node1:
> [root@ov-no1 ~]# getfattr -d -m . -e hex 
> /gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
> getfattr: Removing leading '/' from absolute path names
> # file: 
> gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
> security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
> trusted.afr.dirty=0x0394
> trusted.afr.engine-client-2=0x
> trusted.gfid=0x3fafabf3d0cd4b9a8dd743145451f7cf
> trusted.gfid2path.06f4f1065c7ed193=0x36313936323032302d386431342d343261372d613565332d3233346365656635343035632f61343835353566342d626532332d343436372d386135342d343030616537626166396437
> trusted.glusterfs.mdata=0x015fec62872f5849585fec62872f5849585d791c1a00ba286e
> trusted.glusterfs.shard.block-size=0x0400
> trusted.glusterfs.shard.file-size=0x00190092040b
> 
> 
> Node2:
> [root@ov-no2 ~]#  getfattr -d -m . -e hex 
> /gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
> getfattr: Removing leading '/' from absolute path names
> # file: 
> gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
> security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
> trusted.afr.dirty=0x
> trusted.afr.engine-client-0=0x043a
> trusted.afr.engine-client-2=0x
> trusted.gfid=0x3fafabf3d0cd4b9a8dd743145451f7cf
> trusted.gfid2path.06f4f1065c7ed193=0x36313936323032302d386431342d343261372d613565332d3233346365656635343035632f61343835353566342d626532332d343436372d386135342d343030616537626166396437
> trusted.glusterfs.mdata=0x015fec62872f5849585fec62872f5849585d791c1a00ba286e
> trusted.glusterfs.shard.block-size=0x0400
> trusted.glusterfs.shard.file-size=0x00190092040b
> 
> 
> Node3:
> [root@ov-no3 ~]#  getfattr -d -m . -e hex 
> /gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
> getfattr: Removing leading '/' from absolute path names
> # file: 
> gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
> security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
> trusted.afr.dirty=0x
> trusted.afr.engine-client-0=0x0444
> trusted.gfid=0x3fafabf3d0cd4b9a8dd743145451f7cf
> trusted.gfid2path.06f4f1065c7ed193=0x36313936323032302d386431342d343261372d613565332d3233346365656635343035632f61343835353566342d626532332d343436372d386135342d343030616537626166396437
> trusted.glusterfs.mdata=0x015fec62872f5849585fec62872f5849585d791c1a00ba286e
> trusted.glusterfs.shard.block-size=0x0400
> trusted.glusterfs.shard.file-size=0x00190092040b
>  
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PUVBESAIZEJ7URDMDQ7LDUPNS6YDBVAS/
>  
> 
> 
> 
> 




___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/R3ODLVEODDFWP3IVLPFNQXNLBCPPSZTR/

[ovirt-users] Re: Gluster volume engine stuck in healing with 1 unsynched entry & HostedEngine paused

2021-03-11 Thread Strahil Nikolov via Users
It seems that the affected file can be moved away on ov-no1.ariadne-t.local, as 
the other 2 bricks "blame" the entry on ov-no1.ariadne-t.local.
After that, you will need to run "gluster volume heal engine full" to 
trigger the heal.
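
A sketch of what that could look like on ov-no1, using the brick and file paths
quoted elsewhere in this thread (not Strahil's literal commands; on a sharded
volume the matching gfid hard link under the brick's .glusterfs directory may
also need attention, per the gluster split-brain docs):

# On ov-no1.ariadne-t.local: move the blamed copy aside on the brick,
# then trigger a full heal and watch the pending count converge.
BRICK=/gluster_bricks/engine/engine
F=80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
mv "$BRICK/$F" /root/a48555f4-be23-4467-8a54-400ae7baf9d7.bak

gluster volume heal engine full
gluster volume heal engine info summary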

Best Regards,
Strahil Nikolov 






On Wednesday, 10 March 2021, 12:58:10 GMT+2, Maria Souvalioti wrote:






The gluster volume heal engine command didn't output anything in the CLI.




The gluster volume heal engine info gives:





# gluster volume heal engine info
Brick ov-no1.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0

Brick ov-no2.ariadne-t.local:/gluster_bricks/engine/engine
/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
 
Status: Connected
Number of entries: 1

Brick ov-no3.ariadne-t.local:/gluster_bricks/engine/engine
/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
 
Status: Connected
Number of entries: 1   





And gluster volume heal engine info summary gives:  
   


    
# gluster volume heal engine info summary
Brick ov-no1.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Total Number of entries: 1
Number of entries in heal pending: 1
Number of entries in split-brain: 0
Number of entries possibly healing: 0

Brick ov-no2.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Total Number of entries: 1
Number of entries in heal pending: 1
Number of entries in split-brain: 0
Number of entries possibly healing: 0

Brick ov-no3.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Total Number of entries: 1
Number of entries in heal pending: 1
Number of entries in split-brain: 0
Number of entries possibly healing: 0





Also I found the following warning message in the logs that has been repeating 
itself since the problem started:

[2021-03-10 10:08:11.646824] W [MSGID: 114061] 
[client-common.c:2644:client_pre_fsync_v2] 0-engine-client-0:  
(3fafabf3-d0cd-4b9a-8dd7-43145451f7cf) remote_fd is -1. EBADFD [File descriptor 
in bad state]




And from what I see in the logs, the healing process seems to be still trying 
to fix the volume. 





[2021-03-10 10:47:34.820229] I [MSGID: 108026] 
[afr-self-heal-common.c:1741:afr_log_selfheal] 0-engine-replicate-0: Completed 
data selfheal on 3fafabf3-d0cd-4b9a-8dd7-43145451f7cf. sources=1 [2]  sinks=0 
The message "I [MSGID: 108026] [afr-self-heal-common.c:1741:afr_log_selfheal] 
0-engine-replicate-0: Completed data selfheal on 
3fafabf3-d0cd-4b9a-8dd7-43145451f7cf. sources=1 [2]  sinks=0 " repeated 8 times 
between [2021-03-10 10:47:34.820229] and [2021-03-10 10:48:00.088805]









On 3/10/21 10:21 AM, Strahil Nikolov via Users wrote:


>  
It seems to me that ov-no1 didn't update the file properly. 



What was the output of the gluster volume heal command ?




Best Regards,

Strahil Nikolov


>  
>  
> 
>  The output of the getfattr command on the nodes was the following:
> 
> Node1:
> [root@ov-no1 ~]# getfattr -d -m . -e hex 
> /gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
> getfattr: Removing leading '/' from absolute path names
> # file: 
> gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
> security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
> trusted.afr.dirty=0x0394
> trusted.afr.engine-client-2=0x
> trusted.gfid=0x3fafabf3d0cd4b9a8dd743145451f7cf
> trusted.gfid2path.06f4f1065c7ed193=0x36313936323032302d386431342d343261372d613565332d3233346365656635343035632f61343835353566342d626532332d343436372d386135342d343030616537626166396437
> trusted.glusterfs.mdata=0x015fec62872f5849585fec62872f5849585d791c1a00ba286e
> trusted.glusterfs.shard.block-size=0x0400
> trusted.glusterfs.shard.file-size=0x00190092040b
> 
> 
> Node2:
> [root@ov-no2 ~]#  getfattr -d -m . -e hex 
> /gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
> getfattr: Removing leading '/' from absolute path names
> # file: 
> gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
> 

[ovirt-users] Re: Gluster volume engine stuck in healing with 1 unsynched entry & HostedEngine paused

2021-03-10 Thread Maria Souvalioti
Should I delete the file and restart glusterd on the ov-no1 server?


Thank you very much


On 3/10/21 10:21 AM, Strahil Nikolov via Users wrote:
> It seems to me that ov-no1 didn't update the file properly.
>
> What was the output of the gluster volume heal command ?
>
> Best Regards,
> Strahil Nikolov
>
> The output of the getfattr command on the nodes was the following:
>
> Node1:
> [root@ov-no1  ~]# getfattr -d -m . -e hex
> 
> /gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
> getfattr: Removing leading '/' from absolute path names
> # file:
> 
> gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
> 
> security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
> trusted.afr.dirty=0x0394
> trusted.afr.engine-client-2=0x
> trusted.gfid=0x3fafabf3d0cd4b9a8dd743145451f7cf
> 
> trusted.gfid2path.06f4f1065c7ed193=0x36313936323032302d386431342d343261372d613565332d3233346365656635343035632f61343835353566342d626532332d343436372d386135342d343030616537626166396437
> 
> trusted.glusterfs.mdata=0x015fec62872f5849585fec62872f5849585d791c1a00ba286e
> trusted.glusterfs.shard.block-size=0x0400
> 
> trusted.glusterfs.shard.file-size=0x00190092040b
>
>
> Node2:
> [root@ov-no2  ~]#  getfattr -d -m . -e hex
> 
> /gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
> getfattr: Removing leading '/' from absolute path names
> # file:
> 
> gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
> 
> security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
> trusted.afr.dirty=0x
> trusted.afr.engine-client-0=0x043a
> trusted.afr.engine-client-2=0x
> trusted.gfid=0x3fafabf3d0cd4b9a8dd743145451f7cf
> 
> trusted.gfid2path.06f4f1065c7ed193=0x36313936323032302d386431342d343261372d613565332d3233346365656635343035632f61343835353566342d626532332d343436372d386135342d343030616537626166396437
> 
> trusted.glusterfs.mdata=0x015fec62872f5849585fec62872f5849585d791c1a00ba286e
> trusted.glusterfs.shard.block-size=0x0400
> 
> trusted.glusterfs.shard.file-size=0x00190092040b
>
>
> Node3:
> [root@ov-no3  ~]#  getfattr -d -m . -e hex
> 
> /gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
> getfattr: Removing leading '/' from absolute path names
> # file:
> 
> gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
> 
> security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
> trusted.afr.dirty=0x
> trusted.afr.engine-client-0=0x0444
> trusted.gfid=0x3fafabf3d0cd4b9a8dd743145451f7cf
> 
> trusted.gfid2path.06f4f1065c7ed193=0x36313936323032302d386431342d343261372d613565332d3233346365656635343035632f61343835353566342d626532332d343436372d386135342d343030616537626166396437
> 
> trusted.glusterfs.mdata=0x015fec62872f5849585fec62872f5849585d791c1a00ba286e
> trusted.glusterfs.shard.block-size=0x0400
> 
> trusted.glusterfs.shard.file-size=0x00190092040b
>
>
> ___
> Users mailing list -- users@ovirt.org 
> To unsubscribe send an email to users-le...@ovirt.org
> 
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PUVBESAIZEJ7URDMDQ7LDUPNS6YDBVAS/
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> 

[ovirt-users] Re: Gluster volume engine stuck in healing with 1 unsynched entry & HostedEngine paused

2021-03-10 Thread Maria Souvalioti
The gluster volume heal engine command didn't output anything in the CLI.


The gluster volume heal engine info gives:


# gluster volume heal engine info
Brick ov-no1.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0

Brick ov-no2.ariadne-t.local:/gluster_bricks/engine/engine
/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7

Status: Connected
Number of entries: 1

Brick ov-no3.ariadne-t.local:/gluster_bricks/engine/engine
/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7

Status: Connected
Number of entries: 1  


And gluster volume heal engine info summary gives:

   



   

# gluster volume heal engine info summary
Brick ov-no1.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Total Number of entries: 1
Number of entries in heal pending: 1
Number of entries in split-brain: 0
Number of entries possibly healing: 0

Brick ov-no2.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Total Number of entries: 1
Number of entries in heal pending: 1
Number of entries in split-brain: 0
Number of entries possibly healing: 0

Brick ov-no3.ariadne-t.local:/gluster_bricks/engine/engine
Status: Connected
Total Number of entries: 1
Number of entries in heal pending: 1
Number of entries in split-brain: 0
Number of entries possibly healing: 0


Also I found the following warning message in the logs that has been
repeating itself since the problem started:

[2021-03-10 10:08:11.646824] W [MSGID: 114061]
[client-common.c:2644:client_pre_fsync_v2] 0-engine-client-0: 
(3fafabf3-d0cd-4b9a-8dd7-43145451f7cf) remote_fd is -1. EBADFD [File
descriptor in bad state]


And from what I see in the logs, the healing process seems to be still
trying to fix the volume.


[2021-03-10 10:47:34.820229] I [MSGID: 108026]
[afr-self-heal-common.c:1741:afr_log_selfheal] 0-engine-replicate-0:
Completed data selfheal on 3fafabf3-d0cd-4b9a-8dd7-43145451f7cf.
sources=1 [2]  sinks=0
The message "I [MSGID: 108026]
[afr-self-heal-common.c:1741:afr_log_selfheal] 0-engine-replicate-0:
Completed data selfheal on 3fafabf3-d0cd-4b9a-8dd7-43145451f7cf.
sources=1 [2]  sinks=0 " repeated 8 times between [2021-03-10
10:47:34.820229] and [2021-03-10 10:48:00.088805]



On 3/10/21 10:21 AM, Strahil Nikolov via Users wrote:
> It seems to me that ov-no1 didn't update the file properly.
>
> What was the output of the gluster volume heal command ?
>
> Best Regards,
> Strahil Nikolov
>
> The output of the getfattr command on the nodes was the following:
>
> Node1:
> [root@ov-no1  ~]# getfattr -d -m . -e hex
> 
> /gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
> getfattr: Removing leading '/' from absolute path names
> # file:
> 
> gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
> 
> security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
> trusted.afr.dirty=0x0394
> trusted.afr.engine-client-2=0x
> trusted.gfid=0x3fafabf3d0cd4b9a8dd743145451f7cf
> 
> trusted.gfid2path.06f4f1065c7ed193=0x36313936323032302d386431342d343261372d613565332d3233346365656635343035632f61343835353566342d626532332d343436372d386135342d343030616537626166396437
> 
> trusted.glusterfs.mdata=0x015fec62872f5849585fec62872f5849585d791c1a00ba286e
> trusted.glusterfs.shard.block-size=0x0400
> 
> trusted.glusterfs.shard.file-size=0x00190092040b
>
>
> Node2:
> [root@ov-no2  ~]#  getfattr -d -m . -e hex
> 
> /gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
> getfattr: Removing leading '/' from absolute path names
> # file:
> 
> gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
> 
> security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
> trusted.afr.dirty=0x
> trusted.afr.engine-client-0=0x043a
> trusted.afr.engine-client-2=0x
> trusted.gfid=0x3fafabf3d0cd4b9a8dd743145451f7cf
> 
> 

[ovirt-users] Re: Gluster volume engine stuck in healing with 1 unsynched entry & HostedEngine paused

2021-03-10 Thread Strahil Nikolov via Users
It seems to me that ov-no1 didn't update the file properly.
What was the output of the gluster volume heal command ?
Best Regards,
Strahil Nikolov
 
 
The output of the getfattr command on the nodes was the following:

Node1:
[root@ov-no1 ~]# getfattr -d -m . -e hex 
/gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
getfattr: Removing leading '/' from absolute path names
# file: 
gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.afr.dirty=0x0394
trusted.afr.engine-client-2=0x
trusted.gfid=0x3fafabf3d0cd4b9a8dd743145451f7cf
trusted.gfid2path.06f4f1065c7ed193=0x36313936323032302d386431342d343261372d613565332d3233346365656635343035632f61343835353566342d626532332d343436372d386135342d343030616537626166396437
trusted.glusterfs.mdata=0x015fec62872f5849585fec62872f5849585d791c1a00ba286e
trusted.glusterfs.shard.block-size=0x0400
trusted.glusterfs.shard.file-size=0x00190092040b


Node2:
[root@ov-no2 ~]#  getfattr -d -m . -e hex 
/gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
getfattr: Removing leading '/' from absolute path names
# file: 
gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.afr.dirty=0x
trusted.afr.engine-client-0=0x043a
trusted.afr.engine-client-2=0x
trusted.gfid=0x3fafabf3d0cd4b9a8dd743145451f7cf
trusted.gfid2path.06f4f1065c7ed193=0x36313936323032302d386431342d343261372d613565332d3233346365656635343035632f61343835353566342d626532332d343436372d386135342d343030616537626166396437
trusted.glusterfs.mdata=0x015fec62872f5849585fec62872f5849585d791c1a00ba286e
trusted.glusterfs.shard.block-size=0x0400
trusted.glusterfs.shard.file-size=0x00190092040b


Node3:
[root@ov-no3 ~]#  getfattr -d -m . -e hex 
/gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
getfattr: Removing leading '/' from absolute path names
# file: 
gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.afr.dirty=0x
trusted.afr.engine-client-0=0x0444
trusted.gfid=0x3fafabf3d0cd4b9a8dd743145451f7cf
trusted.gfid2path.06f4f1065c7ed193=0x36313936323032302d386431342d343261372d613565332d3233346365656635343035632f61343835353566342d626532332d343436372d386135342d343030616537626166396437
trusted.glusterfs.mdata=0x015fec62872f5849585fec62872f5849585d791c1a00ba286e
trusted.glusterfs.shard.block-size=0x0400
trusted.glusterfs.shard.file-size=0x00190092040b
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PUVBESAIZEJ7URDMDQ7LDUPNS6YDBVAS/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/R3ODLVEODDFWP3IVLPFNQXNLBCPPSZTR/


[ovirt-users] Re: Gluster volume engine stuck in healing with 1 unsynched entry & HostedEngine paused

2021-03-09 Thread souvaliotimaria
The output of the getfattr command on the nodes was the following:

Node1:
[root@ov-no1 ~]# getfattr -d -m . -e hex 
/gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
getfattr: Removing leading '/' from absolute path names
# file: 
gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.afr.dirty=0x0394
trusted.afr.engine-client-2=0x
trusted.gfid=0x3fafabf3d0cd4b9a8dd743145451f7cf
trusted.gfid2path.06f4f1065c7ed193=0x36313936323032302d386431342d343261372d613565332d3233346365656635343035632f61343835353566342d626532332d343436372d386135342d343030616537626166396437
trusted.glusterfs.mdata=0x015fec62872f5849585fec62872f5849585d791c1a00ba286e
trusted.glusterfs.shard.block-size=0x0400
trusted.glusterfs.shard.file-size=0x00190092040b


Node2:
[root@ov-no2 ~]#  getfattr -d -m . -e hex 
/gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
getfattr: Removing leading '/' from absolute path names
# file: 
gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.afr.dirty=0x
trusted.afr.engine-client-0=0x043a
trusted.afr.engine-client-2=0x
trusted.gfid=0x3fafabf3d0cd4b9a8dd743145451f7cf
trusted.gfid2path.06f4f1065c7ed193=0x36313936323032302d386431342d343261372d613565332d3233346365656635343035632f61343835353566342d626532332d343436372d386135342d343030616537626166396437
trusted.glusterfs.mdata=0x015fec62872f5849585fec62872f5849585d791c1a00ba286e
trusted.glusterfs.shard.block-size=0x0400
trusted.glusterfs.shard.file-size=0x00190092040b


Node3:
[root@ov-no3 ~]#  getfattr -d -m . -e hex 
/gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
getfattr: Removing leading '/' from absolute path names
# file: 
gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.afr.dirty=0x
trusted.afr.engine-client-0=0x0444
trusted.gfid=0x3fafabf3d0cd4b9a8dd743145451f7cf
trusted.gfid2path.06f4f1065c7ed193=0x36313936323032302d386431342d343261372d613565332d3233346365656635343035632f61343835353566342d626532332d343436372d386135342d343030616537626166396437
trusted.glusterfs.mdata=0x015fec62872f5849585fec62872f5849585d791c1a00ba286e
trusted.glusterfs.shard.block-size=0x0400
trusted.glusterfs.shard.file-size=0x00190092040b
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PUVBESAIZEJ7URDMDQ7LDUPNS6YDBVAS/


[ovirt-users] Re: Gluster volume engine stuck in healing with 1 unsynched entry & HostedEngine paused

2021-03-09 Thread Maria Souvalioti

Sorry, I had run the getfattr command incorrectly.

I ran it again as

getfattr -d -m . -e hex
/gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7

on each node, and I got different results for the following attributes:


- trusted.afr.dirty
  It is 0x0394 on node 1, and 0x on the other two.

- trusted.afr.engine-client-0
  It is 0x043a on node 2 and 3, but node 1 doesn't have it at all.

- trusted.afr.engine-client-2
  It is 0x on node 1 and 0x0444 on node 2.
  Node 3 doesn't have this entry at all.
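
One way to line those attributes up side by side is a small loop over the
bricks (a sketch, assuming passwordless ssh between the hosts; host and file
names are the ones used earlier in this thread):

F=/gluster_bricks/engine/engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
for h in ov-no1.ariadne-t.local ov-no2.ariadne-t.local ov-no3.ariadne-t.local; do
    echo "== $h =="
    ssh "$h" "getfattr -d -m trusted.afr -e hex '$F'"
done
# A non-zero trusted.afr.engine-client-N on a brick records pending changes
# for brick N (client-0 = first brick, client-1 = second, ...), so the brick
# that the other two "blame" is the one holding the stale copy.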


Hope this helps.

Thanks for your help



On 3/9/2021 9:11 PM, Strahil Nikolov via Users wrote:

The output of the command seems quite weird: 'getfattr -d -m . -e hex file'
Is it the same on all nodes?

Best Regards,
Strahil Nikolov

On Tue, Mar 9, 2021 at 15:36, Maria Souvalioti
 wrote:
___
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org

Privacy Statement: https://www.ovirt.org/privacy-policy.html

oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/

List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/OHK2ZRG5OESS3OGFSBQTZ66B5HF5X6G3/




___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MBG4A2DTXL5HW3REBHITRHKONVK6XZLW/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2E4W2D5LYGXZH4YBLRUY6CSKYLVFELJG/


[ovirt-users] Re: Gluster volume engine stuck in healing with 1 unsynched entry & HostedEngine paused

2021-03-09 Thread Strahil Nikolov via Users
The output of the command seems quite weird: 'getfattr -d -m . -e hex file'
Is it the same on all nodes?
Best Regards,
Strahil Nikolov
 
 
  On Tue, Mar 9, 2021 at 15:36, Maria Souvalioti 
wrote:   ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OHK2ZRG5OESS3OGFSBQTZ66B5HF5X6G3/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MBG4A2DTXL5HW3REBHITRHKONVK6XZLW/


[ovirt-users] Re: Gluster volume engine stuck in healing with 1 unsynched entry & HostedEngine paused

2021-03-09 Thread Maria Souvalioti

The command getfattr -n replica.split-brain-status gives the
following:

[root@ov-no1 ~]# getfattr -n replica.split-brain-status
/rhev/data-center/mnt/glusterSD/ov-no1.ariadne-t.local\:_engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
getfattr: Removing leading '/' from absolute path names
# file:
rhev/data-center/mnt/glusterSD/ov-no1.ariadne-t.local:_engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
replica.split-brain-status="The file is not under data or metadata
split-brain"

And the getfattr -d -m . -e hex command gives:

[root@ov-no1 ~]# getfattr -d -m . -e hex
/rhev/data-center/mnt/glusterSD/ov-no1.ariadne-t.local\:_engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
getfattr: Removing leading '/' from absolute path names
# file:
rhev/data-center/mnt/glusterSD/ov-no1.ariadne-t.local:_engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
security.selinux=0x73797374656d5f753a6f626a6563745f723a6675736566735f743a733000

Also, from what I can tell, in the GUI the brick seems to still be in
the healing process (since I ran the dd command yesterday), as the
counters in the self-heal info field change over time.

Thank you for your help


On 3/9/2021 7:33 AM, Strahil Nikolov via Users wrote:

Also check the status of the file on each brick with the getfattr
command (
see https://docs.gluster.org/en/latest/Troubleshooting/resolving-splitbrain/
) and provide the output.

Best Regards,
Strahil Nikolov

Thank you for your reply.
I'm trying that right now and I see it triggered the self-healing
process.
I will come back with an update.
Best regards.

___
Users mailing list -- users@ovirt.org 
To unsubscribe send an email to users-le...@ovirt.org

Privacy Statement: https://www.ovirt.org/privacy-policy.html

oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/

List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/WKW4RAVHVOZN6CZVK2TOC7727DHLKWRZ/




___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OHK2ZRG5OESS3OGFSBQTZ66B5HF5X6G3/


[ovirt-users] Re: Gluster volume engine stuck in healing with 1 unsynched entry & HostedEngine paused

2021-03-08 Thread Strahil Nikolov via Users
Also check the status of the file on each brick with the getfattr command ( see 
https://docs.gluster.org/en/latest/Troubleshooting/resolving-splitbrain/ ) and 
provide the output.
Best Regards,
Strahil Nikolov
 
 
Thank you for your reply.
I'm trying that right now and I see it triggered the self-healing process. 
I will come back with an update.
Best regards.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WKW4RAVHVOZN6CZVK2TOC7727DHLKWRZ/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BENORJHFCW3XOX5ZP6ZJFQDXE2NPZGAI/


[ovirt-users] Re: Gluster volume engine stuck in healing with 1 unsynched entry & HostedEngine paused

2021-03-08 Thread souvaliotimaria
Thank you for your reply.
I'm trying that right now and I see it triggered the self-healing process. 
I will come back with an update.
Best regards.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WKW4RAVHVOZN6CZVK2TOC7727DHLKWRZ/


[ovirt-users] Re: Gluster volume engine stuck in healing with 1 unsynched entry & HostedEngine paused

2021-03-08 Thread souvaliotimaria
Thank you. 
I have tried that and it didn't work, as the system sees that the file is not in 
split-brain.
I have also tried a force heal and a full heal and still nothing. I always end up 
with the entry stuck in an unsynched state.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/W5AJ4PKEK36NZEIAPTX3UQD6P7EZM7EL/


[ovirt-users] Re: Gluster version upgrades

2021-03-07 Thread Strahil Nikolov via Users
I was always running gluster ahead of oVirt's version. Just ensure that there 
are no pending heals, and always check the release notes before upgrading 
gluster.
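
A quick pre-upgrade check along those lines (a sketch, not an official
procedure):

# Confirm there are no pending heals on any volume before upgrading gluster.
for vol in $(gluster volume list); do
    echo "== $vol =="
    gluster volume heal "$vol" info summary | grep -E 'Brick|pending|split-brain'
done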
Best Regards,
Strahil Nikolov
 
 
On Sun, Mar 7, 2021 at 9:15, Sketch wrote:
Is the gluster version on an oVirt host tied to the oVirt version, or 
would it be safe to upgrade to newer versions of gluster?

I have noticed gluster is often updated to new major versions in oVirt 
point-release upgrades.  We have some compute+storage hosts on 4.3.6 which 
can't be upgraded easily at the moment, but we are having some gluster 
issues that appear to be due to bugs, and I wonder if upgrading might 
help.  Would an in-place upgrade of gluster be a bad idea without also 
updating oVirt?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XR47MTPOQ6XTPT7TOH6LGEWYCH2YKRS2/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5KI3L4ZFS4A35VURRHCTE7TBWT2YTHSS/


[ovirt-users] Re: Gluster volume engine stuck in healing with 1 unsynched entry & HostedEngine paused

2021-03-05 Thread Strahil Nikolov via Users
If it's a VM image, just use dd to read the whole file:
dd if=VM_image of=/dev/null bs=10M status=progress
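
Applied to the image discussed in this thread, that would look roughly like
the following (a sketch; read through the FUSE mount, as quoted elsewhere in
this thread, so the read passes through the replica and can trigger a heal):

dd if=/rhev/data-center/mnt/glusterSD/ov-no1.ariadne-t.local:_engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7 \
   of=/dev/null bs=10M status=progress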
Best Regards,
Strahil Nikolov
 
 
  On Fri, Mar 5, 2021 at 15:48, Alex K wrote:   
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RJO7EVEW2C3P7EYTAIXZVIC7JBSEXM3C/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7Y6SXCZDIYQH3MST72CX5FCGZW5QQKMR/


[ovirt-users] Re: Gluster volume engine stuck in healing with 1 unsynched entry & HostedEngine paused

2021-03-05 Thread Alex K
On Thu, Mar 4, 2021 at 8:59 PM  wrote:

> Hello again,
> I've tried to heal the brick with latest-mtime, but I get the following:
>
> gluster volume heal engine split-brain latest-mtime
> /80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
> Healing
> /80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
> failed: File not in split-brain.
> Volume heal failed.
>
You can try running ls in the directory where the file with the pending heal
resides. This might trigger the healing process for that file.
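
For example (a sketch, using the FUSE mount path quoted earlier in this
document for the engine volume):

# Stat the pending entry through the FUSE mount; the lookup can kick off
# self-heal for that file.
cd /rhev/data-center/mnt/glusterSD/ov-no1.ariadne-t.local:_engine/80f6e393-9718-4738-a14a-64cf43c3d8c2/images/d5de54b6-9f8e-4fba-819b-ebf6780757d2
ls -l
stat a48555f4-be23-4467-8a54-400ae7baf9d7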


> Should I try the solution described in this question, where we manually
> remove the conflicting entry, triggering the heal operations?
>
> https://lists.ovirt.org/archives/list/users@ovirt.org/thread/RPYIMSQCBYVQ654HYGBN5NCPRVCGRRYB/#H6EBSPL5XRLBUVZBE7DGSY25YFPIR2KY
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/CCRNM7N3FSUYXDHFP2XDMGAMKSHBMJQQ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RJO7EVEW2C3P7EYTAIXZVIC7JBSEXM3C/


  1   2   3   4   5   >