[ovirt-users] Re: IO Storage Error / All findings / Need help.

2020-02-24 Thread Hesham Ahmed
In my case I am continuing with oVirt Node 4.3.8-based Gluster 6.7 for the
time being. I have resolved the issue by manually copying all disk images
to a new Gluster volume, which took days, especially since disks on Gluster
still don't support sparse file copy. But the threat of a temporary network
failure bringing down the complete oVirt setup is a bit too much risk.
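For anyone taking the same route, the copy itself can be forced sparse even when the source mount cannot report holes. A minimal sketch with a throwaway file — the real image paths are placeholders, and on an actual setup the copies must end up owned by vdsm:kvm (uid/gid 36):

```shell
# Throwaway demo in /tmp; real images would live under a path like
# /rhev/data-center/mnt/glusterSD/<host>:_<volume>/... on an actual setup.
truncate -s 100M /tmp/demo_src.img        # 100M apparent size, ~0 allocated
cp --sparse=always /tmp/demo_src.img /tmp/demo_dst.img

# --sparse=always re-punches holes on the destination, so the allocated
# size stays near zero even if the source could not be read as sparse:
du -m /tmp/demo_dst.img

# On a real volume, restore oVirt's expected ownership afterwards:
# chown 36:36 <image>
rm -f /tmp/demo_src.img /tmp/demo_dst.img
```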

On Mon, Feb 24, 2020 at 9:29 PM Christian Reiss 
wrote:

> Hey,
>
> I do not have the faulty cluster anymore; it's a production environment
> with HA requirements, so I really can't take it down for days or, even
> worse, weeks.
>
> I am now running CentOS 7 (manual install) with a manual Gluster 7.0
> installation and current oVirt. So far so good.
>
> Time will tell :)
>
> On 24/02/2020 18:11, Strahil Nikolov wrote:
> > On February 24, 2020 5:10:40 PM GMT+02:00, Hesham Ahmed <
> hsah...@gmail.com> wrote:
> >> My issue is with Gluster 6.7 (the default with oVirt 4.3.7) as is the
> >> case
> >> with Christian. I still have the failing volume and disks and can share
> >> any
> >> information required.
>
>
> --
> with kind regards,
> mit freundlichen Gruessen,
>
> Christian Reiss
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7XZTECBK4ZHSAVFKK36CO4D7CROPRHVB/


[ovirt-users] Re: IO Storage Error / All findings / Need help.

2020-02-24 Thread Hesham Ahmed
My issue is with Gluster 6.7 (the default with oVirt 4.3.7) as is the case
with Christian. I still have the failing volume and disks and can share any
information required.

On Mon, Feb 24, 2020 at 6:21 PM Strahil Nikolov 
wrote:

> On February 24, 2020 1:55:34 PM GMT+02:00, Hesham Ahmed 
> wrote:
> >Were you ever able to find a fix for this? I am facing the same problem,
> >and the case is similar to yours. We have a 6-node distributed-replicated
> >Gluster; due to a network issue all servers got disconnected, and upon
> >recovery one of the volumes started giving the same IO error. The files
> >can be read as root but give an IO error when read as vdsm. Everything
> >else is as in your case, including the oVirt versions. While doing a full
> >dd if=IMAGE of=/dev/null allows the disk to be mounted on one server
> >temporarily, upon reboot/restart it returns to failing with an IO error.
> >I had to create a completely new Gluster volume and copy the disks from
> >the failing volume as root to resolve this.
> >
> >Did you create a bug report in Bugzilla for this?
> >
> >Regards,
> >
> >Hesham Ahmed
> >
> >On Wed, Feb 5, 2020 at 1:01 AM Christian Reiss
> >
> >wrote:
> >
> >> Thanks for replying,
> >>
> >> What I just wrote Strahil was:
> >>
> >>
> >> ACL is correctly set:
> >>
> >> # file: 5aab365f-b1b9-49d0-b011-566bf936a100
> >> # owner: vdsm
> >> # group: kvm
> >> user::rw-
> >> group::rw-
> >> other::---
> >>
> >> Doing a setfacl failed due to "Operation not supported"; remounting with
> >> acl failed, too:
> >>
> >> [root@node01 ~]# mount -o remount,acl
> >>
> >/rhev/data-center/mnt/glusterSD/node01.dc-dus.dalason.net\:_ssd__storage/
> >> /bin/sh: glusterfs: command not found
> >>
> >> As I am running oVirt Node I am not sure how feasible downgrading or
> >> upgrading is. I think I am stuck with what I have.
> >>
> >> Also, if this were a permission issue, I would not be able to access
> >> the file at all. It seems I can access some of it, and all of it once
> >> root has loaded the whole file first.
> >>
> >>
> >> Even though it was already correctly set, I also ran the chown from
> >> the mountpoint again, to no avail.
> >>
> >>
> >> On 04/02/2020 21:53, Christian Reiss wrote:
> >> >
> >> > ACL is correctly set:
> >> >
> >> > # file: 5aab365f-b1b9-49d0-b011-566bf936a100
> >> > # owner: vdsm
> >> > # group: kvm
> >> > user::rw-
> >> > group::rw-
> >> > other::---
> >> >
> >> > Doing a setfacl failed due to "Operation not supported"; remounting
> >> > with acl failed, too:
> >> >
> >> > [root@node01 ~]# mount -o remount,acl
> >> > /rhev/data-center/mnt/glusterSD/node01.dc-dus.dalason.net
> >> \:_ssd__storage/
> >> > /bin/sh: glusterfs: command not found
> >> >
> >> > As I am running oVirt Node I am not sure how feasible downgrading
> >> > or upgrading is. I think I am stuck with what I have.
> >> >
> >> > Also, if this were a permission issue, I would not be able to access
> >> > the file at all. It seems I can access some of it, and all of it once
> >> > root has loaded the whole file first.
> >>
> >> --
> >> with kind regards,
> >> mit freundlichen Gruessen,
> >>
> >> Christian Reiss
> >> ___
> >> Users mailing list -- users@ovirt.org
> >> To unsubscribe send an email to users-le...@ovirt.org
> >> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> >> oVirt Code of Conduct:
> >> https://www.ovirt.org/community/about/community-guidelines/
> >> List Archives:
> >>
> >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/G2MOTXZU6AFAGV2GL6P5XBXHMCRFUM6F/
> >>
>
> If you mean the ACL issue -> check
> https://bugzilla.redhat.com/show_bug.cgi?id=1797099
> Ravi will be happy to have a setup that is already affected, so he can
> debug the issue.
> In my case , I have reverted to v7.0
>
> Best Regards,
> Strahil Nikolov
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IP6TYZMHO5ZE2FSGZMC76H6MWYJKIQRP/


[ovirt-users] Re: IO Storage Error / All findings / Need help.

2020-02-24 Thread Hesham Ahmed
Were you ever able to find a fix for this? I am facing the same problem,
and the case is similar to yours. We have a 6-node distributed-replicated
Gluster; due to a network issue all servers got disconnected, and upon
recovery one of the volumes started giving the same IO error. The files can
be read as root but give an IO error when read as vdsm. Everything else is
as in your case, including the oVirt versions. While doing a full dd
if=IMAGE of=/dev/null allows the disk to be mounted on one server
temporarily, upon reboot/restart it returns to failing with an IO error. I
had to create a completely new Gluster volume and copy the disks from the
failing volume as root to resolve this.
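The read-through workaround above, sketched with a placeholder image path (the vdsm check simply assumes the standard vdsm user on the host):

```shell
# Placeholder path -- substitute the failing disk image on your volume.
IMAGE=/rhev/data-center/mnt/glusterSD/host:_volume/path/to/disk

# Read the whole image once as root; per this thread the effect is only
# temporary and is lost again after a reboot/restart.
dd if="$IMAGE" of=/dev/null bs=1M status=progress

# Then verify the file is readable as vdsm, the user QEMU runs under:
sudo -u vdsm dd if="$IMAGE" of=/dev/null bs=1M count=16
```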

Did you create a bug report in Bugzilla for this?

Regards,

Hesham Ahmed

On Wed, Feb 5, 2020 at 1:01 AM Christian Reiss 
wrote:

> Thanks for replying,
>
> What I just wrote Strahil was:
>
>
> ACL is correctly set:
>
> # file: 5aab365f-b1b9-49d0-b011-566bf936a100
> # owner: vdsm
> # group: kvm
> user::rw-
> group::rw-
> other::---
>
> Doing a setfacl failed due to "Operation not supported"; remounting with
> acl failed, too:
>
> [root@node01 ~]# mount -o remount,acl
> /rhev/data-center/mnt/glusterSD/node01.dc-dus.dalason.net\:_ssd__storage/
> /bin/sh: glusterfs: command not found
>
> As I am running oVirt Node I am not sure how feasible downgrading or
> upgrading is. I think I am stuck with what I have.
>
> Also, if this were a permission issue, I would not be able to access
> the file at all. It seems I can access some of it, and all of it once
> root has loaded the whole file first.
>
>
> Even though it was already correctly set, I also ran the chown from the
> mountpoint again, to no avail.
>
>
> On 04/02/2020 21:53, Christian Reiss wrote:
> >
> > ACL is correctly set:
> >
> > # file: 5aab365f-b1b9-49d0-b011-566bf936a100
> > # owner: vdsm
> > # group: kvm
> > user::rw-
> > group::rw-
> > other::---
> >
> > Doing a setfacl failed due to "Operation not supported"; remounting
> > with acl failed, too:
> >
> > [root@node01 ~]# mount -o remount,acl
> > /rhev/data-center/mnt/glusterSD/node01.dc-dus.dalason.net
> \:_ssd__storage/
> > /bin/sh: glusterfs: command not found
> >
> > As I am running oVirt Node I am not sure how feasible downgrading or
> > upgrading is. I think I am stuck with what I have.
> >
> > Also, if this were a permission issue, I would not be able to access
> > the file at all. It seems I can access some of it, and all of it once
> > root has loaded the whole file first.
>
> --
> with kind regards,
> mit freundlichen Gruessen,
>
> Christian Reiss
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/G2MOTXZU6AFAGV2GL6P5XBXHMCRFUM6F/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/F35IM24FBO7S5VVP3X57X3TAVUFAKYMP/


[ovirt-users] Re: Using NFS on Gluster

2019-03-26 Thread Hesham Ahmed
Hi Indivar,

I did understand your question correctly, and I suggested the solution for
your use case. If you are starting a new oVirt setup, configure the Gluster
nodes separately, create a volume for the engine, and share it over NFS;
then install the oVirt hosted engine on one of the nodes using NFS as the
storage. Once the engine is up, add the other oVirt nodes to the engine,
and finally create a new Gluster-only cluster and add the Gluster nodes to
it so their volumes can be managed from oVirt.

Do note that none of the above would be considered a recommended
environment for oVirt; a better solution would be to use Gluster replica 3
arbiter 1, with the arbiter on one of the oVirt nodes. You can configure
separate networks for Gluster and oVirt management so that Gluster uses the
10G link while oVirt uses the 1G NIC.
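A rough sketch of the engine-volume part of that plan (host names and brick paths are placeholders, and note that Gluster's built-in NFS server is deprecated in newer releases in favour of NFS-Ganesha):

```shell
# On the dedicated Gluster nodes (gl1/gl2 and /bricks/engine are placeholders):
gluster volume create engine replica 2 \
    gl1.example.com:/bricks/engine gl2.example.com:/bricks/engine

# Enable the legacy built-in NFS server on the volume and start it:
gluster volume set engine nfs.disable off
gluster volume start engine

# On the first oVirt node, point hosted-engine deployment at the NFS export
# (the storage path is entered when the installer asks for it):
hosted-engine --deploy
#   storage: gl1.example.com:/engine
```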

Regards,

HSA


On Tue, Mar 26, 2019, 7:17 PM Indivar Nair 
wrote:

> Hi Hesham,
>
> Sorry, I think I wasn't clear.
>
> We are not trying to set up HCI.
> *(We have two server hardware allocated as dedicated Gluster Nodes. And a
> few server hardware allocated as ovirt nodes.)*
>
> We want the *oVirt nodes* to use the Gluster storage as *NFS
> storage*.
>
> Regards,
>
>
> Indivar Nair
>
> On Tue, Mar 26, 2019 at 8:48 PM Hesham Ahmed  wrote:
>
>> Create a new Cluster in oVirt and disable its "Virt Service" while
>> enabling the "Gluster Service". Then add the gluster nodes to this
>> Cluster and you will be able to manage them using oVirt without using
>> them for virtualization.
>>
>> On Tue, Mar 26, 2019 at 5:43 PM Indivar Nair 
>> wrote:
>> >
>> > Hi All,
>> >
>> > We would like to set up a 2 node mirrored Gluster Storage using the
>> ovirt installation tools, but use it as an NFS storage.
>> >
>> > We want to do this for 2 reasons -
>> >
>> > 1. We have a high-speed link between both the Gluster Nodes which we
>> would like to use for brick replication rather than ovirt node's 1G link
>> > 2. We have fast SSD disks on the ovirt nodes which we would like to use
>> for local NFS Caching (FS Caching) on the VM Images.
>> >
>> > In short, we would like to manage the Gluster storage using the oVirt
>> > web interface, but access the Gluster storage as an NFS server.
>> >
>> > How can we do this?
>> >
>> > Thanks.
>> >
>> > Regards,
>> >
>> >
>> > Indivar Nair
>> >
>> > ___
>> > Users mailing list -- users@ovirt.org
>> > To unsubscribe send an email to users-le...@ovirt.org
>> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> > oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> > List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/BGJXGNB7Y6U76I43VZIAFAHZZ74KWRRQ/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NMWKDVI46H4DZGFQRAUP2DB6WJQIXF47/


[ovirt-users] Re: Using NFS on Gluster

2019-03-26 Thread Hesham Ahmed
Create a new Cluster in oVirt and disable its "Virt Service" while
enabling the "Gluster Service". Then add the gluster nodes to this
Cluster and you will be able to manage them using oVirt without using
them for virtualization.

On Tue, Mar 26, 2019 at 5:43 PM Indivar Nair  wrote:
>
> Hi All,
>
> We would like to set up a 2 node mirrored Gluster Storage using the ovirt 
> installation tools, but use it as an NFS storage.
>
> We want to do this for 2 reasons -
>
> 1. We have a high-speed link between both the Gluster Nodes which we would 
> like to use for brick replication rather than ovirt node's 1G link
> 2. We have fast SSD disks on the ovirt nodes which we would like to use for 
> local NFS Caching (FS Caching) on the VM Images.
>
> In short, we would like to manage the Gluster storage using the oVirt web
> interface, but access the Gluster storage as an NFS server.
>
> How can we do this?
>
> Thanks.
>
> Regards,
>
>
> Indivar Nair
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/BGJXGNB7Y6U76I43VZIAFAHZZ74KWRRQ/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/R6APR5PQAB42FYNQPED3RUNBLEB2ETTA/


[ovirt-users] Re: CLI command to export VMs

2019-03-25 Thread Hesham Ahmed
This can be executed anywhere; I run it either on the engine or a host,
since all prerequisites are pre-installed there. You need to enter the
names of your VM and host in place of 'myvm' and 'myhost'.
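A hedged sketch of a typical run (the raw.githubusercontent.com URL and the pip package name are assumptions about where to fetch the script and SDK, not details from this thread):

```shell
# Fetch the example script from the SDK repository:
curl -O https://raw.githubusercontent.com/oVirt/ovirt-engine-sdk/master/sdk/examples/export_vm_as_ova.py

# On machines other than the engine or a host, the Python SDK is needed first:
#   pip install ovirt-engine-sdk-python

# Edit the connection details and the two search terms in the script, e.g.:
#   vm   = vms_service.list(search='name=myvm')[0]
#   host = hosts_service.list(search='name=myhost')[0]
python export_vm_as_ova.py
```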

On Mon, Mar 25, 2019, 4:45 PM Sakhi Hadebe  wrote:

> Thank you Hesham,
>
> Do I execute the script on the engine or on the oVirt hosts? Should I
> specify the domain of the VM and the name of the host in the values
> bolded below:
>
> # Find the virtual machine:
> vms_service = connection.system_service().vms_service()
> vm = vms_service.list(search='name=myvm')[0]
> vm_service = vms_service.vm_service(vm.id)
> # Find the host:
> hosts_service = connection.system_service().hosts_service()
> host = hosts_service.list(search='name=myhost')[0]
>
>
>
>
> On Mon, Mar 25, 2019 at 2:49 PM Hesham Ahmed  wrote:
>
>> I don't think there is a pre-installed CLI tool for export to OVA,
>> however you can use this
>>
>> https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/export_vm_as_ova.py
>>
>> Make sure you change the Engine URL, username, password, VM and Host
>> values to match your requirements.
>>
>> On Mon, Mar 25, 2019 at 3:35 PM Sakhi Hadebe  wrote:
>> >
>> > Hi,
>> >
>> > What is the CLI command to export VMs as OVA?
>> >
>> > --
>> > Regards,
>> > Sakhi Hadebe
>> > ___
>> > Users mailing list -- users@ovirt.org
>> > To unsubscribe send an email to users-le...@ovirt.org
>> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> > oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> > List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/I32KQWNTIY4N6ZXTH6IGND74H2JLWJJ5/
>>
>
>
> --
> Regards,
> Sakhi Hadebe
>
> Engineer: South African National Research Network (SANReN)Competency Area, 
> Meraka, CSIR
>
> Tel:   +27 12 841 2308 <+27128414213>
> Fax:   +27 12 841 4223 <+27128414223>
> Cell:  +27 71 331 9622 <+27823034657>
> Email: sa...@sanren.ac.za 
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BFZ2NRXE36QU2QSJ4EHAJFSH4PXHOQP5/


[ovirt-users] Re: CLI command to export VMs

2019-03-25 Thread Hesham Ahmed
I don't think there is a pre-installed CLI tool for export to OVA,
however you can use this
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/export_vm_as_ova.py

Make sure you change the Engine URL, username, password, VM and Host
values to match your requirements.

On Mon, Mar 25, 2019 at 3:35 PM Sakhi Hadebe  wrote:
>
> Hi,
>
> What is the CLI command to export VMs as OVA?
>
> --
> Regards,
> Sakhi Hadebe
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/I32KQWNTIY4N6ZXTH6IGND74H2JLWJJ5/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ITHP7WA273M456NZFUMVYO7Q6VPR5DZP/


[ovirt-users] Re: Bulk ISO Upload to Data Domain

2019-03-22 Thread Hesham Ahmed
We're using the upload_disk.py from the oVirt SDK examples:
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk.py
chmod +x upload_disk.py
Change the engine URL/user/password on lines 108-110, copy the CA file from
the engine (/etc/pki/.../apache-ca.pem) to the working directory, and
finally set the target storage domain on line 153, then run ./upload_disk.py
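Those steps, sketched end to end; the engine-side CA path and the loop over local ISOs are assumptions for illustration:

```shell
# Fetch the example uploader and make it executable:
curl -O https://raw.githubusercontent.com/oVirt/ovirt-engine-sdk/master/sdk/examples/upload_disk.py
chmod +x upload_disk.py

# Copy the engine's CA certificate into the working directory so the TLS
# connection can be verified (the engine-side path is an assumption here):
scp root@engine.example.com:/etc/pki/ovirt-engine/apache-ca.pem ca.pem

# After editing the engine URL/credentials and target storage domain in the
# script, upload every ISO in the current directory in turn:
for iso in *.iso; do
    ./upload_disk.py "$iso"
done
```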




On Fri, Mar 22, 2019, 6:18 AM  wrote:

> Is there a way to transfer and register multiple ISO files to a data
> domain (since ISO domains are deprecated)? Single ISO transfers via the
> browser are a bit time-consuming, so I was wondering if it would be
> possible to do this otherwise.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OQCSLCGAYRR7N2ILE2SBTGJKTW3NCMAN/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NMW46JIIW3VPGW4DJYYG5XZ6L3OCRKZ4/


[ovirt-users] Re: Libgfapisupport messes disk image ownership

2019-03-16 Thread Hesham Ahmed
I have mentioned in the bug report that it seems related to that bug.
However, unlike that bug, there is no workaround for libgfapi since the
permissions are lost on every restart of the VM.

On Sat, Mar 16, 2019, 5:46 AM Darrell Budic  wrote:

> You may have this one instead. I just encountered it last night; it
> still seems to be an issue.
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1666795
>
> > On Mar 15, 2019, at 4:25 PM, Hesham Ahmed  wrote:
> >
> > I had reported this here:
> https://bugzilla.redhat.com/show_bug.cgi?id=1687126
> >
> > Has anyone else faced this with 4.3.1?
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VBIASF6YXLOHVKHYRSEFGSPBKH52OSYX/
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PG7MPM3ZWASROD64OHT6CGJQREAYF6VP/


[ovirt-users] Libgfapisupport messes disk image ownership

2019-03-15 Thread Hesham Ahmed
I had reported this here: https://bugzilla.redhat.com/show_bug.cgi?id=1687126

Has anyone else faced this with 4.3.1?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VBIASF6YXLOHVKHYRSEFGSPBKH52OSYX/


[ovirt-users] Re: "gluster-ansible-roles is not installed on Host" error on Cockpit

2019-03-10 Thread Hesham Ahmed
sac-gluster-ansible is there and is enabled:

[sac-gluster-ansible]
enabled=1
name = Copr repo for gluster-ansible owned by sac
baseurl = 
https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/epel-7-$basearch/
type = rpm-md
skip_if_unavailable = False
gpgcheck = 1
gpgkey = 
https://copr-be.cloud.fedoraproject.org/results/sac/gluster-ansible/pubkey.gpg
repo_gpgcheck = 0
enabled = 1
enabled_metadata = 1
includepkgs = ovirt-node-ng-image-update ovirt-node-ng-image
ovirt-engine-appliance

The issue doesn't appear to be missing packages but package naming: there
is a package named gluster-ansible installed which provides the Gluster
Ansible roles:

Installed Packages
Name: gluster-ansible
Arch: noarch
Version : 0.6
Release : 1
Size: 56 k
Repo: installed
Summary : Ansible roles for GlusterFS deployment and management
URL : https://github.com/gluster/gluster-ansible
License : GPLv3
Description : Collection of Ansible roles for the deploying and
managing GlusterFS clusters.

I can also confirm that just changing the package name in app.js, as I
previously mentioned, allows complete HCI deployment and gluster volume
creation from within Cockpit without any issues.


On Sun, Mar 10, 2019 at 2:55 PM Strahil  wrote:
>
> Check if you have a repo called sac-gluster-ansible.
>
> Best Regards,
> Strahil NikolovOn Mar 10, 2019 08:21, Hesham Ahmed  wrote:
> >
> > On a new 4.3.1 oVirt Node installation, when trying to deploy HCI
> > (also when trying to add a new gluster volume to existing clusters)
> > using Cockpit, an error is displayed "gluster-ansible-roles is not
> > installed on Host. To continue deployment, please install
> > gluster-ansible-roles on Host and try again". There is no package
> > named gluster-ansible-roles in the repositories:
> >
> > [root@localhost ~]# yum install gluster-ansible-roles
> > Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist,
> > package_upload, product-id, search-disabled-repos,
> > subscription-manager, vdsmupgrade
> > This system is not registered with an entitlement server. You can use
> > subscription-manager to register.
> > Loading mirror speeds from cached hostfile
> > * ovirt-4.3-epel: mirror.horizon.vn
> > No package gluster-ansible-roles available.
> > Error: Nothing to do
> > Uploading Enabled Repositories Report
> > Cannot upload enabled repos report, is this client registered?
> >
> > This is due to check introduced here:
> > https://gerrit.ovirt.org/#/c/98023/1/dashboard/src/helpers/AnsibleUtil.js
> >
> > Changing the line from:
> > [ "rpm", "-qa", "gluster-ansible-roles" ], { "superuser":"require" }
> > to
> > [ "rpm", "-qa", "gluster-ansible" ], { "superuser":"require" }
> > resolves the issue. The above code snippet is installed at
> > /usr/share/cockpit/ovirt-dashboard/app.js on oVirt node and can be
> > patched by running "sed -i 's/gluster-ansible-roles/gluster-ansible/g'
> > /usr/share/cockpit/ovirt-dashboard/app.js && systemctl restart
> > cockpit"
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct: 
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives: 
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/243QJOXO2KTWYU5CDH3OC7WJ6Z2EL4CG/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WCS3VLTIOW4FF2BXHQQ4AGGJBHXJYXLS/


[ovirt-users] "gluster-ansible-roles is not installed on Host" error on Cockpit

2019-03-09 Thread Hesham Ahmed
On a new 4.3.1 oVirt Node installation, when trying to deploy HCI
(also when trying to add a new gluster volume to existing clusters)
using Cockpit, an error is displayed "gluster-ansible-roles is not
installed on Host. To continue deployment, please install
gluster-ansible-roles on Host and try again". There is no package
named gluster-ansible-roles in the repositories:

[root@localhost ~]# yum install gluster-ansible-roles
Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist,
package_upload, product-id, search-disabled-repos,
subscription-manager, vdsmupgrade
This system is not registered with an entitlement server. You can use
subscription-manager to register.
Loading mirror speeds from cached hostfile
 * ovirt-4.3-epel: mirror.horizon.vn
No package gluster-ansible-roles available.
Error: Nothing to do
Uploading Enabled Repositories Report
Cannot upload enabled repos report, is this client registered?

This is due to check introduced here:
https://gerrit.ovirt.org/#/c/98023/1/dashboard/src/helpers/AnsibleUtil.js

Changing the line from:
[ "rpm", "-qa", "gluster-ansible-roles" ], { "superuser":"require" }
to
[ "rpm", "-qa", "gluster-ansible" ], { "superuser":"require" }
resolves the issue. The above code snippet is installed at
/usr/share/cockpit/ovirt-dashboard/app.js on oVirt node and can be
patched by running "sed -i 's/gluster-ansible-roles/gluster-ansible/g'
/usr/share/cockpit/ovirt-dashboard/app.js && systemctl restart
cockpit"
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/243QJOXO2KTWYU5CDH3OC7WJ6Z2EL4CG/


[ovirt-users] Re: No spice-html5 in console options

2018-10-11 Thread Hesham Ahmed
Spice HTML5 was removed many versions back. I believe it was no longer
being maintained and there wasn't much interest in it.
On Thu, Oct 11, 2018 at 11:15 AM  wrote:
>
> Hello!
> Strange, but i have no spice-html5 option in vm console settings.
> https://prnt.sc/l4qz00
> Should i add a spice proxy for this?
>
> Version 4.2.6.4-1.el7
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FZDUWVTHZUSKPJ5UX4H7YSIKEQ7R3LQA/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EWEKKDD4HQ2IVXHGSSC6Z2CQMOQUCLDT/


[ovirt-users] Re: New oVirt deployment suggestions

2018-09-27 Thread Hesham Ahmed
Also, I would not create a single bond shared by Gluster and management
(even on different VLANs): Gluster can easily saturate 4x1GbE ports,
especially during healing, and this would trigger fencing since the engine
would start getting timeouts when trying to reach the hosts. I would
suggest creating two bonded interfaces (2 x 1G each), with one dedicated
to Gluster, or getting a dedicated 10G NIC for Gluster.
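A sketch of that two-bond layout with nmcli (interface names, bond modes, and the Gluster subnet are all assumptions):

```shell
# bond0 (eth0+eth1): ovirtmgmt / VM traffic
nmcli con add type bond con-name bond0 ifname bond0 bond.options "mode=active-backup"
nmcli con add type ethernet con-name bond0-eth0 ifname eth0 master bond0
nmcli con add type ethernet con-name bond0-eth1 ifname eth1 master bond0

# bond1 (eth2+eth3): dedicated to Gluster traffic only
nmcli con add type bond con-name bond1 ifname bond1 bond.options "mode=802.3ad"
nmcli con add type ethernet con-name bond1-eth2 ifname eth2 master bond1
nmcli con add type ethernet con-name bond1-eth3 ifname eth3 master bond1

# Give the Gluster bond a static address on its own subnet:
nmcli con mod bond1 ipv4.method manual ipv4.addresses 10.10.10.11/24
```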
On Thu, Sep 27, 2018 at 4:00 PM Hesham Ahmed  wrote:
>
> You can install any CentOS-compatible custom software on oVirt nodes
> without much trouble.
> On Thu, Sep 27, 2018 at 3:06 PM Stefano Danzi  wrote:
> >
> > Hi!
> >
> > I need to install HP SSA and HP SHM on hosts and I don't know if this is
> > supported on oVirt node.
> >
> > Il 27/09/2018 13:46, Hesham Ahmed ha scritto:
> > > Unless you have a reason to use CentOS, I suggest you use oVirt Node;
> > > it is much more optimized out of the box for oVirt
> > >
> > >
> > > On Thu, Sep 27, 2018 at 2:25 PM Stefano Danzi  wrote:
> > >> Hello!
> > >>
> > >> I'm almost ready to start with a new oVirt deployment. I will use CentOS
> > >> 7, a self-hosted engine, and Gluster storage.
> > >> I have 3 physical hosts. Each host has four NICs. My first idea is:
> > >>
> > >> - configure a bond between the NICs
> > >> - configure a VLAN interface for management network (and local lan)
> > >> - configure a VLAN interface for gluster network
> > >> - configure gluster for the hosted engine
> > >> - start "hosted-engine --deploy" process
> > >>
> > >> is this enough? Do I need a dedicated physical NIC for the management
> > >> network?
> > >>
> > >> Bye
> > >> ___
> > >> Users mailing list -- users@ovirt.org
> > >> To unsubscribe send an email to users-le...@ovirt.org
> > >> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > >> oVirt Code of Conduct: 
> > >> https://www.ovirt.org/community/about/community-guidelines/
> > >> List Archives: 
> > >> https://lists.ovirt.org/archives/list/users@ovirt.org/message/DRCM7TIHMRP3DL7RN32UQBBLMNVABVE5/
> >
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FX2V6GCVKIXMN776LEEGQ7O5SVOXRE3S/


[ovirt-users] Re: New oVirt deployment suggestions

2018-09-27 Thread Hesham Ahmed
You can install any CentOS-compatible custom software on oVirt nodes
without much trouble.
On Thu, Sep 27, 2018 at 3:06 PM Stefano Danzi  wrote:
>
> Hi!
>
> I need to install HP SSA and HP SHM on hosts and I don't know if this is
> supported on oVirt node.
>
> Il 27/09/2018 13:46, Hesham Ahmed ha scritto:
> > Unless you have a reason to use CentOS, I suggest you use oVirt Node;
> > it is much more optimized out of the box for oVirt
> >
> >
> > On Thu, Sep 27, 2018 at 2:25 PM Stefano Danzi  wrote:
> >> Hello!
> >>
> >> I'm almost ready to start with a new oVirt deployment. I will use CentOS
> >> 7, a self-hosted engine, and Gluster storage.
> >> I have 3 physical hosts. Each host has four NICs. My first idea is:
> >>
> >> - configure a bond between the NICs
> >> - configure a VLAN interface for management network (and local lan)
> >> - configure a VLAN interface for gluster network
> >> - configure gluster for the hosted engine
> >> - start "hosted-engine --deploy" process
> >>
> >> is this enough? Do I need a dedicated physical NIC for the management network?
> >>
> >> Bye
> >> ___
> >> Users mailing list -- users@ovirt.org
> >> To unsubscribe send an email to users-le...@ovirt.org
> >> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> >> oVirt Code of Conduct: 
> >> https://www.ovirt.org/community/about/community-guidelines/
> >> List Archives: 
> >> https://lists.ovirt.org/archives/list/users@ovirt.org/message/DRCM7TIHMRP3DL7RN32UQBBLMNVABVE5/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3QEAZZAO4ET4SVLELKNNTBUUV4MY3JKU/


[ovirt-users] Re: New oVirt deployment suggestions

2018-09-27 Thread Hesham Ahmed
Unless you have a reason to use CentOS, I suggest you use oVirt Node;
it is much more optimized out of the box for oVirt.


On Thu, Sep 27, 2018 at 2:25 PM Stefano Danzi  wrote:
>
> Hello!
>
> I'm almost ready to start with a new oVirt deployment. I will use CentOS
> 7, self-hosted engine and gluster storage.
> I have 3 physical hosts. Each host has four NICs. My first idea is:
>
> - configure bonding between NICs
> - configure a VLAN interface for management network (and local lan)
> - configure a VLAN interface for gluster network
> - configure gluster for the hosted engine
> - start "hosted-engine --deploy" process
>
> Is this enough? Do I need a dedicated physical NIC for the management network?
>
> Bye
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/DRCM7TIHMRP3DL7RN32UQBBLMNVABVE5/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GCCRCMO62ZHH5PDOA4Y5WRXJ6WIXG76A/


[ovirt-users] Re: Ovirt Single Node Hyperconverged

2018-09-11 Thread Hesham Ahmed
You would need three servers for a gluster-based hyperconverged oVirt
deployment.

On Tue, Sep 11, 2018, 8:41 PM Keith Winn  wrote:

> Cool, it is good to know that I was on the right track. Thanks Again.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UEZRO7TLMENLTJDZJEITDJXIADTXVPUQ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XZB2X2O6GAWCVPMEPXBQUN7DSQI3LDIT/


[ovirt-users] qemu-kvm memory leak in 4.2.5

2018-09-03 Thread Hesham Ahmed
Starting oVirt 4.2.4 (also in 4.2.5 and maybe in 4.2.3) I am facing
some sort of memory leak. The memory usage on the hosts keeps
increasing until it reaches somewhere around 97%. Putting the host in
maintenance and back resolves it. The memory usage by the qemu-kvm
processes is way above the defined VM memory; for instance, below is the
memory usage of one VM:

  PID USER  PR  NI    VIRT    RES   SHR S  %CPU %MEM     TIME+ COMMAND
12271 qemu  20   0   35.4g  30.9g  8144 S  13.3 49.3   9433:15 qemu-kvm

The VM memory settings are:

Defined Memory: 8192 MB
Physical Memory Guaranteed: 5461 MB

This is for a 3-node hyperconverged cluster running the latest oVirt Node 4.2.5.
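A quick way to put a number on the discrepancy described above is to subtract the defined memory from the resident set size. The helper below is only an illustration (the function name and the MiB conversion are mine, not from the thread):

```shell
# Illustrative helper: overhead of a qemu-kvm process, i.e. how far its
# resident set size (RSS) exceeds the VM's defined memory, both in MiB.
mem_overhead_mib() {
  rss_mib=$1
  defined_mib=$2
  echo $((rss_mib - defined_mib))
}

# Figures from the report: 30.9 GiB RSS is roughly 31642 MiB,
# against 8192 MiB of defined memory.
mem_overhead_mib 31642 8192   # prints 23450
```

On a live host the RSS value would come from something like `ps -o rss= -p <pid>` (in KiB); that lookup is left out here to keep the sketch self-contained.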
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q7W7S3V2GRBIJ4ID6EJVJWRCBM4DE42L/


[ovirt-users] Re: Device Mapper Timeout when using Gluster Snapshots

2018-07-09 Thread Hesham Ahmed
On Mon, Jul 9, 2018 at 3:52 PM Sahina Bose  wrote:
>
>
>
> On Mon, Jul 9, 2018 at 5:41 PM, Hesham Ahmed  wrote:
>>
>> Thanks Sahina for the update,
>>
>> I am using gluster geo-replication for DR in a different installation;
>> however, I was not aware that Gluster snapshots are not recommended in
>> a hyperconverged setup. A warning on the Gluster snapshot UI
>> would be helpful. Are gluster volume snapshots for volumes hosting VM
>> images a work in progress with a bug tracker, or is it something not
>> expected to change?
>
>
> Agreed on the warning - can you log a bz?
>
> There's no specific bz tracking support for volume snapshots w.r.t VM store 
> use case. If you have a specific scenario where the geo-rep based DR is not 
> sufficient, please log a bug.
>
>> On Mon, Jul 9, 2018 at 2:58 PM Sahina Bose  wrote:
>> >
>> >
>> >
>> > On Sun, Jul 8, 2018 at 3:29 PM, Hesham Ahmed  wrote:
>> >>
>> >> I also noticed that Gluster Snapshots have the SAME UUID as the main
>> >> LV and if using UUID in fstab, the snapshot device is sometimes
>> >> mounted instead of the primary LV
>> >>
>> >> For instance:
>> >> /etc/fstab contains the following line:
>> >>
>> >> UUID=a0b85d33-7150-448a-9a70-6391750b90ad /gluster_bricks/gv01_data01
>> >> auto 
>> >> inode64,noatime,nodiratime,x-parent=dMeNGb-34lY-wFVL-WF42-hlpE-TteI-lMhvvt
>> >> 0 0
>> >>
>> >> # lvdisplay gluster00/lv01_data01
>> >>   --- Logical volume ---
>> >>   LV Path                /dev/gluster00/lv01_data01
>> >>   LV Name                lv01_data01
>> >>   VG Name                gluster00
>> >>
>> >> # mount
>> >> /dev/mapper/gluster00-55e97e7412bf48db99bb389bb708edb8_0 on
>> >> /gluster_bricks/gv01_data01 type xfs
>> >> (rw,noatime,nodiratime,seclabel,attr2,inode64,sunit=1024,swidth=2048,noquota)
>> >>
>> >> Notice above the device mounted at the brick mountpoint is not
>> >> /dev/gluster00/lv01_data01 and instead is one of the snapshot devices
>> >> of that LV
>> >>
>> >> # blkid
>> >> /dev/mapper/gluster00-lv01_shaker_com_sa:
>> >> UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
>> >> /dev/mapper/gluster00-55e97e7412bf48db99bb389bb708edb8_0:
>> >> UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
>> >> /dev/mapper/gluster00-4ca8eef409ec4932828279efb91339de_0:
>> >> UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
>> >> /dev/mapper/gluster00-59992b6c14644f13b5531a054d2aa75c_0:
>> >> UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
>> >> /dev/mapper/gluster00-362b50c994b04284b1664b2e2eb49d09_0:
>> >> UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
>> >> /dev/mapper/gluster00-0b3cc414f4cb4cddb6e81f162cdb7efe_0:
>> >> UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
>> >> /dev/mapper/gluster00-da98ce5efda549039cf45a18e4eacbaf_0:
>> >> UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
>> >> /dev/mapper/gluster00-4ea5cce4be704dd7b29986ae6698a666_0:
>> >> UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
>> >>
>> >> Notice the UUID of LV and its snapshots is the same causing systemd to
>> >> mount one of the snapshot devices instead of LV which results in the
>> >> following gluster error:
>> >>
>> >> gluster> volume start gv01_data01 force
>> >> volume start: gv01_data01: failed: Volume id mismatch for brick
>> >> vhost03:/gluster_bricks/gv01_data01/gv. Expected volume id
>> >> be6bc69b-c6ed-4329-b300-3b9044f375e1, volume id
>> >> 55e97e74-12bf-48db-99bb-389bb708edb8 found
>> >
>> >
>> >
>> > We do not recommend gluster volume snapshots for volumes hosting VM 
>> > images. Please look at the 
>> > https://ovirt.org/develop/release-management/features/gluster/gluster-dr/ 
>> > as an alternative.
>> >
>> >>
>> >> On Sun, Jul 8, 2018 at 12:32 PM  wrote:
>> >> >
>> >> > I am facing this trouble since version 4.1 up to the latest 4.2.4, once
>> >> > we enable gluster snapshots and accumulate some snapshots (as few as 15
>> >>

[ovirt-users] Re: Device Mapper Timeout when using Gluster Snapshots

2018-07-09 Thread Hesham Ahmed
Thanks Sahina for the update,

I am using gluster geo-replication for DR in a different installation;
however, I was not aware that Gluster snapshots are not recommended in
a hyperconverged setup. A warning on the Gluster snapshot UI
would be helpful. Are gluster volume snapshots for volumes hosting VM
images a work in progress with a bug tracker, or is it something not
expected to change?
On Mon, Jul 9, 2018 at 2:58 PM Sahina Bose  wrote:
>
>
>
> On Sun, Jul 8, 2018 at 3:29 PM, Hesham Ahmed  wrote:
>>
>> I also noticed that Gluster Snapshots have the SAME UUID as the main
>> LV and if using UUID in fstab, the snapshot device is sometimes
>> mounted instead of the primary LV
>>
>> For instance:
>> /etc/fstab contains the following line:
>>
>> UUID=a0b85d33-7150-448a-9a70-6391750b90ad /gluster_bricks/gv01_data01
>> auto 
>> inode64,noatime,nodiratime,x-parent=dMeNGb-34lY-wFVL-WF42-hlpE-TteI-lMhvvt
>> 0 0
>>
>> # lvdisplay gluster00/lv01_data01
>>   --- Logical volume ---
>>   LV Path                /dev/gluster00/lv01_data01
>>   LV Name                lv01_data01
>>   VG Name                gluster00
>>
>> # mount
>> /dev/mapper/gluster00-55e97e7412bf48db99bb389bb708edb8_0 on
>> /gluster_bricks/gv01_data01 type xfs
>> (rw,noatime,nodiratime,seclabel,attr2,inode64,sunit=1024,swidth=2048,noquota)
>>
>> Notice above the device mounted at the brick mountpoint is not
>> /dev/gluster00/lv01_data01 and instead is one of the snapshot devices
>> of that LV
>>
>> # blkid
>> /dev/mapper/gluster00-lv01_shaker_com_sa:
>> UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
>> /dev/mapper/gluster00-55e97e7412bf48db99bb389bb708edb8_0:
>> UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
>> /dev/mapper/gluster00-4ca8eef409ec4932828279efb91339de_0:
>> UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
>> /dev/mapper/gluster00-59992b6c14644f13b5531a054d2aa75c_0:
>> UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
>> /dev/mapper/gluster00-362b50c994b04284b1664b2e2eb49d09_0:
>> UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
>> /dev/mapper/gluster00-0b3cc414f4cb4cddb6e81f162cdb7efe_0:
>> UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
>> /dev/mapper/gluster00-da98ce5efda549039cf45a18e4eacbaf_0:
>> UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
>> /dev/mapper/gluster00-4ea5cce4be704dd7b29986ae6698a666_0:
>> UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
>>
>> Notice the UUID of LV and its snapshots is the same causing systemd to
>> mount one of the snapshot devices instead of LV which results in the
>> following gluster error:
>>
>> gluster> volume start gv01_data01 force
>> volume start: gv01_data01: failed: Volume id mismatch for brick
>> vhost03:/gluster_bricks/gv01_data01/gv. Expected volume id
>> be6bc69b-c6ed-4329-b300-3b9044f375e1, volume id
>> 55e97e74-12bf-48db-99bb-389bb708edb8 found
>
>
>
> We do not recommend gluster volume snapshots for volumes hosting VM images. 
> Please look at the 
> https://ovirt.org/develop/release-management/features/gluster/gluster-dr/ as 
> an alternative.
>
>>
>> On Sun, Jul 8, 2018 at 12:32 PM  wrote:
>> >
>> > I am facing this trouble since version 4.1 up to the latest 4.2.4, once
>> > we enable gluster snapshots and accumulate some snapshots (as few as 15
>> > snapshots per server) we start having trouble booting the server. The
>> > server enters emergency shell upon boot after timing out waiting for
>> > snapshot devices. Waiting a few minutes and pressing Control-D then boots 
>> >> > the server normally. In case of a very large number of snapshots (600+) it 
>> >> > can take days before the server will boot. Attaching journal
>> > log, let me know if you need any other logs.
>> >
>> > Details of the setup:
>> >
>> > 3 node hyperconverged oVirt setup (64GB RAM, 8-Core E5 Xeon)
>> > oVirt 4.2.4
>> > oVirt Node 4.2.4
>> > 10Gb Interface
>> >
>> > Thanks,
>> >
>> > Hesham S. Ahmed
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/X6TORNPB4XNEHGJJRZFYMOXENDQ2WT7S/


[ovirt-users] Re: Device Mapper Timeout when using Gluster Snapshots

2018-07-08 Thread Hesham Ahmed
Apparently this is known default behavior of LVM snapshots, and in that
case maybe Cockpit in oVirt Node should create mount points using
/dev/mapper paths instead of UUIDs by default. The timeout issue
persists even after switching to /dev/mapper devices in fstab.
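As a sketch of the workaround described above, switching an fstab entry from a UUID= source to the stable /dev/mapper path can be done with a one-line sed. The helper name is mine and the sample line is illustrative; the device path is taken from the lvdisplay output quoted earlier in this thread:

```shell
# Rewrite an fstab line from UUID= to a /dev/mapper path, since LVM
# snapshot activation duplicates the filesystem UUID across devices.
fix_fstab_line() {
  # $1 = UUID, $2 = /dev/mapper path; reads fstab text on stdin
  sed "s|^UUID=$1 |$2 |"
}

echo 'UUID=a0b85d33-7150-448a-9a70-6391750b90ad /gluster_bricks/gv01_data01 auto inode64,noatime 0 0' |
  fix_fstab_line a0b85d33-7150-448a-9a70-6391750b90ad /dev/mapper/gluster00-lv01_data01
```

On a real host you would run this over a backup copy of /etc/fstab and inspect the result before replacing the file.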
On Sun, Jul 8, 2018 at 12:59 PM Hesham Ahmed  wrote:
>
> I also noticed that Gluster Snapshots have the SAME UUID as the main
> LV and if using UUID in fstab, the snapshot device is sometimes
> mounted instead of the primary LV
>
> For instance:
> /etc/fstab contains the following line:
>
> UUID=a0b85d33-7150-448a-9a70-6391750b90ad /gluster_bricks/gv01_data01
> auto 
> inode64,noatime,nodiratime,x-parent=dMeNGb-34lY-wFVL-WF42-hlpE-TteI-lMhvvt
> 0 0
>
> # lvdisplay gluster00/lv01_data01
>   --- Logical volume ---
>   LV Path                /dev/gluster00/lv01_data01
>   LV Name                lv01_data01
>   VG Name                gluster00
>
> # mount
> /dev/mapper/gluster00-55e97e7412bf48db99bb389bb708edb8_0 on
> /gluster_bricks/gv01_data01 type xfs
> (rw,noatime,nodiratime,seclabel,attr2,inode64,sunit=1024,swidth=2048,noquota)
>
> Notice above the device mounted at the brick mountpoint is not
> /dev/gluster00/lv01_data01 and instead is one of the snapshot devices
> of that LV
>
> # blkid
> /dev/mapper/gluster00-lv01_shaker_com_sa:
> UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
> /dev/mapper/gluster00-55e97e7412bf48db99bb389bb708edb8_0:
> UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
> /dev/mapper/gluster00-4ca8eef409ec4932828279efb91339de_0:
> UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
> /dev/mapper/gluster00-59992b6c14644f13b5531a054d2aa75c_0:
> UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
> /dev/mapper/gluster00-362b50c994b04284b1664b2e2eb49d09_0:
> UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
> /dev/mapper/gluster00-0b3cc414f4cb4cddb6e81f162cdb7efe_0:
> UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
> /dev/mapper/gluster00-da98ce5efda549039cf45a18e4eacbaf_0:
> UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
> /dev/mapper/gluster00-4ea5cce4be704dd7b29986ae6698a666_0:
> UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
>
> Notice the UUID of LV and its snapshots is the same causing systemd to
> mount one of the snapshot devices instead of LV which results in the
> following gluster error:
>
> gluster> volume start gv01_data01 force
> volume start: gv01_data01: failed: Volume id mismatch for brick
> vhost03:/gluster_bricks/gv01_data01/gv. Expected volume id
> be6bc69b-c6ed-4329-b300-3b9044f375e1, volume id
> 55e97e74-12bf-48db-99bb-389bb708edb8 found
>
> On Sun, Jul 8, 2018 at 12:32 PM  wrote:
> >
> > I am facing this trouble since version 4.1 up to the latest 4.2.4, once
> > we enable gluster snapshots and accumulate some snapshots (as few as 15
> > snapshots per server) we start having trouble booting the server. The
> > server enters emergency shell upon boot after timing out waiting for
> > snapshot devices. Waiting a few minutes and pressing Control-D then boots 
>> > the server normally. In case of a very large number of snapshots (600+) it 
>> > can take days before the server will boot. Attaching journal
> > log, let me know if you need any other logs.
> >
> > Details of the setup:
> >
> > 3 node hyperconverged oVirt setup (64GB RAM, 8-Core E5 Xeon)
> > oVirt 4.2.4
> > oVirt Node 4.2.4
> > 10Gb Interface
> >
> > Thanks,
> >
> > Hesham S. Ahmed
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZOHBIIIKKVZDWGWRXKZ5GEZOADLCSGJB/


[ovirt-users] Re: Device Mapper Timeout when using Gluster Snapshots

2018-07-08 Thread Hesham Ahmed
I also noticed that Gluster Snapshots have the SAME UUID as the main
LV and if using UUID in fstab, the snapshot device is sometimes
mounted instead of the primary LV

For instance:
/etc/fstab contains the following line:

UUID=a0b85d33-7150-448a-9a70-6391750b90ad /gluster_bricks/gv01_data01
auto inode64,noatime,nodiratime,x-parent=dMeNGb-34lY-wFVL-WF42-hlpE-TteI-lMhvvt
0 0

# lvdisplay gluster00/lv01_data01
  --- Logical volume ---
  LV Path                /dev/gluster00/lv01_data01
  LV Name                lv01_data01
  VG Name                gluster00

# mount
/dev/mapper/gluster00-55e97e7412bf48db99bb389bb708edb8_0 on
/gluster_bricks/gv01_data01 type xfs
(rw,noatime,nodiratime,seclabel,attr2,inode64,sunit=1024,swidth=2048,noquota)

Notice above the device mounted at the brick mountpoint is not
/dev/gluster00/lv01_data01 and instead is one of the snapshot devices
of that LV

# blkid
/dev/mapper/gluster00-lv01_shaker_com_sa:
UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
/dev/mapper/gluster00-55e97e7412bf48db99bb389bb708edb8_0:
UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
/dev/mapper/gluster00-4ca8eef409ec4932828279efb91339de_0:
UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
/dev/mapper/gluster00-59992b6c14644f13b5531a054d2aa75c_0:
UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
/dev/mapper/gluster00-362b50c994b04284b1664b2e2eb49d09_0:
UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
/dev/mapper/gluster00-0b3cc414f4cb4cddb6e81f162cdb7efe_0:
UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
/dev/mapper/gluster00-da98ce5efda549039cf45a18e4eacbaf_0:
UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
/dev/mapper/gluster00-4ea5cce4be704dd7b29986ae6698a666_0:
UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"

Notice the UUID of LV and its snapshots is the same causing systemd to
mount one of the snapshot devices instead of LV which results in the
following gluster error:

gluster> volume start gv01_data01 force
volume start: gv01_data01: failed: Volume id mismatch for brick
vhost03:/gluster_bricks/gv01_data01/gv. Expected volume id
be6bc69b-c6ed-4329-b300-3b9044f375e1, volume id
55e97e74-12bf-48db-99bb-389bb708edb8 found
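The ambiguity can be checked mechanically: if blkid reports the same UUID on more than one device, any UUID-based mount is unreliable. The snippet below is an illustration against canned blkid-style output (the helper name is mine); on a real host you would pipe `blkid` itself into it:

```shell
# Count filesystem UUIDs that appear on more than one device.
# A non-zero result means a UUID= entry in fstab is ambiguous,
# as with the snapshot devices shown above.
count_dup_uuids() {
  grep -o 'UUID="[^"]*"' | sort | uniq -d | wc -l
}

blkid_sample='/dev/mapper/gluster00-lv01_data01: UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
/dev/mapper/gluster00-55e97e7412bf48db99bb389bb708edb8_0: UUID="a0b85d33-7150-448a-9a70-6391750b90ad" TYPE="xfs"
/dev/sda1: UUID="11111111-2222-3333-4444-555555555555" TYPE="xfs"'

printf '%s\n' "$blkid_sample" | count_dup_uuids   # prints 1
```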

On Sun, Jul 8, 2018 at 12:32 PM  wrote:
>
> I am facing this trouble since version 4.1 up to the latest 4.2.4, once
> we enable gluster snapshots and accumulate some snapshots (as few as 15
> snapshots per server) we start having trouble booting the server. The
> server enters emergency shell upon boot after timing out waiting for
> snapshot devices. Waiting a few minutes and pressing Control-D then boots the 
> server normally. In case of a very large number of snapshots (600+) it can take 
> days before the server will boot. Attaching journal
> log, let me know if you need any other logs.
>
> Details of the setup:
>
> 3 node hyperconverged oVirt setup (64GB RAM, 8-Core E5 Xeon)
> oVirt 4.2.4
> oVirt Node 4.2.4
> 10Gb Interface
>
> Thanks,
>
> Hesham S. Ahmed
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4K7KRQNLSC24XGGNJDOH5LIVLXKRLRPA/


[ovirt-users] Re: hyperconverged cluster - how to change the mount path?

2018-07-04 Thread Hesham Ahmed
The correct way to allow the hosted engine to use other available gluster
peers in case of failure of the specified peer is to pass the
--config-append option during setup, as described at
https://ovirt.org/develop/release-management/features/sla/self-hosted-engine-gluster-support/

If you want to change that now, then just edit the file
/etc/ovirt-hosted-engine/hosted-engine.conf on all hosts and add the
following line:

mnt_options=backup-volfile-servers=host02:host03

However, note that for any new hosted-engine deployments the file has
to be edited manually.
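A minimal sketch of the manual edit described above, written to be idempotent so re-running it does not duplicate the line. It operates on a temp file here; on a real host the target would be /etc/ovirt-hosted-engine/hosted-engine.conf, and host02/host03 are the host names from this thread:

```shell
# Append the backup-volfile-servers mount option to hosted-engine.conf
# only if no mnt_options line exists yet.
add_backup_servers() {
  conf=$1
  grep -q '^mnt_options=' "$conf" ||
    echo 'mnt_options=backup-volfile-servers=host02:host03' >> "$conf"
}

conf=$(mktemp)
printf 'storage=host01:/engine\n' > "$conf"
add_backup_servers "$conf"   # adds the line
add_backup_servers "$conf"   # second run is a no-op
cat "$conf"
rm -f "$conf"
```

The same edit would have to be repeated on every host, since the file is per-host.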
On Wed, Jul 4, 2018 at 4:51 PM Liebe, André-Sebastian
 wrote:
>
> Well, I thought this wouldn’t be supported at the moment.
>
> But I seriously doubt changing this in engine’s SQL database will be a good 
> idea at all, since the way hosted-engine’s configuration is shared across 
> each node (/etc/ovirt-hosted-engine/hosted-engine.conf, 
> /var/lib/ovirt-hsoted-engine-ha, configuration inside hosted-engine’s storage 
> domain: hosted-engine.metadata)
>
>
>
> I had hoped hosted-engine or hosted-engine-setup cli could be extended to 
> support this use case. I’m willing to help out testing any (manual) procedure 
> to get this implemented (e.g. undeploy hosted-engine on two of three nodes, 
> make changes to hosted-engine.metadata, hosted-engine.conf, start/stop 
> hosted-engine, run SQL commands inside engine)
>
>
>
> Sincerely
>
> André
>
>
>
> Von: Renout Gerrits [mailto:m...@renout.nl]
> Gesendet: Mittwoch, 4. Juli 2018 12:04
> An: Liebe, André-Sebastian
> Cc: Gobinda Das; Alex K; users
> Betreff: Re: [ovirt-users] Re: hyperconverged cluster - how to change the 
> mount path?
>
>
>
> unsupported, make backups, use at your own risk etc...
>
>
>
> you could update the db if you can't put the storage domain into maintenance
>
> after that put your hosts into maintenance and out again to remount
>
>
>
> find the id of the sd you want to update with:
>
>   engine=# select * from storage_server_connections;
>
>
>
> ensure you have the correct id; the following should point to the old mount 
> point:
>
>   engine=# select connection from storage_server_connections where id='<id from output above>';
>
>
>
> next update your db
>
>   engine=# update storage_server_connections set connection='<new mount point>' where id='<id>';
>
>
>
>
>
>
>
> On Wed, Jul 4, 2018 at 9:13 AM, Liebe, André-Sebastian 
>  wrote:
>
> Yeah, sorry that doesn’t work.
>
> I can’t set hosted_storage (storage domain where hosted engine runs on) into 
> maintenance mode to be able to edit it.
>
>
>
> André
>
>
>
> Von: Gobinda Das [mailto:go...@redhat.com]
> Gesendet: Montag, 2. Juli 2018 09:00
> An: Alex K
> Cc: Liebe, André-Sebastian; users
> Betreff: Re: [ovirt-users] Re: hyperconverged cluster - how to change the 
> mount path?
>
>
>
> You can do it by using the "Manage Domain" option from the Storage Domain.
>
>
>
> On Sun, Jul 1, 2018 at 7:02 PM, Alex K  wrote:
>
> The steps roughly would be to put that storage domain in maintenance then 
> edit/redefine it. You have the option to set gluster mount point options for 
> the redundancy part. No need to set dns round robin.
>
>
>
> Alex
>
>
>
> On Sun, Jul 1, 2018, 13:29 Liebe, André-Sebastian  
> wrote:
>
> Hi list,
>
> I'm looking for an advice how to change the mount point of the hosted_storage 
> due to a hostname change.
>
> When I set up our hyperconverged lab cluster (host1, host2, host3) I 
> populated the mount path with host3:/hosted_storage which wasn't very clever 
> as it brings in a single point of failure (i.e. when host3 is down).
> So I thought adding a round robin dns/hosts entry (i.e. gluster1) for host 1 
> to 3 and changing the mount path would be a better idea. But the mount path 
> entry is locked in web gui and I couldn't find any hint how to change it 
> manually (in database, shared and local configuration) in a consistent way 
> without risking the cluster.
> So, is there a step by step guide how to achieve this without reinstalling 
> (from backup)?
>
>
> Sincerely
>
> André-Sebastian Liebe
> Technik / Innovation
>
> gematik
> Gesellschaft für Telematikanwendungen der Gesundheitskarte mbH
> Friedrichstraße 136
> 10117 Berlin
> Telefon: +49 30 40041-197
> Telefax: +49 30 40041-111
> E-Mail:  andre.li...@gematik.de
> www.gematik.de
> ___
> Amtsgericht Berlin-Charlottenburg HRB 96351 B
> Geschäftsführer: Alexander Beyer
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/B2R6G3VCK545RKT5BMAQ5EXO4ZFJSMFG/
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: 

[ovirt-users] Re: Internal Server Error 'AutoProxy[instance]' object has no attribute 'glusterLogicalVolumeList'

2018-06-26 Thread Hesham Ahmed
With upgrade to oVirt 4.2.4 (both engine and nodes) the error is
replaced with the following similar error:
Jun 26 16:16:28 vhost03.somedomain.com vdsm[6465]: ERROR Internal server error
    Traceback (most recent call last):
      File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 606, in _handle_request
        res = method(**params)
      File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 197, in _dynamicMethod
        result = fn(*methodArgs)
      File "/usr/lib/python2.7/site-packages/vdsm/gluster/apiwrapper.py", line 91, in vdoVolumeList
        return self._gluster.vdoVolumeList()
      File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 90, in wrapper
        rv = func(*args, **kwargs)
      File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 818, in vdoVolumeList
        status = self.svdsmProxy.glusterVdoVolumeList()
      File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 55, in __call__
        return callMethod()
      File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 53, in <lambda>
        **kwargs)
      File "<string>", line 2, in glusterVdoVolumeList
      File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
        raise convert_to_error(kind, result)
    OSError: [Errno 2] No such file or directory: vdo

On Mon, Jun 25, 2018 at 6:09 AM Hesham Ahmed  wrote:
>
> I am receiving the following error in journal repeatedly every few minutes on 
> all 3 nodes of a hyperconverged oVirt 4.2.3 setup running oVirt Nodes:
>
> Jun 25 06:03:26 vhost01.somedomain.com vdsm[45222]: ERROR Internal server error
>     Traceback (most recent call last):
>       File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 606, in _handle_request
>         res = method(**params)
>       File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 197, in _dynamicMethod
>         result = fn(*methodArgs)
>       File "/usr/lib/python2.7/site-packages/vdsm/gluster/apiwrapper.py", line 85, in logicalVolumeList
>         return self._gluster.logicalVolumeList()
>       File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 90, in wrapper
>         rv = func(*args, **kwargs)
>       File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 808, in logicalVolumeList
>         status = self.svdsmProxy.glusterLogicalVolumeList()
>       File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 55, in __call__
>         return callMethod()
>       File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 52, in <lambda>
>         getattr(self._supervdsmProxy._svdsm, self._funcName)(*args,
>     AttributeError: 'AutoProxy[instance]' object has no attribute 'glusterLogicalVolumeList'
>
> And in /var/log/vdsm/vdsm.log
>
> 2018-06-25 06:03:24,118+0300 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC 
> call Host.getCapabilities succeeded in 0.79 seconds (__init__:573)
> 2018-

[ovirt-users] Internal Server Error 'AutoProxy[instance]' object has no attribute 'glusterLogicalVolumeList'

2018-06-24 Thread Hesham Ahmed
I am receiving the following error in journal repeatedly every few minutes
on all 3 nodes of a hyperconverged oVirt 4.2.3 setup running oVirt Nodes:

Jun 25 06:03:26 vhost01.somedomain.com vdsm[45222]: ERROR Internal server error
    Traceback (most recent call last):
      File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 606, in _handle_request
        res = method(**params)
      File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 197, in _dynamicMethod
        result = fn(*methodArgs)
      File "/usr/lib/python2.7/site-packages/vdsm/gluster/apiwrapper.py", line 85, in logicalVolumeList
        return self._gluster.logicalVolumeList()
      File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 90, in wrapper
        rv = func(*args, **kwargs)
      File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 808, in logicalVolumeList
        status = self.svdsmProxy.glusterLogicalVolumeList()
      File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 55, in __call__
        return callMethod()
      File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 52, in <lambda>
        getattr(self._supervdsmProxy._svdsm, self._funcName)(*args,
    AttributeError: 'AutoProxy[instance]' object has no attribute 'glusterLogicalVolumeList'

And in /var/log/vdsm/vdsm.log

2018-06-25 06:03:24,118+0300 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getCapabilities succeeded in 0.79 seconds (__init__:573)
2018-06-25 06:03:26,106+0300 ERROR (jsonrpc/0) [jsonrpc.JsonRpcServer] Internal server error (__init__:611)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 606, in _handle_request
    res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 197, in _dynamicMethod
    result = fn(*methodArgs)
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/apiwrapper.py", line 85, in logicalVolumeList
    return self._gluster.logicalVolumeList()
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 90, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/gluster/api.py", line 808, in logicalVolumeList
    status = self.svdsmProxy.glusterLogicalVolumeList()
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 55, in __call__
    return callMethod()
  File "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 52, in <lambda>
    getattr(self._supervdsmProxy._svdsm, self._funcName)(*args,
AttributeError: 'AutoProxy[instance]' object has no attribute 'glusterLogicalVolumeList'
2018-06-25 06:03:26,107+0300 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call GlusterHost.logicalVolumeList failed (error -32603) in 0.00 seconds (__init__:573)

There are no apparent affects of this on the operation of the servers
though. Any idea why these are there?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3HLX53X3PV5IHFTWMZZUMJMJPXW3JINQ/


Re: [ovirt-users] Gluster Snapshot Schedule Failing on 4.2.1

2018-03-08 Thread Hesham Ahmed
Log file attached to the bug. Do let me know if you need anything else.

On Thu, Mar 8, 2018, 4:32 PM Sahina Bose <sab...@redhat.com> wrote:

> Thanks for your report, we will take a look. Could you attach the
> engine.log to the bug?
>
> On Wed, Mar 7, 2018 at 11:20 PM, Hesham Ahmed <hsah...@gmail.com> wrote:
>
>> I am having issues with the Gluster Snapshot UI since upgrade to 4.2 and
>> now with 4.2.1. The UI doesn't appear as I explained in the bug report:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1530186
>>
>> I can now see the UI when I clear the cookies and try the snapshots UI
>> from within the volume details screen, however scheduled snapshots are not
>> being created. The engine log shows a single error:
>>
>> 2018-03-07 20:00:00,051+03 ERROR
>> [org.ovirt.engine.core.utils.timer.JobWrapper] (QuartzOvirtDBScheduler1)
>> [12237b15] Failed to invoke scheduled method onTimer: null
>>
>> Anyone scheduling snapshots successfully with 4.2?
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Gluster Snapshot Schedule Failing on 4.2.1

2018-03-07 Thread Hesham Ahmed
I am having issues with the Gluster Snapshot UI since upgrade to 4.2 and
now with 4.2.1. The UI doesn't appear as I explained in the bug report:
https://bugzilla.redhat.com/show_bug.cgi?id=1530186

I can now see the UI when I clear the cookies and try the snapshots UI from
within the volume details screen, however scheduled snapshots are not being
created. The engine log shows a single error:

2018-03-07 20:00:00,051+03 ERROR
[org.ovirt.engine.core.utils.timer.JobWrapper] (QuartzOvirtDBScheduler1)
[12237b15] Failed to invoke scheduled method onTimer: null

Anyone scheduling snapshots successfully with 4.2?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users