[ovirt-users] email

2019-09-11 Thread Mustafa Taha
mustafa.t...@exalt.ps
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/27RWRVHPNWWUOLG2AYIQWPP36JYRRSOS/


[ovirt-users] Re: Gluster: Bricks remove failed

2019-09-11 Thread Sahina Bose
Can you provide the output of "gluster volume info" from before the
remove-brick was done? It's not clear whether you were reducing the replica
count or removing a replica subvolume.
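
For reference, the two operations differ on the CLI. A hedged illustration
with placeholder volume/brick names (standard gluster syntax):

# Reducing the replica count (no data migration is involved):
gluster volume remove-brick VOLNAME replica 2 host3:/brick force

# Removing a replica subvolume from a distributed-replicate volume
# (data is migrated off the removed bricks, then the removal is committed):
gluster volume remove-brick VOLNAME host1:/brick host2:/brick host3:/brick start
gluster volume remove-brick VOLNAME host1:/brick host2:/brick host3:/brick commit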


On Wed, Sep 11, 2019 at 4:23 PM  wrote:

> Hi.
> There is an ovirt-hosted-engine on a gluster volume named "engine"
> (type Replicate, replica count 3).
>
> I am migrating to other drives. Here is what I did:
> gluster volume add-brick engine clemens:/gluster-bricks/engine
> tiberius:/gluster-bricks/engine octavius:/gluster-bricks/engine force
> volume add-brick: success
>
> gluster volume remove-brick engine tiberius:/engine/datastore
> clemens:/engine/datastore octavius:/engine/datastore start
> volume remove-brick start: success
> ID: dd9453d3-b688-4ed8-ad37-ba901615046c
>
> gluster volume remove-brick engine octavius:/engine/datastore status
>       Node  Rebalanced-files    size    scanned  failures  skipped     status   run time in h:m:s
> ----------  ----------------  --------  -------  --------  -------  ---------  -----------------
>  localhost                 7    50.0GB       34         1        0  completed            0:00:02
>    clemens                11    21.0MB       31         0        0  completed            0:00:02
>   tiberius                12    25.0MB       36         0        0  completed            0:00:02
>
> gluster volume status engine
> Status of volume: engine
> Gluster process                        TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------
> Brick octavius:/engine/datastore       49156     0          Y       15669
> Brick tiberius:/engine/datastore       49156     0          Y       15930
> Brick clemens:/engine/datastore        49156     0          Y       16193
> Brick clemens:/gluster-bricks/engine   49159     0          Y       6168
> Brick tiberius:/gluster-bricks/engine  49163     0          Y       29524
> Brick octavius:/gluster-bricks/engine  49159     0          Y       50056
> Self-heal Daemon on localhost          N/A       N/A        Y       50087
> Self-heal Daemon on clemens            N/A       N/A        Y       6263
> Self-heal Daemon on tiberius           N/A       N/A        Y       29583
>
> Task Status of Volume engine
>
> --
> Task : Remove brick
> ID   : dd9453d3-b688-4ed8-ad37-ba901615046c
> Removed bricks:
> tiberius:/engine/datastore
> clemens:/engine/datastore
> octavius:/engine/datastore
> Status   : completed
>
>
> But the data did not migrate
>
> du -hs /gluster-bricks/engine/ /engine/datastore/
> 49M /gluster-bricks/engine/
> 20G /engine/datastore/
>
> Can you give some advice?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q2KHWTOXBWO6FQKKTM36OIEMN2B6GYQK/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WD5AMFARITLPV3ZU2TSVKIH7A3OIP5EK/


[ovirt-users] Re: Health Warning: Cinnamon toxic when adding EL7 nodes as oVirt hosts

2019-09-11 Thread thomas
Hi Didi, I finally figured it out:

Cinnamon pulls python3 as a dependency.

The otopi script from the ovirt-host-deploy.tar that gets transferred from the
management engine to the host being added prefers python3, using this logic in
otopi-functions.py:

get_otopi_python() {
    # Prefer python3; fall back to python if no python3 interpreter is found
    for p in "python3" "python"
    do
        pyexec=$(find_otopi ${p})
        if [ -n "${pyexec}" ]; then
            echo "${pyexec}"
            break
        fi
    done
}

but on CentOS 7 the yum bindings have no Python 3 site-packages, whereas
CentOS 8 will be all dnf and python3, I guess.

That means rpmUtils cannot be imported, which causes the initialization of
miniyum to fail.
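
A quick way to confirm this on an affected host (a hedged check; rpmUtils is
part of the Python 2 yum stack and has no Python 3 build on EL7):

python3 -c 'import rpmUtils'   # fails -- no python3 yum bindings on CentOS 7
python2 -c 'import rpmUtils'   # works -- rpmUtils ships with the python2 yum stack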

With Python 3 becoming standard in CentOS 7.7, this could become a more
frequent issue.

That was hard to track down, but yes, in the end the source code helped, and
then the fact that the otopi script could be executed stand-alone.

I will file a bug report and perhaps the documentation could be updated to 
include a Python3 warning.

Thanks for your support,

Thomas
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2QUBDSFTWNL27NAEAWYCX3A3Y4MOM32Y/


[ovirt-users] Deploying the Self-Hosted Engine Loop

2019-09-11 Thread mustafa . taha . mu95
Hi
I am trying to deploy the self-hosted engine with the latest oVirt 4.3 on
CentOS 7.6.1810, but the deployment (the first stage) gets stuck at the
following logs:

[ INFO  ] TASK [ovirt.hosted_engine_setup : Set new IPv4 subnet prefix]
[ INFO  ] skipping: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Search again with another prefix]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Define 3rd chunk]
[ INFO  ] skipping: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Set 3rd chunk]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Get ip route]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Fail if can't find an available subnet]
[ INFO  ] skipping: [localhost]

Can anybody help?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HQF23GR6TYKSD5L2S53RRR7JVVUQIFJB/


[ovirt-users] Re: Does cluster upgrade wait for heal before proceeding to next host?

2019-09-11 Thread Jayme
This sounds similar to the issue I hit with the cluster upgrade process in
my environment. I have large 2TB SSDs and most of my VMs are several
hundred GB in size. The heal process after a host reboot can take 5-10
minutes to complete. I may be able to address this with better gluster
tuning.

Either way the upgrade process should be aware of the heal status and wait
for it to complete before attempting to move on to the next host.
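
As a manual stopgap between hosts, something like this could gate the next
upgrade (a minimal sketch, assuming a volume named "engine" and the standard
gluster CLI):

# Poll until every brick reports zero pending heal entries:
until [ "$(gluster volume heal engine info | awk '/Number of entries:/ {s+=$NF} END {print s+0}')" -eq 0 ]; do
    echo "heal in progress, waiting..."
    sleep 30
done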


On Wed, Sep 11, 2019 at 3:53 AM Sahina Bose  wrote:

>
>
> On Fri, Aug 9, 2019 at 3:41 PM Martin Perina  wrote:
>
>>
>>
>> On Thu, Aug 8, 2019 at 10:25 AM Sandro Bonazzola 
>> wrote:
>>
>>>
>>>
>>> Il giorno mar 6 ago 2019 alle ore 23:17 Jayme  ha
>>> scritto:
>>>
 I’m aware of the heal process but it’s unclear to me if the update
 continues to run while the volumes are healing and resumes when they are
 done. There doesn’t seem to be any indication in the ui (unless I’m
 mistaken)

>>>
>>> Adding @Martin Perina, @Sahina Bose, and @Laura Wright on this;
>>> hyperconverged deployments using cluster upgrade command would probably
>>> need some improvement.
>>>
>>
>> The cluster upgrade process continues to the 2nd host after the 1st host
>> becomes Up. If 2nd host then fails to switch to maintenance, we stop the
>> upgrade process to prevent breakage.
>> Sahina, is the gluster healing process status exposed in the RESTAPI? If so,
>> does it make sense to wait for healing to be finished before trying to move
>> the next host to maintenance? Or any other ideas on how to improve this?
>>
>
> I need to cross-check whether we expose the heal count in the gluster
> bricks. Moving a host to maintenance does check if there are pending heal
> entries or the possibility of quorum loss. And this would prevent the
> additional hosts from upgrading.
> +Gobinda Das  +Sachidananda URS 
>
>
>>>
>>>

 On Tue, Aug 6, 2019 at 6:06 PM Robert O'Kane  wrote:

> Hello,
>
> Often(?), updates to a hypervisor that also has (provides) a Gluster
> brick takes the hypervisor offline (updates often require a reboot).
>
> This reboot then makes the brick "out of sync" and it has to be
> resync'd.
>
> I find it a "feature" that another host that is also part of a gluster
> domain cannot be updated (rebooted) before all the bricks are updated,
> in order to guarantee there is no data loss. It is called Quorum, or?
>
> Always let the heal process end. Then the next update can start.
> For me there is ALWAYS a healing time before Gluster is happy again.
>
> Cheers,
>
> Robert O'Kane
>
>
> Am 06.08.2019 um 16:38 schrieb Shani Leviim:
> > Hi Jayme,
> > I can't recall such a healing time.
> > Can you please retry and attach the engine & vdsm logs so we'll be
> smarter?
> >
> > Regards,
> > Shani Leviim
> >
> >
> > On Tue, Aug 6, 2019 at 5:24 PM Jayme  > > wrote:
> >
> > I've yet to have cluster upgrade finish updating my three host
> HCI
> > cluster.  The most recent try was today moving from oVirt 4.3.3
> to
> > 4.3.5.5.  The first host updates normally, but when it moves on
> to
> > the second host it fails to put it in maintenance and the cluster
> > upgrade stops.
> >
> > I suspect this is due to that fact that after my hosts are
> updated
> > it takes 10 minutes or more for all volumes to sync/heal.  I have
> > 2Tb SSDs.
> >
> > Does the cluster upgrade process take heal time in to account
> before
> > attempting to place the next host in maintenance to upgrade it?
> Or
> > is there something else that may be at fault here, or perhaps a
> > reason why the heal process takes 10 minutes after reboot to
> complete?
> > ___
> > Users mailing list -- users@ovirt.org 
> > To unsubscribe send an email to users-le...@ovirt.org
> > 
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5XM3QB3364ZYIPAKY4KTTOSJZMCWHUPD/
> >
> >
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/GBX3L23MWGMTF7Q4KGVR63RIQZFYXGWK/
> >
>
> --
> Systems Administrator
> 

[ovirt-users] Re: Locked Snapshot

2019-09-11 Thread Jayme
On the hosted engine there is a script for clearing locks that I’ve used
before to repair similar issues. I think this may have been the info I
used:
https://lists.ovirt.org/pipermail/users/2016-May/039618.html
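
If I remember right, the helper lives on the engine host under
/usr/share/ovirt-engine/setup/dbutils. A hedged sketch (flags may differ by
version, so check the help output first; the snapshot ID is a placeholder):

cd /usr/share/ovirt-engine/setup/dbutils
./unlock_entity.sh -h                          # list supported types/options
./unlock_entity.sh -t disk -q                  # query mode: list locked disks
./unlock_entity.sh -t snapshot <snapshot-id>   # unlock a specific snapshot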

On Wed, Sep 11, 2019 at 8:08 AM Peter Smith 
wrote:

> Hi Eyal,
>
>
>
> In the mean time is there anything that I can do to release the lock and
> remove the snapshot, or will it need to wait for the next release?
>
>
>
> Now that I have had approval to send log files, they are attached,
> unmodified, vm with the issue is VM-APTREPO-001.
>
>
>
> Regards,
>
>
>
> Peter
>
>
>
>
>
>
>
> *From:* Eyal Shenitzky 
> *Sent:* 11 September 2019 11:30
> *To:* Peter Smith 
> *Cc:* users@ovirt.org
> *Subject:* Re: [ovirt-users] Locked Snapshot
>
>
>
> Hi Peter,
>
>
>
> I think that you hit -
>
> teardownImage attempts to deactivate in-use LV's rendering the VM disk
> image/volumes in locked state -
> https://bugzilla.redhat.com/show_bug.cgi?id=1749944
>
>
>
> A fix should be published soon.
>
>
>
> On Wed, 11 Sep 2019 at 12:34, Peter Smith 
> wrote:
>
> Hi,
>
>
>
> I have been trying to get a backup solution working for my VM’s, and have
> somehow managed to break the snapshot on one of my VM’s. I’m still fairly
> new to oVirt, we’re migrating from VMware. I have had this cluster running
> since May.
>
>
>
> The cluster:
>
> We are running oVirt on CentOS 7, with Hosted Engine.
>
> oVirt 4.3.5.4-1.el7 (according to the Help->About in the Admin Console)
>
> Hardware is 5 x Dell M620 Blades (300Gb internal storage for host OS) with
> ISCSI storage for the VM’s (Dell M3660i), 10GB Network throughout.
>
>
>
> The VM in question is allocated 50Gb  of storage and there is 461GB free
> in the storage domain.
>
>
>
> So I’ve been playing with bareos 18.2.5 with the ovirt plugin found here
> https://github.com/eurotux/bareos-contrib/tree/master/fd-plugins/ovirt-plugin
>
>
>
> I got bareos hooked up to ovirt. It created a snapshot, downloaded it,
> then at the end it failed to remove the snapshot. I thought it may have been
> because I was looking around bareos while it was doing the backup. I went into
> oVirt and deleted the snapshot with no issues. I re-ran the backup job in bareos,
> leaving bareos alone this time. The backup job failed again being unable to
> remove the snapshot. So I figured I’d go in and delete it, but I get an
> error saying “Cannot remove Snapshot: The following disks are locked:
> VM-NAME_vmdisk1. Please try again in a few minutes.”. In the Admin console
> if I expand the disks on the snapshot it shows that the image is locked.
> And before you ask the bareos user is setup as a SuperUser. If I go to the
> Events from the menu on the left of the admin console I can see that (Sorry
> Names/IP’s changed to protect the guilty):
>
>
>
> Sep 10, 2019, 10:42:39 AM Image Download with disk VM-NAME_vmdisk1
> succeeded.
>
> Sep 10, 2019, 10:42:38 AM VDSM hyper2 command TeardownImageVDS failed:
> Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'
> Logical volume
> 46a9dfd4-4871-46b2-a740-8fe430c70b4b/bbe315b9-e797-4cf8-b18b-6afeb33589cc
> in use.\', \'  Logical volume
> 46a9dfd4-4871-46b2-a740-8fe430c70b4b/d3dfa2c2-45fa-4081-98c5-890dd7a5990a
> in use.\']
> \\n46a9dfd4-4871-46b2-a740-8fe430c70b4b/[\'bbe315b9-e797-4cf8-b18b-6afeb33589cc\',
> \'d3dfa2c2-45fa-4081-98c5-890dd7a5990a\']",)',)
>
> Sep 10, 2019, 10:22:09 AM Image Download with disk VM-NAME_vmdisk1 was
> initiated by bareos@internal-authz.
>
> Sep 10, 2019, 10:21:55 AM Snapshot
> 'VM-NAME-backup-1f952494-8245-479e-8d1d-9503d6fe8f9f' creation for VM
> 'VM-NAME' has been completed.
>
> Sep 10, 2019, 10:21:36 AM Snapshot
> 'VM-NAME-backup-1f952494-8245-479e-8d1d-9503d6fe8f9f' creation for VM
> 'VM-NAME' was initiated by bareos@internal-authz.
>
> Sep 10, 2019, 10:21:36 AM Backup of virtual machine 'VM-NAME using
> snapshot 'VM-NAME-backup-1f952494-8245-479e-8d1d-9503d6fe8f9f' is starting.
>
> Sep 10, 2019, 10:21:36 AM User bareos@internal-authz connecting from
> '10.***' using session '' logged in.
>
>
>
>
>
> To me it looks like bareos tried to delete the snapshot whilst it was
> still downloading it, which failed as expected.
>
>
>
> Does anyone know how to fix/recover from this?
>
>
>
> If you need log files, please could you tell me which ones?
>
>
>
> Regards,
>
>
>
> Peter Smith
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/L5ZW7CHMFQGKKJRP3Q3UQIOZOTZKIXPI/
>
>
>
>
> --
>
> Regards,
>
> Eyal Shenitzky
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: 

[ovirt-users] Gluster: Bricks remove failed

2019-09-11 Thread toslavik
Hi.
There is an ovirt-hosted-engine on a gluster volume named "engine"
(type Replicate, replica count 3).

I am migrating to other drives. Here is what I did:
gluster volume add-brick engine clemens:/gluster-bricks/engine 
tiberius:/gluster-bricks/engine octavius:/gluster-bricks/engine force
volume add-brick: success

gluster volume remove-brick engine tiberius:/engine/datastore 
clemens:/engine/datastore octavius:/engine/datastore start
volume remove-brick start: success
ID: dd9453d3-b688-4ed8-ad37-ba901615046c

gluster volume remove-brick engine octavius:/engine/datastore status
      Node  Rebalanced-files    size    scanned  failures  skipped     status   run time in h:m:s
----------  ----------------  --------  -------  --------  -------  ---------  -----------------
 localhost                 7    50.0GB       34         1        0  completed            0:00:02
   clemens                11    21.0MB       31         0        0  completed            0:00:02
  tiberius                12    25.0MB       36         0        0  completed            0:00:02

gluster volume status engine
Status of volume: engine
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick octavius:/engine/datastore49156 0  Y   15669
Brick tiberius:/engine/datastore49156 0  Y   15930
Brick clemens:/engine/datastore 49156 0  Y   16193
Brick clemens:/gluster-bricks/engine49159 0  Y   6168 
Brick tiberius:/gluster-bricks/engine   49163 0  Y   29524
Brick octavius:/gluster-bricks/engine   49159 0  Y   50056
Self-heal Daemon on localhost   N/A   N/AY   50087
Self-heal Daemon on clemens N/A   N/AY   6263 
Self-heal Daemon on tiberiusN/A   N/AY   29583
 
Task Status of Volume engine
--
Task : Remove brick
ID   : dd9453d3-b688-4ed8-ad37-ba901615046c
Removed bricks: 
tiberius:/engine/datastore
clemens:/engine/datastore
octavius:/engine/datastore
Status   : completed  


But the data did not migrate

du -hs /gluster-bricks/engine/ /engine/datastore/
49M /gluster-bricks/engine/
20G /engine/datastore/

Can you give some advice?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q2KHWTOXBWO6FQKKTM36OIEMN2B6GYQK/


[ovirt-users] Re: How to change Master Data Storage Domain

2019-09-11 Thread Mark Steele
Success - thank you once again to the group.

***
*Mark Steele*
CIO / VP Technical Operations | TelVue Corporation
TelVue - We Share Your Vision
16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054
800.885.8886 x128 | mste...@telvue.com | http://www.telvue.com
twitter: http://twitter.com/telvue | facebook:
https://www.facebook.com/telvue


On Wed, Sep 11, 2019 at 6:28 AM Mark Steele  wrote:

> OK - so if I understand correctly - now that everything is moved off of
> the Master, I can put it into maintenance mode and this will cause a new
> Master election - am I prompted for the new master or does it select
> another available data domain?
>
> Thanks
>
> ***
> *Mark Steele*
> CIO / VP Technical Operations | TelVue Corporation
> TelVue - We Share Your Vision
> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054
> 800.885.8886 x128 | mste...@telvue.com | http://www.telvue.com
> twitter: http://twitter.com/telvue | facebook:
> https://www.facebook.com/telvue
>
>
> On Wed, Sep 11, 2019 at 5:19 AM Benny Zlotnik  wrote:
>
>> You don't need to remove it; the VM data will be available on the
>> OVF_STORE disks on the other SD, where you copied the disks to.
>> Once you put the domain in maintenance, a new master SD will be elected.
>>
>> On Wed, Sep 11, 2019 at 12:13 PM Mark Steele  wrote:
>>
>>> Good morning,
>>>
>>> I have a Storage Domain that I would like to retire and move that role
>>> to a new storage domain. Both Domains exist on my current Data Center and I
>>> have moved all disks from the existing Data (Master) domain to the new Data
>>> Domain.
>>>
>>> The only thing that is still associated with the Data (Master) domain
>>> are two OVF_STORE items:
>>>
>>> Alias      Virtual Size  Actual Size  Allocation Policy  Storage Domain  Storage Type  Creation Date       Attached To  Alignment  Status  Description
>>> OVF_STORE  < 1 GB        < 1 GB       Preallocated       phl-datastore   NFS           2014-Nov-14, 20:00               Unknown    OK      OVF_STORE
>>> OVF_STORE  < 1 GB        < 1 GB       Preallocated       phl-datastore   NFS           2014-Nov-14, 20:00               Unknown    OK      OVF_STORE
>>>
>>> What is the procedure for moving / removing these items and 'promoting'
>>> the other Data Domain to Master?
>>>
>>> Our current version is:
>>>
>>> oVirt Engine Version: 3.5.0.1-1.el6 (it's old but reliable)
>>>
>>> Best regards,
>>>
>>> ***
>>> *Mark Steele*
>>> CIO / VP Technical Operations | TelVue Corporation
>>> TelVue - We Share Your Vision
>>> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054
>>> 800.885.8886 x128 | mste...@telvue.com | http://www.telvue.com
>>> twitter: http://twitter.com/telvue | facebook:
>>> https://www.facebook.com/telvue
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VDIZ3S7DDBZ5D4QUNOG5D7L3QTJI7YFU/
>>>
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QRMHTRRDKKGELXBYT4RYPGUULPQ7GFI5/


[ovirt-users] Re: Locked Snapshot

2019-09-11 Thread Eyal Shenitzky
Hi Peter,

I think that you hit -
teardownImage attempts to deactivate in-use LV's rendering the VM disk
image/volumes in locked state -
https://bugzilla.redhat.com/show_bug.cgi?id=1749944

A fix should be published soon.

On Wed, 11 Sep 2019 at 12:34, Peter Smith 
wrote:

> Hi,
>
>
>
> I have been trying to get a backup solution working for my VM’s, and have
> somehow managed to break the snapshot on one of my VM’s. I’m still fairly
> new to oVirt, we’re migrating from VMware. I have had this cluster running
> since May.
>
>
>
> The cluster:
>
> We are running oVirt on CentOS 7, with Hosted Engine.
>
> oVirt 4.3.5.4-1.el7 (according to the Help->About in the Admin Console)
>
> Hardware is 5 x Dell M620 Blades (300Gb internal storage for host OS) with
> ISCSI storage for the VM’s (Dell M3660i), 10GB Network throughout.
>
>
>
> The VM in question is allocated 50Gb  of storage and there is 461GB free
> in the storage domain.
>
>
>
> So I’ve been playing with bareos 18.2.5 with the ovirt plugin found here
> https://github.com/eurotux/bareos-contrib/tree/master/fd-plugins/ovirt-plugin
>
>
>
> I got bareos hooked up to ovirt. It created a snapshot, downloaded it,
> then at the end it failed to remove the snapshot. I thought it may have been
> because I was looking around bareos while it was doing the backup. I went into
> oVirt and deleted the snapshot with no issues. I re-ran the backup job in bareos,
> leaving bareos alone this time. The backup job failed again being unable to
> remove the snapshot. So I figured I’d go in and delete it, but I get an
> error saying “Cannot remove Snapshot: The following disks are locked:
> VM-NAME_vmdisk1. Please try again in a few minutes.”. In the Admin console
> if I expand the disks on the snapshot it shows that the image is locked.
> And before you ask the bareos user is setup as a SuperUser. If I go to the
> Events from the menu on the left of the admin console I can see that (Sorry
> Names/IP’s changed to protect the guilty):
>
>
>
> Sep 10, 2019, 10:42:39 AM Image Download with disk VM-NAME_vmdisk1
> succeeded.
>
> Sep 10, 2019, 10:42:38 AM VDSM hyper2 command TeardownImageVDS failed:
> Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'
> Logical volume
> 46a9dfd4-4871-46b2-a740-8fe430c70b4b/bbe315b9-e797-4cf8-b18b-6afeb33589cc
> in use.\', \'  Logical volume
> 46a9dfd4-4871-46b2-a740-8fe430c70b4b/d3dfa2c2-45fa-4081-98c5-890dd7a5990a
> in
> use.\']\\n46a9dfd4-4871-46b2-a740-8fe430c70b4b/[\'bbe315b9-e797-4cf8-b18b-6afeb33589cc\',
> \'d3dfa2c2-45fa-4081-98c5-890dd7a5990a\']",)',)
>
> Sep 10, 2019, 10:22:09 AM Image Download with disk VM-NAME_vmdisk1 was
> initiated by bareos@internal-authz.
>
> Sep 10, 2019, 10:21:55 AM Snapshot
> 'VM-NAME-backup-1f952494-8245-479e-8d1d-9503d6fe8f9f' creation for VM
> 'VM-NAME' has been completed.
>
> Sep 10, 2019, 10:21:36 AM Snapshot
> 'VM-NAME-backup-1f952494-8245-479e-8d1d-9503d6fe8f9f' creation for VM
> 'VM-NAME' was initiated by bareos@internal-authz.
>
> Sep 10, 2019, 10:21:36 AM Backup of virtual machine 'VM-NAME using
> snapshot 'VM-NAME-backup-1f952494-8245-479e-8d1d-9503d6fe8f9f' is starting.
>
> Sep 10, 2019, 10:21:36 AM User bareos@internal-authz connecting from
> '10.***' using session '' logged in.
>
>
>
>
>
> To me it looks like bareos tried to delete the snapshot whilst it was
> still downloading it, which failed as expected.
>
>
>
> Does anyone know how to fix/recover from this?
>
>
>
> If you need log files, please could you tell me which ones?
>
>
>
> Regards,
>
>
>
> Peter Smith
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/L5ZW7CHMFQGKKJRP3Q3UQIOZOTZKIXPI/
>


-- 
Regards,
Eyal Shenitzky
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MDLQFON5OCRQFQO26ZL3QOCVXJH3ZESX/


[ovirt-users] Re: How to change Master Data Storage Domain

2019-09-11 Thread Mark Steele
OK - so if I understand correctly - now that everything is moved off of the
Master, I can put it into maintenance mode and this will cause a new Master
election - am I prompted for the new master or does it select another
available data domain?

Thanks

***
*Mark Steele*
CIO / VP Technical Operations | TelVue Corporation
TelVue - We Share Your Vision
16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054
800.885.8886 x128 | mste...@telvue.com | http://www.telvue.com
twitter: http://twitter.com/telvue | facebook:
https://www.facebook.com/telvue


On Wed, Sep 11, 2019 at 5:19 AM Benny Zlotnik  wrote:

> You don't need to remove it; the VM data will be available on the
> OVF_STORE disks on the other SD, where you copied the disks to.
> Once you put the domain in maintenance, a new master SD will be elected.
>
> On Wed, Sep 11, 2019 at 12:13 PM Mark Steele  wrote:
>
>> Good morning,
>>
>> I have a Storage Domain that I would like to retire and move that role to
>> a new storage domain. Both Domains exist on my current Data Center and I
>> have moved all disks from the existing Data (Master) domain to the new Data
>> Domain.
>>
>> The only thing that is still associated with the Data (Master) domain are
>> two OVF_STORE items:
>>
>> Alias      Virtual Size  Actual Size  Allocation Policy  Storage Domain  Storage Type  Creation Date       Attached To  Alignment  Status  Description
>> OVF_STORE  < 1 GB        < 1 GB       Preallocated       phl-datastore   NFS           2014-Nov-14, 20:00               Unknown    OK      OVF_STORE
>> OVF_STORE  < 1 GB        < 1 GB       Preallocated       phl-datastore   NFS           2014-Nov-14, 20:00               Unknown    OK      OVF_STORE
>>
>> What is the procedure for moving / removing these items and 'promoting'
>> the other Data Domain to Master?
>>
>> Our current version is:
>>
>> oVirt Engine Version: 3.5.0.1-1.el6 (it's old but reliable)
>>
>> Best regards,
>>
>> ***
>> *Mark Steele*
>> CIO / VP Technical Operations | TelVue Corporation
>> TelVue - We Share Your Vision
>> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054
>> 800.885.8886 x128 | mste...@telvue.com | http://www.telvue.com
>> twitter: http://twitter.com/telvue | facebook:
>> https://www.facebook.com/telvue
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VDIZ3S7DDBZ5D4QUNOG5D7L3QTJI7YFU/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/C67RCV5Z372IBN3ATCUBRFEKA5Y2WKSU/


[ovirt-users] Locked Snapshot

2019-09-11 Thread Peter Smith
Hi,

I have been trying to get a backup solution working for my VMs, and have
somehow managed to break the snapshot on one of my VMs. I'm still fairly new
to oVirt; we're migrating from VMware. I have had this cluster running since
May.

The cluster:
We are running oVirt on CentOS 7, with Hosted Engine.
oVirt 4.3.5.4-1.el7 (according to the Help->About in the Admin Console)
Hardware is 5 x Dell M620 Blades (300GB internal storage for the host OS) with
iSCSI storage for the VMs (Dell M3660i), 10Gb network throughout.

The VM in question is allocated 50GB of storage and there is 461GB free in the
storage domain.

So I've been playing with bareos 18.2.5 with the ovirt plugin found here 
https://github.com/eurotux/bareos-contrib/tree/master/fd-plugins/ovirt-plugin

I got bareos hooked up to oVirt. It created a snapshot, downloaded it, then at
the end it failed to remove the snapshot. I thought it may have been because I
was looking around bareos while it was doing the backup. I went into oVirt and
deleted the snapshot with no issues. I re-ran the backup job in bareos, leaving
bareos alone this time. The backup job failed again, being unable to remove the
snapshot. So I figured I'd go in and delete it, but I get an error saying
"Cannot remove Snapshot: The following disks are locked: VM-NAME_vmdisk1.
Please try again in a few minutes.". In the Admin console, if I expand the
disks on the snapshot it shows that the image is locked. And before you ask,
the bareos user is set up as a SuperUser. If I go to the Events from the menu
on the left of the admin console I can see the following (sorry, names/IPs
changed to protect the guilty):

Sep 10, 2019, 10:42:39 AM Image Download with disk VM-NAME_vmdisk1 succeeded.
Sep 10, 2019, 10:42:38 AM VDSM hyper2 command TeardownImageVDS failed: Cannot 
deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical 
volume 
46a9dfd4-4871-46b2-a740-8fe430c70b4b/bbe315b9-e797-4cf8-b18b-6afeb33589cc in 
use.\', \'  Logical volume 
46a9dfd4-4871-46b2-a740-8fe430c70b4b/d3dfa2c2-45fa-4081-98c5-890dd7a5990a in 
use.\']\\n46a9dfd4-4871-46b2-a740-8fe430c70b4b/[\'bbe315b9-e797-4cf8-b18b-6afeb33589cc\',
 \'d3dfa2c2-45fa-4081-98c5-890dd7a5990a\']",)',)
Sep 10, 2019, 10:22:09 AM Image Download with disk VM-NAME_vmdisk1 was 
initiated by bareos@internal-authz.
Sep 10, 2019, 10:21:55 AM Snapshot 
'VM-NAME-backup-1f952494-8245-479e-8d1d-9503d6fe8f9f' creation for VM 'VM-NAME' 
has been completed.
Sep 10, 2019, 10:21:36 AM Snapshot 
'VM-NAME-backup-1f952494-8245-479e-8d1d-9503d6fe8f9f' creation for VM 'VM-NAME' 
was initiated by bareos@internal-authz.
Sep 10, 2019, 10:21:36 AM Backup of virtual machine 'VM-NAME using snapshot 
'VM-NAME-backup-1f952494-8245-479e-8d1d-9503d6fe8f9f' is starting.
Sep 10, 2019, 10:21:36 AM User bareos@internal-authz connecting from 
'10.***' using session '' logged in.


To me it looks like bareos tried to delete the snapshot whilst it was still 
downloading it, which failed as expected.

Does anyone know how to fix/recover from this?

If you need log files, please could you tell me which ones?

Regards,

Peter Smith
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/L5ZW7CHMFQGKKJRP3Q3UQIOZOTZKIXPI/


[ovirt-users] Re: How to change Master Data Storage Domain

2019-09-11 Thread Benny Zlotnik
You don't need to remove it; the VM data will be available on the OVF_STORE
disks on the other SD, where you copied the disks to.
Once you put the domain in maintenance, a new master SD will be elected.
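
If you want to script it, deactivation is also exposed through the REST API.
A hedged sketch with placeholder credentials and IDs (on a 3.5 engine the
base path may be /api rather than /ovirt-engine/api):

curl -k -u admin@internal:PASSWORD \
     -X POST -H "Content-Type: application/xml" -d "<action/>" \
     "https://engine.example.com/api/datacenters/DC_ID/storagedomains/SD_ID/deactivate"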

On Wed, Sep 11, 2019 at 12:13 PM Mark Steele  wrote:

> Good morning,
>
> I have a Storage Domain that I would like to retire and move that role to
> a new storage domain. Both Domains exist on my current Data Center and I
> have moved all disks from the existing Data (Master) domain to the new Data
> Domain.
>
> The only thing that is still associated with the Data (Master) domain are
> two OVF_STORE items:
>
> Alias      Virtual Size  Actual Size  Allocation Policy  Storage Domain  Storage Type  Creation Date       Attached To  Alignment  Status  Description
> OVF_STORE  < 1 GB        < 1 GB       Preallocated       phl-datastore   NFS           2014-Nov-14, 20:00               Unknown    OK      OVF_STORE
> OVF_STORE  < 1 GB        < 1 GB       Preallocated       phl-datastore   NFS           2014-Nov-14, 20:00               Unknown    OK      OVF_STORE
>
> What is the procedure for moving / removing these items and 'promoting'
> the other Data Domain to Master?
>
> Our current version is:
>
> oVirt Engine Version: 3.5.0.1-1.el6 (it's old but reliable)
>
> Best regards,
>
> ***
> *Mark Steele*
> CIO / VP Technical Operations | TelVue Corporation
> TelVue - We Share Your Vision
> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054
> 800.885.8886 x128 | mste...@telvue.com | http://www.telvue.com
> twitter: http://twitter.com/telvue | facebook:
> https://www.facebook.com/telvue
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VDIZ3S7DDBZ5D4QUNOG5D7L3QTJI7YFU/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XB7JNHS62JMEYZRPZQNX5YGQLPKABE76/


[ovirt-users] Re: How to change Master Data Storage Domain

2019-09-11 Thread Eyal Shenitzky
Hi Mark,

The only way to move the 'master' role to another storage domain is to
deactivate the current 'master' storage domain.

Thanks,

On Wed, 11 Sep 2019 at 12:12, Mark Steele  wrote:

> Good morning,
>
> I have a Storage Domain that I would like to retire and move that role to
> a new storage domain. Both Domains exist on my current Data Center and I
> have moved all disks from the existing Data (Master) domain to the new Data
> Domain.
>
> The only thing that is still associated with the Data (Master) domain are
> two OVF_STORE items:
>
> Alias
> Virtual Size
> Actual Size
> Allocation Policy
> Storage Domain
> Storage Type
> Creation Date
> Attached To
> Alignment
> Status
> Description
> OVF_STORE
> < 1 GB
> < 1 GB
> Preallocated
> phl-datastore
> NFS
> 2014-Nov-14, 20:00
> Unknown
> OK
> OVF_STORE
> OVF_STORE
> < 1 GB
> < 1 GB
> Preallocated
> phl-datastore
> NFS
> 2014-Nov-14, 20:00
> Unknown
> OK
> OVF_STORE
>
> What is the procedure for moving / removing these items and 'promoting'
> the other Data Domain to Master?
>
> Our current version is:
>
> oVirt Engine Version: 3.5.0.1-1.el6 (it's old but reliable)
>
> Best regards,
>
> ***
> *Mark Steele*
> CIO / VP Technical Operations | TelVue Corporation
> TelVue - We Share Your Vision
> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054
> 800.885.8886 x128 | mste...@telvue.com | http://www.telvue.com
> twitter: http://twitter.com/telvue | facebook:
> https://www.facebook.com/telvue
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VDIZ3S7DDBZ5D4QUNOG5D7L3QTJI7YFU/
>


-- 
Regards,
Eyal Shenitzky
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HXNGACSQ6GTI6OJ7IVI5774NSYACTPR4/


[ovirt-users] How to change Master Data Storage Domain

2019-09-11 Thread Mark Steele
Good morning,

I have a Storage Domain that I would like to retire and move that role to a
new storage domain. Both Domains exist on my current Data Center and I have
moved all disks from the existing Data (Master) domain to the new Data
Domain.

The only thing that is still associated with the Data (Master) domain are
two OVF_STORE items:

Alias      Virtual Size  Actual Size  Allocation Policy  Storage Domain  Storage Type  Creation Date       Attached To  Alignment  Status  Description
OVF_STORE  < 1 GB        < 1 GB       Preallocated       phl-datastore   NFS           2014-Nov-14, 20:00               Unknown    OK      OVF_STORE
OVF_STORE  < 1 GB        < 1 GB       Preallocated       phl-datastore   NFS           2014-Nov-14, 20:00               Unknown    OK      OVF_STORE

What is the procedure for moving / removing these items and 'promoting' the
other Data Domain to Master?

Our current version is:

oVirt Engine Version: 3.5.0.1-1.el6 (it's old but reliable)

Best regards,

***
*Mark Steele*
CIO / VP Technical Operations | TelVue Corporation
TelVue - We Share Your Vision
16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054
800.885.8886 x128 | mste...@telvue.com | http://www.telvue.com
twitter: http://twitter.com/telvue | facebook:
https://www.facebook.com/telvue
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VDIZ3S7DDBZ5D4QUNOG5D7L3QTJI7YFU/


[ovirt-users] Re: ovirt with glusterfs data domain - very slow writing speed on Windows server virtual machine

2019-09-11 Thread Sahina Bose
More details on the tests you ran, along with gluster profile data captured
while the tests were running, would help us analyse this.
Similar to my request to another user on this thread, you can also help provide
some feedback on the data-gathering ansible scripts by trying out
https://github.com/gluster/gluster-ansible-maintenance/pull/4
These scripts gather data from the hosts, and perform I/O tests on the VM.
+Sachidananda URS
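
To capture the profile data, something along these lines works with the
standard gluster CLI (volume name taken from your mail):

gluster volume profile bottle-volume start
# ... run the write test inside the VM ...
gluster volume profile bottle-volume info > profile-during-test.txt
gluster volume profile bottle-volume stop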

On Thu, Aug 15, 2019 at 1:39 PM 
wrote:

> Hi folks,
>
>  I have been experimenting with an oVirt cluster based on glusterfs for the
> past few days (first-timer). The cluster is up and running, and it consists
> of 4 nodes and has 4 replicas. When I try to deploy a Windows Server VM I
> encounter the following issue: the VM's disk has OK read speed (close to
> bare metal) but the write speed is very slow (about 10 times slower than it
> should be). Can anyone give me a suggestion, please? Thanks in advance!
> Here are the settings of the glusterfs volume:
>
> Volume Name: bottle-volume
> Type: Replicate
> Volume ID: 869b8d1e-1266-4820-8dcd-4fea92346b90
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 4 = 4
> Transport-type: tcp
> Bricks:
> Brick1:
> cnode01.bottleship.local:/gluster_bricks/brick-cnode01/brick-cnode01
> Brick2:
> cnode02.bottleship.local:/gluster_bricks/brick-cnode02/brick-cnode02
> Brick3:
> cnode03.bottleship.local:/gluster_bricks/brick-cnode03/brick-cnode03
> Brick4:
> cnode04.bottleship.local:/gluster_bricks/brick-cnode04/brick-cnode04
> Options Reconfigured:
> network.ping-timeout: 30
> cluster.granular-entry-heal: enable
> performance.strict-o-direct: on
> storage.owner-gid: 36
> storage.owner-uid: 36
> server.event-threads: 4
> client.event-threads: 4
> cluster.choose-local: off
> features.shard: on
> cluster.shd-wait-qlength: 1
> cluster.shd-max-threads: 8
> cluster.locking-scheme: granular
> cluster.data-self-heal-algorithm: full
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> cluster.eager-lock: enable
> network.remote-dio: off
> performance.low-prio-threads: 32
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> auth.allow: *
> user.cifs: off
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
> server.allow-insecure: on
>
> Please let me know if you need any more configuration info or hardware
> specs.
>
> best,
>
> Lyubo
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/JRD7377WQX6FQAZRLJ6YUHIYRRAFZLIY/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PDZG4R4LUMWEUSY3SWCDWS7PIAK2JYN6/


[ovirt-users] CentOS 6 in oVirt

2019-09-11 Thread Dominik Holler
Hello,
are the official CentOS 6 cloud images known to work in oVirt?
I tried, but they seem not to work in oVirt [1], though they do work on plain
libvirt. Are there any known pitfalls when using them in oVirt?
Thanks
Dominik


[1]
  https://gerrit.ovirt.org/#/c/101117/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TN5SEECVFGCCMQ2QACYYEZXNLFGBYDZH/


[ovirt-users] Re: Does cluster upgrade wait for heal before proceeding to next host?

2019-09-11 Thread Sahina Bose
On Fri, Aug 9, 2019 at 3:41 PM Martin Perina  wrote:

>
>
> On Thu, Aug 8, 2019 at 10:25 AM Sandro Bonazzola 
> wrote:
>
>>
>>
>> Il giorno mar 6 ago 2019 alle ore 23:17 Jayme  ha
>> scritto:
>>
>>> I’m aware of the heal process but it’s unclear to me if the update
>>> continues to run while the volumes are healing and resumes when they are
>>> done. There doesn’t seem to be any indication in the ui (unless I’m
>>> mistaken)
>>>
>>
>> Adding @Martin Perina, @Sahina Bose, and @Laura Wright on this;
>> hyperconverged deployments using cluster upgrade command would probably
>> need some improvement.
>>
>
> The cluster upgrade process continues to the 2nd host after the 1st host
> becomes Up. If 2nd host then fails to switch to maintenance, we stop the
> upgrade process to prevent breakage.
> Sahina, is the gluster healing process status exposed in the RESTAPI? If so,
> does it make sense to wait for healing to be finished before trying to move
> the next host to maintenance? Or any other ideas on how to improve this?
>

I need to cross-check whether we expose the heal count in the gluster
bricks. Moving a host to maintenance does check if there are pending heal
entries or the possibility of quorum loss. And this would prevent the
additional hosts from upgrading.
+Gobinda Das  +Sachidananda URS 


>>
>>
>>>
>>> On Tue, Aug 6, 2019 at 6:06 PM Robert O'Kane  wrote:
>>>
 Hello,

 Often(?), updates to a hypervisor that also has (provides) a Gluster
 brick takes the hypervisor offline (updates often require a reboot).

 This reboot then makes the brick "out of sync" and it has to be
 resync'd.

 I find it a "feature" that another host that is also part of a gluster
 domain cannot be updated (rebooted) before all the bricks are updated,
 in order to guarantee there is no data loss. It is called Quorum, or?

 Always let the heal process end. Then the next update can start.
 For me there is ALWAYS a healing time before Gluster is happy again.

 Cheers,

 Robert O'Kane


 Am 06.08.2019 um 16:38 schrieb Shani Leviim:
 > Hi Jayme,
 > I can't recall such a healing time.
 > Can you please retry and attach the engine & vdsm logs so we'll be
 smarter?
 >
 > Regards,
 > Shani Leviim
 >
 >
 > On Tue, Aug 6, 2019 at 5:24 PM Jayme >>> > > wrote:
 >
 > I've yet to have cluster upgrade finish updating my three host HCI
 > cluster.  The most recent try was today moving from oVirt 4.3.3 to
 > 4.3.5.5.  The first host updates normally, but when it moves on to
 > the second host it fails to put it in maintenance and the cluster
 > upgrade stops.
 >
 > I suspect this is due to that fact that after my hosts are updated
 > it takes 10 minutes or more for all volumes to sync/heal.  I have
 > 2Tb SSDs.
 >
 > Does the cluster upgrade process take heal time in to account
 before
 > attempting to place the next host in maintenance to upgrade it? Or
 > is there something else that may be at fault here, or perhaps a
 > reason why the heal process takes 10 minutes after reboot to
 complete?
 > ___
 > Users mailing list -- users@ovirt.org 
 > To unsubscribe send an email to users-le...@ovirt.org
 > 
 > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
 > oVirt Code of Conduct:
 > https://www.ovirt.org/community/about/community-guidelines/
 > List Archives:
 >
 https://lists.ovirt.org/archives/list/users@ovirt.org/message/5XM3QB3364ZYIPAKY4KTTOSJZMCWHUPD/
 >
 >
 > ___
 > Users mailing list -- users@ovirt.org
 > To unsubscribe send an email to users-le...@ovirt.org
 > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
 > oVirt Code of Conduct:
 https://www.ovirt.org/community/about/community-guidelines/
 > List Archives:
 https://lists.ovirt.org/archives/list/users@ovirt.org/message/GBX3L23MWGMTF7Q4KGVR63RIQZFYXGWK/
 >

 --
 Systems Administrator
 Kunsthochschule für Medien Köln
 Peter-Welter-Platz 2
 50676 Köln
 ___
 Users mailing list -- users@ovirt.org
 To unsubscribe send an email to users-le...@ovirt.org
 Privacy Statement: https://www.ovirt.org/site/privacy-policy/
 oVirt Code of Conduct:
 https://www.ovirt.org/community/about/community-guidelines/
 List Archives:
 https://lists.ovirt.org/archives/list/users@ovirt.org/message/OBAHFFFTDOI7LHAH5AVI5OPUQUQTABWM/

>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to 

[ovirt-users] Re: oVirt 4.3.5 WARN no gluster network found in cluster

2019-09-11 Thread Sahina Bose
On Mon, Aug 26, 2019 at 10:57 PM  wrote:

> Hi,
> Yes, I have glusternet; it shows the roles of "migration" and "gluster".
> Hosts show 1 network connected to management and the other to the logical
> network "glusternet".
>

What does "ping host1.example.com" return? Does it return the IP address of
the network associated with glusternet?
The hostname in the brick - in your case "host1.example.com" from
host1.example.com:/gluster_bricks/engine/engine - should be resolvable to a
valid address from the engine host.
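
For example, from the engine host (hostnames here are the illustrative ones
from your brick definitions):

ping -c1 host1.example.com        # should answer from the glusternet subnet
getent hosts host1.example.com    # shows what the engine resolves it to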


> Just not sure if I am interpreting right?
> thanks,
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/MFQE6XRVHA76PI47O253VHV4EK3M5QDQ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6RJ6FGOPP4LIABP5NINSDHISLTVHXJVU/


[ovirt-users] Re: oVirt 4.3.5 glusterfs 6.3 performance tunning

2019-09-11 Thread Sahina Bose
+Krutika Dhananjay  +Sachidananda URS 


Adrian, can you provide more details on the performance issue you're seeing?
We do have some scripts to collect data to analyse. Perhaps you can run
them and provide us the details in a bug. The ansible scripts to do this are
still under review -
https://github.com/gluster/gluster-ansible-maintenance/pull/4
But it would be great if you can provide us some feedback on this as well.
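
As an interim step, you could try applying gluster's virt group profile by
hand, which is essentially what "optimize for ovirt store" does (a hedged
sketch; the group file normally ships as /var/lib/glusterd/groups/virt):

gluster volume set VOLNAME group virt
gluster volume get VOLNAME all | grep -E 'shard|eager-lock|strict-o-direct'   # spot-check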

On Thu, Aug 22, 2019 at 7:50 AM  wrote:

> Hello,
> I have a hyperconverged setup using ovirt 4.3.5 and the "optimize for
> ovirt store" seems to fail on gluster volumes.
> I am seeing poor performance and am trying to work out how I should tune
> gluster to give better performance.
> Can you provide any suggestions on the following volume
> settings (parameters)?
>
> option                                       Value
> -------------------------------------------  ---------------------
> cluster.lookup-unhashed                      on
> cluster.lookup-optimize                      on
> cluster.min-free-disk                        10%
> cluster.min-free-inodes                      5%
> cluster.rebalance-stats                      off
> cluster.subvols-per-directory                (null)
> cluster.readdir-optimize                     off
> cluster.rsync-hash-regex                     (null)
> cluster.extra-hash-regex                     (null)
> cluster.dht-xattr-name                       trusted.glusterfs.dht
> cluster.randomize-hash-range-by-gfid         off
> cluster.rebal-throttle                       normal
> cluster.lock-migration                       off
> cluster.force-migration                      off
> cluster.local-volume-name                    (null)
> cluster.weighted-rebalance                   on
> cluster.switch-pattern                       (null)
> cluster.entry-change-log                     on
> cluster.read-subvolume                       (null)
> cluster.read-subvolume-index                 -1
> cluster.read-hash-mode                       1
> cluster.background-self-heal-count           8
> cluster.metadata-self-heal                   off
> cluster.data-self-heal                       off
> cluster.entry-self-heal                      off
> cluster.self-heal-daemon                     on
> cluster.heal-timeout                         600
> cluster.self-heal-window-size                1
> cluster.data-change-log                      on
> cluster.metadata-change-log                  on
> cluster.data-self-heal-algorithm             full
> cluster.eager-lock                           enable
> disperse.eager-lock                          on
> disperse.other-eager-lock                    on
> disperse.eager-lock-timeout                  1
> disperse.other-eager-lock-timeout            1
> cluster.quorum-type                          auto
> cluster.quorum-count                         (null)
> cluster.choose-local                         off
> cluster.self-heal-readdir-size               1KB
> cluster.post-op-delay-secs                   1
> cluster.ensure-durability                    on
> cluster.consistent-metadata                  no
> cluster.heal-wait-queue-length               128
> cluster.favorite-child-policy                none
> cluster.full-lock                            yes
> diagnostics.latency-measurement              off
> diagnostics.dump-fd-stats                    off
> diagnostics.count-fop-hits                   off
> diagnostics.brick-log-level                  INFO
> diagnostics.client-log-level                 INFO
> diagnostics.brick-sys-log-level              CRITICAL
> diagnostics.client-sys-log-level             CRITICAL
> diagnostics.brick-logger                     (null)
> diagnostics.client-logger                    (null)
> diagnostics.brick-log-format                 (null)
> diagnostics.client-log-format                (null)
> diagnostics.brick-log-buf-size               5
> diagnostics.client-log-buf-size              5
> diagnostics.brick-log-flush-timeout          120
> diagnostics.client-log-flush-timeout         120
> diagnostics.stats-dump-interval              0
> diagnostics.fop-sample-interval              0
> diagnostics.stats-dump-format                json
> diagnostics.fop-sample-buf-size              65535
> diagnostics.stats-dnscache-ttl-sec           86400
> performance.cache-max-file-size              0
> performance.cache-min-file-size              0
> performance.cache-refresh-timeout            1
> performance.cache-priority
> performance.cache-size                       32MB
> performance.io-thread-count                  16
> performance.high-prio-threads                16
> performance.normal-prio-threads              16
> performance.low-prio-threads                 32
> performance.least-prio-threads               1
> performance.enable-least-priority            on
> performance.iot-watchdog-secs                (null)
> performance.iot-cleanup-disconnected-reqs    off
> performance.iot-pass-through                 false
> performance.io-cache-pass-through            false
> performance.cache-size                       128MB
> performance.qr-cache-timeout                 1
> performance.cache-invalidation               false
> performance.ctime-invalidation               false
> performance.flush-behind                     on
> performance.nfs.flush-behind                 on
> performance.write-behind-window-size         1MB
> performance.resync-failed-syncs-after-fsync  off
> performance.nfs.write-behind-window-size     1MB
> performance.strict-o-direct                  on
> performance.nfs.strict-o-direct              off
> performance.strict-write-ordering            off
> performance.nfs.strict-write-ordering        off
> performance.write-behind-trickling-writes    on
> performance.aggregate-size                   128KB
> performance.nfs.write-behind-trickling-writes  on
> performance.lazy-open                        yes
> performance.read-after-open                  yes
> performance.open-behind-pass-through         false
> performance.read-ahead-page-count            4
> performance.read-ahead-pass-through          false
> performance.readdir-ahead-pass-through       false
>