[ovirt-users] Re: ovirt 4.2 failed deploy

2018-05-15 Thread Phillip Bailey
Alex,

I haven't run into any issues with ovirt-ha-agent. I'm adding Simone who
may have a better idea of what could be causing the problem. Could you
provide any logs you have available from that deployment? Also, could you
please run "journalctl -u ovirt-ha-agent" on that host and provide the
output?
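Even something like this, to grab the recent history in one go, would help:

    journalctl -u ovirt-ha-agent --no-pager -n 500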

Thanks!

-Phillip Bailey

On Tue, May 15, 2018 at 9:22 AM, Alex K  wrote:

> Hi Phillip,
>
> I finally was not able to complete it.
> The ovirt-ha-agent service on the host was not starting for some reason.
> It could be because I ran a hosted-engine-cleanup earlier.
> So I need to repeat from scratch to be able to reproduce/verify.
>
> Alex
>
>
>
> On Tue, May 15, 2018 at 2:48 PM, Phillip Bailey 
> wrote:
>
>> Alex,
>>
>> I'm glad to hear you were able to get everything running! Please let us
>> know if you have any issues going forward.
>>
>> Best regards,
>>
>> -Phillip Bailey
>>
>> On Tue, May 15, 2018 at 4:59 AM, Alex K  wrote:
>>
>>> I overcame this with:
>>>
>>> run at host:
>>>
>>> /usr/sbin/ovirt-hosted-engine-cleanup
>>>
>>> Redeployed then engine
>>> engine-setup
>>>
>>> This time was ok.
>>>
>>> Thanx,
>>> Alex
>>>
>>> On Tue, May 15, 2018 at 10:51 AM, Alex K 
>>> wrote:
>>>
 Hi,

 Thanx for the feedback.

 *getent ahostsv4 v0.mydomain*

 gives:

 172.16.30.10    STREAM v0
 172.16.30.10    DGRAM
 172.16.30.10    RAW

 which means that

 *getent ahostsv4 v0.mydomain | grep v0.mydomain*

 gives null

 I overcame this by using the flag *--noansible* to proceed with the
 python way and it did succeed.

 Now I am stuck at engine-setup create CA step. It never finishes and I
 see several errors at setup log (grep -iE 'error|fail' ):

 2018-05-15 03:40:03,749-0400 DEBUG otopi.context
 context.dumpEnvironment:869 ENV BASE/error=bool:'False'
 2018-05-15 03:40:03,751-0400 DEBUG otopi.context
 context.dumpEnvironment:869 ENV CORE/failOnPrioOverride=bool:'True'
 2018-05-15 03:40:04,338-0400 DEBUG otopi.context
 context.dumpEnvironment:869 ENV BASE/error=bool:'False'
 2018-05-15 03:40:04,339-0400 DEBUG otopi.context
 context.dumpEnvironment:869 ENV CORE/failOnPrioOverride=bool:'True'
 2018-05-15 03:40:04,532-0400 DEBUG otopi.context
 context.dumpEnvironment:869 ENV
 OVESETUP_CORE/failOnDulicatedConstant=bool:'False'
 2018-05-15 03:40:04,809-0400 DEBUG otopi.context
 context.dumpEnvironment:869 ENV
 OVESETUP_PROVISIONING/postgresExtraConfigItems=tuple:'({'ok':
 <function <lambda> at 0x7ff1630b9578>, 'check_on_use': True,
 'needed_on_create': True, 'key': 'autovacuum_vacuum_scale_factor',
 'expected': 0.01, 'error_msg': '{key} required to be at most {expected}'},
 {'ok': <function <lambda> at 0x7ff1630b9a28>, 'check_on_use': True,
 'needed_on_create': True, 'key': 'autovacuum_analyze_scale_factor',
 'expected': 0.075, 'error_msg': '{key} required to be at most {expected}'},
 {'ok': <function <lambda> at 0x7ff163099410>, 'check_on_use': True,
 'needed_on_create': True, 'key': 'autovacuum_max_workers', 'expected': 6,
 'error_msg': '{key} required to be at least {expected}'}, {'ok':
 <function <lambda> at 0x7ff163099488>, 'check_on_use': True, 'neede
 OperationalError: FATAL:  *password authentication failed for user "engine"*
 FATAL:  password authentication failed for user "engine"
 2018-05-15 03:40:11,408-0400 DEBUG otopi.context
 context.dumpEnvironment:869 ENV BASE/error=bool:'False'
 2018-05-15 03:40:11,417-0400 DEBUG otopi.context
 context.dumpEnvironment:869 ENV CORE/failOnPrioOverride=bool:'True'
 2018-05-15 03:40:11,441-0400 DEBUG otopi.context
 context.dumpEnvironment:869 ENV
 OVESETUP_CORE/failOnDulicatedConstant=bool:'False'
 2018-05-15 03:40:11,457-0400 DEBUG otopi.context
 context.dumpEnvironment:869 ENV
 OVESETUP_PROVISIONING/postgresExtraConfigItems=tuple:'({'ok':
 <function <lambda> at 0x7ff1630b9578>, 'check_on_use': True,
 'needed_on_create': True, 'key': 'autovacuum_vacuum_scale_factor',
 'expected': 0.01, 'error_msg': '{key} required to be at most {expected}'},
 {'ok': <function <lambda> at 0x7ff1630b9a28>, 'check_on_use': True,
 'needed_on_create': True, 'key': 'autovacuum_analyze_scale_factor',
 'expected': 0.075, 'error_msg': '{key} required to be at most {expected}'},
 {'ok': <function <lambda> at 0x7ff163099410>, 'check_on_use': True,
 'needed_on_create': True, 'key': 'autovacuum_max_workers', 'expected': 6,
 'error_msg': '{key} required to be at least {expected}'}, {'ok':
 <function <lambda> at 0x7ff163099488>, 'check_on_use': True,
 'needed_on_create': True, 'key': 'maintenance_work_mem', 'expected': 65536,
 'error_msg': '{key} required to be at least {expected}',
 'useQueryForValue': True}, {'ok': <function <lambda> at 0x7ff163099500>,
 'check_on_use': True, 'needed_on_create': True, 'key': 'work_mem',
 'expected': 8192, 'error_msg': '{key} required to be at least 

[ovirt-users] Re: vGPU setup guide

2018-05-15 Thread Don Dupuis
Nvidia released GRID version 6.1 this evening which supports RHEV and OVIRT
with vGPU

On Tue, May 15, 2018 at 12:12 PM, Callum Smith  wrote:

> OK I guess it was literally just a breath away:
> https://blogs.nvidia.com/blog/2018/05/15/red-hat-virtualization-vgpu-support/
>
> So based on it now being actually supported, is this guide still relevant?
>
> Regards,
> Callum
>
> --
>
> Callum Smith
> Research Computing Core
> Wellcome Trust Centre for Human Genetics
> University of Oxford
> e. cal...@well.ox.ac.uk
>
> On 14 May 2018, at 21:48, Don Dupuis  wrote:
>
> No, if you look at the support matrix, there is no RHEV/oVirt. RHEL KVM only
> supports passthrough, not vGPU!! That driver only supports 1-to-1
> passthrough, no vGPU profiles. I hope it gets released soon, around when
> RHEV 4.2 gets released.
>
> Don
>
> On Mon, May 14, 2018 at 3:33 PM, Callum Smith 
> wrote:
>
>> That should be fine then, because they have done, right?
>>
>> https://docs.nvidia.com/grid/6.0/product-support-matrix/
>>
>> And inside my product manager for NVIDIA I can download "NVIDIA vGPU for
>> RHEL KVM", which comes with the hypervisor driver.
>>
>> Regards,
>> Callum
>>
>> --
>>
>> Callum Smith
>> Research Computing Core
>> Wellcome Trust Centre for Human Genetics
>> University of Oxford
>> e. cal...@well.ox.ac.uk
>>
>> > On 14 May 2018, at 21:19, Don Dupuis  wrote:
>> >
>> > Nvidia vGPU support won't work until Nvidia releases hypervisor drivers
>> for RHEV/oVirt.
>> >
>> > Don
>> >
>> > On Mon, May 14, 2018 at 3:08 PM, Callum Smith 
>> wrote:
>> > Dear All,
>> >
>> > Is this the most current and useful example of implementing vGPUs in
>> oVirt? I had understood that 4.2 had NVIDIA GRID support as a flagship
>> feature, but this appears to be 4.1.4? It seems a very reasonable and
>> decent guide, I just don't want to go down this route if there are
>> alternatives now available in 4.2.x.
>> >
>> > https://mpolednik.github.io/2017/09/13/vgpu-in-ovirt/
>> >
>> > Regards,
>> > Callum
>> >
>> > --
>> >
>> > Callum Smith
>> > Research Computing Core
>> > Wellcome Trust Centre for Human Genetics
>> > University of Oxford
>> > e. cal...@well.ox.ac.uk
>> >
>> >
>> > ___
>> > Users mailing list -- users@ovirt.org
>> > To unsubscribe send an email to users-le...@ovirt.org
>> >
>> >
>>
>>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Re: Upgrade from 4.1 to 4.2.3 failed

2018-05-15 Thread John Florian
On 05/15/2018 01:26 AM, Sahina Bose wrote:
>
>
> On Tue, May 15, 2018 at 5:02 AM, John Florian wrote:
>
> When I run engine-setup, I get this error:
>
> [ ERROR ] Yum [u'Errors were encountered while downloading
> packages.', u'gdeploy-2.0.6-1.el7.noarch: failure:
> gdeploy-2.0.6-1.el7.noarch.rpm from ovirt-4.1-centos-gluster38:
> [Errno 256] No more mirrors to try.\n
> http://mirror.centos.org/centos/7/storage/x86_64/gluster-3.8/gdeploy-2.0.6-1.el7.noarch.rpm:
> [Errno 14] HTTP Error 404 - Not Found']
> [ ERROR ] Failed to execute stage 'Package installation':
> [u'Errors were encountered while downloading packages.',
> u'gdeploy-2.0.6-1.el7.noarch: failure:
> gdeploy-2.0.6-1.el7.noarch.rpm from ovirt-4.1-centos-gluster38:
> [Errno 256] No more mirrors to try.\n
> http://mirror.centos.org/centos/7/storage/x86_64/gluster-3.8/gdeploy-2.0.6-1.el7.noarch.rpm:
> [Errno 14] HTTP Error 404 - Not Found']
>
> If I go exploring with my browser, it doesn't appear that
> http://mirror.centos.org/centos/7/storage/x86_64/gluster-3.8
> exists any more.  The oldest there is 3.10.  I didn't see any
> mention of needing to revise this repo config, but I obviously
> must have missed something.
>
>
> This is discussed in thread
> https://www.mail-archive.com/users@ovirt.org/msg48116.html
>
> You can edit the repo file to add the latest 3.12 gluster repo

Seeing that the 3.12 repo was enabled as part of
ovirt-release42-4.2.3-1.el7.centos.noarch I just disabled the
unreachable repo for 3.8.  That got my engine upgraded to 4.2.3.
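For reference, what I ran was roughly this (repo id taken from the error
above; double-check yours with "yum repolist all"):

    yum-config-manager --disable ovirt-4.1-centos-gluster38
    engine-setup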

Now I'm trying to tackle my two hosts.  I put the first into maintenance
and then tried the upgrade through the web UI but this failed.  I tried
running a yum update from a shell on that host and found a similar
problem, now with ovirt-4.1 repo.  So I disabled that, retried the yum
update and got/applied lots of updates to bring the host up to CentOS
7.5.  I then retried the upgrade through the web UI and this is still
failing and I'm having trouble figuring out why.  What logs do I need to
be looking at for host upgrades?

-- 
John Florian

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Unable to connect to vm's via Console

2018-05-15 Thread Bryan Sockel
Not entirely sure what happened, but I am no longer able to connect to the 
console on any of my VMs.  I receive the error “No CA Found!”.  
 
 ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Re: Preventing users to see other VMs

2018-05-15 Thread Peter Hudec

Hi,

I'm facing the same problem.

The steps are:
- create user /tester/ using the ovirt-aaa-jdbc-tool
- login as admin into the admin portal
- add the tester user in Administration -> Users
- choose one VM and add the UserRole role

- login as tester into the User Portal
- the user can see all VMs.

The problem could be that the user is part of the group Everyone, and
this group can be found in Administration -> Configure -> System
Permissions. When you check the group permission, it seems to be
automatically populated by the engine.

In my case I'm using the default DC, the default cluster and the
'internal' profile.

It seems that all engine objects are included in the Everyone group.

regards
Peter

On 15/05/2018 22:03, Roy Golan wrote:
> 
> 
> On Tue, 15 May 2018 at 21:47 Aziz wrote:
> 
> Hi Roy,
> 
> Thanks for your feedback, I'm unable to remove the user from the 
> cluster, I used the command "ovirt-aaa-jdbc-tool user add" to
> add the new user, and it seems that by default it took all
> permissions over the cluster. Is there any document describing this
> feature in details ?
> 
> 
> 
> In the webadmin go to Administration -> Configure -> System
> Permissions. If the user is there, remove him. Then search for the
> VM and add permissions to the user on the VM. Check your end result
> in the 'permissions' section of the VM to see who has permissions on
> it.
> 
> This should be helpful, quite long though
> https://www.ovirt.org/documentation/admin-guide/chap-Users_and_Roles/
>
> This is for the tool itself
> https://www.ovirt.org/develop/release-management/features/infra/aaa-jdbc/
>
> 
> 
> 
> Thanks
> 
> On Tue, May 15, 2018 at 6:31 PM, Roy Golan wrote:
> 
> 1. Make sure your users use the VM portal. 2. Assign permission on a
> VM to a certain user to make sure it appears in the portal. The Role
> should be VmOperator afaik.
> 
> Permissions set on objects higher in the hierarchy cascade,
> i.e. a user with permission on a cluster would have the permission
> on all the VMs in the cluster.
> 
> 
> On Tue, 15 May 2018 at 20:59 Aziz wrote:
> 
> Hi list,
> 
> I'm trying to remove the default "everyone" user from Ovirt, so
> that each user can have access to its own interface to manage a
> unique VM. I wonder if this is possible, because so far I'm unable
> to remove everyone user.
> 
> Thank you
> 
> 
> ___ Users mailing list
> -- users@ovirt.org  To unsubscribe send an
> email to users-le...@ovirt.org 
> 
> 
> 
> 
> ___ Users mailing list
> -- users@ovirt.org To unsubscribe send an email to
> users-le...@ovirt.org
> 


-- 
*Peter Hudec*
Infraštruktúrny architekt
phu...@cnc.sk 

*CNC, a.s.*
Borská 6, 841 04 Bratislava
Recepcia: +421 2  35 000 100

Mobil:+421 905 997 203
*www.cnc.sk* 

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Re: Custom Intel AMT fencing question

2018-05-15 Thread Shawn Southern
Thanks for this!

I’ve got fence_amt_ws working fine, however the document you linked mentions 
creating a script, and I’m not sure how I’m to pass the various parameters 
(host to fence, etc.) to this script.  From the doc, I'm looking at:

engine-config -s CustomVdsFenceType="amt"
engine-config -s CustomVdsFenceOptionMapping="amt:port=ipport"
engine-config -s CustomFencePowerWaitParam="amt=power_wait"

Will this pass a parameter called ipport that has the IP address or hostname of 
the host to fence to my script (which in this case is /usr/sbin/fence_amt)?
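In case it helps frame the question, my working assumption is that the
engine invokes the custom script like any standard fence agent, i.e. the
options arrive as key=value lines on stdin. So the wrapper I have in mind
looks roughly like this (untested sketch; option names assumed from the
standard fence-agent API):

    #!/bin/sh
    # /usr/sbin/fence_amt (sketch) - read fence options from stdin
    action=reboot
    while IFS='=' read -r key val; do
        case "$key" in
            ip|ipaddr)     ip="$val" ;;    # AMT address from the host's PM config
            ipport)        port="$val" ;;  # mapped from "port" via CustomVdsFenceOptionMapping
            login)         user="$val" ;;
            passwd)        pass="$val" ;;
            action|option) action="$val" ;;
        esac
    done

    case "$action" in
        on)     xml=/fencing/poweron.xml ;;
        off)    xml=/fencing/poweroff.xml ;;
        reboot) xml=/fencing/powercycle.xml ;;
    esac

    wsman invoke -a RequestPowerStateChange \
      http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_PowerManagementService \
      -h "$ip" -P "${port:-16992}" -u "$user" -p "$pass" -J "$xml"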

- original message -
From: Martin Perina  
Sent: May 15, 2018 4:20 AM
To: Shawn Southern ; Eli Mesika 

Cc: users 
Subject: Re: [ovirt-users] Custom Intel AMT fencing question



On Mon, May 14, 2018 at 8:13 PM, Shawn Southern 
 wrote:
I'm now using Intel AMT and the wsmancli package to reboot/power off/power on 
my entry level systems... but now I want oVirt to use this for fencing.

I created 3 xml files: powercycle.xml (uses PowerState 10), poweron.xml (uses 
PowerState 2) and poweroff.xml (uses PowerState 8).  Here is the poweroff.xml 
file:
<p:RequestPowerStateChange_INPUT
    xmlns:p="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_PowerManagementService">
  <p:PowerState>8</p:PowerState>
  <p:ManagedElement xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing"
            xmlns:wsman="http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd">
    <wsa:Address>http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous</wsa:Address>
    <wsa:ReferenceParameters>
      <wsman:ResourceURI>http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ComputerSystem</wsman:ResourceURI>
      <wsman:SelectorSet>
        <wsman:Selector Name="CreationClassName">CIM_ComputerSystem</wsman:Selector>
        <wsman:Selector Name="Name">ManagedSystem</wsman:Selector>
      </wsman:SelectorSet>
    </wsa:ReferenceParameters>
  </p:ManagedElement>
</p:RequestPowerStateChange_INPUT>

I can then reboot or power on/off the server with:
wsman invoke -a RequestPowerStateChange 
http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_PowerManagementService -h 
[AMT IP] -P 16992 -u admin -p [amt password] -J /fencing/poweron.xml  (or 
poweroff.xml, etc).

My question is, how do I move from this to using this for fencing in oVirt?

​At the moment oVirt doesn't officially support AMT as fence agent. But I've 
just noticed that on CentOS 7 we already have a fence-agents-amt-ws package, so 
please install fence-agents-amt-ws and test whether it works for your server.
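A quick manual test could be something like (standard fence-agents CLI
options; adjust to your setup):

    fence_amt_ws --ip [AMT IP] --username admin --password [amt password] --action status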

If the above agent is working fine, then please take a look at the Custom 
Fencing oVirt feature [1], which should allow you to use the fence_amt_ws agent 
in oVirt. Am I right Eli?
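Untested, but based on [1] the wiring should then be along the lines of:

    engine-config -s CustomVdsFenceType="amt_ws"
    systemctl restart ovirt-engine

plus, if needed, the CustomVdsFenceOptionMapping / CustomFencePowerWaitParam
keys described in [1].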

Regards

Martin


[1] https://www.ovirt.org/develop/developer-guide/engine/custom-fencing/


Thanks!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org



-- 
Martin Perina
Associate Manager, Software Engineering
Red Hat Czech s.r.o.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Re: Preventing users to see other VMs

2018-05-15 Thread Roy Golan
On Tue, 15 May 2018 at 21:47 Aziz  wrote:

> Hi Roy,
>
> Thanks for your feedback, I'm unable to remove the user from the cluster,
> I used the command "ovirt-aaa-jdbc-tool user add" to add the new user,
> and it seems that by default it took all permissions over the cluster. Is
> there any document describing this feature in details ?
>
>

In the webadmin go to Administration -> Configure -> System Permissions. If
the user is there, remove him. Then search for the VM and add permissions
to the user on the VM.
Check your end result in the 'permissions' section of the VM to see who has
permissions on it.

This should be helpful, quite long though
https://www.ovirt.org/documentation/admin-guide/chap-Users_and_Roles/
This is for the tool itself
https://www.ovirt.org/develop/release-management/features/infra/aaa-jdbc/



> Thanks
>
> On Tue, May 15, 2018 at 6:31 PM, Roy Golan  wrote:
>
>> 1. Make sure your users use the VM portal
>> 2. Assign permission on a VM to a certain user to make sure it appears in
>> the portal. The Role should be VmOperator afaik.
>>
>> Permissions set on objects higher in the hierarchy cascade, i.e. a
>> user with permission on a cluster would have the permission on all the
>> VMs in the cluster.
>>
>>
>> On Tue, 15 May 2018 at 20:59 Aziz  wrote:
>>
>>> Hi list,
>>>
>>> I'm trying to remove the default "everyone" user from Ovirt, so that
>>> each user can have access to its own interface to manage a unique VM. I
>>> wonder if this is possible, because so far I'm unable to remove everyone
>>> user.
>>>
>>> Thank you
>>>
>>>
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>>
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Volume id incorrect

2018-05-15 Thread Marcelo Leandro
Description of problem:

I tried to do a live storage migration but it failed; after I retried,
I got this error message.

source storage (10.16.2.110)
destination storage (10.16.2.105)

HSMGetAllTasksStatusesVDS failed: Volume already exists:
('4072fd70-962c-40f7-898c-e7df5ad25725',)

This volume exists on both storage domains, but the database shows that
the VM runs on the first (source) storage.

find / -name 4072fd70-962c-40f7-898c-e7df5ad25725
/rhev/data-center/mnt/10.16.2.110:_home_nfs/49b89dd4-b718-4a9e-8024-8d656ac9b3bd/images/67f7d6aa-d983-48c4-852b-7258d09c3881/4072fd70-962c-40f7-898c-e7df5ad25725

/rhev/data-center/mnt/10.16.2.105:_home_nfs/3d65830f-afb3-4663-828c-42adff11de85/images/67f7d6aa-d983-48c4-852b-7258d09c3881/4072fd70-962c-40f7-898c-e7df5ad25725



But when I run this command on the host where the VM started:

ps aux | grep SEMARH*

it shows that it runs another volume id:

-drive 
file=/rhev/data-center/mnt/10.16.2.110:_home_nfs/49b89dd4-b718-4a9e-8024-8d656ac9b3bd/images/67f7d6aa-d983-48c4-852b-7258d09c3881/fe85c5da-10fe-4698-975d-22bfa6362762

and this volume does not exist:

qemu-img info 
/rhev/data-center/mnt/10.16.2.110:_home_nfs/49b89dd4-b718-4a9e-8024-8d656ac9b3bd/images/67f7d6aa-d983-48c4-852b-7258d09c3881/fe85c5da-10fe-4698-975d-22bfa6362762
qemu-img: Could not open
'/rhev/data-center/mnt/10.16.2.110:_home_nfs/49b89dd4-b718-4a9e-8024-8d656ac9b3bd/images/67f7d6aa-d983-48c4-852b-7258d09c3881/fe85c5da-10fe-4698-975d-22bfa6362762':
Could not open 
'/rhev/data-center/mnt/10.16.2.110:_home_nfs/49b89dd4-b718-4a9e-8024-8d656ac9b3bd/images/67f7d6aa-d983-48c4-852b-7258d09c3881/fe85c5da-10fe-4698-975d-22bfa6362762':
No such file or directory


qemu-img info 
/rhev/data-center/mnt/10.16.2.110:_home_nfs/49b89dd4-b718-4a9e-8024-8d656ac9b3bd/images/67f7d6aa-d983-48c4-852b-7258d09c3881/fe85c5da-10fe-4698-975d-22bfa6362762
qemu-img: Could not open
'/rhev/data-center/mnt/10.16.2.110:_home_nfs/49b89dd4-b718-4a9e-8024-8d656ac9b3bd/images/67f7d6aa-d983-48c4-852b-7258d09c3881/fe85c5da-10fe-4698-975d-22bfa6362762':
Could not open 
'/rhev/data-center/mnt/10.16.2.110:_home_nfs/49b89dd4-b718-4a9e-8024-8d656ac9b3bd/images/67f7d6aa-d983-48c4-852b-7258d09c3881/fe85c5da-10fe-4698-975d-22bfa6362762':
No such file or directory

If I run md5sum, the hash does not change:

md5sum 
/rhev/data-center/mnt/10.16.2.110\:_home_nfs/49b89dd4-b718-4a9e-8024-8d656ac9b3bd/images/67f7d6aa-d983-48c4-852b-7258d09c3881/4072fd70-962c-40f7-898c-e7df5ad25725
38021c8f670332f2f6ade6486b7a06eb
/rhev/data-center/mnt/10.16.2.110:_home_nfs/49b89dd4-b718-4a9e-8024-8d656ac9b3bd/images/67f7d6aa-d983-48c4-852b-7258d09c3881/4072fd70-962c-40f7-898c-e7df5ad25725

after 10 min:

md5sum 
/rhev/data-center/mnt/10.16.2.110\:_home_nfs/49b89dd4-b718-4a9e-8024-8d656ac9b3bd/images/67f7d6aa-d983-48c4-852b-7258d09c3881/4072fd70-962c-40f7-898c-e7df5ad25725
38021c8f670332f2f6ade6486b7a06eb
/rhev/data-center/mnt/10.16.2.110:_home_nfs/49b89dd4-b718-4a9e-8024-8d656ac9b3bd/images/67f7d6aa-d983-48c4-852b-7258d09c3881/4072fd70-962c-40f7-898c-e7df5ad25725


on the other storage:

24eeb2845cbfda238b78fa165c21607d
/rhev/data-center/mnt/10.16.2.105:_home_nfs/3d65830f-afb3-4663-828c-42adff11de85/images/67f7d6aa-d983-48c4-852b-7258d09c3881/4072fd70-962c-40f7-898c-e7df5ad25725

after 10 min:

24eeb2845cbfda238b78fa165c21607d
/rhev/data-center/mnt/10.16.2.105:_home_nfs/3d65830f-afb3-4663-828c-42adff11de85/images/67f7d6aa-d983-48c4-852b-7258d09c3881/4072fd70-962c-40f7-898c-e7df5ad25725



Version-Release number of selected component (if applicable):

oVirt engine - 4.1.9
Vdsm - 4.20.23


The VM runs normally and I have already restarted it.

Thank you.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Re: Preventing users to see other VMs

2018-05-15 Thread Aziz
Hi Roy,

Thanks for your feedback, I'm unable to remove the user from the cluster, I
used the command "ovirt-aaa-jdbc-tool user add" to add the new user, and it
seems that by default it took all permissions over the cluster. Is there
any document describing this feature in detail?


Thanks

On Tue, May 15, 2018 at 6:31 PM, Roy Golan  wrote:

> 1. Make sure your users use the VM portal
> 2. Assign permission on a VM to a certain user to make sure it appears in the
> portal. The Role should be VmOperator afaik.
>
> Permissions set on objects higher in the hierarchy cascade, i.e. a
> user with permission on a cluster would have the permission on all the
> VMs in the cluster.
>
>
> On Tue, 15 May 2018 at 20:59 Aziz  wrote:
>
>> Hi list,
>>
>> I'm trying to remove the default "everyone" user from Ovirt, so that each
>> user can have access to its own interface to manage a unique VM. I wonder
>> if this is possible, because so far I'm unable to remove everyone user.
>>
>> Thank you
>>
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Re: Preventing users to see other VMs

2018-05-15 Thread Roy Golan
1. Make sure your users use the VM portal
2. Assign permission on a VM to a certain user to make sure it appears in the
portal. The Role should be VmOperator afaik.

Permissions set on objects higher in the hierarchy cascade, i.e. a user
with permission on a cluster would have the permission on all the VMs in
the cluster.
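If you want to script it instead of clicking through the webadmin, roughly
the same can be done via the REST API (untested sketch; the host name and
UUIDs are placeholders, and substitute the role you settled on):

    curl -k -u admin@internal:PASSWORD -X POST \
        -H 'Content-Type: application/xml' \
        -d '<permission><role><name>UserRole</name></role><user id="USER_UUID"/></permission>' \
        https://engine.example.com/ovirt-engine/api/vms/VM_UUID/permissions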


On Tue, 15 May 2018 at 20:59 Aziz  wrote:

> Hi list,
>
> I'm trying to remove the default "everyone" user from Ovirt, so that each
> user can have access to its own interface to manage a unique VM. I wonder
> if this is possible, because so far I'm unable to remove everyone user.
>
> Thank you
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Preventing users to see other VMs

2018-05-15 Thread Aziz
Hi list,

I'm trying to remove the default "everyone" user from oVirt, so that each
user can have access to their own interface to manage a unique VM. I wonder
if this is possible, because so far I'm unable to remove the everyone user.

Thank you
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Re: ovirt 4.2 failed deploy

2018-05-15 Thread Justin Zygmont
I wonder why this option isn’t just added to the hosted-engine command instead?


From: Alex K [mailto:rightkickt...@gmail.com]
Sent: Tuesday, May 15, 2018 2:00 AM
To: Phillip Bailey 
Cc: users 
Subject: [ovirt-users] Re: ovirt 4.2 failed deploy

I overcame this with:
run at host:

/usr/sbin/ovirt-hosted-engine-cleanup
Redeployed then engine
engine-setup
This time was ok.

Thanx,
Alex

On Tue, May 15, 2018 at 10:51 AM, Alex K 
> wrote:
Hi,
Thanx for the feedback.

getent ahostsv4 v0.mydomain

gives:

172.16.30.10    STREAM v0
172.16.30.10    DGRAM
172.16.30.10    RAW
which means that

getent ahostsv4 v0.mydomain | grep v0.mydomain
gives null
I overcame this by using the flag --noansible to proceed with the python way 
and it did succeed.
Now I am stuck at engine-setup create CA step. It never finishes and I see 
several errors at setup log (grep -iE 'error|fail' ):

2018-05-15 03:40:03,749-0400 DEBUG otopi.context context.dumpEnvironment:869 
ENV BASE/error=bool:'False'
2018-05-15 03:40:03,751-0400 DEBUG otopi.context context.dumpEnvironment:869 
ENV CORE/failOnPrioOverride=bool:'True'
2018-05-15 03:40:04,338-0400 DEBUG otopi.context context.dumpEnvironment:869 
ENV BASE/error=bool:'False'
2018-05-15 03:40:04,339-0400 DEBUG otopi.context context.dumpEnvironment:869 
ENV CORE/failOnPrioOverride=bool:'True'
2018-05-15 03:40:04,532-0400 DEBUG otopi.context context.dumpEnvironment:869 
ENV OVESETUP_CORE/failOnDulicatedConstant=bool:'False'
2018-05-15 03:40:04,809-0400 DEBUG otopi.context context.dumpEnvironment:869 
ENV OVESETUP_PROVISIONING/postgresExtraConfigItems=tuple:'({'ok': 
<function <lambda> at 0x7ff1630b9578>, 'check_on_use': True, 
'needed_on_create': True, 'key': 'autovacuum_vacuum_scale_factor', 
'expected': 0.01, 'error_msg': '{key} required to be at most {expected}'}, 
{'ok': <function <lambda> at 0x7ff1630b9a28>, 'check_on_use': True, 
'needed_on_create': True, 'key': 'autovacuum_analyze_scale_factor', 
'expected': 0.075, 'error_msg': '{key} required to be at most {expected}'}, 
{'ok': <function <lambda> at 0x7ff163099410>, 'check_on_use': True, 
'needed_on_create': True, 'key': 'autovacuum_max_workers', 'expected': 6, 
'error_msg': '{key} required to be at least {expected}'}, {'ok': 
<function <lambda> at 0x7ff163099488>, 'check_on_use': True, 'neede
OperationalError: FATAL:  password authentication failed for user "engine"
FATAL:  password authentication failed for user "engine"
2018-05-15 03:40:11,408-0400 DEBUG otopi.context context.dumpEnvironment:869 
ENV BASE/error=bool:'False'
2018-05-15 03:40:11,417-0400 DEBUG otopi.context context.dumpEnvironment:869 
ENV CORE/failOnPrioOverride=bool:'True'
2018-05-15 03:40:11,441-0400 DEBUG otopi.context context.dumpEnvironment:869 
ENV OVESETUP_CORE/failOnDulicatedConstant=bool:'False'
2018-05-15 03:40:11,457-0400 DEBUG otopi.context context.dumpEnvironment:869 
ENV OVESETUP_PROVISIONING/postgresExtraConfigItems=tuple:'({'ok': 
<function <lambda> at 0x7ff1630b9578>, 'check_on_use': True, 
'needed_on_create': True, 'key': 'autovacuum_vacuum_scale_factor', 
'expected': 0.01, 'error_msg': '{key} required to be at most {expected}'}, 
{'ok': <function <lambda> at 0x7ff1630b9a28>, 'check_on_use': True, 
'needed_on_create': True, 'key': 'autovacuum_analyze_scale_factor', 
'expected': 0.075, 'error_msg': '{key} required to be at most {expected}'}, 
{'ok': <function <lambda> at 0x7ff163099410>, 'check_on_use': True, 
'needed_on_create': True, 'key': 'autovacuum_max_workers', 'expected': 6, 
'error_msg': '{key} required to be at least {expected}'}, {'ok': 
<function <lambda> at 0x7ff163099488>, 'check_on_use': True, 
'needed_on_create': True, 'key': 'maintenance_work_mem', 'expected': 65536, 
'error_msg': '{key} required to be at least {expected}', 
'useQueryForValue': True}, {'ok': <function <lambda> at 0x7ff163099500>, 
'check_on_use': True, 'needed_on_create': True, 'key': 'work_mem', 
'expected': 8192, 'error_msg': '{key} required to be at least {expected}', 
'useQueryForValue': True})'
raise RuntimeError("SIG%s" % signum)
RuntimeError: SIG2
raise RuntimeError("SIG%s" % signum)
RuntimeError: SIG2
2018-05-15 03:41:19,888-0400 ERROR otopi.context context._executeMethod:152 
Failed to execute stage 'Misc configuration': SIG2
2018-05-15 03:41:19,993-0400 DEBUG otopi.context context.dumpEnvironment:869 
ENV BASE/error=bool:'True'
2018-05-15 03:41:19,993-0400 DEBUG otopi.context context.dumpEnvironment:869 
ENV BASE/exceptionInfo=list:'[(<type 'exceptions.RuntimeError'>, 
RuntimeError('SIG2',), <traceback object at 0x7ff161de9560>)]'
2018-05-15 03:41:20,033-0400 DEBUG otopi.context context.dumpEnvironment:869 
ENV BASE/error=bool:'True'
2018-05-15 03:41:20,033-0400 DEBUG otopi.context context.dumpEnvironment:869 
ENV BASE/exceptionInfo=list:'[(<type 'exceptions.RuntimeError'>, 
RuntimeError('SIG2',), <traceback object at 0x7ff161de9560>)]'
2018-05-15 03:41:20,038-0400 DEBUG otopi.context context.dumpEnvironment:869 
ENV CORE/failOnPrioOverride=bool:'True'
2018-05-15 03:41:20,056-0400 DEBUG otopi.context context.dumpEnvironment:869 
ENV OVESETUP_CORE/failOnDulicatedConstant=bool:'False'
2018-05-15 03:41:20,069-0400 DEBUG otopi.context 

[ovirt-users] Re: vGPU setup guide

2018-05-15 Thread Callum Smith
OK I guess it was literally just a breath away:
https://blogs.nvidia.com/blog/2018/05/15/red-hat-virtualization-vgpu-support/

So based on it now being actually supported, is this guide still relevant?

Regards,
Callum

--

Callum Smith
Research Computing Core
Wellcome Trust Centre for Human Genetics
University of Oxford
e. cal...@well.ox.ac.uk

On 14 May 2018, at 21:48, Don Dupuis 
> wrote:

No, if you look at the support matrix, there is no RHEV/oVirt. RHEL KVM only 
supports passthrough, not vGPU!! That driver only supports 1-to-1 passthrough, 
no vGPU profiles. I hope it gets released soon, around when RHEV 4.2 gets 
released.

Don

On Mon, May 14, 2018 at 3:33 PM, Callum Smith 
> wrote:
That should be fine then, because they have done, right?

https://docs.nvidia.com/grid/6.0/product-support-matrix/

And inside my product manager for NVIDIA I can download "NVIDIA vGPU for RHEL 
KVM", which comes with the hypervisor driver.

Regards,
Callum

--

Callum Smith
Research Computing Core
Wellcome Trust Centre for Human Genetics
University of Oxford
e. cal...@well.ox.ac.uk

> On 14 May 2018, at 21:19, Don Dupuis 
> > wrote:
>
> Nvidia vGPU support won't work until Nvidia releases hypervisor drivers for 
> RHEV/oVirt.
>
> Don
>
> On Mon, May 14, 2018 at 3:08 PM, Callum Smith 
> > wrote:
> Dear All,
>
> Is this the most current and useful example of implementing vGPUs in oVirt? I 
> had understood that 4.2 had NVIDIA GRID support as a flagship feature, but 
> this appears to be 4.1.4? It seems a very reasonable and decent guide, I just 
> don't want to go down this route if there are alternatives now available in 
> 4.2.x.
>
> https://mpolednik.github.io/2017/09/13/vgpu-in-ovirt/
>
> Regards,
> Callum
>
> --
>
> Callum Smith
> Research Computing Core
> Wellcome Trust Centre for Human Genetics
> University of Oxford
> e. cal...@well.ox.ac.uk
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to 
> users-le...@ovirt.org
>
>



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Hosted Engine Setup error (oVirt v4.2.3)

2018-05-15 Thread ovirt

Engine network config error

Following this blog post: 
https://www.ovirt.org/blog/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/


I get an error saying the hosted engine setup is "trying" to use virbr0 
(192.168.xxx.x) even though I have the bridge interface set to "eno1"


Regardless of whether the Edit Hosts File is checked or unchecked, it 
overwrites my engine IP entry from 10.50.235.x to 192.168.xxx.x


The same thing happens whether I set the engine IP to Static or DHCP (I 
don't have DNS, I'm using static entries in /etc/hosts).


Any ideas why it "insists" on using "virbr0" instead of "eno1"?

**also posted this on IRC
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] data flow for statistics for hosts and vms

2018-05-15 Thread Peter Hudec

Hi,

I'm trying to understand the flow of how the stats from the VMs and hosts
are imported into the engine.

On each host there is VDSM. Using vdsm-client I'm able to get HOST
and VM stats:

vdsm-client Host getAllVmStats
vdsm-client Host getStats
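and, if I read the client right, for a single VM something like:

vdsm-client VM getStats vmID=<vm-uuid>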

Are the hosts pushing the data to the engine, or vice versa? On the
engine side there is a Java-based vdsm-jsonrpc client, so I guess the
data is pulled from the hosts into the engine database.

And of course the question is which component on engine side is
getting/requesting the data.

I'm trying to get the stats in a less painful way than querying the API.
Maybe the engine/DWH database could help, but the schema could change
between releases, couldn't it? Or the engine DB.
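Right now the API route means hitting the statistics sub-collections, e.g.
(host name and UUIDs are placeholders):

    curl -k -u admin@internal:PASSWORD \
        https://engine.example.com/ovirt-engine/api/hosts/HOST_UUID/statistics
    curl -k -u admin@internal:PASSWORD \
        https://engine.example.com/ovirt-engine/api/vms/VM_UUID/statistics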

regards
Peter

-- 
*Peter Hudec*
Infraštruktúrny architekt
phu...@cnc.sk 

*CNC, a.s.*
Borská 6, 841 04 Bratislava
Recepcia: +421 2  35 000 100

Mobil:+421 905 997 203
*www.cnc.sk* 

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Re: ovirt 4.2 failed deploy

2018-05-15 Thread Alex K
Hi Phillip,

I finally was not able to complete it.
The ovirt-ha-agent service on the host was not starting for some reason.
It could be because I ran a hosted-engine-cleanup earlier.
So I need to repeat from scratch to be able to reproduce/verify.

Alex



On Tue, May 15, 2018 at 2:48 PM, Phillip Bailey  wrote:

> Alex,
>
> I'm glad to hear you were able to get everything running! Please let us
> know if you have any issues going forward.
>
> Best regards,
>
> -Phillip Bailey
>
> On Tue, May 15, 2018 at 4:59 AM, Alex K  wrote:
>
>> I overcame this with:
>>
>> run at host:
>>
>> /usr/sbin/ovirt-hosted-engine-cleanup
>>
>> Redeployed then engine
>> engine-setup
>>
>> This time was ok.
>>
>> Thanx,
>> Alex
>>
>> On Tue, May 15, 2018 at 10:51 AM, Alex K  wrote:
>>
>>> Hi,
>>>
>>> Thanx for the feedback.
>>>
>>> *getent ahostsv4 v0.mydomain*
>>>
>>> gives:
>>>
>>> 172.16.30.10    STREAM v0
>>> 172.16.30.10    DGRAM
>>> 172.16.30.10    RAW
>>>
>>> which means that
>>>
>>> *getent ahostsv4 v0.mydomain | grep v0.mydomain*
>>>
>>> gives null
>>>
>>> I overcame this by using the flag *--noansible* to proceed with the
>>> python way and it did succeed.
>>>
>>> Now I am stuck at engine-setup create CA step. It never finishes and I
>>> see several errors at setup log (grep -iE 'error|fail' ):
>>>
>>> 2018-05-15 03:40:03,749-0400 DEBUG otopi.context
>>> context.dumpEnvironment:869 ENV BASE/error=bool:'False'
>>> 2018-05-15 03:40:03,751-0400 DEBUG otopi.context
>>> context.dumpEnvironment:869 ENV CORE/failOnPrioOverride=bool:'True'
>>> 2018-05-15 03:40:04,338-0400 DEBUG otopi.context
>>> context.dumpEnvironment:869 ENV BASE/error=bool:'False'
>>> 2018-05-15 03:40:04,339-0400 DEBUG otopi.context
>>> context.dumpEnvironment:869 ENV CORE/failOnPrioOverride=bool:'True'
>>> 2018-05-15 03:40:04,532-0400 DEBUG otopi.context
>>> context.dumpEnvironment:869 ENV
>>> OVESETUP_CORE/failOnDulicatedConstant=bool:'False'
>>> 2018-05-15 03:40:04,809-0400 DEBUG otopi.context
>>> context.dumpEnvironment:869 ENV
>>> OVESETUP_PROVISIONING/postgresExtraConfigItems=tuple:'({'ok':
>>> <function <lambda> at 0x7ff1630b9578>, 'check_on_use': True,
>>> 'needed_on_create': True, 'key': 'autovacuum_vacuum_scale_factor',
>>> 'expected': 0.01, 'error_msg': '{key} required to be at most {expected}'},
>>> {'ok': <function <lambda> at 0x7ff1630b9a28>, 'check_on_use': True,
>>> 'needed_on_create': True, 'key': 'autovacuum_analyze_scale_factor',
>>> 'expected': 0.075, 'error_msg': '{key} required to be at most {expected}'},
>>> {'ok': <function <lambda> at 0x7ff163099410>, 'check_on_use': True,
>>> 'needed_on_create': True, 'key': 'autovacuum_max_workers', 'expected': 6,
>>> 'error_msg': '{key} required to be at least {expected}'}, {'ok':
>>> <function <lambda> at 0x7ff163099488>, 'check_on_use': True, 'neede
>>> OperationalError: FATAL:  *password authentication failed for user "engine"*
>>> FATAL:  password authentication failed for user "engine"
>>> 2018-05-15 03:40:11,408-0400 DEBUG otopi.context
>>> context.dumpEnvironment:869 ENV BASE/error=bool:'False'
>>> 2018-05-15 03:40:11,417-0400 DEBUG otopi.context
>>> context.dumpEnvironment:869 ENV CORE/failOnPrioOverride=bool:'True'
>>> 2018-05-15 03:40:11,441-0400 DEBUG otopi.context
>>> context.dumpEnvironment:869 ENV
>>> OVESETUP_CORE/failOnDulicatedConstant=bool:'False'
>>> 2018-05-15 03:40:11,457-0400 DEBUG otopi.context
>>> context.dumpEnvironment:869 ENV
>>> OVESETUP_PROVISIONING/postgresExtraConfigItems=tuple:'({'ok':
>>> <function <lambda> at 0x7ff1630b9578>, 'check_on_use': True,
>>> 'needed_on_create': True, 'key': 'autovacuum_vacuum_scale_factor',
>>> 'expected': 0.01, 'error_msg': '{key} required to be at most {expected}'},
>>> {'ok': <function <lambda> at 0x7ff1630b9a28>, 'check_on_use': True,
>>> 'needed_on_create': True, 'key': 'autovacuum_analyze_scale_factor',
>>> 'expected': 0.075, 'error_msg': '{key} required to be at most {expected}'},
>>> {'ok': <function <lambda> at 0x7ff163099410>, 'check_on_use': True,
>>> 'needed_on_create': True, 'key': 'autovacuum_max_workers', 'expected': 6,
>>> 'error_msg': '{key} required to be at least {expected}'}, {'ok':
>>> <function <lambda> at 0x7ff163099488>, 'check_on_use': True,
>>> 'needed_on_create': True, 'key': 'maintenance_work_mem', 'expected': 65536,
>>> 'error_msg': '{key} required to be at least {expected}',
>>> 'useQueryForValue': True}, {'ok': <function <lambda> at 0x7ff163099500>,
>>> 'check_on_use': True, 'needed_on_create': True, 'key': 'work_mem',
>>> 'expected': 8192, 'error_msg': '{key} required to be at least {expected}',
>>> 'useQueryForValue': True})'
>>> raise RuntimeError("SIG%s" % signum)
>>> RuntimeError: SIG2
>>> raise RuntimeError("SIG%s" % signum)
>>> RuntimeError: SIG2
>>> 2018-05-15 03:41:19,888-0400 ERROR otopi.context
>>> context._executeMethod:152 *Failed to execute stage 'Misc
>>> configuration': SIG2*
>>> 2018-05-15 03:41:19,993-0400 DEBUG otopi.context
>>> context.dumpEnvironment:869 ENV BASE/error=bool:'True'
>>> 2018-05-15 03:41:19,993-0400 DEBUG otopi.context
>>> context.dumpEnvironment:869 ENV 

[ovirt-users] Re: Private VLANs

2018-05-15 Thread Luca 'remix_tj' Lorenzetto
On Tue, May 15, 2018 at 2:45 AM, Colin Coe  wrote:
> Hi all
>
> We running RHEV 4.1.10 on HPE Blade servers using Virtual Connect which talk
> to Cisco switches.
>
> I want to implement private VLANs, does the combination of oVirt + Cisco
> switches + HPE Virtual Connect work with private VLANs?
>
> To be clear, I want to have a couple of logical networks (i.e. VLANs) where
> the nodes in that VLAN cannot talk directly but must go through the
> router/firewall.


Hello Colin,

do you mean hosts inside the same vlan cannot talk to each other
directly? Do you want to apply some security policies directly on
single nodes (microsegmentation)? Or do you want communication
between hosts placed in these two different vlans to go through the
firewall?

Luca

-- 
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)

"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Re: Ovirt host becomes non_operational

2018-05-15 Thread Simone Tiraboschi
On Tue, May 15, 2018 at 1:38 PM, <03ce...@gmail.com> wrote:

> I am setting up self-hosted-ovirt-engine (4.2) on centos7.4.
>
> While running hosted-engine --deploy script, it fails at "Check host
> status" with 'host has been set in non_operational status' error.
>
> logs on the engine VM at /var/log/ovirt-engine/host-deploy show the ansible
> task for "add host" ran successfully, but yet after that the host becomes
> non_operational!
>
> Where can i find more information on this error?
>

Hi,
I'd suggest checking /var/log/ovirt-engine/engine.log (on the engine VM)
for the HostSetupNetworksVDS stuff, and vdsm.log and supervdsm.log on the
host side.
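e.g. something like:

    grep -i HostSetupNetworks /var/log/ovirt-engine/engine.log | tail -n 50
    grep -iE 'error|fail' /var/log/vdsm/vdsm.log | tail -n 50

(default log paths assumed).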


>
> Thank you.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org


[ovirt-users] Re: ovirt 4.2 failed deploy

2018-05-15 Thread Phillip Bailey
Alex,

I'm glad to hear you were able to get everything running! Please let us
know if you have any issues going forward.

Best regards,

-Phillip Bailey

On Tue, May 15, 2018 at 4:59 AM, Alex K  wrote:

> I overcame this with:
>
> run at host:
>
> /usr/sbin/ovirt-hosted-engine-cleanup
>
> Redeployed then engine
> engine-setup
>
> This time was ok.
>
> Thanx,
> Alex
>
> On Tue, May 15, 2018 at 10:51 AM, Alex K  wrote:
>
>> Hi,
>>
>> Thanx for the feedback.
>>
>> *getent ahostsv4 v0.mydomain*
>>
>> gives:
>>
>> 172.16.30.10    STREAM v0
>> 172.16.30.10    DGRAM
>> 172.16.30.10    RAW
>>
>> which means that
>>
>> *getent ahostsv4 v0.mydomain | grep v0.mydomain*
>>
>> gives null
>>
>> I overcame this by using the flag *--noansible* to proceed with the
>> python way and it did succeed.
>>
>> Now I am stuck at engine-setup create CA step. It never finishes and I
>> see several errors at setup log (grep -iE 'error|fail' ):
>>
>> 2018-05-15 03:40:03,749-0400 DEBUG otopi.context
>> context.dumpEnvironment:869 ENV BASE/error=bool:'False'
>> 2018-05-15 03:40:03,751-0400 DEBUG otopi.context
>> context.dumpEnvironment:869 ENV CORE/failOnPrioOverride=bool:'True'
>> 2018-05-15 03:40:04,338-0400 DEBUG otopi.context
>> context.dumpEnvironment:869 ENV BASE/error=bool:'False'
>> 2018-05-15 03:40:04,339-0400 DEBUG otopi.context
>> context.dumpEnvironment:869 ENV CORE/failOnPrioOverride=bool:'True'
>> 2018-05-15 03:40:04,532-0400 DEBUG otopi.context
>> context.dumpEnvironment:869 ENV
>> OVESETUP_CORE/failOnDulicatedConstant=bool:'False'
>> 2018-05-15 03:40:04,809-0400 DEBUG otopi.context
>> context.dumpEnvironment:869 ENV
>> OVESETUP_PROVISIONING/postgresExtraConfigItems=tuple:'({'ok':
>> <function <lambda> at 0x7ff1630b9578>, 'check_on_use': True,
>> 'needed_on_create': True, 'key': 'autovacuum_vacuum_scale_factor',
>> 'expected': 0.01, 'error_msg': '{key} required to be at most {expected}'},
>> {'ok': <function <lambda> at 0x7ff1630b9a28>, 'check_on_use': True,
>> 'needed_on_create': True, 'key': 'autovacuum_analyze_scale_factor',
>> 'expected': 0.075, 'error_msg': '{key} required to be at most {expected}'},
>> {'ok': <function <lambda> at 0x7ff163099410>, 'check_on_use': True,
>> 'needed_on_create': True, 'key': 'autovacuum_max_workers', 'expected': 6,
>> 'error_msg': '{key} required to be at least {expected}'}, {'ok':
>> <function <lambda> at 0x7ff163099488>, 'check_on_use': True, 'neede
>> OperationalError: FATAL:  *password authentication failed for user "engine"*
>> FATAL:  password authentication failed for user "engine"
>> 2018-05-15 03:40:11,408-0400 DEBUG otopi.context
>> context.dumpEnvironment:869 ENV BASE/error=bool:'False'
>> 2018-05-15 03:40:11,417-0400 DEBUG otopi.context
>> context.dumpEnvironment:869 ENV CORE/failOnPrioOverride=bool:'True'
>> 2018-05-15 03:40:11,441-0400 DEBUG otopi.context
>> context.dumpEnvironment:869 ENV
>> OVESETUP_CORE/failOnDulicatedConstant=bool:'False'
>> 2018-05-15 03:40:11,457-0400 DEBUG otopi.context
>> context.dumpEnvironment:869 ENV
>> OVESETUP_PROVISIONING/postgresExtraConfigItems=tuple:'({'ok':
>> <function <lambda> at 0x7ff1630b9578>, 'check_on_use': True,
>> 'needed_on_create': True, 'key': 'autovacuum_vacuum_scale_factor',
>> 'expected': 0.01, 'error_msg': '{key} required to be at most {expected}'},
>> {'ok': <function <lambda> at 0x7ff1630b9a28>, 'check_on_use': True,
>> 'needed_on_create': True, 'key': 'autovacuum_analyze_scale_factor',
>> 'expected': 0.075, 'error_msg': '{key} required to be at most {expected}'},
>> {'ok': <function <lambda> at 0x7ff163099410>, 'check_on_use': True,
>> 'needed_on_create': True, 'key': 'autovacuum_max_workers', 'expected': 6,
>> 'error_msg': '{key} required to be at least {expected}'}, {'ok':
>> <function <lambda> at 0x7ff163099488>, 'check_on_use': True,
>> 'needed_on_create': True, 'key': 'maintenance_work_mem', 'expected': 65536,
>> 'error_msg': '{key} required to be at least {expected}',
>> 'useQueryForValue': True}, {'ok': <function <lambda> at 0x7ff163099500>,
>> 'check_on_use': True, 'needed_on_create': True, 'key': 'work_mem',
>> 'expected': 8192, 'error_msg': '{key} required to be at least {expected}',
>> 'useQueryForValue': True})'
>> raise RuntimeError("SIG%s" % signum)
>> RuntimeError: SIG2
>> raise RuntimeError("SIG%s" % signum)
>> RuntimeError: SIG2
>> 2018-05-15 03:41:19,888-0400 ERROR otopi.context
>> context._executeMethod:152 *Failed to execute stage 'Misc
>> configuration': SIG2*
>> 2018-05-15 03:41:19,993-0400 DEBUG otopi.context
>> context.dumpEnvironment:869 ENV BASE/error=bool:'True'
>> 2018-05-15 03:41:19,993-0400 DEBUG otopi.context
>> context.dumpEnvironment:869 ENV BASE/exceptionInfo=list:'[(<type
>> 'exceptions.RuntimeError'>, RuntimeError('SIG2',), <traceback object at
>> 0x7ff161de9560>)]'
>> 2018-05-15 03:41:20,033-0400 DEBUG otopi.context
>> context.dumpEnvironment:869 ENV BASE/error=bool:'True'
>> 2018-05-15 03:41:20,033-0400 DEBUG otopi.context
>> context.dumpEnvironment:869 ENV BASE/exceptionInfo=list:'[(<type
>> 'exceptions.RuntimeError'>, RuntimeError('SIG2',), <traceback object at
>> 0x7ff161de9560>)]'
>> 2018-05-15 03:41:20,038-0400 DEBUG otopi.context
>> 

[ovirt-users] Ovirt host becomes non_operational

2018-05-15 Thread 03ce007
I am setting up a self-hosted ovirt-engine (4.2) on CentOS 7.4.

While running the hosted-engine --deploy script, it fails at "Check host status" 
with a 'host has been set in non_operational status' error.

logs on the engine VM at /var/log/ovirt-engine/host-deploy show the ansible task 
for "add host" ran successfully, but yet after that the host becomes 
non_operational!

Where can i find more information on this error?

Thank you.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives:


[ovirt-users] Re: Private VLANs

2018-05-15 Thread Dominik Holler
On Tue, 15 May 2018 08:45:05 +0800
Colin Coe  wrote:

> Hi all
> 
> We running RHEV 4.1.10 on HPE Blade servers using Virtual Connect
> which talk to Cisco switches.
> 
> I want to implement private VLANs, does the combination of oVirt +
> Cisco switches + HPE Virtual Connect work with private VLANs?
> 
> To be clear, I want to have a couple of logical networks (i.e. VLANs)
> where the nodes in that VLAN cannot talk directly but must go through
> the router/firewall.
> 


What is a 'node' in your scenario?
Is this an oVirt host or a VM?
May I ask what you would like to achieve?
Does
https://bugzilla.redhat.com/show_bug.cgi?id=1009608
reflect what you want to achieve?

Unfortunately private VLANs are not directly supported by oVirt,
but there is the vdsm_hook isolatedprivatevlan in
https://github.com/oVirt/vdsm/tree/master/vdsm_hooks/isolatedprivatevlan
which might solve your issue.
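If I remember correctly (please check the README in that tree), the hook is
driven by a VM custom property, so you would first have to whitelist the
property on the engine with something like:

    engine-config -s 'UserDefinedVMProperties=isolatedprivatevlan=^.*$' --cver=4.1
    systemctl restart ovirt-engine

and then set it per VM; the exact value format is documented in the hook's
README (the regex and --cver here are assumptions).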


> Thanks
> 
> CC
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives:


[ovirt-users] [ANN] introducing ovirt-openshift-extensions

2018-05-15 Thread Roy Golan
Hi all,

Running Openshift on oVirt seems more and more attractive and is starting
to get attention.
It is very easy to do so today without needing any special configuration;
however, to tighten the integration and to take advantage of the underlying
infra provider (oVirt) we can do better. For example, oVirt can connect
various storage providers and serve disk space to containers[1][2]. Also,
Openshift can ask oVirt for VMs and deploy them as master or application
nodes for its usage, without making the administrator do all that manually.

This project[1] is the home for the ovirt-flexvolume-driver and
ovirt-provisioner[3]
and merging ovirt-cloudprovider[4] is a work in progress.

The code under this repository is work-in-progress and moving quickly;
however, it has automation (stdci v2 :)), is working, and does at least what
you can observe in the demo videos. I would highly appreciate it if any of
you who try it would provide feedback, be that on the #ovirt channel /
mailing list, or by reporting bugs directly on the GitHub project page.

[1] https://github.com/oVirt/ovirt-openshift-extensions
[2] https://ovirt.org/blog/2018/02/your-container-volumes-served-by-ovirt/
[3] https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/
[4] https://github.com/rgolangh/ovirt-k8s-cloudprovider


Thanks,
Roy
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives:


[ovirt-users] Re: VM interface bonding (LACP)

2018-05-15 Thread Yaniv Kaul
On Mon, May 14, 2018 at 11:25 PM, Christopher Cox 
wrote:

> In the ideal case, what you'd have:
>
>| Single virtio virtual interface
>|
>  VM  Host  Switch stack
>  |
>  |--- 4x 1Gbit interfaces bonded over LACP
>
> The change: virtio instead of "1 Gbit"
>
> You can't get blood from a stone, that is, you can't manufacture bandwidth
> that isn't there.  If you need more than gigabit speed, you need something
> like 10Gbit.  Realize that usually, we're talking about a system created to
> run more than one VM.  If just one, you'll do better with dedicated
> hardware.  If more than one VM, then there sharing going on, though you
> might be able to use QoS (either in oVirt or outside). Even so, if just one
> VM on 10Gbit, you won't necessarily get full 10Gbit out of virtio.  But at
> the same  time bonding should help in the case of multiple VMs.
>

Jumbo frames may help in some workloads and give ~5% boost or so.
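For example (assuming the whole path - NICs, bond and switch ports -
supports it), set MTU 9000 on the logical network and verify on the host:

    ip link show bond0 | grep mtu     # expect mtu 9000
    ping -M do -s 8972 <peer-ip>      # 9000 minus 28 bytes of IP/ICMP headers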
Y.


>
> Now, back to the suggestion at hand.  Multiple virtual NICs.  If the
> logical networks presented via oVirt are such that each (however many)
> logical network has it's own "pipe", then defining a vNIC on each of those
> networks gets you the same sort of "gain" with respect to bonding.  That
> is, no magic bandwidth increase for a particular connection, but more pipes
> available for multiple connections (essentially what you'd expect).
>
> Obviously up to you how you want to do this.  I think you might do better
> to consider a better underlying infrastructure to oVirt rather than trying
> to bond vNICs.  Pretty sure I'm right about that.  Would think the idea of
> bonding at the VM level might be best for simulating something rather than
> something you do because it's right/best.
>
>
>
> On 05/14/2018 03:03 PM, Doug Ingham wrote:
>
>> On 14 May 2018 at 15:35, Juan Pablo <pablo.localh...@gmail.com> wrote:
>>
>> so you have lacp on your host, and you want lacp also on your vm...
>> somehow doesn't sounds correct.
>> there are several lacp modes. which one are you using on the host?
>>
>>
>>   Correct!
>>
>>   | Single 1Gbit virtual interface
>>   |
>> VM  Host  Switch stack
>> |
>> |--- 4x 1Gbit interfaces bonded over LACP
>>
>> The traffic for all of the VMs is distributed across the host's 4 bonded
>> links, however each VM is limited to the 1Gbit of its own virtual
>> interface. In the case of my proxy, all web traffic is routed through it,
>> so its single Gbit interface has become a bottleneck.
>>
>> To increase the total bandwidth available to my VM, I presume I will need
>> to add multiple Gbit VIFs & bridge them with a bonding mode.
>> Balance-alb (mode 6) is one option, however I'd prefer to use LACP (mode
>> 4) if possible.
>>
>>
>> 2018-05-14 16:20 GMT-03:00 Doug Ingham:
>>
>> On 14 May 2018 at 15:03, Vinícius Ferrão wrote:
>>
>> You should use better hashing algorithms for LACP.
>>
>> Take a look at this explanation:
>> https://www.ibm.com/developerworks/community/blogs/storageneers/entry/Enhancing_IP_Network_Performance_with_LACP?lang=en
>>
>> In general only L2 hashing is made, you can achieve better
>> throughput with L3 and multiple IPs, or with L4 (ports).
>>
>> Your switch should support those features too, if you’re
>> using one.
>>
>> V.
>>
>>
>> The problem isn't the LACP connection between the host & the
>> switch, but setting up LACP between the VM & the host. For
>> reasons of stability, my 4.1 cluster's switch type is currently
>> "Linux Bridge", not "OVS". Ergo my question, is LACP on the VM
>> possible with that, or will I have to use ALB?
>>
>> Regards,
>>   Doug
>>
>>
>>
>> On 14 May 2018, at 15:16, Doug Ingham wrote:
>>
>> Hi All,
>>   My hosts have all of their interfaces bonded via LACP to
>> maximise throughput, however the VMs are still limited to
>> Gbit virtual interfaces. Is there a way to configure my VMs
>> to take full advantage of the bonded physical interfaces?
>>
>> One way might be adding several VIFs to each VM & using ALB
>> bonding, however I'd rather use LACP if possible...
>>
>> Cheers,
>> --
>> Doug
>>
>>
>> -- Doug
>>
>>
>>
>>
>> --
>> Doug
>>
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>>
>> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an 

[ovirt-users] Host needs to be reinstalled message

2018-05-15 Thread Gianluca Cecchi
Hello,
on a test environment in 4.1 when selecting an host, I get in the below
pane the exclamation mark and aside it the phrase:

Host needs to be reinstalled as important configuration changes were
applied on it

Where to get more information about what it thanks has changed?
Could it be a change in bonding mode to generate this kind of message?

Thanks,
Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives:


[ovirt-users] Re: What is "Active VM" snapshot

2018-05-15 Thread Arik Hadas
On Tue, May 15, 2018 at 12:21 PM,  wrote:

> Hi!
>
> Can anybody explain what the "Active VM" snapshot that is present on each VM
> is, and does it contain a VM memory snapshot?
>

"Active VM" snapshot is an entity that:
(1) Top-level volume of each disk image that the VM uses when it starts is
attached to.
(2) Memory that needs to be restored when the VM starts is attached to.

So yeah, it may contain memory - this happens when the VM is either
suspended or a snapshot with memory is restored.
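You can also see it through the API, e.g. listing a VM's snapshots with
something like:

    curl -k -u admin@internal:PASSWORD \
        https://engine.example.com/ovirt-engine/api/vms/VM_UUID/snapshots

where the "Active VM" entry is reported with
<snapshot_type>active</snapshot_type> (host name and UUID are placeholders).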


>
> I need it for a PCI assessment and did not find an answer in the
> documentation or any other public sources.
>
> Thanks!
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> oVirt Code of Conduct: https://www.ovirt.org/communit
> y/about/community-guidelines/
> List Archives:
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives:


[ovirt-users] Re: Ovirt Self Hosted engine deploy fails

2018-05-15 Thread Sumit Bharadia
Thanks, I get the below message:

*Failed to start ovn-controller.service: Unit not found.*

Isn't it included within the appliance we get for the self-hosted ovirt-engine?


On 14 May 2018 at 12:53, Simone Tiraboschi  wrote:

>
>
> On Mon, May 14, 2018 at 11:52 AM, Sumit Bharadia <03ce...@gmail.com>
> wrote:
>
>> the output of:  journalctl -xe -u ovn-controller
>> -- No entries --
>>
>> what is the command to manually restart this service?
>>
>
> systemctl start ovn-controller
>
>
>>
>> Thank you.
>>
>> On 14 May 2018 at 09:12, Simone Tiraboschi  wrote:
>>
>>>
>>>
>>> On Mon, May 14, 2018 at 10:03 AM, Sumit Bharadia <03ce...@gmail.com>
>>> wrote:
>>>
 attached from engine VM.

>>>
>>>
>>> The issue is here:
>>>
>>> 2018-05-14 08:04:50,596 p=26024 u=ovirt |  TASK
>>> [ovirt-provider-ovn-driver : Ensure ovn-controller is started] 
>>> 2018-05-14 08:04:51,991 p=26024 u=ovirt |  fatal: [ovirt]: FAILED! => {
>>> "changed": false
>>> }
>>>
>>> MSG:
>>>
>>> Unable to start service ovn-controller: A dependency job for
>>> ovn-controller.service failed. See 'journalctl -xe' for details.
>>>
>>>
>>> Can you please double check the output of:
>>>   journalctl -xe -u ovn-controller
>>>
>>>
>>>

 Thank you.

 On 14 May 2018 at 08:46, Simone Tiraboschi  wrote:

> Hi,
> can you please attach host-deploy logs?
> You can find them under /var/log/ovirt-engine/host-deploy on the
> engine VM (you can reach it with ssh from the host you tried to deploy).
>
> On Mon, May 14, 2018 at 9:27 AM, <03ce...@gmail.com> wrote:
>
>> I am trying to set up a self-hosted ovirt engine (4.2) on CentOS 7.4,
>> but the setup fails after the 'Add host' task on 'wait for the host to be
>> up'.
>> The log doesn't seem to give a clear indication of where the issue might be.
>> But I got the output below when ssh'ing manually into the ovirt appliance
>> where engine-deploy runs, and I see the host status as 'install_failed'.
>>
>> Where might the issue be, and where can I find a detailed log of the
>> failure?
>>
>> "ovirt_hosts": [
>> {
>> "address": "ovirt",
>> "affinity_labels": [],
>> "auto_numa_status": "unknown",
>> "certificate": {
>> "organization": "ovirt",
>> "subject": "O=ovirt,CN=ovirt"
>> },
>> "cluster": {
>> "href": "/ovirt-engine/api/clusters/ba
>> 170b8e-5744-11e8-8676-00163e3c9a32",
>> "id": "ba170b8e-5744-11e8-8676-00163e3c9a32"
>> },
>> "comment": "",
>> "cpu": {
>> "speed": 0.0,
>> "topology": {}
>> },
>> "device_passthrough": {
>> "enabled": false
>> },
>> "devices": [],
>> "external_network_provider_configurations": [],
>> "external_status": "ok",
>> "hardware_information": {
>> "supported_rng_sources": []
>> },
>> "hooks": [],
>> "href": "/ovirt-engine/api/hosts/aad0f
>> e84-2a9b-446d-ac02-82a8f6eb2a3c",
>> "id": "aad0fe84-2a9b-446d-ac02-82a8f6eb2a3c",
>> "katello_errata": [],
>> "kdump_status": "unknown",
>> "ksm": {
>> "enabled": false
>> },
>> "max_scheduling_memory": 0,
>> "memory": 0,
>> "name": "ovirt",
>> "network_attachments": [],
>> "nics": [],
>> "numa_nodes": [],
>> "numa_supported": false,
>> "os": {
>> "custom_kernel_cmdline": ""
>> },
>> "permissions": [],
>> "port": 54321,
>> "power_management": {
>> "automatic_pm_enabled": true,
>> "enabled": false,
>> "kdump_detection": true,
>> "pm_proxies": []
>> },
>> "protocol": "stomp",
>> "se_linux": {},
>> "spm": {
>> "priority": 5,
>> "status": "none"
>> },
>> "ssh": {
>> "fingerprint": "SHA256:o98ZOygBK0jcfY+l5nfi0E
>> GV9v3A4zjclG9d+C3U0WA",
>> "port": 22
>> },
>> "statistics": [],
>> "status": "install_failed",
>> "storage_connection_extensions": [],
>> "summary": {
>> "total": 0
>> },
>> "tags": [],
>> "transparent_huge_pages": {

[ovirt-users] What is "Active VM" snapshot

2018-05-15 Thread nikita . a . ogurtsov
Hi!

Can anybody explain what the "Active VM" snapshot present on each VM is, and
does it contain a VM memory snapshot?

I need it for a PCI assessment and did not find an answer in the documentation
or any other public sources.

Thanks!


[ovirt-users] Re: Failed to upgrade from 4.1 to 4.2 - Postgres version required

2018-05-15 Thread Yedidyah Bar David
Hi everyone,

I see that you are already deep into trying to work around your problem,
so I will not interfere. But do update us if you get stuck!

I'd just like to add that the failure you ran into indeed seems exactly
like:

https://bugzilla.redhat.com/show_bug.cgi?id=1528371

So it should not happen when upgrading to 4.2.3 or later.

If it does happen, to you or others, please also share engine-setup logs,
so that we can see if the fix there indeed worked - the fix was to pass
locale options to the upgrade script, with values taken from the existing
database.

In any case, we can't fix problems if we can't reproduce them, and in that
bug we fixed what we managed to reproduce. But there are many different
relevant options for locales and encodings, both OS-level, PG-instance
level, and specific-database level, so it's quite likely we missed some
cases.
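
As a sketch, this is the kind of information that helps in such reports - the
per-database encoding and locale plus the OS locale (the su invocation assumes
a default local PG setup):

  su - postgres -c 'psql -At -c "SELECT datname,
      pg_encoding_to_char(encoding), datcollate, datctype FROM pg_database;"'
  locale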

Best regards,

On Fri, Apr 27, 2018 at 12:14 PM, Aziz  wrote:
> Hi Marcelo,
>
> I already upgraded to version 4.2 in a new installation, but I couldn't
> restore my backup, so I will do the config from scratch.
>
> engine-backup --mode=restore --scope=all --file=pgbackup --log=restore_log
> --restore-permissions
> Preparing to restore:
> - Unpacking file 'pgbackup'
> FATAL: Backup was created by version '4.1' and can not be restored using the
> installed version 4.2
>
>
> Thank you all for your help.
>
> BR
>
> On Thu, Apr 26, 2018 at 3:50 PM, Marcelo Leandro 
> wrote:
>>
>> Hello,
>>
>> I just did a clean install on another server with these steps:
>>
>> my host is a CentOS 7 with an English locale:
>>
>> command:
>>
>> #locale
>> LANG=en_US.UTF-8
>> LC_CTYPE="en_US.UTF-8"
>> LC_NUMERIC="en_US.UTF-8"
>> LC_TIME="en_US.UTF-8"
>> LC_COLLATE="en_US.UTF-8"
>> LC_MONETARY="en_US.UTF-8"
>> LC_MESSAGES="en_US.UTF-8"
>> LC_PAPER="en_US.UTF-8"
>> LC_NAME="en_US.UTF-8"
>> LC_ADDRESS="en_US.UTF-8"
>> LC_TELEPHONE="en_US.UTF-8"
>> LC_MEASUREMENT="en_US.UTF-8"
>> LC_IDENTIFICATION="en_US.UTF-8"
>> LC_ALL=
>>
>>
>> 1-yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
>> 2-yum update
>> 3-yum install ovirt-engine -y
>> 4-engine-setup --accept-defaults
>> 5-engine-cleanup
>> 6- engine-backup --mode=restore --no-restore-permissions --provision-db
>> --provision-dwh-db --provision-reports-db --file=engine-backup.tar.gz
>> --log=engine-backup-restore.log
>>
>> and upgrade now:
>>
>> 1-yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm
>> 2-yum update "ovirt-*-setup*"
>> 3- engine-setup
>>
>> It works for me.
>>
>>
>>
>> 2018-04-26 9:12 GMT-03:00 Marcelo Leandro :
>>>
>>> Do you have a full backup?
>>>
>>> If yes, I think it would be better, if possible, to configure a new server and restore.
>>>
>>> On 26 Apr 2018 at 09:00, "Aziz" wrote:
>>>
>>> Thanks Marcelo for the feedback,
>>>
>>> In my case some of the components are already upgraded to 4.2, including
>>> engine-cleanup; therefore I got the following error:
>>>
>>> engine-cleanup
>>> [ INFO  ] Stage: Initializing
>>> [ INFO  ] Stage: Environment setup
>>>   Configuration files:
>>> ['/etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf',
>>> '/etc/ovirt-engine-setup.conf.d/10-packaging.conf',
>>> '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf']
>>>   Log file:
>>> /var/log/ovirt-engine/setup/ovirt-engine-remove-2018042613-m68ygc.log
>>>   Version: otopi-1.7.7 (otopi-1.7.7-1.el7.centos)
>>> [ ERROR ] Cleanup utility and installed version mismatch
>>>   Please use a version of cleanup utility that matches the engine
>>> installed version (now engine-cleanup 4.2.2.6, engine 4.1.9.1)
>>> [ ERROR ] Failed to execute stage 'Environment setup': Cleanup utility
>>> version mismatch
>>> [ INFO  ] Stage: Clean up
>>>   Log file is located at
>>> /var/log/ovirt-engine/setup/ovirt-engine-remove-2018042613-m68ygc.log
>>> [ INFO  ] Generating answer file
>>> '/var/lib/ovirt-engine/setup/answers/20180426135556-cleanup.conf'
>>>
>>> [ INFO  ] Stage: Pre-termination
>>> [ INFO  ] Stage: Termination
>>> [ ERROR ] Execution of cleanup failed
>>>
>>>
>>> Is there a way to downgrade?
>>>
>>>
>>> Thanks
>>>
>>> On Thu, Apr 26, 2018 at 12:49 PM, Marcelo Leandro 
>>> wrote:

 I had the same problem; it is a problem in the database structure. In my lab
 I followed these steps:

 FOLLOW THIS STEP IN A LAB FIRST:

 Full backup before upgrading the engine:

 engine-backup --scope=all --mode=backup --file=file_name
 --log=log_file_name

 then clean your engine config:

 engine-cleanup


 Change the structure of template0:

 su - postgres

 psql -U postgres

 postgres=# update pg_database set datallowconn = TRUE where datname =
 'template0';
 UPDATE 1
 postgres=# \c template0
 You are now connected to database "template0".
 template0=# update 

[ovirt-users] Re: ovirt 4.2 failed deploy

2018-05-15 Thread Alex K
I overcame this with the following, run at the host:

/usr/sbin/ovirt-hosted-engine-cleanup

Then I redeployed the engine:

engine-setup

This time it was ok.

Thanx,
Alex

On Tue, May 15, 2018 at 10:51 AM, Alex K  wrote:

> Hi,
>
> Thanx for the feedback.
>
> *getent ahostsv4 v0.mydomain*
>
> gives:
>
> 172.16.30.10STREAM v0
> 172.16.30.10DGRAM
> 172.16.30.10RAW
>
> which means that
>
> *getent ahostsv4 v0.mydomain | grep v0.mydomain*
>
> gives null
>
> I overcame this by using the flag *--noansible* to proceed with the
> python way and it did succeed.
>
> Now I am stuck at the engine-setup 'create CA' step. It never finishes and I see
> several errors in the setup log (grep -iE 'error|fail'):
>
> 2018-05-15 03:40:03,749-0400 DEBUG otopi.context
> context.dumpEnvironment:869 ENV BASE/error=bool:'False'
> 2018-05-15 03:40:03,751-0400 DEBUG otopi.context
> context.dumpEnvironment:869 ENV CORE/failOnPrioOverride=bool:'True'
> 2018-05-15 03:40:04,338-0400 DEBUG otopi.context
> context.dumpEnvironment:869 ENV BASE/error=bool:'False'
> 2018-05-15 03:40:04,339-0400 DEBUG otopi.context
> context.dumpEnvironment:869 ENV CORE/failOnPrioOverride=bool:'True'
> 2018-05-15 03:40:04,532-0400 DEBUG otopi.context
> context.dumpEnvironment:869 ENV OVESETUP_CORE/
> failOnDulicatedConstant=bool:'False'
> 2018-05-15 03:40:04,809-0400 DEBUG otopi.context
> context.dumpEnvironment:869 ENV OVESETUP_PROVISIONING/
> postgresExtraConfigItems=tuple:'({'ok':  at
> 0x7ff1630b9578>, 'check_on_use': True, 'needed_on_create': True, 'key':
> 'autovacuum_vacuum_scale_factor', 'expected': 0.01, 'error_msg': '{key}
> required to be at most {expected}'}, {'ok':  at
> 0x7ff1630b9a28>, 'check_on_use': True, 'needed_on_create': True, 'key':
> 'autovacuum_analyze_scale_factor', 'expected': 0.075, 'error_msg': '{key}
> required to be at most {expected}'}, {'ok':  at
> 0x7ff163099410>, 'check_on_use': True, 'needed_on_create': True, 'key':
> 'autovacuum_max_workers', 'expected': 6, 'error_msg': '{key} required to be
> at least {expected}'}, {'ok':  at 0x7ff163099488>,
> 'check_on_use': True, 'needeOperationalError: FATAL:  *password
> authentication failed for user "engine"*
> FATAL:  password authentication failed for user "engine"
> 2018-05-15 03:40:11,408-0400 DEBUG otopi.context
> context.dumpEnvironment:869 ENV BASE/error=bool:'False'
> 2018-05-15 03:40:11,417-0400 DEBUG otopi.context
> context.dumpEnvironment:869 ENV CORE/failOnPrioOverride=bool:'True'
> 2018-05-15 03:40:11,441-0400 DEBUG otopi.context
> context.dumpEnvironment:869 ENV OVESETUP_CORE/
> failOnDulicatedConstant=bool:'False'
> 2018-05-15 03:40:11,457-0400 DEBUG otopi.context
> context.dumpEnvironment:869 ENV OVESETUP_PROVISIONING/
> postgresExtraConfigItems=tuple:'({'ok':  at
> 0x7ff1630b9578>, 'check_on_use': True, 'needed_on_create': True, 'key':
> 'autovacuum_vacuum_scale_factor', 'expected': 0.01, 'error_msg': '{key}
> required to be at most {expected}'}, {'ok':  at
> 0x7ff1630b9a28>, 'check_on_use': True, 'needed_on_create': True, 'key':
> 'autovacuum_analyze_scale_factor', 'expected': 0.075, 'error_msg': '{key}
> required to be at most {expected}'}, {'ok':  at
> 0x7ff163099410>, 'check_on_use': True, 'needed_on_create': True, 'key':
> 'autovacuum_max_workers', 'expected': 6, 'error_msg': '{key} required to be
> at least {expected}'}, {'ok':  at 0x7ff163099488>,
> 'check_on_use': True, 'needed_on_create': True, 'key':
> 'maintenance_work_mem', 'expected': 65536, 'error_msg': '{key} required to
> be at least {expected}', 'useQueryForValue': True}, {'ok':   at 0x7ff163099500>, 'check_on_use': True, 'needed_on_create':
> True, 'key': 'work_mem', 'expected': 8192, 'error_msg': '{key} required to
> be at least {expected}', 'useQueryForValue': True})'
> raise RuntimeError("SIG%s" % signum)
> RuntimeError: SIG2
> raise RuntimeError("SIG%s" % signum)
> RuntimeError: SIG2
> 2018-05-15 03:41:19,888-0400 ERROR otopi.context
> context._executeMethod:152 *Failed to execute stage 'Misc configuration':
> SIG2*
> 2018-05-15 03:41:19,993-0400 DEBUG otopi.context
> context.dumpEnvironment:869 ENV BASE/error=bool:'True'
> 2018-05-15 03:41:19,993-0400 DEBUG otopi.context
> context.dumpEnvironment:869 ENV BASE/exceptionInfo=list:'[( 'exceptions.RuntimeError'>, RuntimeError('SIG2',),  0x7ff161de9560>)]'
> 2018-05-15 03:41:20,033-0400 DEBUG otopi.context
> context.dumpEnvironment:869 ENV BASE/error=bool:'True'
> 2018-05-15 03:41:20,033-0400 DEBUG otopi.context
> context.dumpEnvironment:869 ENV BASE/exceptionInfo=list:'[( 'exceptions.RuntimeError'>, RuntimeError('SIG2',),  0x7ff161de9560>)]'
> 2018-05-15 03:41:20,038-0400 DEBUG otopi.context
> context.dumpEnvironment:869 ENV CORE/failOnPrioOverride=bool:'True'
> 2018-05-15 03:41:20,056-0400 DEBUG otopi.context
> context.dumpEnvironment:869 ENV OVESETUP_CORE/
> failOnDulicatedConstant=bool:'False'
> 2018-05-15 03:41:20,069-0400 DEBUG otopi.context
> context.dumpEnvironment:869 ENV OVESETUP_PROVISIONING/
> 

[ovirt-users] Export Domain Lock File

2018-05-15 Thread Nicholas Vaughan
Hi,

Is there a lock file on the Export Domain which stops it from being mounted to
a second instance of oVirt?  We have a replicated Export Domain in a separate
location that we would like to mount to a backup instance of oVirt for DR
purposes.

Thanks in advance.
Nick


[ovirt-users] Re: VM interface bonding (LACP)

2018-05-15 Thread Christopher Cox

In the ideal case, what you'd have:

      | Single virtio virtual interface
      |
 VM ----- Host ----- Switch stack
                |
                |--- 4x 1Gbit interfaces bonded over LACP

The change: virtio instead of "1 Gbit"

You can't get blood from a stone; that is, you can't manufacture 
bandwidth that isn't there.  If you need more than gigabit speed, you 
need something like 10Gbit.  Realize that usually, we're talking about a 
system created to run more than one VM.  If just one, you'll do better 
with dedicated hardware.  If more than one VM, then there's sharing going 
on, though you might be able to use QoS (either in oVirt or outside). 
Even so, with just one VM on 10Gbit, you won't necessarily get full 
10Gbit out of virtio.  But at the same time bonding should help in the 
case of multiple VMs.


Now, back to the suggestion at hand.  Multiple virtual NICs.  If the 
logical networks presented via oVirt are such that each (however many) 
logical network has its own "pipe", then defining a vNIC on each of 
those networks gets you the same sort of "gain" with respect to bonding. 
 That is, no magic bandwidth increase for a particular connection, but 
more pipes available for multiple connections (essentially what you'd 
expect).
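
For what it's worth, a minimal sketch of that multi-vNIC approach from inside 
the guest - assuming a CentOS 7 VM with two vNICs (eth0/eth1) and balance-alb, 
since a Linux Bridge switch type cannot negotiate LACP with a guest:

  # create the bond and enslave both vNICs (connection names are assumptions)
  nmcli con add type bond con-name bond0 ifname bond0 bond.options "mode=balance-alb,miimon=100"
  nmcli con add type ethernet con-name bond0-port1 ifname eth0 master bond0
  nmcli con add type ethernet con-name bond0-port2 ifname eth1 master bond0
  nmcli con up bond0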


Obviously it's up to you how you want to do this.  I think you might do 
better to consider a better underlying infrastructure for oVirt rather 
than trying to bond vNICs.  Pretty sure I'm right about that.  I would 
think the idea of bonding at the VM level is best for simulating 
something rather than something you do because it's right/best.




On 05/14/2018 03:03 PM, Doug Ingham wrote:
On 14 May 2018 at 15:35, Juan Pablo wrote:


so you have lacp on your host, and you want lacp also on your vm...
somehow that doesn't sound correct.
there are several lacp modes. which one are you using on the host?


  Correct!

      | Single 1Gbit virtual interface
      |
 VM ----- Host ----- Switch stack
                |
                |--- 4x 1Gbit interfaces bonded over LACP

The traffic for all of the VMs is distributed across the host's 4 bonded 
links, however each VM is limited to the 1Gbit of its own virtual 
interface. In the case of my proxy, all web traffic is routed through 
it, so its single Gbit interface has become a bottleneck.


To increase the total bandwidth available to my VM, I presume I will 
need to add multiple Gbit VIFs & bridge them with a bonding mode.
Balance-alb (mode 6) is one option, however I'd prefer to use LACP (mode 
4) if possible.



2018-05-14 16:20 GMT-03:00 Doug Ingham:

On 14 May 2018 at 15:03, Vinícius Ferrão wrote:

You should use better hashing algorithms for LACP.

Take a look at this explanation:

https://www.ibm.com/developerworks/community/blogs/storageneers/entry/Enhancing_IP_Network_Performance_with_LACP?lang=en



In general only L2 hashing is done; you can achieve better
throughput with L3 and multiple IPs, or with L4 (ports).

Your switch should support those features too, if you’re
using one.

V.


The problem isn't the LACP connection between the host & the
switch, but setting up LACP between the VM & the host. For
reasons of stability, my 4.1 cluster's switch type is currently
"Linux Bridge", not "OVS". Ergo my question, is LACP on the VM
possible with that, or will I have to use ALB?

Regards,
  Doug



On 14 May 2018, at 15:16, Doug Ingham wrote:

Hi All,
  My hosts have all of their interfaces bonded via LACP to
maximise throughput, however the VMs are still limited to
Gbit virtual interfaces. Is there a way to configure my VMs
to take full advantage of the bonded physical interfaces?

One way might be adding several VIFs to each VM & using ALB
bonding, however I'd rather use LACP if possible...

Cheers,
--
Doug


-- 
Doug





--
Doug




[ovirt-users] Re: Custom Intel AMT fencing question

2018-05-15 Thread Martin Perina
On Mon, May 14, 2018 at 8:13 PM, Shawn Southern  wrote:

> I'm now using Intel AMT and the wsmancli package to reboot/power off/power
> on my entry level systems... but now I want oVirt to use this for fencing.
>
> I created 3 xml files: powercycle.xml (uses PowerState 10), poweron.xml
> (uses PowerState 2) and poweroff.xml (uses PowerState 8).  Here is the
> poweroff.xml file:
> <p:RequestPowerStateChange_INPUT
>     xmlns:p="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_PowerManagementService">
>   <p:PowerState>8</p:PowerState>
>   <p:ManagedElement xmlns:a="http://schemas.xmlsoap.org/ws/2004/08/addressing"
>       xmlns:wsman="http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd">
>     <a:Address>http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous</a:Address>
>     <a:ReferenceParameters>
>       <wsman:ResourceURI>http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ComputerSystem</wsman:ResourceURI>
>       <wsman:SelectorSet>
>         <wsman:Selector Name="CreationClassName">CIM_ComputerSystem</wsman:Selector>
>         <wsman:Selector Name="Name">ManagedSystem</wsman:Selector>
>       </wsman:SelectorSet>
>     </a:ReferenceParameters>
>   </p:ManagedElement>
> </p:RequestPowerStateChange_INPUT>
>
> I can then reboot or power on/off the server with:
> wsman invoke -a RequestPowerStateChange \
>   http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_PowerManagementService \
>   -h [AMT IP] -P 16992 -u admin -p [amt password] -J /fencing/poweron.xml
> (or poweroff.xml, etc.)
>
> My question is, how do I move from this to using this for fencing in oVirt?
>

At the moment oVirt doesn't officially support AMT as a fence agent. But
I've just noticed that on CentOS 7 we already have a fence-agents-amt-ws
package, so please try to install the fence-agents-amt-ws package and test
whether it works for your server.
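
A sketch of a direct test from the host (the flag names follow the common
fence-agents CLI conventions, so verify with fence_amt_ws --help):

  fence_amt_ws --ip=[AMT IP] --username=admin --password=[amt password] --action=status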

If the above agent works fine, then please take a look at the Custom Fencing
oVirt feature [1], which should allow you to use the fence_amt_ws agent
in oVirt. Am I right, Eli?

Regards

Martin


[1] https://www.ovirt.org/develop/developer-guide/engine/custom-fencing/
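
For illustration only, the registration per [1] would look roughly like this -
the type name and mapping are assumptions, see [1] for the exact syntax:

  engine-config -s CustomVdsFenceType="amt_ws"
  engine-config -s CustomFenceAgentMapping="amt_ws=amt_ws"
  systemctl restart ovirt-engine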


> Thanks!



-- 
Martin Perina
Associate Manager, Software Engineering
Red Hat Czech s.r.o.


[ovirt-users] Re: Gluster quorum

2018-05-15 Thread Sahina Bose
On Tue, May 15, 2018 at 1:28 PM, Demeter Tibor  wrote:

> Hi,
>
> Could you explain how I can use this patch?
>

You can use the 4.2 nightly to test it out -
http://resources.ovirt.org/pub/yum-repo/ovirt-release42-snapshot.rpm
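
A sketch of pulling it onto an existing 4.2 engine, assuming the fix ships in
the engine packages:

  yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release42-snapshot.rpm
  yum update "ovirt-engine*"
  engine-setup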


> R,
> Tibor
>
>
> - On 14 May 2018, at 11:18, Demeter Tibor wrote:
>
> Hi,
>
> Sorry for my question, but can you please tell me how I can use this patch?
>
> Thanks,
> Regards,
> Tibor
> - On 14 May 2018, at 10:47, Sahina Bose wrote:
>
>
>
> On Sat, May 12, 2018 at 1:14 PM, Demeter Tibor 
> wrote:
>
>> Hi,
>>
>> Could someone help me, please? I can't finish my upgrade process.
>>
>
> https://gerrit.ovirt.org/91164 should fix the error you're facing.
>
> Can you elaborate why this is affecting the upgrade process?
>
>
>> Thanks
>> R
>> Tibor
>>
>>
>>
>> - On 10 May 2018, at 12:51, Demeter Tibor wrote:
>>
>> Hi,
>>
>> I've attached the vdsm and supervdsm logs. But I don't have engine.log
>> here, because that is on hosted engine vm. Should I send that ?
>>
>> Thank you
>>
>> Regards,
>>
>> Tibor
>> - On 10 May 2018, at 12:30, Sahina Bose wrote:
>>
>> There's a bug here. Can you log one, attaching this engine.log and also the
>> vdsm.log & supervdsm.log from n3.itsmart.cloud?
>>
>> On Thu, May 10, 2018 at 3:35 PM, Demeter Tibor 
>> wrote:
>>
>>> Hi,
>>>
>>> I found this:
>>>
>>>
>>> 2018-05-10 03:24:19,096+02 INFO  [org.ovirt.engine.core.
>>> vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] FINISH, 
>>> GetGlusterVolumeAdvancedDetailsVDSCommand,
>>> return: org.ovirt.engine.core.common.businessentities.gluster.
>>> GlusterVolumeAdvancedDetails@ca97448e, log id: 347435ae
>>> 2018-05-10 03:24:19,097+02 ERROR 
>>> [org.ovirt.engine.core.bll.gluster.GlusterSyncJob]
>>> (DefaultQuartzScheduler7) [43f4eaec] Error while refreshing brick statuses
>>> for volume 'volume2' of cluster 'C6220': null
>>> 2018-05-10 03:24:19,097+02 INFO  
>>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
>>> (DefaultQuartzScheduler8) [7715ceda] Failed to acquire lock and wait lock
>>> 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-0339=GLUSTER]',
>>> sharedLocks=''}'
>>> 2018-05-10 03:24:19,104+02 INFO  [org.ovirt.engine.core.
>>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] START, 
>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName
>>> = n4.itsmart.cloud, VdsIdVDSCommandParametersBase:
>>> {hostId='3ddef95f-158d-407c-a7d8-49641e012755'}), log id: 6908121d
>>> 2018-05-10 03:24:19,106+02 ERROR [org.ovirt.engine.core.
>>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] Command '
>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud,
>>> VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})'
>>> execution failed: null
>>> 2018-05-10 03:24:19,106+02 INFO  [org.ovirt.engine.core.
>>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] FINISH, 
>>> GetGlusterLocalLogicalVolumeListVDSCommand,
>>> log id: 6908121d
>>> 2018-05-10 03:24:19,107+02 INFO  [org.ovirt.engine.core.
>>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] START, 
>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName
>>> = n1.itsmart.cloud, VdsIdVDSCommandParametersBase:
>>> {hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'}), log id: 735c6a5f
>>> 2018-05-10 03:24:19,109+02 ERROR [org.ovirt.engine.core.
>>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] Command '
>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud,
>>> VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})'
>>> execution failed: null
>>> 2018-05-10 03:24:19,109+02 INFO  [org.ovirt.engine.core.
>>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] FINISH, 
>>> GetGlusterLocalLogicalVolumeListVDSCommand,
>>> log id: 735c6a5f
>>> 2018-05-10 03:24:19,110+02 INFO  [org.ovirt.engine.core.
>>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] START, 
>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName
>>> = n2.itsmart.cloud, VdsIdVDSCommandParametersBase:
>>> {hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}), log id: 6f9e9f58
>>> 2018-05-10 03:24:19,112+02 ERROR [org.ovirt.engine.core.
>>> vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>>> (DefaultQuartzScheduler7) [43f4eaec] Command '
>>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud,
>>> VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'})'
>>> execution failed: null
>>> 2018-05-10 

[ovirt-users] Re: Gluster quorum

2018-05-15 Thread Demeter Tibor
Hi, 

Could you explain how I can use this patch?

R, 
Tibor 

- On 14 May 2018, at 11:18, Demeter Tibor wrote:

> Hi,

> Sorry for my question, but can you please tell me how I can use this patch?

> Thanks,
> Regards,
> Tibor
> - On 14 May 2018, at 10:47, Sahina Bose wrote:

>> On Sat, May 12, 2018 at 1:14 PM, Demeter Tibor <tdeme...@itsmart.hu> wrote:

>>> Hi,

>>> Could someone help me, please? I can't finish my upgrade process.

>> https://gerrit.ovirt.org/91164 should fix the error you're facing.

>> Can you elaborate why this is affecting the upgrade process?

>>> Thanks
>>> R
>>> Tibor

>>> - On 10 May 2018, at 12:51, Demeter Tibor <tdeme...@itsmart.hu> wrote:

 Hi,

 I've attached the vdsm and supervdsm logs. But I don't have engine.log 
 here,
 because that is on hosted engine vm. Should I send that ?

 Thank you

 Regards,

 Tibor
 - On 10 May 2018, at 12:30, Sahina Bose <sab...@redhat.com> wrote:

> There's a bug here. Can you log one, attaching this engine.log and also the
> vdsm.log & supervdsm.log from n3.itsmart.cloud?

> On Thu, May 10, 2018 at 3:35 PM, Demeter Tibor <tdeme...@itsmart.hu> wrote:

>> Hi,

>> I found this:

>> 2018-05-10 03:24:19,096+02 INFO
>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterVolumeAdvancedDetailsVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] FINISH,
>> GetGlusterVolumeAdvancedDetailsVDSCommand, return:
>> org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeAdvancedDetails@ca97448e,
>> log id: 347435ae
>> 2018-05-10 03:24:19,097+02 ERROR
>> [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] 
>> (DefaultQuartzScheduler7)
>> [43f4eaec] Error while refreshing brick statuses for volume 'volume2' of
>> cluster 'C6220': null
>> 2018-05-10 03:24:19,097+02 INFO
>> [org.ovirt.engine.core.bll.lock.InMemoryLockManager] 
>> (DefaultQuartzScheduler8)
>> [7715ceda] Failed to acquire lock and wait lock
>> 'EngineLock:{exclusiveLocks='[59c10db3-0324-0320-0120-0339=GLUSTER]',
>> sharedLocks=''}'
>> 2018-05-10 03:24:19,104+02 INFO
>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] START,
>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud,
>> VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'}),
>> log id: 6908121d
>> 2018-05-10 03:24:19,106+02 ERROR
>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] Command
>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n4.itsmart.cloud,
>> VdsIdVDSCommandParametersBase:{hostId='3ddef95f-158d-407c-a7d8-49641e012755'})'
>> execution failed: null
>> 2018-05-10 03:24:19,106+02 INFO
>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] FINISH,
>> GetGlusterLocalLogicalVolumeListVDSCommand, log id: 6908121d
>> 2018-05-10 03:24:19,107+02 INFO
>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] START,
>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud,
>> VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'}),
>> log id: 735c6a5f
>> 2018-05-10 03:24:19,109+02 ERROR
>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] Command
>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n1.itsmart.cloud,
>> VdsIdVDSCommandParametersBase:{hostId='8e737bab-e0bb-4f16-ab85-e24e91882f57'})'
>> execution failed: null
>> 2018-05-10 03:24:19,109+02 INFO
>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] FINISH,
>> GetGlusterLocalLogicalVolumeListVDSCommand, log id: 735c6a5f
>> 2018-05-10 03:24:19,110+02 INFO
>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] START,
>> GetGlusterLocalLogicalVolumeListVDSCommand(HostName = n2.itsmart.cloud,
>> VdsIdVDSCommandParametersBase:{hostId='06e361ef-3361-4eaa-9923-27fa1a0187a4'}),
>> log id: 6f9e9f58
>> 2018-05-10 03:24:19,112+02 ERROR
>> [org.ovirt.engine.core.vdsbroker.gluster.GetGlusterLocalLogicalVolumeListVDSCommand]
>> (DefaultQuartzScheduler7) [43f4eaec] Command
>> 'GetGlusterLocalLogicalVolumeListVDSCommand(HostName = 

[ovirt-users] Re: ovirt 4.2 failed deploy

2018-05-15 Thread Alex K
Hi,

Thanx for the feedback.

*getent ahostsv4 v0.mydomain*

gives:

172.16.30.10STREAM v0
172.16.30.10DGRAM
172.16.30.10RAW

which means that

*getent ahostsv4 v0.mydomain | grep v0.mydomain*

gives null

I overcame this by using the flag *--noansible* to proceed with the python
way and it did succeed.
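
For anyone hitting the same ansible check: it greps the getent output for the
FQDN, while getent above prints only the short name. A hosts entry that lists
the FQDN first should satisfy it - a sketch reusing the IP and names from the
output above:

  # /etc/hosts
  172.16.30.10   v0.mydomain   v0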

Now I am stuck at the engine-setup 'create CA' step. It never finishes and I
see several errors in the setup log (grep -iE 'error|fail'):

2018-05-15 03:40:03,749-0400 DEBUG otopi.context
context.dumpEnvironment:869 ENV BASE/error=bool:'False'
2018-05-15 03:40:03,751-0400 DEBUG otopi.context
context.dumpEnvironment:869 ENV CORE/failOnPrioOverride=bool:'True'
2018-05-15 03:40:04,338-0400 DEBUG otopi.context
context.dumpEnvironment:869 ENV BASE/error=bool:'False'
2018-05-15 03:40:04,339-0400 DEBUG otopi.context
context.dumpEnvironment:869 ENV CORE/failOnPrioOverride=bool:'True'
2018-05-15 03:40:04,532-0400 DEBUG otopi.context
context.dumpEnvironment:869 ENV
OVESETUP_CORE/failOnDulicatedConstant=bool:'False'
2018-05-15 03:40:04,809-0400 DEBUG otopi.context
context.dumpEnvironment:869 ENV
OVESETUP_PROVISIONING/postgresExtraConfigItems=tuple:'({'ok':  at 0x7ff1630b9578>, 'check_on_use': True, 'needed_on_create':
True, 'key': 'autovacuum_vacuum_scale_factor', 'expected': 0.01,
'error_msg': '{key} required to be at most {expected}'}, {'ok':  at 0x7ff1630b9a28>, 'check_on_use': True, 'needed_on_create':
True, 'key': 'autovacuum_analyze_scale_factor', 'expected': 0.075,
'error_msg': '{key} required to be at most {expected}'}, {'ok':  at 0x7ff163099410>, 'check_on_use': True, 'needed_on_create':
True, 'key': 'autovacuum_max_workers', 'expected': 6, 'error_msg': '{key}
required to be at least {expected}'}, {'ok':  at
0x7ff163099488>, 'check_on_use': True, 'needeOperationalError: FATAL:
*password
authentication failed for user "engine"*
FATAL:  password authentication failed for user "engine"
2018-05-15 03:40:11,408-0400 DEBUG otopi.context
context.dumpEnvironment:869 ENV BASE/error=bool:'False'
2018-05-15 03:40:11,417-0400 DEBUG otopi.context
context.dumpEnvironment:869 ENV CORE/failOnPrioOverride=bool:'True'
2018-05-15 03:40:11,441-0400 DEBUG otopi.context
context.dumpEnvironment:869 ENV
OVESETUP_CORE/failOnDulicatedConstant=bool:'False'
2018-05-15 03:40:11,457-0400 DEBUG otopi.context
context.dumpEnvironment:869 ENV
OVESETUP_PROVISIONING/postgresExtraConfigItems=tuple:'({'ok':  at 0x7ff1630b9578>, 'check_on_use': True, 'needed_on_create':
True, 'key': 'autovacuum_vacuum_scale_factor', 'expected': 0.01,
'error_msg': '{key} required to be at most {expected}'}, {'ok':  at 0x7ff1630b9a28>, 'check_on_use': True, 'needed_on_create':
True, 'key': 'autovacuum_analyze_scale_factor', 'expected': 0.075,
'error_msg': '{key} required to be at most {expected}'}, {'ok':  at 0x7ff163099410>, 'check_on_use': True, 'needed_on_create':
True, 'key': 'autovacuum_max_workers', 'expected': 6, 'error_msg': '{key}
required to be at least {expected}'}, {'ok':  at
0x7ff163099488>, 'check_on_use': True, 'needed_on_create': True, 'key':
'maintenance_work_mem', 'expected': 65536, 'error_msg': '{key} required to
be at least {expected}', 'useQueryForValue': True}, {'ok':  at 0x7ff163099500>, 'check_on_use': True, 'needed_on_create':
True, 'key': 'work_mem', 'expected': 8192, 'error_msg': '{key} required to
be at least {expected}', 'useQueryForValue': True})'
raise RuntimeError("SIG%s" % signum)
RuntimeError: SIG2
raise RuntimeError("SIG%s" % signum)
RuntimeError: SIG2
2018-05-15 03:41:19,888-0400 ERROR otopi.context
context._executeMethod:152 *Failed
to execute stage 'Misc configuration': SIG2*
2018-05-15 03:41:19,993-0400 DEBUG otopi.context
context.dumpEnvironment:869 ENV BASE/error=bool:'True'
2018-05-15 03:41:19,993-0400 DEBUG otopi.context
context.dumpEnvironment:869 ENV BASE/exceptionInfo=list:'[(, RuntimeError('SIG2',), )]'
2018-05-15 03:41:20,033-0400 DEBUG otopi.context
context.dumpEnvironment:869 ENV BASE/error=bool:'True'
2018-05-15 03:41:20,033-0400 DEBUG otopi.context
context.dumpEnvironment:869 ENV BASE/exceptionInfo=list:'[(, RuntimeError('SIG2',), )]'
2018-05-15 03:41:20,038-0400 DEBUG otopi.context
context.dumpEnvironment:869 ENV CORE/failOnPrioOverride=bool:'True'
2018-05-15 03:41:20,056-0400 DEBUG otopi.context
context.dumpEnvironment:869 ENV
OVESETUP_CORE/failOnDulicatedConstant=bool:'False'
2018-05-15 03:41:20,069-0400 DEBUG otopi.context
context.dumpEnvironment:869 ENV
OVESETUP_PROVISIONING/postgresExtraConfigItems=tuple:'({'ok':  at 0x7ff1630b9578>, 'check_on_use': True, 'needed_on_create':
True, 'key': 'autovacuum_vacuum_scale_factor', 'expected': 0.01,
'error_msg': '{key} required to be at most {expected}'}, {'ok':  at 0x7ff1630b9a28>, 'check_on_use': True, 'needed_on_create':
True, 'key': 'autovacuum_analyze_scale_factor', 'expected': 0.075,
'error_msg': '{key} required to be at most {expected}'}, {'ok':  at 0x7ff163099410>, 'check_on_use': True, 'needed_on_create':
True, 'key': 'autovacuum_max_workers', 

[ovirt-users] Re: Remote DB: How do you set server_version?

2018-05-15 Thread Yedidyah Bar David
On Thu, May 3, 2018 at 9:53 AM, Yedidyah Bar David  wrote:
> On Thu, May 3, 2018 at 12:13 AM, Roy Golan  wrote:
>>
>>
>> On Wed, 2 May 2018 at 23:27 Jamie Lawrence 
>> wrote:
>>>
>>>
>>> I've been down this road. Postgres won't lie about its version for you.
>>> If you want to do this, you have to patch the Ovirt installer[1]. I stopped
>>> trying to use my PG cluster at some point - the relationship between the
>>> installer and the product, combined with the overly restrictive requirements
>>> baked into the installer[2], makes doing so an ongoing hassle. So I treat
>>> Ovirt's PG as a black box; disappointing, considering that we are a very
>>> heavy PG shop with a lot of expertise and automation I can't use with Ovirt.
>
> Sorry about that, but I'm not sure it's such a bad choice.
>
>>>
>>> If nothing has changed (my notes are from a few versions ago), everything
>>> you need to correct is in
>>>
>>>
>>> /usr/share/ovirt-engine/setup/ovirt_engine_setup/engine_common/constants.py
>>>
>>> Aside from the version, you'll also have to make the knobs for vacuuming
>>> match those of your current installation, and I think there was another
>>> configurable for something else I'm not remembering right now.
>>>
>>> Be aware that doing so is accepting an ongoing commitment to monkeying
>>> with the installer a lot. At one time I thought doing so was the right
>>> tradeoff, but it turns out I  was wrong.
>>>
>>> -j
>>>
>>> [1] Or you could rebuild PG with a fake version. That option was
>>> unavailable here.
>>> [2] Not criticizing, just stating a technical fact. How folks apportion
>>> their QA resources is their business.
>>>
>>> > On May 2, 2018, at 12:49 PM, ~Stack~  wrote:
>>> >
>>> > Greetings,
>>> >
>>> > Exploring hosting my engine and ovirt_engine_history db's on my
>>> > dedicated PostgreSQL server.
>>> >
>>> > This is a 9.5 install on a beefy box from the postgresql.org yum repos
>>> > that I'm using for other SQL needs too. 9.5.12 to be exact. I set up the
>>> > database just as the documentation says and I'm doing a fresh install of
>>> > my engine-setup.
>>> >
>>> > During the install, right after I give it the details for the remote I
>>> > get this error:
>>> > [ ERROR ] Please set:
>>> >  server_version = 9.5.9
>>> > in postgresql.conf on 'None'. Its location is usually
>>> > /var/lib/pgsql/data , or somewhere under /etc/postgresql* .
>>> >
>>> > Huh?
>>> >
>>
>>
>> Yes, it's annoying, and I think +Yaniv Dary opened a bug for it after both of
>> us got mad at it. Yaniv?
>
> Yaniv did, and I asked for details. Comments are welcome:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1573091

Also filed now a bug about the text:

https://bugzilla.redhat.com/show_bug.cgi?id=1578276

Feel free to comment there, and/or on the patch linked to it. Thanks.
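
As a side note on the relaxed x.y check mentioned below, a minimal shell
sketch of such a comparison (the values are assumptions):

  required=9.5.9
  installed=$(su - postgres -c 'psql -At -c "SHOW server_version;"')
  [ "${installed%.*}" = "${required%.*}" ] && echo "major.minor match"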

>
> Of course, if it's so annoying, and we are so confident in PG's compatibility
> inside z-stream, we can simply relax the test by checking only x.y while changing
> no other functionality, and discuss something stronger later on (if at all).
>
> Pushed this for now, didn't verify:
>
> https://gerrit.ovirt.org/90866
>
> Ideally, "verification" isn't merely checking that it works as expected, but
> also coming up with means to enhance our confidence that it's indeed safe.
>
> But it might not be such a big risk to merge this anyway, even for 4.2.
>
>>
>> Meanwhile let us know if you were able to patch constants.py as suggested.
>>
>>> > Um. OK.
>>> > $ grep ^server_version postgresql.conf
>>> > server_version = 9.5.9
>>> >
>>> > $ systemctl restart postgresql-9.5.service
>>> >
>>> > LOG:  syntax error in file "/var/lib/pgsql/9.5/data/postgresql.conf"
>>> > line 33, near token ".9"
>>> > FATAL:  configuration file "/var/lib/pgsql/9.5/data/postgresql.conf"
>>> > contains errors
>>> >
>>> >
>>> > Well that didn't work. Let's try something else.
>>> >
>>> > $ grep ^server_version postgresql.conf
>>> > server_version = 9.5.9
>>> >
>>> > $ systemctl restart postgresql-9.5.service
>>> > LOG:  parameter "server_version" cannot be changed
>>> > FATAL:  configuration file "/var/lib/pgsql/9.5/data/postgresql.conf"
>>> > contains errors
>>> >
>>> > Whelp. That didn't work either. I can't seem to find anything in the
>>> > oVirt docs on setting this.
>>> >
>>> > How am I supposed to do this?
>>> >
>>> > Thanks!
>>> > ~Stack~
>>> >
>>
>
>
>
> --
> Didi



-- 
Didi

[ovirt-users] Re: Does Mailing List DEAD ?

2018-05-15 Thread Duck
Quack,

On 05/15/2018 04:15 PM, Yedidyah Bar David wrote:
> On Tue, May 15, 2018 at 5:26 AM,   wrote:
>> It seems that since the upgrade of the mailing list I do not receive any mail
>> from the oVirt mailing list; I also tried to post and did not receive anything?
> 
> It's not dead, but there are (/were?) some problems. Adding Duck.

I see mails from/to Paul without any error.

Example:
May 15 03:16:24 mail postfix/smtp[6389]: 160673FC3B:
to=,
relay=gmail-smtp-in.l.google.com[64.233.168.27]:25, delay=36,
delays=34/0.02/0.57/1.2, dsn=2.0.0, status=sent (250 2.0.0 OK 1526368584
d9-v6si4100019ote.57 - gsmtp)

Could you check your spambox?

There were and are (unfortunately, WIP) problems, but nothing related to
sending and receiving. See the "Mailing-Lists upgrade" thread, and one
or two others.

\_o<





[ovirt-users] Re: Ovirt Cpu pinning and Oracle database license with hardware partitioning.

2018-05-15 Thread Gianluca Cecchi
On Mon, May 14, 2018 at 6:55 PM, Vinícius Ferrão  wrote:

> AFAIK this is the way to keep Oracle quiet:
> http://captainkvm.com/2012/10/virtualizing-oracle-11g-on-rhev-3-0-netapp/


Hi,
that document, even if dated, simply confirms in its contents what we
already wrote:

"
Oracle considers RHEV and most other virtualization platforms to be “soft”
partitioning. The only virtualization platform that Oracle supports under
hard partitioning is their own OVM. Really, all they are really doing is
pinning CPUs to a VM, and therefore the Oracle database, but you can do
that with any hypervisor… Honestly, I can’t roll my eyes hard enough to
indicate my disdain..
"

And it describes techniques, using NetApp, to easily reproduce the
environment on physical hardware in case you have problems with Oracle in
your virtualized RDBMS.

There are also collaborations between vendors to simplify management in
case of problems and the need to open an SR:
https://www.vmware.com/it/support/policies/oracle-support.html

There is also an official Oracle document related to VMware, Document ID
249212.1, and I think it could apply to other virtualization technologies too.

HIH,
Gianluca


[ovirt-users] Re: Ovirt Cpu pinning and Oracle database license with hardware partitioning.

2018-05-15 Thread Gianluca Cecchi
On Mon, May 14, 2018 at 6:04 PM, Karli Sjöberg  wrote:

>
>
> Not so fair in my opinion.
>
>
> LOL, of course it's not fair, it's Oracle :D
>
> /K
>
>
Nothing is written in stone, so even Oracle can change its mindset
when it is not corroborated by real technical arguments.
Besides, a document with "Educational purpose only" in its footer is
already clear enough about what it contains.


[ovirt-users] Re: Scheduling a Snapshot of a Gluster volume not working within Ovirt

2018-05-15 Thread Mark Betham
Hi Sahina,

Many thanks for your response.

I have now raised a bug against this issue.  For your reference it is bug 
#1578257 - https://bugzilla.redhat.com/show_bug.cgi?id=1578257 


I will enable debugging today as requested and attach the logs to the bug report.
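
In the meantime, a cron-based fallback can drive the snapshots directly via the
gluster CLI - a sketch, with the volume name taken from the log below and the
interval assumed:

  # /etc/cron.d/gluster-snap
  */30 * * * * root /usr/sbin/gluster snapshot create glustervol0-sched glustervol0

(gluster appends a timestamp to the snapshot name by default, so repeated runs
get unique names.)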

Many thanks,

Mark Betham


> On 14 May 2018, at 12:34, Sahina Bose  wrote:
> 
> 
> 
> On Mon, May 14, 2018 at 4:07 PM, Mark Betham wrote:
> Hi Sahina,
> 
> Many thanks for your response and apologies for my delay in getting back to 
> you.
> 
> 
>> How was the schedule created - is this using the Remote Data Sync Setup 
>> under Storage domain?
> 
> 
> Ovirt is configured in ‘Gluster’ mode, no VM support.  When snapshotting we 
> are taking a snapshot of the full Gluster volume.
> 
> To configure the snapshot schedule I did the following;
> Login to Ovirt WebUI
> From left hand menu select ‘Storage’ and ‘Volumes'
> I then selected the volume I wanted to snapshot by clicking on the link 
> within the ‘Name’ column
> From here I selected the ‘Snapshots’ tab
> From the top menu options I selected the drop down ‘Snapshot’
> From the drop down options I selected ‘New’
> A new window appeared titled ‘Create/Schedule Snapshot’
> I entered a snapshot prefix and description into the available fields and 
> selected the ‘Schedule’ page
> On the schedule page I selected ‘Minute’ from the ‘Recurrence’ drop down
> Set ‘Interval’ to every ’30’ minutes
> Changed timezone to ‘Europe/London=(GMT+00:00) London Standard Time’
> Left value in ‘Start Schedule by’ at default value
> Set schedule to ‘No End Date’
> Click 'OK'
> 
> Interestingly I get the following message on the ‘Create/Schedule Snapshot’ 
> page before clicking on OK;
> Frequent creation of snapshots would overload the cluster
> Gluster CLI based snapshot scheduling is enabled. It would be disabled once 
> volume snapshots scheduled from UI.
> 
> What is interesting is that I have not enabled 'Gluster CLI based snapshot 
> scheduling’.
> 
> After clicking OK I am returned to the Volume Snapshots tab.
> 
> From this point I get no snapshots created according to the schedule set.
> 
> At the time of clicking OK in the WebUI to enable the schedule I get the 
> following in the engine log;
> 2018-05-14 09:24:11,068Z WARN  
> [org.ovirt.engine.core.dal.job.ExecutionMessageDirector] (default task-128) 
> [85d0b16f-2c0c-464f-bbf1-682c062a4871] The message key 
> 'ScheduleGlusterVolumeSnapshot' is missing from 'bundles/ExecutionMessages'
> 2018-05-14 09:24:11,090Z INFO  
> [org.ovirt.engine.core.bll.gluster.ScheduleGlusterVolumeSnapshotCommand] 
> (default task-128) [85d0b16f-2c0c-464f-bbf1-682c062a4871] Before acquiring 
> and wait lock 
> 'EngineLock:{exclusiveLocks='[712da1df-4c11-405a-8fb6-f99aebc185c1=GLUSTER_SNAPSHOT]',
>  sharedLocks=''}'
> 2018-05-14 09:24:11,090Z INFO  
> [org.ovirt.engine.core.bll.gluster.ScheduleGlusterVolumeSnapshotCommand] 
> (default task-128) [85d0b16f-2c0c-464f-bbf1-682c062a4871] Lock-wait acquired 
> to object 
> 'EngineLock:{exclusiveLocks='[712da1df-4c11-405a-8fb6-f99aebc185c1=GLUSTER_SNAPSHOT]',
>  sharedLocks=''}'
> 2018-05-14 09:24:11,111Z INFO  
> [org.ovirt.engine.core.bll.gluster.ScheduleGlusterVolumeSnapshotCommand] 
> (default task-128) [85d0b16f-2c0c-464f-bbf1-682c062a4871] Running command: 
> ScheduleGlusterVolumeSnapshotCommand internal: false. Entities affected :  
> ID: 712da1df-4c11-405a-8fb6-f99aebc185c1 Type: GlusterVolumeAction group 
> MANIPULATE_GLUSTER_VOLUME with role type ADMIN
> 2018-05-14 09:24:11,148Z INFO  
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (default task-128) [85d0b16f-2c0c-464f-bbf1-682c062a4871] EVENT_ID: 
> GLUSTER_VOLUME_SNAPSHOT_SCHEDULED(4,134), Snapshots scheduled on volume 
> glustervol0 of cluster NOSS-LD5.
> 2018-05-14 09:24:11,156Z INFO  
> [org.ovirt.engine.core.bll.gluster.ScheduleGlusterVolumeSnapshotCommand] 
> (default task-128) [85d0b16f-2c0c-464f-bbf1-682c062a4871] Lock freed to 
> object 
> 'EngineLock:{exclusiveLocks='[712da1df-4c11-405a-8fb6-f99aebc185c1=GLUSTER_SNAPSHOT]',
>  sharedLocks=''}'
> 
>> Could you please provide the engine.log from the time the schedule was setup 
>> and including the time the schedule was supposed to run?
> 
> 
> The original log file is no longer present, so I removed the old schedule and 
> created a new schedule, as per the instructions above, earlier today.  I have 
> therefore attached the engine log from today.  The new schedule, which was set 
> to run every 30 minutes, has not produced any snapshots after around 2 hours.
> 
> Please let me know if you require any further information.
> 
> 
> I see the following messages in logs: 
> 2018-05-14 04:30:00,018Z ERROR [org.ovirt.engine.core.utils.timer.JobWrapper] 
> (QuartzOvirtDBScheduler9) [d0c31a9] Failed to invoke scheduled method 
> onTimer: null
> 
> Can you 

[ovirt-users] Re: Does Mailing List DEAD ?

2018-05-15 Thread Yedidyah Bar David
On Tue, May 15, 2018 at 5:26 AM,   wrote:
> It seems that since the upgrade of the mailing list I do not receive any mail
> from the oVirt mailing list; I also tried to post and did not receive anything?

It's not dead, but there are (/were?) some problems. Adding Duck.

Best regards,
-- 
Didi


[ovirt-users] Re: 4.2.3 -- Snapshot in GUI Issue

2018-05-15 Thread Idan Shaby
Hi Zack,

It's there: under a specific VM, in the Snapshots subtab, select the specific
snapshot, and there you have the Preview/Commit buttons on the right.
If you need any further help, don't hesitate to ask.




Regards,
Idan

On Sun, May 13, 2018 at 3:23 AM, Zack Gould  wrote:

> Is there no way to restore a snapshot via the GUI on 4.2 anymore?
>
> I can take a snapshot, but there's no restore option. Since the new GUI
> design, it appears to be missing?
>