[ovirt-users] Re: Gluster setup Problem

2019-02-25 Thread Parth Dhanjal
Hey Matthew!

Can you please provide the following to help debug the issue you are facing
(a quick way to collect them is sketched below the list)?
1. oVirt and gdeploy version
2. /var/log/messages file
3. /root/.gdeploy file
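
If it helps, something like this gathers all three in one go (a rough sketch only,
assuming an RPM-based install; adjust paths if yours differ):

rpm -qa | grep -Ei 'ovirt|gdeploy'        # versions of the installed oVirt/gdeploy packages
cp /var/log/messages /tmp/messages-$(hostname)
cp /root/.gdeploy /tmp/gdeploy-$(hostname)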

On Mon, Feb 25, 2019 at 1:23 PM Parth Dhanjal  wrote:

> Hey Matthew!
>
> Can you please tell me which oVirt and gdeploy versions you have installed?
>
> Regards
> Parth Dhanjal
>
> On Mon, Feb 25, 2019 at 12:56 PM Sahina Bose  wrote:
>
>> +Gobinda Das +Dhanjal Parth can you please check?
>>
>> On Fri, Feb 22, 2019 at 11:52 PM Matthew Roth  wrote:
>> >
>> > I have 3 servers: node 1 has a 3 TB /dev/sda, node 2 a 3 TB /dev/sdb, and
>> node 3 a 3 TB /dev/sdb.
>> >
>> > I start the Gluster deployment process. I change node 1 to sda and
>> all the other ones to sdb. I get no errors; however,
>> >
>> > when I get to
>> > "Creating physical Volume", all it does is spin forever and doesn't get any
>> further. I can leave it there for 5 hours and it doesn't go anywhere.
>> >
>> > #gdeploy configuration generated by cockpit-gluster plugin
>> > [hosts]
>> > cmdnode1.cmd911.com
>> > cmdnode2.cmd911.com
>> > cmdnode3.cmd911.com
>> >
>> > [script1:cmdnode1.cmd911.com]
>> > action=execute
>> > ignore_script_errors=no
>> > file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sda -h
>> cmdnode1.cmd911.com, cmdnode2.cmd911.com, cmdnode3.cmd911.com
>> >
>> > [script1:cmdnode2.cmd911.com]
>> > action=execute
>> > ignore_script_errors=no
>> > file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
>> cmdnode1.cmd911.com, cmdnode2.cmd911.com, cmdnode3.cmd911.com
>> >
>> > [script1:cmdnode3.cmd911.com]
>> > action=execute
>> > ignore_script_errors=no
>> > file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
>> cmdnode1.cmd911.com, cmdnode2.cmd911.com, cmdnode3.cmd911.com
>> >
>> > [disktype]
>> > raid6
>> >
>> > [diskcount]
>> > 12
>> >
>> > [stripesize]
>> > 256
>> >
>> > [service1]
>> > action=enable
>> > service=chronyd
>> >
>> > [service2]
>> > action=restart
>> > service=chronyd
>> >
>> > [shell2]
>> > action=execute
>> > command=vdsm-tool configure --force
>> >
>> > [script3]
>> > action=execute
>> > file=/usr/share/gdeploy/scripts/blacklist_all_disks.sh
>> > ignore_script_errors=no
>> >
>> > [pv1:cmdnode1.cmd911.com]
>> > action=create
>> > devices=sda
>> > ignore_pv_errors=no
>> >
>> > [pv1:cmdnode2.cmd911.com]
>> > action=create
>> > devices=sdb
>> > ignore_pv_errors=no
>> >
>> > [pv1:cmdnode3.cmd911.com]
>> > action=create
>> > devices=sdb
>> > ignore_pv_errors=no
>> >
>> > [vg1:cmdnode1.cmd911.com]
>> > action=create
>> > vgname=gluster_vg_sda
>> > pvname=sda
>> > ignore_vg_errors=no
>> >
>> > [vg1:cmdnode2.cmd911.com]
>> > action=create
>> > vgname=gluster_vg_sdb
>> > pvname=sdb
>> > ignore_vg_errors=no
>> >
>> > [vg1:cmdnode3.cmd911.com]
>> > action=create
>> > vgname=gluster_vg_sdb
>> > pvname=sdb
>> > ignore_vg_errors=no
>> >
>> > [lv1:cmdnode1.cmd911.com]
>> > action=create
>> > poolname=gluster_thinpool_sda
>> > ignore_lv_errors=no
>> > vgname=gluster_vg_sda
>> > lvtype=thinpool
>> > size=1005GB
>> > poolmetadatasize=5GB
>> >
>> > [lv2:cmdnode2.cmd911.com]
>> > action=create
>> > poolname=gluster_thinpool_sdb
>> > ignore_lv_errors=no
>> > vgname=gluster_vg_sdb
>> > lvtype=thinpool
>> > size=1005GB
>> > poolmetadatasize=5GB
>> >
>> > [lv3:cmdnode3.cmd911.com]
>> > action=create
>> > poolname=gluster_thinpool_sdb
>> > ignore_lv_errors=no
>> > vgname=gluster_vg_sdb
>> > lvtype=thinpool
>> > size=41GB
>> > poolmetadatasize=1GB
>> >
>> > [lv4:cmdnode1.cmd911.com]
>> > action=create
>> > lvname=gluster_lv_engine
>> > ignore_lv_errors=no
>> > vgname=gluster_vg_sda
>> > mount=/gluster_bricks/engine
>> > size=100GB
>> > lvtype=thick
>> >
>> > [lv5:cmdnode1.cmd911.com]
>> > action=create
>> > lvname=gluster_lv_data
>> > ignore_lv_errors=no
>> > vgname=gluster_vg_sda
>> > mount=/gluster_bricks/data
>> > lvtype=thinlv
>> > poolname=gluster_thinpool_sda
>> > virtualsize=500GB
>> >
>> > [lv6:cmdnode1.cmd911.com]
>> > action=create
>> > lvname=gluster_lv_vmstore
>> > ignore_lv_errors=no
>> > vgname=gluster_vg_sda
>> > mount=/gluster_bricks/vmstore
>> > lvtype=thinlv
>> > poolname=gluster_thinpool_sda
>> > virtualsize=500GB
>> >
>> > [lv7:cmdnode2.cmd911.com]
>> > action=create
>> > lvname=gluster_lv_engine
>> > ignore_lv_errors=no
>> > vgname=gluster_vg_sdb
>> > mount=/gluster_bricks/engine
>> > size=100GB
>> > lvtype=thick
>> >
>> > [lv8:cmdnode2.cmd911.com]
>> > action=create
>> > lvname=gluster_lv_data
>> > ignore_lv_errors=no
>> > vgname=gluster_vg_sdb
>> > mount=/gluster_bricks/data
>> > lvtype=thinlv
>> > poolname=gluster_thinpool_sdb
>> > virtualsize=500GB
>> >
>> > [lv9:cmdnode2.cmd911.com]
>> > action=create
>> > lvname=gluster_lv_vmstore
>> > ignore_lv_errors=no
>> > vgname=gluster_vg_sdb
>> > mount=/gluster_bricks/vmstore
>> > lvtype=thinlv
>> > poolname=gluster_thinpool_sdb
>> > virtualsize=500GB
>> >
>> > [lv10:cmdnode3.cmd911.com]
>> > action=create
>> > lvname=gluster_lv_engine
>> > ignore_lv_errors=no
>> > 
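
When the "Creating physical Volume" step hangs like this, it can also help to run
the same steps by hand on one node and watch where they block (a rough sketch only;
use /dev/sda on node 1 and /dev/sdb on the other two, and note that pvcreate is
destructive):

lsblk /dev/sda                 # confirm the disk is present and idle
multipath -ll                  # a stale multipath map over the disk can make LVM calls hang
wipefs /dev/sda                # with no options this only lists existing signatures
pvcreate -v /dev/sda           # roughly what the [pv1] section does; see where it blocks

If pvcreate itself hangs, multipath claiming the disk or leftover metadata on it is
worth ruling out first.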

[ovirt-users] Re: Adding user to internalz with webadmin?

2019-02-25 Thread Lucie Leistnerova

Hi Juhani,


On 2/26/19 6:21 AM, Juhani Rautiainen wrote:

Hi!

How do you add users to internalz with webadmin? I mean, there is an Add
button in Administration->Users which opens the 'Add Users and
Groups' window. The window has 'Add' and 'Add and close' buttons at
the bottom. I just can't figure out how they work. Pushing either of
those Add buttons just closes the window.
You can't add a user to the internal domain directly from webadmin. Use the 
ovirt-aaa-jdbc-tool tool on the engine:


# ovirt-aaa-jdbc-tool user add test
# ovirt-aaa-jdbc-tool user password-reset test 
--password-valid-to="2020-01-01 00:00:00Z"


See 
https://www.ovirt.org/documentation/admin-guide/chap-Users_and_Roles.html


Adding a user in webadmin works only for already-created users, to see/set 
permissions for them. The dialog has a Search row; when you press Go with 
internal selected in the first selectbox, all users will be displayed 
in the table below. You check some and then press Add.
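
A couple of related commands that may help verify the result (same example user
'test' as above):

# ovirt-aaa-jdbc-tool user show test
# ovirt-aaa-jdbc-tool query --what=user

The first shows the account details, the second lists the users in the internal 
domain.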



-Juhani
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Y4GFUACEHEEHYPHCIXDKX3UGRHB5FD23/

Best regards,

--
Lucie Leistnerova
Quality Engineer, QE Cloud, RHVM
Red Hat EMEA

IRC: lleistne @ #rhev-qe
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PBAOK73H3LPPIEYFSO5JGVWL5VMCQAPI/


[ovirt-users] Re: Tracking down high writes in GlusterFS volume

2019-02-25 Thread Krutika Dhananjay
On Fri, Feb 15, 2019 at 12:30 AM Jayme  wrote:

> Running an oVirt 4.3 HCI 3-way replica cluster with SSD backed storage.
> I've noticed that my SSD writes (smart Total_LBAs_Written) are quite high
> on one particular drive.  Specifically I've noticed one volume is much much
> higher total bytes written than others (despite using less overall space).
>

Writes are higher on one particular volume? Or did one brick witness more
writes than its two replicas within the same volume? Could you share the
volume info output of the affected volume plus the name of the affected
brick if at all the issue is with one single brick?

Also, did you check if the volume was undergoing any heals (`gluster volume
heal  info`)?
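
For per-brick write accounting, gluster's built-in profiler may also help here (a
sketch; replace VOLNAME, and note that profiling adds some overhead while enabled):

gluster volume profile VOLNAME start
gluster volume profile VOLNAME info     # per-brick read/write counts and block sizes
gluster volume profile VOLNAME stop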

-Krutika

My volume is writing over 1TB of data per day (by my manual calculation,
> and with glusterfs profiling) and wearing my SSDs quickly, how can I best
> determine which VM or process is at fault here?
>
> There are 5 low use VMs using the volume in question.  I'm attempting to
> track iostats on each of the vm's individually but so far I'm not seeing
> anything obvious that would account for 1TB of writes per day that the
> gluster volume is reporting.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OZHZXQS4GUPPJXOZSBTO6X5ZL6CATFXK/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VFI6HEFQ3JRB47VQ2CNIAUAQDUKTKPT6/


[ovirt-users] Re: VM poor iops

2019-02-25 Thread Sahina Bose
On Fri, Sep 14, 2018 at 3:35 PM Paolo Margara 
wrote:

> Hi,
>
> but performance.strict-o-direct is not one of the option enabled by
> gdeploy during installation because it's supposed to give some sort of
> benefit?
>

See
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VS764WDBR2PLGGDZVRGBEM6OJCAFEM3R/
on why the option is set.
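
For reference, toggling a single option from the CLI looks roughly like this
('data' is just a placeholder for the real volume name):

gluster volume get data performance.strict-o-direct    # show the current value
gluster volume set data performance.strict-o-direct off
gluster volume reset data performance.strict-o-direct  # back to the default

As far as I know, the "Optimize for Virt Store" action in the UI essentially
applies a predefined group of such settings to the volume.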


> Paolo
>
> On 14/09/2018 11:34, Leo David wrote:
>
> performance.strict-o-direct:  on
> This was the bloody option that created the bottleneck! It was ON.
> So now i get an average of 17k random writes,  which is not bad at all.
> Below,  the volume options that worked for me:
>
> performance.strict-write-ordering: off
> performance.strict-o-direct: off
> server.event-threads: 4
> client.event-threads: 4
> performance.read-ahead: off
> network.ping-timeout: 30
> performance.quick-read: off
> cluster.eager-lock: enable
> performance.stat-prefetch: on
> performance.low-prio-threads: 32
> network.remote-dio: off
> user.cifs: off
> performance.io-cache: off
> server.allow-insecure: on
> features.shard: on
> transport.address-family: inet
> storage.owner-uid: 36
> storage.owner-gid: 36
> nfs.disable: on
>
> If any other tweaks can be done,  please let me know.
> Thank you !
>
> Leo
>
>
> On Fri, Sep 14, 2018 at 12:01 PM, Leo David  wrote:
>
>> Hi Everyone,
>> So i have decided to take out all of the gluster volume custom options,
>> and add them one by one while activating/deactivating the storage domain &
>> rebooting one vm after each  added option :(
>>
>> The default options that giving bad iops ( ~1-2k) performance are :
>>
>> performance.stat-prefetch on
>> cluster.eager-lock enable
>> performance.io-cache off
>> performance.read-ahead off
>> performance.quick-read off
>> user.cifs off
>> network.ping-timeout 30
>> network.remote-dio off
>> performance.strict-o-direct on
>> performance.low-prio-threads 32
>>
>> After adding only:
>>
>>
>> server.allow-insecure on
>> features.shard on
>> storage.owner-gid 36
>> storage.owner-uid 36
>> transport.address-family inet
>> nfs.disable on
>> The performance increased to 7k-10k iops.
>>
>> The problem is that I don't know if that's sufficient (maybe it can be
>> improved further), or, even worse, there might be a chance of running into
>> different volume issues by taking out some really needed volume options...
>>
>> If I had handy the default options that are applied to volumes as
>> optimization in a 3-way replica, I think that might help...
>>
>> Any thoughts ?
>>
>> Thank you very much !
>>
>>
>> Leo
>>
>>
>>
>>
>>
>> On Fri, Sep 14, 2018 at 8:54 AM, Leo David  wrote:
>>
>>> Any thoughts on these? Is that UI optimization only a gluster volume
>>> custom configuration? If so, I guess it can be done from the CLI, but I am not
>>> aware of the correct optimized parameters for the volume.
>>>
>>>
>>> On Thu, Sep 13, 2018, 18:25 Leo David  wrote:
>>>
 Thank you Jayme. I am trying to do this, but I am getting an error,
 since the volume is replica 1 distribute, and it seems that oVirt expects a
 replica 3 volume.
 Would it be another way to optimize the volume in this situation ?


 On Thu, Sep 13, 2018, 17:49 Jayme  wrote:

> I had similar problems until I clicked "optimize volume for vmstore"
> in the admin GUI for each data volume.  I'm not sure if this is what is
> causing your problem here but I'd recommend trying that first.  It is
> supposed to be optimized by default but for some reason my ovirt 4.2 
> cockpit
> deploy did not apply those settings automatically.
>
> On Thu, Sep 13, 2018 at 10:21 AM Leo David  wrote:
>
>> Hi Everyone,
>> I am encountering the following issue on a single instance
>> hyper-converged 4.2 setup.
>> The following fio test was done:
>>
>> fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1
>> --name=test --filename=test --bs=4k --iodepth=64 --size=4G
>> --readwrite=randwrite
>> The results are very poor doing the test inside of a vm with a
>> preallocated disk on the ssd store:  ~2k IOPS
>> Same test done on the oVirt node directly on the mounted ssd_lvm:
>> ~30k IOPS
>> Same test done, this time on the gluster mount path: ~20K IOPS
>>
>> What could be the issue that makes the VMs have such slow disk performance
>> (2k on SSD!!)?
>> Thank you very much !
>>
>>
>>
>>
>> --
>> Best regards, Leo David
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FCCIZFRWINWWLQSYWRPF6HNKPQMZB2XP/
>>
>
>>
>>
>> --
>> Best regards, Leo David
>>
>
>
>
> --
> Best 

[ovirt-users] Re: Add compute nodes offline

2019-02-25 Thread xil...@126.com
Ok, thank you very much. I use ovirt-node



xil...@126.com
 
From: Yedidyah Bar David
Date: 2019-02-25 16:34
To: zeng xiansheng
CC: users
Subject: Re: [ovirt-users] Add compute nodes offline
On Mon, Feb 25, 2019 at 9:42 AM  wrote:
>
> Hi, when I add compute nodes, I intend to add them to the engine without 
> connecting to the external network, and I get:
> An error has occurred during installation of Host node5.test.com: Failed to 
> execute stage 'Environment customization': Cannot retrieve metalink for 
> repository:
> Please verify its path and try again. Do I need to manually download some 
> offline packages in this case?
 
Are you using ovirt-node, or plain os (el7)?
 
The above error seems to be a result of bad yum repo file(s).
 
You can simply remove it/them.
 
If you use node, you should be set.
 
If plain OS, you can try installing beforehand 'ovirt-host', I think
this should be enough. If it's not, please check/share host-deploy
logs - you should find there which packages are missing.
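 
A rough sketch of that preparation on a plain EL7 host (the repo id is only an
example; disable whichever repo fails on the metalink):
 
yum install -y ovirt-host                    # while the host still has access, or from a local mirror
yum repolist -v | grep -i metalink           # find which repo(s) rely on a metalink
yum-config-manager --disable <that-repo-id>  # from yum-utils; or remove its file from /etc/yum.repos.d/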
 
Obviously, the engine will then not be able to know if there are updates,
so upgrades will be up to you to handle.
 
Best regards,
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/35KEBYRR4MW3DRHK4KHDOWCZY63G6IPY/


[ovirt-users] Windows service applications migrated to the ovirt platform

2019-02-25 Thread xilazz
Hi, I have a physical machine running Windows applications that I want to 
migrate to the oVirt management platform.
May I ask if virt-p2v can be used for the migration?
Does ovirt itself have a migration tool to use?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TYLV4SXINDX7ZH2K7NWBG3SLF4BSZCXF/


[ovirt-users] Re: oVirt Node install failed

2019-02-25 Thread kiv
hosted-engine --vm-status


--== Host 1 status ==--

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : tmn1-ovirt1.corp.gseis.ru
Host ID: 1
Engine status  : {"health": "good", "vm": "up", "detail": 
"Up"}
Score  : 3400
stopped: False
Local maintenance  : False
crc32  : 8710cfae
local_conf_timestamp   : 1966901
Host timestamp : 1966901
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=1966901 (Tue Feb 26 08:10:06 2019)
host-id=1
score=3400
vm_conf_refresh_time=1966901 (Tue Feb 26 08:10:06 2019)
conf_on_shared_storage=True
maintenance=False
state=EngineUp
stopped=False


--== Host 2 status ==--

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : tmn1-ovirt2.corp.gseis.ru
Host ID: 2
Engine status  : {"reason": "vm not running on this host", 
"health": "bad", "vm": "down", "detail": "unknown"}
Score  : 3400
stopped: False
Local maintenance  : False
crc32  : 51eb3b47
local_conf_timestamp   : 322387
Host timestamp : 322387
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=322387 (Tue Feb 26 10:09:48 2019)
host-id=2
score=3400
vm_conf_refresh_time=322387 (Tue Feb 26 10:09:49 2019)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YBWA6TZCLJ6FGL6Z5A6GYOILBO4A2Y3Q/


[ovirt-users] Re: oVirt Node install failed

2019-02-25 Thread kiv
and some more logs:

2019-02-26 11:04:28,747+05 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-81) [] VM 'fa3a78de-b329-4a58-8f06-efd6b0e3c719' is migrating to VDS '45b1d017-16ee-4e89-97f9-c0b002427e5d'(ovirt2) ignoring it in the refresh until migration is done
2019-02-26 11:04:31,651+05 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-6) [] VM 'fa3a78de-b329-4a58-8f06-efd6b0e3c719' was reported as Down on VDS '45b1d017-16ee-4e89-97f9-c0b002427e5d'(ovirt2)
2019-02-26 11:04:31,652+05 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-6) [] START, DestroyVDSCommand(HostName = ovirt2, DestroyVmVDSCommandParameters:{hostId='45b1d017-16ee-4e89-97f9-c0b002427e5d', vmId='fa3a78de-b329-4a58-8f06-efd6b0e3c719', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 72ef70f2
2019-02-26 11:04:32,503+05 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-6) [] Failed to destroy VM 'fa3a78de-b329-4a58-8f06-efd6b0e3c719' because VM does not exist, ignoring
2019-02-26 11:04:32,503+05 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-6) [] FINISH, DestroyVDSCommand, log id: 72ef70f2
2019-02-26 11:04:32,503+05 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-6) [] VM 'fa3a78de-b329-4a58-8f06-efd6b0e3c719'(HostedEngine) was unexpectedly detected as 'Down' on VDS '45b1d017-16ee-4e89-97f9-c0b002427e5d'(ovirt2) (expected on '46af80a5-21e8-48ce-b92b-e18120f36093')
2019-02-26 11:04:32,503+05 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-6) [] Migration of VM 'HostedEngine' to host 'ovirt2' failed: VM destroyed during the startup.
2019-02-26 11:04:32,506+05 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-6) [] VM 'fa3a78de-b329-4a58-8f06-efd6b0e3c719'(HostedEngine) moved from 'MigratingFrom' --> 'Up'
2019-02-26 11:04:32,506+05 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-6) [] Adding VM 'fa3a78de-b329-4a58-8f06-efd6b0e3c719'(HostedEngine) to re-run list
2019-02-26 11:04:32,516+05 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (ForkJoinPool-1-worker-6) [] Rerun VM 'fa3a78de-b329-4a58-8f06-efd6b0e3c719'. Called from VDS 'ovirt1'
2019-02-26 11:04:32,520+05 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-88142) [] START, MigrateStatusVDSCommand(HostName = ovirt1, MigrateStatusVDSCommandParameters:{hostId='46af80a5-21e8-48ce-b92b-e18120f36093', vmId='fa3a78de-b329-4a58-8f06-efd6b0e3c719'}), log id: 5c346b33
2019-02-26 11:04:32,524+05 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-88142) [] FINISH, MigrateStatusVDSCommand, log id: 5c346b33
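
The matching view from the destination host side should be in these logs
(default paths), in case they are needed too:

tail -n 200 /var/log/ovirt-hosted-engine-ha/agent.log
tail -n 200 /var/log/ovirt-hosted-engine-ha/broker.log
tail -n 200 /var/log/vdsm/vdsm.log
tail -n 200 /var/log/libvirt/qemu/HostedEngine.log   # should show why qemu destroyed the VM during startup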
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/L4BAKX73MKQDYDYKTHJQJCGQZQKTGWPV/


[ovirt-users] Re: oVirt Node install failed

2019-02-25 Thread kiv
# hosted-engine --vm-status


--== Host 1 status ==--

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : ovirt1
Host ID: 1
Engine status  : {"health": "good", "vm": "up", "detail": 
"Up"}
Score  : 3400
stopped: False
Local maintenance  : False
crc32  : 69c4d342
local_conf_timestamp   : 1968825
Host timestamp : 1968824
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=1968824 (Tue Feb 26 08:42:10 2019)
host-id=1
score=3400
vm_conf_refresh_time=1968825 (Tue Feb 26 08:42:10 2019)
conf_on_shared_storage=True
maintenance=False
state=EngineUp
stopped=False


--== Host 2 status ==--

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : ovirt2
Host ID: 2
Engine status  : {"reason": "vm not running on this host", 
"health": "bad", "vm": "down", "detail": "unknown"}
Score  : 3400
stopped: False
Local maintenance  : False
crc32  : a1712694
local_conf_timestamp   : 324313
Host timestamp : 324313
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=324313 (Tue Feb 26 10:41:54 2019)
host-id=2
score=3400
vm_conf_refresh_time=324313 (Tue Feb 26 10:41:54 2019)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FGTJWIBRLEJSSACMSEGG32EARXW6FM2M/


[ovirt-users] Re: Python-SDK4- Issue following links

2019-02-25 Thread Don Dupuis
Thanks again. I will check it out in more detail. I have looked at it, but
some things weren't that clear. Most stuff in the SDK I have working, but I
get stumped sometimes.

Regards,
Don

On Mon, Feb 25, 2019 at 11:21 PM Joey Ma  wrote:

> Hi Don,
>
> So glad to see it worked. If you want to know more about how to use the
> Python SDK, the official documentation
> http://ovirt.github.io/ovirt-engine-sdk/ would introduce you the detailed
> guidance.
>
> If you have any other questions, please feel free to post here.
>
> Regards,
> Joey
>
> On Tue, Feb 26, 2019 at 12:50 PM Don Dupuis  wrote:
>
>> Joey
>>
>> That WORKED just great. I am still trying to understand the
>> services/service stuff. I was trying something similar earlier, but I was
>> using
>> connection.system_service().vnic_profiles_service().vnic_profile_service(),
>> I understand now from your code on what is going on and why was going down
>> the wrong road.
>>
>> Thanks again for your help
>>
>> Don
>>
>> On Mon, Feb 25, 2019 at 10:24 PM Joey Ma  wrote:
>>
>>>
>>> On Tue, Feb 26, 2019 at 1:00 AM Don Dupuis  wrote:
>>>
 Joey
 I am still not quite getting it. I am trying the below code and where
 it is commented out, I have tried different things, but I am no table to
 update the name of the object that I have found.

 networks_service = connection.system_service().networks_service()
 network = networks_service.list(
 search='name=ovirtmgmt and datacenter=%s-local' % HOSTNAME) [0]
 print ("Network name is %s" % network.name)
 print ("Network id is %s" % network.id)
 vnics = connection.follow_link(network.vnic_profiles)
 #vnicsprofile_service =
 connection.system_service().vnic_profile_service()
 #vnicprofile_service = vnic_profiles_service.vnic_profile_service(
 vnics.id)

>>>
>>> Hi Don,
>>>
>>> The var `vnics` is actually a List, so the statement `vnics.id` would
>>> produce errors.
>>>
>>> The following codes could successfully update the name of a vnicprofile,
>>> probably meets your needs.
>>>
>>> ```python
>>> vnics = connection.follow_link(network.vnic_profiles)
>>>
>>> # Iterate the var `vnics` would be better.
>>> vnic_service =
>>> connection.system_service().vnic_profiles_service().profile_service(vnics[0].id)
>>> vnic_service.update(
>>>  types.VnicProfile(
>>>  name='the-new-name',
>>>  )
>>> )
>>> vnic = vnic_service.get()
>>> print('new name', vnic.name)
>>> ```
>>>
>>> If the above codes could not work as expected, please let me know.
>>>
>>> Regards,
>>> Joey
>>>
>>> for dev in vnics:
 print ("Dev name is %s" % dev.name)
 #vnicprofile_service.update(types.VnicProfile(
 #   name='%s' % HOSTNAME,
 #   ),
 #)
 connection.close()

 ./update-vnic.py
 Network name is ovirtmgmt
 Network id is 740cae1f-c49f-4563-877a-5ce173e83be4
 Dev name is ovirtmgmt

 Thanks
 Don

 On Mon, Feb 25, 2019 at 12:06 AM Joey Ma  wrote:

> Hi Don,
>
> Please using `network.vnic_profiles` instead of `network.vnicprofiles`
> as the parameter of  `connection.follow_link`.
>
> Regards,
> Joey
>
>
> On Mon, Feb 25, 2019 at 9:22 AM Don Dupuis  wrote:
>
>> Hi
>>
>> I am trying to write some code to update the names of existing
>> vnicprofiles in ovirt-4.2. The problem I am having is trying to follow 
>> the
>> links to the vnicprofiles. Below is web info that I am trying to get:
>>
>> > href="/ovirt-engine/api/networks/740cae1f-c49f-4563-877a-5ce173e83be4"
>> id="740cae1f-c49f-4563-877a-5ce173e83be4">ovirtmgmtLOOKING> href="/ovirt-engine/api/networks/740cae1f-c49f-4563-877a-5ce173e83be4/permissions"
>> rel="permissions"/>> href="/ovirt-engine/api/networks/740cae1f-c49f-4563-877a-5ce173e83be4/vnicprofiles"
>> rel="vnicprofiles"/>> href="/ovirt-engine/api/networks/740cae1f-c49f-4563-877a-5ce173e83be4/networklabels"
>> rel="networklabels"/>0falsevm> id="4050"/>> href="/ovirt-engine/api/datacenters/1d00d32b-abdc-43cd-b990-257aaf01d514"
>> id="1d00d32b-abdc-43cd-b990-257aaf01d514"/>
>>
>> Below is the code that I am trying to do the same thing and I want to
>> follow the vnicprofiles link to get to the actual data that I want to
>> change:
>> #!/usr/bin/env python
>>
>> import logging
>> import time
>> import string
>> import sys
>> import os
>> import MySQLdb
>>
>> import ovirtsdk4 as sdk
>> import ovirtsdk4.types as types
>>
>> #logging.basicConfig(level=logging.DEBUG, filename='/tmp/addhost.log')
>>
>> ### Variables to be used ###
>> #NUMANODE = 3
>> #MEM = 20
>> GB = 1024 * 1024 * 1024
>> #MEMORY = MEM * GB
>> GB = 1024 * 1024 * 1024
>> URL = 'https://host/ovirt-engine/api'
>> CAFILE = '/etc/pki/ovirt-engine/ca.pem'
>> USERNAME = 'admin@internal'
>> PASSWORD = 

[ovirt-users] Re: Python-SDK4- Issue following links

2019-02-25 Thread Joey Ma
Hi Don,

So glad to see it worked. If you want to know more about how to use the
Python SDK, the official documentation
http://ovirt.github.io/ovirt-engine-sdk/ would introduce you the detailed
guidance.

If you have any other questions, please feel free to post here.

Regards,
Joey

On Tue, Feb 26, 2019 at 12:50 PM Don Dupuis  wrote:

> Joey
>
> That WORKED just great. I am still trying to understand the
> services/service stuff. I was trying something similar earlier, but I was
> using
> connection.system_service().vnic_profiles_service().vnic_profile_service(),
> I understand now from your code on what is going on and why was going down
> the wrong road.
>
> Thanks again for your help
>
> Don
>
> On Mon, Feb 25, 2019 at 10:24 PM Joey Ma  wrote:
>
>>
>> On Tue, Feb 26, 2019 at 1:00 AM Don Dupuis  wrote:
>>
>>> Joey
>>> I am still not quite getting it. I am trying the below code and where it
>>> is commented out, I have tried different things, but I am no table to
>>> update the name of the object that I have found.
>>>
>>> networks_service = connection.system_service().networks_service()
>>> network = networks_service.list(
>>> search='name=ovirtmgmt and datacenter=%s-local' % HOSTNAME) [0]
>>> print ("Network name is %s" % network.name)
>>> print ("Network id is %s" % network.id)
>>> vnics = connection.follow_link(network.vnic_profiles)
>>> #vnicsprofile_service =
>>> connection.system_service().vnic_profile_service()
>>> #vnicprofile_service = vnic_profiles_service.vnic_profile_service(
>>> vnics.id)
>>>
>>
>> Hi Don,
>>
>> The var `vnics` is actually a List, so the statement `vnics.id` would
>> produce errors.
>>
>> The following codes could successfully update the name of a vnicprofile,
>> probably meets your needs.
>>
>> ```python
>> vnics = connection.follow_link(network.vnic_profiles)
>>
>> # Iterate the var `vnics` would be better.
>> vnic_service =
>> connection.system_service().vnic_profiles_service().profile_service(vnics[0].id)
>> vnic_service.update(
>>  types.VnicProfile(
>>  name='the-new-name',
>>  )
>> )
>> vnic = vnic_service.get()
>> print('new name', vnic.name)
>> ```
>>
>> If the above codes could not work as expected, please let me know.
>>
>> Regards,
>> Joey
>>
>> for dev in vnics:
>>> print ("Dev name is %s" % dev.name)
>>> #vnicprofile_service.update(types.VnicProfile(
>>> #   name='%s' % HOSTNAME,
>>> #   ),
>>> #)
>>> connection.close()
>>>
>>> ./update-vnic.py
>>> Network name is ovirtmgmt
>>> Network id is 740cae1f-c49f-4563-877a-5ce173e83be4
>>> Dev name is ovirtmgmt
>>>
>>> Thanks
>>> Don
>>>
>>> On Mon, Feb 25, 2019 at 12:06 AM Joey Ma  wrote:
>>>
 Hi Don,

 Please using `network.vnic_profiles` instead of `network.vnicprofiles`
 as the parameter of  `connection.follow_link`.

 Regards,
 Joey


 On Mon, Feb 25, 2019 at 9:22 AM Don Dupuis  wrote:

> Hi
>
> I am trying to write some code to update the names of existing
> vnicprofiles in ovirt-4.2. The problem I am having is trying to follow the
> links to the vnicprofiles. Below is web info that I am trying to get:
>
>  href="/ovirt-engine/api/networks/740cae1f-c49f-4563-877a-5ce173e83be4"
> id="740cae1f-c49f-4563-877a-5ce173e83be4">ovirtmgmtLOOKING href="/ovirt-engine/api/networks/740cae1f-c49f-4563-877a-5ce173e83be4/permissions"
> rel="permissions"/> href="/ovirt-engine/api/networks/740cae1f-c49f-4563-877a-5ce173e83be4/vnicprofiles"
> rel="vnicprofiles"/> href="/ovirt-engine/api/networks/740cae1f-c49f-4563-877a-5ce173e83be4/networklabels"
> rel="networklabels"/>0falsevm id="4050"/> href="/ovirt-engine/api/datacenters/1d00d32b-abdc-43cd-b990-257aaf01d514"
> id="1d00d32b-abdc-43cd-b990-257aaf01d514"/>
>
> Below is the code that I am trying to do the same thing and I want to
> follow the vnicprofiles link to get to the actual data that I want to
> change:
> #!/usr/bin/env python
>
> import logging
> import time
> import string
> import sys
> import os
> import MySQLdb
>
> import ovirtsdk4 as sdk
> import ovirtsdk4.types as types
>
> #logging.basicConfig(level=logging.DEBUG, filename='/tmp/addhost.log')
>
> ### Variables to be used ###
> #NUMANODE = 3
> #MEM = 20
> GB = 1024 * 1024 * 1024
> #MEMORY = MEM * GB
> GB = 1024 * 1024 * 1024
> URL = 'https://host/ovirt-engine/api'
> CAFILE = '/etc/pki/ovirt-engine/ca.pem'
> USERNAME = 'admin@internal'
> PASSWORD = 'password'
> HOSTNAME = 'rvs06'
>
> connection = sdk.Connection(
> url=URL,
> username=USERNAME,
> password=PASSWORD,
> #ca_file='ca.pem',
> debug='True',
> insecure='True',
> #log=logging.getLogger(),
> )
>
> #dcs_service = connection.system_service().data_centers_service()
> #dc = dcs_service.list(search='cluster=%s-local' % 

[ovirt-users] Adding user to internalz with webadmin?

2019-02-25 Thread Juhani Rautiainen
Hi!

How do you add users to internalz with webadmin? I mean, there is an Add
button in Administration->Users which opens the 'Add Users and
Groups' window. The window has 'Add' and 'Add and close' buttons at
the bottom. I just can't figure out how they work. Pushing either of
those Add buttons just closes the window.

-Juhani
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Y4GFUACEHEEHYPHCIXDKX3UGRHB5FD23/


[ovirt-users] Re: oVirt Node install failed

2019-02-25 Thread Игорь Казанцев
hosted-engine --vm-status

--== Host 1 status ==--

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : ovirt1
Host ID: 1
Engine status  : {"health": "good", "vm": "up", "detail": "Up"}
Score  : 3400
stopped: False
Local maintenance  : False
crc32  : 8710cfae
local_conf_timestamp   : 1966901
Host timestamp : 1966901
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=1966901 (Tue Feb 26 08:10:06 2019)
    host-id=1
    score=3400
    vm_conf_refresh_time=1966901 (Tue Feb 26 08:10:06 2019)
    conf_on_shared_storage=True
    maintenance=False
    state=EngineUp
    stopped=False

--== Host 2 status ==--

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : ovirt2
Host ID: 2
Engine status  : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score  : 3400
stopped: False
Local maintenance  : False
crc32  : 51eb3b47
local_conf_timestamp   : 322387
Host timestamp : 322387
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=322387 (Tue Feb 26 10:09:48 2019)
    host-id=2
    score=3400
    vm_conf_refresh_time=322387 (Tue Feb 26 10:09:49 2019)
    conf_on_shared_storage=True
    maintenance=False
    state=EngineDown
    stopped=False

26.02.2019, 09:35, "Strahil":
What is the output of 'hosted-engine --vm-status' from the host that is hosting the HostedEngine VM?
Best Regards,
Strahil Nikolov

On Feb 26, 2019 05:50, k...@intercom.pro wrote:
 The crown icon on the left of the second host is gray. When I try to migrate the engine, I get the error:
 Migration of VM 'HostedEngine' to host 'ovirt2' failed: VM destroyed during the startup
 EVENT_ID: VM_MIGRATION_NO_VDS_TO_MIGRATE_TO(166), No available host was found to migrate VM HostedEngine to.
 ___
 Users mailing list -- users@ovirt.org
 To unsubscribe send an email to users-le...@ovirt.org
 Privacy Statement: https://www.ovirt.org/site/privacy-policy/
 oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
 List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/NHG7MBNCPYMSXC7PGYWIZ6WEDT2Y6LYU/

--
Best regards,
Казанцев Игорь
ООО Интерком
8912476
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4TSLVVQTG3WSMLNGNIH262CSOCXRB6ET/


[ovirt-users] Re: Python-SDK4- Issue following links

2019-02-25 Thread Don Dupuis
Joey

That WORKED just great. I am still trying to understand the
services/service stuff. I was trying something similar earlier, but I was
using
connection.system_service().vnic_profiles_service().vnic_profile_service().
I understand now from your code what is going on and why I was going down
the wrong road.

Thanks again for your help

Don

On Mon, Feb 25, 2019 at 10:24 PM Joey Ma  wrote:

>
> On Tue, Feb 26, 2019 at 1:00 AM Don Dupuis  wrote:
>
>> Joey
>> I am still not quite getting it. I am trying the below code and where it
>> is commented out, I have tried different things, but I am no table to
>> update the name of the object that I have found.
>>
>> networks_service = connection.system_service().networks_service()
>> network = networks_service.list(
>> search='name=ovirtmgmt and datacenter=%s-local' % HOSTNAME) [0]
>> print ("Network name is %s" % network.name)
>> print ("Network id is %s" % network.id)
>> vnics = connection.follow_link(network.vnic_profiles)
>> #vnicsprofile_service = connection.system_service().vnic_profile_service()
>> #vnicprofile_service = vnic_profiles_service.vnic_profile_service(
>> vnics.id)
>>
>
> Hi Don,
>
> The var `vnics` is actually a List, so the statement `vnics.id` would
> produce errors.
>
> The following codes could successfully update the name of a vnicprofile,
> probably meets your needs.
>
> ```python
> vnics = connection.follow_link(network.vnic_profiles)
>
> # Iterate the var `vnics` would be better.
> vnic_service =
> connection.system_service().vnic_profiles_service().profile_service(vnics[0].id)
> vnic_service.update(
>  types.VnicProfile(
>  name='the-new-name',
>  )
> )
> vnic = vnic_service.get()
> print('new name', vnic.name)
> ```
>
> If the above codes could not work as expected, please let me know.
>
> Regards,
> Joey
>
> for dev in vnics:
>> print ("Dev name is %s" % dev.name)
>> #vnicprofile_service.update(types.VnicProfile(
>> #   name='%s' % HOSTNAME,
>> #   ),
>> #)
>> connection.close()
>>
>> ./update-vnic.py
>> Network name is ovirtmgmt
>> Network id is 740cae1f-c49f-4563-877a-5ce173e83be4
>> Dev name is ovirtmgmt
>>
>> Thanks
>> Don
>>
>> On Mon, Feb 25, 2019 at 12:06 AM Joey Ma  wrote:
>>
>>> Hi Don,
>>>
>>> Please using `network.vnic_profiles` instead of `network.vnicprofiles`
>>> as the parameter of  `connection.follow_link`.
>>>
>>> Regards,
>>> Joey
>>>
>>>
>>> On Mon, Feb 25, 2019 at 9:22 AM Don Dupuis  wrote:
>>>
 Hi

 I am trying to write some code to update the names of existing
 vnicprofiles in ovirt-4.2. The problem I am having is trying to follow the
 links to the vnicprofiles. Below is web info that I am trying to get:

 >>> href="/ovirt-engine/api/networks/740cae1f-c49f-4563-877a-5ce173e83be4"
 id="740cae1f-c49f-4563-877a-5ce173e83be4">ovirtmgmtLOOKING>>> href="/ovirt-engine/api/networks/740cae1f-c49f-4563-877a-5ce173e83be4/permissions"
 rel="permissions"/ href="/ovirt-engine/api/networks/740cae1f-c49f-4563-877a-5ce173e83be4/vnicprofiles"
 rel="vnicprofiles"/ href="/ovirt-engine/api/networks/740cae1f-c49f-4563-877a-5ce173e83be4/networklabels"
 rel="networklabels"/>0falsevm>>> id="4050"/ href="/ovirt-engine/api/datacenters/1d00d32b-abdc-43cd-b990-257aaf01d514"
 id="1d00d32b-abdc-43cd-b990-257aaf01d514"/>

 Below is the code that I am trying to do the same thing and I want to
 follow the vnicprofiles link to get to the actual data that I want to
 change:
 #!/usr/bin/env python

 import logging
 import time
 import string
 import sys
 import os
 import MySQLdb

 import ovirtsdk4 as sdk
 import ovirtsdk4.types as types

 #logging.basicConfig(level=logging.DEBUG, filename='/tmp/addhost.log')

 ### Variables to be used ###
 #NUMANODE = 3
 #MEM = 20
 GB = 1024 * 1024 * 1024
 #MEMORY = MEM * GB
 GB = 1024 * 1024 * 1024
 URL = 'https://host/ovirt-engine/api'
 CAFILE = '/etc/pki/ovirt-engine/ca.pem'
 USERNAME = 'admin@internal'
 PASSWORD = 'password'
 HOSTNAME = 'rvs06'

 connection = sdk.Connection(
 url=URL,
 username=USERNAME,
 password=PASSWORD,
 #ca_file='ca.pem',
 debug='True',
 insecure='True',
 #log=logging.getLogger(),
 )

 #dcs_service = connection.system_service().data_centers_service()
 #dc = dcs_service.list(search='cluster=%s-local' % HOSTNAME)[0]
 #network = dcs_service.service(dc.id).networks_service()
 networks_service = connection.system_service().networks_service()
 network = networks_service.list(
 search='name=ovirtmgmt and datacenter=%s-local' % HOSTNAME) [0]
 print ("Network name is %s" % network.name)
 print ("Network id is %s" % network.id)
 vnic = connection.follow_link(network.vnicprofiles)

 connection.close()

 Below is the output of my code:

 

[ovirt-users] Re: oVirt Node install failed

2019-02-25 Thread Strahil
What is the output of 'hosted-engine --vm-status' from the host that is hosting 
the HostedEngine VM?

Best Regards,
Strahil Nikolov

On Feb 26, 2019 05:50, k...@intercom.pro wrote:
>
> The crown icon on the left of the second host is gray. When I try to migrate 
> the engine, I get the error: 
>
> Migration of VM 'HostedEngine' to host 'ovirt2' failed: VM destroyed during 
> the startup 
> EVENT_ID: VM_MIGRATION_NO_VDS_TO_MIGRATE_TO(166), No available host was found 
> to migrate VM HostedEngine to. 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NHG7MBNCPYMSXC7PGYWIZ6WEDT2Y6LYU/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DEJ5JEM2YGPQWFQLFWNOQZOFVJHTDV6Y/


[ovirt-users] Unrecognized message received error when ISO domain is put in Maintenance

2019-02-25 Thread Wood Peter
Hi all,

We are running oVirt 4.1.1 on engine and nodes. The ISO storage domain is
NFS. Everything has been working great for years.

Now I need to move the ISO and Export domains to a different storage server.
When I put the ISO domain in maintenance the Master domain and the Export
domain start flipping active-non active. The Events tab in the Web UI shows
this message scrolling every 30 sec. or so:

"Datacenter is being initialized. Please wait for initialization to
complete"

... but it never completes.

It also starts cycling the SPM assignment from one node to another.

As soon as I activate back the ISO domain everything calms down and no more
errors are recorded in the logs.

In engine.log I see this "Unrecognized message received" error:

2019-02-25 19:45:20,467-08 ERROR [org.ovirt.vdsm.jsonrpc.client.reactors.Reactor] (SSL Stomp Reactor) [] Unable to process messages Unrecognized message received
2019-02-25 19:45:20,481-08 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler6) [76363b98] EVENT_ID: IRS_BROKER_COMMAND_FAILURE(10,803), Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VDSM command GetStoragePoolInfoVDS failed: Unrecognized message received
2019-02-25 19:45:20,481-08 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler6) [76363b98] ERROR, org.ovirt.engine.core.vdsbroker.irsbroker.GetStoragePoolInfoVDSCommand, exception: VDSGenericException: VDSNetworkException: Unrecognized message received , log id: 1635c439
2019-02-25 19:45:20,481-08 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler6) [76363b98] Exception: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: VDSGenericException: VDSNetworkException: Unrecognized message received

In vdsm.log on the nodes I see this:
2019-02-25 19:45:29,817-0800 ERROR (upgrade/b5b7a10) [storage.StoragePool]
Unhandled exception (utils:371)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 368, in wrapper
    return f(*a, **kw)
  File "/usr/lib/python2.7/site-packages/vdsm/concurrent.py", line 180, in run
    return func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper
    return method(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 232, in _upgradePoolDomain
    self._finalizePoolUpgradeIfNeeded()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 77, in wrapper
    raise SecureError("Secured object is not in safe state")
SecureError: Secured object is not in safe state

And I believe that after that error it moves the SPM to another host, gets
the same error and continues moving it to the next host and so on.

I really need to move the ISO and Export domains to a different storage
server.

Any idea what is causing this or how to fix it?

Appreciate any help.

Thanks,
-- Peter
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HEPO34M5ZMMCHVODJKB7VXNFKAZGF7JC/


[ovirt-users] Re: Python-SDK4- Issue following links

2019-02-25 Thread Joey Ma
On Tue, Feb 26, 2019 at 1:00 AM Don Dupuis  wrote:

> Joey
> I am still not quite getting it. I am trying the code below, and where it
> is commented out I have tried different things, but I am not able to
> update the name of the object that I have found.
>
> networks_service = connection.system_service().networks_service()
> network = networks_service.list(
> search='name=ovirtmgmt and datacenter=%s-local' % HOSTNAME) [0]
> print ("Network name is %s" % network.name)
> print ("Network id is %s" % network.id)
> vnics = connection.follow_link(network.vnic_profiles)
> #vnicsprofile_service = connection.system_service().vnic_profile_service()
> #vnicprofile_service = vnic_profiles_service.vnic_profile_service(vnics.id
> )
>

Hi Don,

The var `vnics` is actually a List, so the statement `vnics.id` would
produce errors.

The following codes could successfully update the name of a vnicprofile,
probably meets your needs.

```python
vnics = connection.follow_link(network.vnic_profiles)

# Iterate the var `vnics` would be better.
vnic_service =
connection.system_service().vnic_profiles_service().profile_service(vnics[0].id)
vnic_service.update(
 types.VnicProfile(
 name='the-new-name',
 )
)
vnic = vnic_service.get()
print('new name', vnic.name)
```

If the above codes could not work as expected, please let me know.

Regards,
Joey

for dev in vnics:
> print ("Dev name is %s" % dev.name)
> #vnicprofile_service.update(types.VnicProfile(
> #   name='%s' % HOSTNAME,
> #   ),
> #)
> connection.close()
>
> ./update-vnic.py
> Network name is ovirtmgmt
> Network id is 740cae1f-c49f-4563-877a-5ce173e83be4
> Dev name is ovirtmgmt
>
> Thanks
> Don
>
> On Mon, Feb 25, 2019 at 12:06 AM Joey Ma  wrote:
>
>> Hi Don,
>>
>> Please using `network.vnic_profiles` instead of `network.vnicprofiles` as
>> the parameter of  `connection.follow_link`.
>>
>> Regards,
>> Joey
>>
>>
>> On Mon, Feb 25, 2019 at 9:22 AM Don Dupuis  wrote:
>>
>>> Hi
>>>
>>> I am trying to write some code to update the names of existing
>>> vnicprofiles in ovirt-4.2. The problem I am having is trying to follow the
>>> links to the vnicprofiles. Below is web info that I am trying to get:
>>>
>>> <network href="/ovirt-engine/api/networks/740cae1f-c49f-4563-877a-5ce173e83be4"
>>> id="740cae1f-c49f-4563-877a-5ce173e83be4"><name>ovirtmgmt</name><description>LOOKING</description><link
>>> href="/ovirt-engine/api/networks/740cae1f-c49f-4563-877a-5ce173e83be4/permissions"
>>> rel="permissions"/><link
>>> href="/ovirt-engine/api/networks/740cae1f-c49f-4563-877a-5ce173e83be4/vnicprofiles"
>>> rel="vnicprofiles"/><link
>>> href="/ovirt-engine/api/networks/740cae1f-c49f-4563-877a-5ce173e83be4/networklabels"
>>> rel="networklabels"/><mtu>0</mtu><stp>false</stp><usages><usage>vm</usage></usages><vlan
>>> id="4050"/><data_center
>>> href="/ovirt-engine/api/datacenters/1d00d32b-abdc-43cd-b990-257aaf01d514"
>>> id="1d00d32b-abdc-43cd-b990-257aaf01d514"/></network>
>>>
>>> Below is the code that I am trying to do the same thing and I want to
>>> follow the vnicprofiles link to get to the actual data that I want to
>>> change:
>>> #!/usr/bin/env python
>>>
>>> import logging
>>> import time
>>> import string
>>> import sys
>>> import os
>>> import MySQLdb
>>>
>>> import ovirtsdk4 as sdk
>>> import ovirtsdk4.types as types
>>>
>>> #logging.basicConfig(level=logging.DEBUG, filename='/tmp/addhost.log')
>>>
>>> ### Variables to be used ###
>>> #NUMANODE = 3
>>> #MEM = 20
>>> GB = 1024 * 1024 * 1024
>>> #MEMORY = MEM * GB
>>> GB = 1024 * 1024 * 1024
>>> URL = 'https://host/ovirt-engine/api'
>>> CAFILE = '/etc/pki/ovirt-engine/ca.pem'
>>> USERNAME = 'admin@internal'
>>> PASSWORD = 'password'
>>> HOSTNAME = 'rvs06'
>>>
>>> connection = sdk.Connection(
>>> url=URL,
>>> username=USERNAME,
>>> password=PASSWORD,
>>> #ca_file='ca.pem',
>>> debug='True',
>>> insecure='True',
>>> #log=logging.getLogger(),
>>> )
>>>
>>> #dcs_service = connection.system_service().data_centers_service()
>>> #dc = dcs_service.list(search='cluster=%s-local' % HOSTNAME)[0]
>>> #network = dcs_service.service(dc.id).networks_service()
>>> networks_service = connection.system_service().networks_service()
>>> network = networks_service.list(
>>> search='name=ovirtmgmt and datacenter=%s-local' % HOSTNAME) [0]
>>> print ("Network name is %s" % network.name)
>>> print ("Network id is %s" % network.id)
>>> vnic = connection.follow_link(network.vnicprofiles)
>>>
>>> connection.close()
>>>
>>> Below is the output of my code:
>>>
>>> ./update-vnic.py
>>> Network name is ovirtmgmt
>>> Network id is 740cae1f-c49f-4563-877a-5ce173e83be4
>>> Traceback (most recent call last):
>>>   File "./update-vnic.py", line 46, in 
>>> vnic = connection.follow_link(network.vnicprofiles)
>>> AttributeError: 'Network' object has no attribute 'vnicprofiles'
>>>
>>> The network name and network id is correct. Any help would be
>>> appreciated on what I am missing or what I am doing wrong. The actual
>>> updating of the name with code isn't written yet as I can't get past this
>>> part.
>>>
>>> Thanks
>>>
>>> Don
>>> 

[ovirt-users] Re: oVirt Node install failed

2019-02-25 Thread kiv
The crown icon on the left of the second host is gray. When I try to migrate 
the engine, I get the error:

 Migration of VM 'HostedEngine' to host 'ovirt2' failed: VM destroyed during 
the startup
EVENT_ID: VM_MIGRATION_NO_VDS_TO_MIGRATE_TO(166), No available host was found 
to migrate VM HostedEngine to.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NHG7MBNCPYMSXC7PGYWIZ6WEDT2Y6LYU/


[ovirt-users] /boot partition of oVirt node missing after reboot

2019-02-25 Thread andy . law
I've installed oVirt Node 4.2 on one of my servers. Recently I've upgraded the 
kernel version to 3.10.0-957.5.1.el7.x86_64. 
After the update, I rebooted the server for it to take effect, and then the /boot 
partition could not be recognized. The server then enters emergency mode, and I've 
tried to run the command blkid but I cannot find the UUIDs of /boot and 
/boot/efi.

Before reboot:
# blkid
/dev/mapper/onn-var_crash: UUID="2aa920d6-c2d2-4313-8df1-8be16a5718ae" 
TYPE="ext4"
/dev/mapper/3600508b1001cede21ec59eb753184259p3: 
UUID="Vy9K1K-e4uI-oM73-blSV-eU29-FnZP-3FIh4Y" TYPE="LVM2_member" 
PARTUUID="57cfd006-60d0-4826-8f58-6d11d4e1cb94"
/dev/mapper/3600508b1001cede21ec59eb753184259p1: SEC_TYPE="msdos" 
UUID="7408-0A3B" TYPE="vfat" PARTLABEL="EFI System Partition" 
PARTUUID="56cadb64-9099-4ec8-9c0c-94596c80b2a9"
/dev/mapper/3600508b1001cede21ec59eb753184259p2: 
UUID="3e920048-3b2e-4c09-b507-c9debe7d98f0" TYPE="ext4" 
PARTUUID="9ee53bdd-3051-443b-960f-9801d7245d33"
/dev/mapper/onn-ovirt--node--ng--4.2.7.1--0.20181219.0+1: 
UUID="111ed68a-a7e8-4b17-8add-5c8fd9035bbc" TYPE="ext4"
/dev/mapper/onn-swap: UUID="0fcbdcfe-92da-43d4-a773-ef6f7dac0556" TYPE="swap"
/dev/mapper/onn-var_log_audit: UUID="48531956-48cc-4faa-9dfc-e7baaab89b3b" 
TYPE="ext4"
/dev/mapper/onn-var_log: UUID="15c57452-0333-4a9a-a8e6-220155e819b3" TYPE="ext4"
/dev/mapper/onn-var: UUID="772613a5-be4f-4044-8b95-c846852f5421" TYPE="ext4"
/dev/mapper/onn-tmp: UUID="1ee263bf-9dcb-46e8-92f7-f21a8c87eeb5" TYPE="ext4"
/dev/mapper/onn-home: UUID="738d2e6f-d7e0-4d3a-acc0-6db1392a0222" TYPE="ext4"
/dev/sdc: UUID="e2pRkW-V1Vd-XxqK-5PBK-5Vtb-8Mgr-YjgB1M" TYPE="LVM2_member"
/dev/mapper/360060e80122ed80050402ed80a00: 
UUID="e2pRkW-V1Vd-XxqK-5PBK-5Vtb-8Mgr-YjgB1M" TYPE="LVM2_member"
/dev/sdd: UUID="WlBAsN-q2YR-vYGL-Syx4-HJ30-CbVk-v2Q7yt" TYPE="LVM2_member"
/dev/mapper/360060e80122ed80050402ed80a01: 
UUID="WlBAsN-q2YR-vYGL-Syx4-HJ30-CbVk-v2Q7yt" TYPE="LVM2_member"
/dev/sde: UUID="e2pRkW-V1Vd-XxqK-5PBK-5Vtb-8Mgr-YjgB1M" TYPE="LVM2_member"
/dev/sdf: UUID="WlBAsN-q2YR-vYGL-Syx4-HJ30-CbVk-v2Q7yt" TYPE="LVM2_member"
/dev/mapper/c0d0ef19--826b--44d7--a380--658adb915964-master: 
UUID="a409bf9f-63b6-42ac-8759-038f1bc3337f" TYPE="ext3"
/dev/mapper/e399fd7f--c884--4e3d--89cf--7d8ddae16058-master: 
UUID="887319bd-d3e4-4dfe-9f8c-3e4ee2498ccb" SEC_TYPE="ext2" TYPE="ext3"
/dev/mapper/3600508b1001cede21ec59eb753184259: PTTYPE="gpt"
/dev/sdb: PTTYPE="gpt"
/dev/mapper/c0d0ef19--826b--44d7--a380--658adb915964-a72e9370--0649--4e6f--b10a--1015b6b2134f:
 PTTYPE="dos"


#cat /etc/fstab
/dev/onn/ovirt-node-ng-4.2.7.1-0.20181219.0+1 / ext4 defaults,discard 1 1
UUID=3e920048-3b2e-4c09-b507-c9debe7d98f0 /boot   ext4    defaults        1 2
UUID=7408-0A3B  /boot/efi   vfat    umask=0077,shortname=winnt 0 0
/dev/mapper/onn-home /home ext4 defaults,discard 1 2
/dev/mapper/onn-tmp /tmp ext4 defaults,discard 1 2
/dev/mapper/onn-var /var ext4 defaults,discard 1 2
/dev/mapper/onn-var_log /var/log ext4 defaults,discard 1 2
/dev/mapper/onn-var_log_audit /var/log/audit ext4 defaults,discard 1 2
/dev/mapper/onn-swap    swap    swap    defaults    0 0
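
From the emergency shell, a quick way to check whether filesystems with those two
UUIDs still exist anywhere (just a sketch; the multipath device names will differ):

lsblk -o NAME,FSTYPE,UUID,MOUNTPOINT
blkid | grep -E '3e920048|7408-0A3B'    # the /boot and /boot/efi UUIDs from fstab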


Any ideas what caused this?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LTKVOO5EKKNN2MDGQATISF7C5VTZFJ37/


[ovirt-users] Re: Can't manually migrate VM's (4.3.0)

2019-02-25 Thread Ron Jerome
It's a 3-node cluster, each node has 84G RAM, and there are only two
other VMs running, so there should be plenty of capacity.

Automatic migration works, if I put a host into Maintenance, the VM's will
migrate.

Ron

On Mon, Feb 25, 2019, 6:46 PM Greg Sheremeta,  wrote:

> Turns out it's a bad error message. It just means there are no hosts
> available to migrate to.
>
> Do you have other hosts up with capacity?
>
> Greg
>
>
> On Mon, Feb 25, 2019 at 3:01 PM Ron Jerome  wrote:
>
>> I've been running 4.3.0 for a few weeks now and just discovered that I
>> can't manually migrate VM's from the UI.  I get an error message saying:
>> "Could not fetch data needed for VM migrate operation"
>>
>> Sounds like
> https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=1670701
>>
>> Ron.
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5OBUNZHUPVEDZ5YLTXI2CQEPBQGBZ2JT/
>>
>
>
> --
>
> GREG SHEREMETA
>
> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>
> Red Hat NA
>
> 
>
> gsher...@redhat.comIRC: gshereme
> 
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZDZ6AJOFSRUEUDGEO3MLYRCIBSQA3AY7/


[ovirt-users] Re: Can't manually migrate VM's (4.3.0)

2019-02-25 Thread Greg Sheremeta
Turns out it's a bad error message. It just means there are no hosts
available to migrate to.

Do you have other hosts up with capacity?
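
If they are up, one way to see why the scheduler rejected them (a hedged
suggestion; the path is the default on the engine machine) is to grep the engine
log right after clicking Migrate:

grep -i 'filter' /var/log/ovirt-engine/engine.log | tail -n 40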

Greg


On Mon, Feb 25, 2019 at 3:01 PM Ron Jerome  wrote:

> I've been running 4.3.0 for a few weeks now and just discovered that I
> can't manually migrate VM's from the UI.  I get an error message saying:
> "Could not fetch data needed for VM migrate operation"
>
> Sounds like
> https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=1670701
>
> Ron.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5OBUNZHUPVEDZ5YLTXI2CQEPBQGBZ2JT/
>


-- 

GREG SHEREMETA

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX

Red Hat NA



gsher...@redhat.com   IRC: gshereme

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DLTVBUP5JP2PVJZQW5E2GUG43D4CINPU/


[ovirt-users] Re: Stuck completing last step of 4.3 upgrade

2019-02-25 Thread Jason P. Thomas



On 2/25/19 3:49 AM, Sahina Bose wrote:

On Thu, Feb 21, 2019 at 11:11 PM Jason P. Thomas  wrote:

On 2/20/19 5:33 PM, Darrell Budic wrote:

I was just helping Tristam on #ovirt with a similar problem, we found that his two 
upgraded nodes were running multiple glusterfsd processes per brick (but not all 
bricks). His volume & brick files in /var/lib/gluster looked normal, but 
starting glusterd would often spawn extra fsd processes per brick, seemed random. 
Gluster bug? Maybe related to https://bugzilla.redhat.com/show_bug.cgi?id=1651246,
but I’m helping debug this one second hand… Possibly related to the brick crashes?
We wound up stopping glusterd, killing off all the fsds, restarting glusterd, and 
repeating until it only spawned one fsd per brick. Did that to each updated server, 
then restarted glusterd on the not-yet-updated server to get it talking to the 
right bricks. That seemed to get to a mostly stable gluster environment, but he’s 
still seeing 1-2 files listed as needing healing on the upgraded bricks (but not 
the 3.12 brick). Mainly the DIRECT_IO_TEST and one of the dom/ids files, but he can 
probably update that. Did manage to get his engine going again, waiting to see if 
he’s stable now.

Anyway, figured it was worth posting about so people could check for multiple 
brick processes (glusterfsd) if they hit this stability issue as well, maybe 
find common ground.
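
For example, a rough way to spot the duplicates (a sketch; 'data' stands in for
whichever volume you are checking, and the recovery steps are just the sequence
described above):

  # PIDs gluster thinks should be serving each brick
  gluster volume status data
  # what is actually running; more than one glusterfsd per brick path is the symptom
  pgrep -af glusterfsd
  # stop glusterd, kill stray brick processes, start again, repeat until clean
  systemctl stop glusterd
  pkill glusterfsd
  systemctl start glusterd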

Note: also encountered https://bugzilla.redhat.com/show_bug.cgi?id=1348434
trying to get his engine back up; restarting libvirtd let us get it going
again. Maybe un-needed if he’d been able to complete his third node upgrades, 
but he got stuck before then, so...

   -Darrell

Stable is a relative term.  My unsynced entries total for each of my 4 volumes changes 
drastically (with the exception of the engine volume, it pretty much bounces between 1 
and 4).  The cluster has been "healing" for 18 hours or so and only the 
unupgraded HC node has healed bricks.  I did have the problem that some files/directories 
were owned by root:root.  These VMs did not boot until I changed ownership to 36:36.  
Even after 18 hours, there's anywhere from 20-386 entries in vol heal info for my 3 non 
engine bricks.  Overnight I had one brick on one volume go down on one HC node.  When I 
bounced glusterd, it brought up a new fsd process for that brick.  I killed the old one 
and now vol status reports the right pid on each of the nodes.  This is quite the 
debacle.  If I can provide any info that might help get this debacle moving in the right 
direction, let me know.

Can you provide the gluster brick logs and glusterd logs from the
servers (from /var/log/glusterfs/). Since you mention that heal seems
to be stuck, could you also provide the heal logs from
/var/log/glusterfs/glustershd.log
If you can log a bug with these logs, that would be great - please use
https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS to log the bug.
I've filed Bug 1682925 with the requested logs during the time frame I
experienced issues. Sorry for the delay, I was out of the office Friday and
this morning.


Jason




Jason aka Tristam


On Feb 14, 2019, at 1:12 AM, Sahina Bose  wrote:

On Thu, Feb 14, 2019 at 2:39 AM Ron Jerome  wrote:




Can you be more specific? What things did you see, and did you report bugs?


I've got this one: 

[ovirt-users] Re: Problem deploying self-hosted engine on ovirt 4.3.0

2019-02-25 Thread Strahil
Your /var/tmp is full. Clean it up a little bit and give it a try.
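
For example (a minimal sketch; the localvm* directories are only an illustration
of typical leftovers from earlier hosted-engine deployment attempts):

  df -h /var/tmp
  du -sh /var/tmp/* | sort -h | tail
  # remove only stale bootstrap leftovers, e.g.:
  # rm -rf /var/tmp/localvmXXXXXX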

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/434N5IJDLWNELMQQZWGV5WYYC2QPVFGJ/


[ovirt-users] Can't manually migrate VM's (4.3.0)

2019-02-25 Thread Ron Jerome
I've been running 4.3.0 for a few weeks now and just discovered that I can't 
manually migrate VM's from the UI.  I get an error message saying: "Could not 
fetch data needed for VM migrate operation"

Sounds like https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=1670701

Ron.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5OBUNZHUPVEDZ5YLTXI2CQEPBQGBZ2JT/


[ovirt-users] Re: [oVirt 4.3.1 Test Day] cmdline HE Deployment

2019-02-25 Thread Guillaume Pavese
journalctl -u libvirtd.service :

févr. 25 18:47:24 vs-inf-int-kvm-fr-301-210.hostics.fr systemd[1]: Stopping
Virtualization daemon...
févr. 25 18:47:24 vs-inf-int-kvm-fr-301-210.hostics.fr systemd[1]: Stopped
Virtualization daemon.
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr systemd[1]: Starting
Virtualization daemon...
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr systemd[1]: Started
Virtualization daemon.
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr dnsmasq[6310]: read
/etc/hosts - 4 addresses
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr dnsmasq[6310]: read
/var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr dnsmasq-dhcp[6310]:
read /var/lib/libvirt/dnsmasq/default.hostsfile
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr libvirtd[13535]:
2019-02-25 17:47:34.739+: 13551: info : libvirt version: 4.5.0,
package: 10.el7_6.4 (CentOS BuildSystem ,
2019-01-29-17:31:22, x86-01.bsys.centos.org)
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr libvirtd[13535]:
2019-02-25 17:47:34.739+: 13551: info : hostname:
vs-inf-int-kvm-fr-301-210.hostics.fr
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr libvirtd[13535]:
2019-02-25 17:47:34.739+: 13551: error : virDirOpenInternal:2936 :
cannot open directory
'/var/tmp/localvmgmyYik/images/15023c8a-e3a7-4851-a97d-3b90996b423b': No
such file or directory
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr libvirtd[13535]:
2019-02-25 17:47:34.740+: 13551: error :
storageDriverAutostartCallback:209 : internal error: Failed to autostart
storage pool '15023c8a-e3a7-4851-a97d-3b90996b423b': cannot open directory
'/var/tmp/localvmgmyYik/images/15023c8a-e3a7-4851-a97d-3b90996b423b': No
such file or directory
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr libvirtd[13535]:
2019-02-25 17:47:34.740+: 13551: error : virDirOpenInternal:2936 :
cannot open directory '/var/tmp/localvmdRIozH': No such file or directory
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr libvirtd[13535]:
2019-02-25 17:47:34.740+: 13551: error :
storageDriverAutostartCallback:209 : internal error: Failed to autostart
storage pool 'localvmdRIozH': cannot open directory
'/var/tmp/localvmdRIozH': No such file or directory
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr libvirtd[13535]:
2019-02-25 17:47:34.740+: 13551: error : virDirOpenInternal:2936 :
cannot open directory
'/var/tmp/localvmdRIozH/images/15023c8a-e3a7-4851-a97d-3b90996b423b': No
such file or directory
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr libvirtd[13535]:
2019-02-25 17:47:34.740+: 13551: error :
storageDriverAutostartCallback:209 : internal error: Failed to autostart
storage pool '15023c8a-e3a7-4851-a97d-3b90996b423b-1': cannot open
directory
'/var/tmp/localvmdRIozH/images/15023c8a-e3a7-4851-a97d-3b90996b423b': No
such file or directory
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr libvirtd[13535]:
2019-02-25 17:47:34.740+: 13551: error : virDirOpenInternal:2936 :
cannot open directory '/var/tmp/localvmgmyYik': No such file or directory
févr. 25 18:47:34 vs-inf-int-kvm-fr-301-210.hostics.fr libvirtd[13535]:
2019-02-25 17:47:34.740+: 13551: error :
storageDriverAutostartCallback:209 : internal error: Failed to autostart
storage pool 'localvmgmyYik': cannot open directory
'/var/tmp/localvmgmyYik': No such file or directory


/var/log/libvirt/qemu/HostedEngineLocal.log :

2019-02-25 17:50:08.694+: starting up libvirt version: 4.5.0, package:
10.el7_6.4 (CentOS BuildSystem ,
2019-01-29-17:31:22, x86-01.bsys.centos.org), qemu version:
2.12.0qemu-kvm-ev-2.12.0-18.el7_6.3.1, kernel: 3.10.0-957.5.1.el7.x86_64,
hostname: vs-inf-int-kvm-fr-301-210.hostics.fr
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name
guest=HostedEngineLocal,debug-threads=on -S -object
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/master-key.aes
-machine pc-i440fx-rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off -cpu
Haswell-noTSX,+kvmclock -m 16384 -realtime mlock=off -smp
4,sockets=4,cores=1,threads=1 -uuid 8ba608c8-b721-4b5b-b839-b62f5e919814
-no-user-config -nodefaults -chardev
socket,id=charmonitor,fd=27,server,nowait -mon
chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
-global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot
menu=off,strict=on -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
file=/var/tmp/localvmlF5yTM/images/15023c8a-e3a7-4851-a97d-3b90996b423b/07fdcff3-11ce-4f7c-af05-0a878593e78e,format=qcow2,if=none,id=drive-virtio-disk0
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-drive
file=/var/tmp/localvmlF5yTM/seed.iso,format=raw,if=none,id=drive-ide0-0-0,readonly=on
-device 

[ovirt-users] Re: [oVirt 4.3.1 Test Day] cmdline HE Deployment

2019-02-25 Thread Simone Tiraboschi
On Mon, Feb 25, 2019 at 7:15 PM Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:

> No, as indicated previously, still :
>
> [root@vs-inf-int-kvm-fr-301-210 ~]# virsh -r net-dhcp-leases default
>  Expiry Time  MAC addressProtocol  IP address
>   HostnameClient ID or DUID
>
> ---
>
> [root@vs-inf-int-kvm-fr-301-210 ~]#
>
>
> I did not see any relevant log on the HE vm. Is there something I should
> look for there?
>

This smells really bad: I'd suggest to check /var/log/messages
and /var/log/libvirt/qemu/HostedEngineLocal.log for libvirt errors;
if nothing is there can I ask you to try reexecuting with libvirt debug
logs (edit /etc/libvirt/libvirtd.conf).

Honestly I'm not able to reproduce it on my side.


>
>
> Guillaume Pavese
> Ingénieur Système et Réseau
> Interactiv-Group
>
>
> On Tue, Feb 26, 2019 at 3:12 AM Simone Tiraboschi 
> wrote:
>
>>
>>
>> On Mon, Feb 25, 2019 at 7:04 PM Guillaume Pavese <
>> guillaume.pav...@interactiv-group.com> wrote:
>>
>>> I still can't connect with VNC remotely but locally with X forwarding it
>>> works.
>>> However my connection has too high latency for that to be usable (I'm in
>>> Japan, my hosts in France, ~250 ms ping)
>>>
>>> But I could see that the VM is booted!
>>>
>>> and in Hosts logs there is :
>>>
>>> févr. 25 18:51:12 vs-inf-int-kvm-fr-301-210.hostics.fr python[14719]:
>>> ansible-command Invoked with warn=True executable=None _uses_shell=True
>>> _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:1d:4b:b6 |
>>> awk '{ print $5 }' | cut -f1 -d'/' removes=None argv=None creates=None
>>> chdir=None stdin=None
>>> févr. 25 18:51:30 vs-inf-int-kvm-fr-301-210.hostics.fr
>>> dnsmasq-dhcp[6310]: DHCPDISCOVER(virbr0) 00:16:3e:1d:4b:b6
>>> févr. 25 18:51:30 vs-inf-int-kvm-fr-301-210.hostics.fr
>>> dnsmasq-dhcp[6310]: DHCPOFFER(virbr0) 192.168.122.14 00:16:3e:1d:4b:b6
>>> févr. 25 18:51:30 vs-inf-int-kvm-fr-301-210.hostics.fr
>>> dnsmasq-dhcp[6310]: DHCPREQUEST(virbr0) 192.168.122.14 00:16:3e:1d:4b:b6
>>> févr. 25 18:51:30 vs-inf-int-kvm-fr-301-210.hostics.fr
>>> dnsmasq-dhcp[6310]: DHCPACK(virbr0) 192.168.122.14 00:16:3e:1d:4b:b6
>>> vs-inf-int-ovt-fr-301-210
>>> févr. 25 18:51:42 vs-inf-int-kvm-fr-301-210.hostics.fr python[14757]:
>>> ansible-command Invoked with warn=True executable=None _uses_shell=True
>>> _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:1d:4b:b6 |
>>> awk '{ print $5 }' | cut -f1 -d'/' removes=None argv=None creates=None
>>> chdir=None stdin=None
>>> févr. 25 18:52:12 vs-inf-int-kvm-fr-301-210.hostics.fr python[14789]:
>>> ansible-command Invoked with warn=True executable=None _uses_shell=True
>>> _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:1d:4b:b6 |
>>> awk '{ print $5 }' | cut -f1 -d'/' removes=None argv=None creates=None
>>> chdir=None stdin=None
>>> févr. 25 18:52:43 vs-inf-int-kvm-fr-301-210.hostics.fr python[14818]:
>>> ansible-command Invoked with warn=True executable=None _uses_shell=True
>>> _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:1d:4b:b6 |
>>> awk '{ print $5 }' | cut -f1 -d'/' removes=None argv=None creates=None
>>> chdir=None stdin=None
>>> 
>>>
>>> ssh to the vm works too :
>>>
>>> [root@vs-inf-int-kvm-fr-301-210 ~]# ssh root@192.168.122.14
>>> The authenticity of host '192.168.122.14 (192.168.122.14)' can't be
>>> established.
>>> ECDSA key fingerprint is
>>> SHA256:+/pUzTGVA4kCyICb7XgqrxWYYkqzmDjVmdAahiBFgOQ.
>>> ECDSA key fingerprint is
>>> MD5:4b:ef:ff:4a:7c:1a:af:c2:af:4a:0f:14:a3:c5:31:fb.
>>> Are you sure you want to continue connecting (yes/no)? yes
>>> Warning: Permanently added '192.168.122.14' (ECDSA) to the list of known
>>> hosts.
>>> root@192.168.122.14's password:
>>> [root@vs-inf-int-ovt-fr-301-210 ~]#
>>>
>>>
>>> But the test that the playbook tries still fails with empty result :
>>>
>>> [root@vs-inf-int-kvm-fr-301-210 ~]# virsh -r net-dhcp-leases default
>>>  Expiry Time  MAC addressProtocol  IP address
>>> HostnameClient ID or DUID
>>>
>>> ---
>>>
>>> [root@vs-inf-int-kvm-fr-301-210 ~]#
>>>
>>>
>> This smells like a bug to me:
>> and nothing at all in the output of
>> virsh -r net-dhcp-leases default
>>
>> ?
>>
>>
>>>
>>>
>>>
>>> Guillaume Pavese
>>> Ingénieur Système et Réseau
>>> Interactiv-Group
>>>
>>>
>>> On Tue, Feb 26, 2019 at 1:54 AM Simone Tiraboschi 
>>> wrote:
>>>


 On Mon, Feb 25, 2019 at 5:50 PM Guillaume Pavese <
 guillaume.pav...@interactiv-group.com> wrote:

> I did that but no success yet.
>
> I see that "Get local VM IP" task tries the following :
>
> virsh -r net-dhcp-leases default | grep -i {{ he_vm_mac_addr }} | awk
> '{ print $5 }' | cut -f1 -d'/'
>
>
> However while 

[ovirt-users] Re: oVirt Node install failed

2019-02-25 Thread Edward Berger
If you haven't "installed" or "reinstalled" the second host without
purposely selecting "DEPLOY" under hosted-engine actions,
it will not be able to run the hosted-engine VM.
A quick way to tell if you did is to look at the hosts view and look for
the "crowns" on the left like this attached pic example.

On Sun, Feb 24, 2019 at 11:27 PM  wrote:

> Thanks for the answer.
>
> Now I have two hosts 4.2.6 and 4.2.8, and the engine 4.2.6. VMs migrate
> between these hosts without problems. But the VM with the engine to migrate
> to host 4.2.8 refuses - he say:
>
> No available Host to migrate to.
>
> Since it cannot migrate, there is no way to put the host in maintenance
> mode. And there is no possibility of upgrade. How to find out why? Install
> Host 4.2.6?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ECVQUALTXCX2VE63T5K4AMYO2UID2HMM/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SWQBUYOUB3PRJYZNY6RQQBYBSFPUCQVM/


[ovirt-users] Re: [oVirt 4.3.1 Test Day] cmdline HE Deployment

2019-02-25 Thread Guillaume Pavese
No, as indicated previously, still :

[root@vs-inf-int-kvm-fr-301-210 ~]# virsh -r net-dhcp-leases default
 Expiry Time  MAC addressProtocol  IP address
  HostnameClient ID or DUID
---

[root@vs-inf-int-kvm-fr-301-210 ~]#


I did not see any relevant log on the HE vm. Is there something I should
look for there?


Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group


On Tue, Feb 26, 2019 at 3:12 AM Simone Tiraboschi 
wrote:

>
>
> On Mon, Feb 25, 2019 at 7:04 PM Guillaume Pavese <
> guillaume.pav...@interactiv-group.com> wrote:
>
>> I still can't connect with VNC remotely but locally with X forwarding it
>> works.
>> However my connection has too high latency for that to be usable (I'm in
>> Japan, my hosts in France, ~250 ms ping)
>>
>> But I could see that the VM is booted!
>>
>> and in Hosts logs there is :
>>
>> févr. 25 18:51:12 vs-inf-int-kvm-fr-301-210.hostics.fr python[14719]:
>> ansible-command Invoked with warn=True executable=None _uses_shell=True
>> _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:1d:4b:b6 |
>> awk '{ print $5 }' | cut -f1 -d'/' removes=None argv=None creates=None
>> chdir=None stdin=None
>> févr. 25 18:51:30 vs-inf-int-kvm-fr-301-210.hostics.fr
>> dnsmasq-dhcp[6310]: DHCPDISCOVER(virbr0) 00:16:3e:1d:4b:b6
>> févr. 25 18:51:30 vs-inf-int-kvm-fr-301-210.hostics.fr
>> dnsmasq-dhcp[6310]: DHCPOFFER(virbr0) 192.168.122.14 00:16:3e:1d:4b:b6
>> févr. 25 18:51:30 vs-inf-int-kvm-fr-301-210.hostics.fr
>> dnsmasq-dhcp[6310]: DHCPREQUEST(virbr0) 192.168.122.14 00:16:3e:1d:4b:b6
>> févr. 25 18:51:30 vs-inf-int-kvm-fr-301-210.hostics.fr
>> dnsmasq-dhcp[6310]: DHCPACK(virbr0) 192.168.122.14 00:16:3e:1d:4b:b6
>> vs-inf-int-ovt-fr-301-210
>> févr. 25 18:51:42 vs-inf-int-kvm-fr-301-210.hostics.fr python[14757]:
>> ansible-command Invoked with warn=True executable=None _uses_shell=True
>> _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:1d:4b:b6 |
>> awk '{ print $5 }' | cut -f1 -d'/' removes=None argv=None creates=None
>> chdir=None stdin=None
>> févr. 25 18:52:12 vs-inf-int-kvm-fr-301-210.hostics.fr python[14789]:
>> ansible-command Invoked with warn=True executable=None _uses_shell=True
>> _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:1d:4b:b6 |
>> awk '{ print $5 }' | cut -f1 -d'/' removes=None argv=None creates=None
>> chdir=None stdin=None
>> févr. 25 18:52:43 vs-inf-int-kvm-fr-301-210.hostics.fr python[14818]:
>> ansible-command Invoked with warn=True executable=None _uses_shell=True
>> _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:1d:4b:b6 |
>> awk '{ print $5 }' | cut -f1 -d'/' removes=None argv=None creates=None
>> chdir=None stdin=None
>> 
>>
>> ssh to the vm works too :
>>
>> [root@vs-inf-int-kvm-fr-301-210 ~]# ssh root@192.168.122.14
>> The authenticity of host '192.168.122.14 (192.168.122.14)' can't be
>> established.
>> ECDSA key fingerprint is
>> SHA256:+/pUzTGVA4kCyICb7XgqrxWYYkqzmDjVmdAahiBFgOQ.
>> ECDSA key fingerprint is
>> MD5:4b:ef:ff:4a:7c:1a:af:c2:af:4a:0f:14:a3:c5:31:fb.
>> Are you sure you want to continue connecting (yes/no)? yes
>> Warning: Permanently added '192.168.122.14' (ECDSA) to the list of known
>> hosts.
>> root@192.168.122.14's password:
>> [root@vs-inf-int-ovt-fr-301-210 ~]#
>>
>>
>> But the test that the playbook tries still fails with empty result :
>>
>> [root@vs-inf-int-kvm-fr-301-210 ~]# virsh -r net-dhcp-leases default
>>  Expiry Time  MAC addressProtocol  IP address
>> HostnameClient ID or DUID
>>
>> ---
>>
>> [root@vs-inf-int-kvm-fr-301-210 ~]#
>>
>>
> This smells like a bug to me:
> and nothing at all in the output of
> virsh -r net-dhcp-leases default
>
> ?
>
>
>>
>>
>>
>> Guillaume Pavese
>> Ingénieur Système et Réseau
>> Interactiv-Group
>>
>>
>> On Tue, Feb 26, 2019 at 1:54 AM Simone Tiraboschi 
>> wrote:
>>
>>>
>>>
>>> On Mon, Feb 25, 2019 at 5:50 PM Guillaume Pavese <
>>> guillaume.pav...@interactiv-group.com> wrote:
>>>
 I did that but no success yet.

 I see that "Get local VM IP" task tries the following :

 virsh -r net-dhcp-leases default | grep -i {{ he_vm_mac_addr }} | awk
 '{ print $5 }' | cut -f1 -d'/'


 However while the task is running, and vm running in qemu, "virsh -r
 net-dhcp-leases default" never returns anything :

>>>
>>> Yes, I think that libvirt will never provide a DHCP lease since the
>>> appliance OS never correctly complete the boot process.
>>> I'd suggest to connect to the running VM via vnc DURING the boot process
>>> and check what's wrong.
>>>
>>>
 [root@vs-inf-int-kvm-fr-301-210 ~]# virsh -r net-dhcp-leases default
  Expiry Time  MAC addressProtocol  IP address
   HostnameClient 

[ovirt-users] Re: [oVirt 4.3.1 Test Day] cmdline HE Deployment

2019-02-25 Thread Simone Tiraboschi
On Mon, Feb 25, 2019 at 7:04 PM Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:

> I still can't connect with VNC remotely but locally with X forwarding it
> works.
> However my connection has too high latency for that to be usable (I'm in
> Japan, my hosts in France, ~250 ms ping)
>
> But I could see that the VM is booted!
>
> and in Hosts logs there is :
>
> févr. 25 18:51:12 vs-inf-int-kvm-fr-301-210.hostics.fr python[14719]:
> ansible-command Invoked with warn=True executable=None _uses_shell=True
> _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:1d:4b:b6 |
> awk '{ print $5 }' | cut -f1 -d'/' removes=None argv=None creates=None
> chdir=None stdin=None
> févr. 25 18:51:30 vs-inf-int-kvm-fr-301-210.hostics.fr
> dnsmasq-dhcp[6310]: DHCPDISCOVER(virbr0) 00:16:3e:1d:4b:b6
> févr. 25 18:51:30 vs-inf-int-kvm-fr-301-210.hostics.fr
> dnsmasq-dhcp[6310]: DHCPOFFER(virbr0) 192.168.122.14 00:16:3e:1d:4b:b6
> févr. 25 18:51:30 vs-inf-int-kvm-fr-301-210.hostics.fr
> dnsmasq-dhcp[6310]: DHCPREQUEST(virbr0) 192.168.122.14 00:16:3e:1d:4b:b6
> févr. 25 18:51:30 vs-inf-int-kvm-fr-301-210.hostics.fr
> dnsmasq-dhcp[6310]: DHCPACK(virbr0) 192.168.122.14 00:16:3e:1d:4b:b6
> vs-inf-int-ovt-fr-301-210
> févr. 25 18:51:42 vs-inf-int-kvm-fr-301-210.hostics.fr python[14757]:
> ansible-command Invoked with warn=True executable=None _uses_shell=True
> _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:1d:4b:b6 |
> awk '{ print $5 }' | cut -f1 -d'/' removes=None argv=None creates=None
> chdir=None stdin=None
> févr. 25 18:52:12 vs-inf-int-kvm-fr-301-210.hostics.fr python[14789]:
> ansible-command Invoked with warn=True executable=None _uses_shell=True
> _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:1d:4b:b6 |
> awk '{ print $5 }' | cut -f1 -d'/' removes=None argv=None creates=None
> chdir=None stdin=None
> févr. 25 18:52:43 vs-inf-int-kvm-fr-301-210.hostics.fr python[14818]:
> ansible-command Invoked with warn=True executable=None _uses_shell=True
> _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:1d:4b:b6 |
> awk '{ print $5 }' | cut -f1 -d'/' removes=None argv=None creates=None
> chdir=None stdin=None
> 
>
> ssh to the vm works too :
>
> [root@vs-inf-int-kvm-fr-301-210 ~]# ssh root@192.168.122.14
> The authenticity of host '192.168.122.14 (192.168.122.14)' can't be
> established.
> ECDSA key fingerprint is
> SHA256:+/pUzTGVA4kCyICb7XgqrxWYYkqzmDjVmdAahiBFgOQ.
> ECDSA key fingerprint is
> MD5:4b:ef:ff:4a:7c:1a:af:c2:af:4a:0f:14:a3:c5:31:fb.
> Are you sure you want to continue connecting (yes/no)? yes
> Warning: Permanently added '192.168.122.14' (ECDSA) to the list of known
> hosts.
> root@192.168.122.14's password:
> [root@vs-inf-int-ovt-fr-301-210 ~]#
>
>
> But the test that the playbook tries still fails with empty result :
>
> [root@vs-inf-int-kvm-fr-301-210 ~]# virsh -r net-dhcp-leases default
>  Expiry Time  MAC addressProtocol  IP address
>   HostnameClient ID or DUID
>
> ---
>
> [root@vs-inf-int-kvm-fr-301-210 ~]#
>
>
This smells like a bug to me:
and nothing at all in the output of
virsh -r net-dhcp-leases default

?


>
>
>
> Guillaume Pavese
> Ingénieur Système et Réseau
> Interactiv-Group
>
>
> On Tue, Feb 26, 2019 at 1:54 AM Simone Tiraboschi 
> wrote:
>
>>
>>
>> On Mon, Feb 25, 2019 at 5:50 PM Guillaume Pavese <
>> guillaume.pav...@interactiv-group.com> wrote:
>>
>>> I did that but no success yet.
>>>
>>> I see that "Get local VM IP" task tries the following :
>>>
>>> virsh -r net-dhcp-leases default | grep -i {{ he_vm_mac_addr }} | awk '{
>>> print $5 }' | cut -f1 -d'/'
>>>
>>>
>>> However while the task is running, and vm running in qemu, "virsh -r
>>> net-dhcp-leases default" never returns anything :
>>>
>>
>> Yes, I think that libvirt will never provide a DHCP lease since the
>> appliance OS never correctly complete the boot process.
>> I'd suggest to connect to the running VM via vnc DURING the boot process
>> and check what's wrong.
>>
>>
>>> [root@vs-inf-int-kvm-fr-301-210 ~]# virsh -r net-dhcp-leases default
>>>  Expiry Time  MAC addressProtocol  IP address
>>> HostnameClient ID or DUID
>>>
>>> ---
>>>
>>> [root@vs-inf-int-kvm-fr-301-210 ~]#
>>>
>>>
>>>
>>>
>>> Guillaume Pavese
>>> Ingénieur Système et Réseau
>>> Interactiv-Group
>>>
>>>
>>> On Tue, Feb 26, 2019 at 12:33 AM Simone Tiraboschi 
>>> wrote:
>>>
 OK, try this:
 temporary
 edit 
 /usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/02_create_local_vm.yml
 around line 120
 and edit tasks "Get local VM IP"
 changing from "retries: 50" to  "retries: 500" so that you have more
 time to debug it



 On Mon, Feb 25, 2019 at 4:20 PM 

[ovirt-users] Re: running engine-setup against postgresql running on non-default port

2019-02-25 Thread ivan . kabaivanov
Thanks for the tip, it worked!  As soon as I replaced localhost with the FQDN 
of the host, it just worked.

Thanks!
IvanK.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VMMDEUSTRH2RL67DBYE3MKUD3ELQPVRU/


[ovirt-users] Re: [oVirt 4.3.1 Test Day] cmdline HE Deployment

2019-02-25 Thread Guillaume Pavese
I still can't connect with VNC remotely but locally with X forwarding it
works.
However my connection has too high latency for that to be usable (I'm in
Japan, my hosts in France, ~250 ms ping)

But I could see that the VM is booted!

and in Hosts logs there is :

févr. 25 18:51:12 vs-inf-int-kvm-fr-301-210.hostics.fr python[14719]:
ansible-command Invoked with warn=True executable=None _uses_shell=True
_raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:1d:4b:b6 |
awk '{ print $5 }' | cut -f1 -d'/' removes=None argv=None creates=None
chdir=None stdin=None
févr. 25 18:51:30 vs-inf-int-kvm-fr-301-210.hostics.fr dnsmasq-dhcp[6310]:
DHCPDISCOVER(virbr0) 00:16:3e:1d:4b:b6
févr. 25 18:51:30 vs-inf-int-kvm-fr-301-210.hostics.fr dnsmasq-dhcp[6310]:
DHCPOFFER(virbr0) 192.168.122.14 00:16:3e:1d:4b:b6
févr. 25 18:51:30 vs-inf-int-kvm-fr-301-210.hostics.fr dnsmasq-dhcp[6310]:
DHCPREQUEST(virbr0) 192.168.122.14 00:16:3e:1d:4b:b6
févr. 25 18:51:30 vs-inf-int-kvm-fr-301-210.hostics.fr dnsmasq-dhcp[6310]:
DHCPACK(virbr0) 192.168.122.14 00:16:3e:1d:4b:b6 vs-inf-int-ovt-fr-301-210
févr. 25 18:51:42 vs-inf-int-kvm-fr-301-210.hostics.fr python[14757]:
ansible-command Invoked with warn=True executable=None _uses_shell=True
_raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:1d:4b:b6 |
awk '{ print $5 }' | cut -f1 -d'/' removes=None argv=None creates=None
chdir=None stdin=None
févr. 25 18:52:12 vs-inf-int-kvm-fr-301-210.hostics.fr python[14789]:
ansible-command Invoked with warn=True executable=None _uses_shell=True
_raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:1d:4b:b6 |
awk '{ print $5 }' | cut -f1 -d'/' removes=None argv=None creates=None
chdir=None stdin=None
févr. 25 18:52:43 vs-inf-int-kvm-fr-301-210.hostics.fr python[14818]:
ansible-command Invoked with warn=True executable=None _uses_shell=True
_raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:1d:4b:b6 |
awk '{ print $5 }' | cut -f1 -d'/' removes=None argv=None creates=None
chdir=None stdin=None


ssh to the vm works too :

[root@vs-inf-int-kvm-fr-301-210 ~]# ssh root@192.168.122.14
The authenticity of host '192.168.122.14 (192.168.122.14)' can't be
established.
ECDSA key fingerprint is SHA256:+/pUzTGVA4kCyICb7XgqrxWYYkqzmDjVmdAahiBFgOQ.
ECDSA key fingerprint is
MD5:4b:ef:ff:4a:7c:1a:af:c2:af:4a:0f:14:a3:c5:31:fb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.122.14' (ECDSA) to the list of known
hosts.
root@192.168.122.14's password:
[root@vs-inf-int-ovt-fr-301-210 ~]#


But the test that the playbook tries still fails with empty result :

[root@vs-inf-int-kvm-fr-301-210 ~]# virsh -r net-dhcp-leases default
 Expiry Time  MAC addressProtocol  IP address
  HostnameClient ID or DUID
---

[root@vs-inf-int-kvm-fr-301-210 ~]#




Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group


On Tue, Feb 26, 2019 at 1:54 AM Simone Tiraboschi 
wrote:

>
>
> On Mon, Feb 25, 2019 at 5:50 PM Guillaume Pavese <
> guillaume.pav...@interactiv-group.com> wrote:
>
>> I did that but no success yet.
>>
>> I see that "Get local VM IP" task tries the following :
>>
>> virsh -r net-dhcp-leases default | grep -i {{ he_vm_mac_addr }} | awk '{
>> print $5 }' | cut -f1 -d'/'
>>
>>
>> However while the task is running, and vm running in qemu, "virsh -r
>> net-dhcp-leases default" never returns anything :
>>
>
> Yes, I think that libvirt will never provide a DHCP lease since the
> appliance OS never correctly complete the boot process.
> I'd suggest to connect to the running VM via vnc DURING the boot process
> and check what's wrong.
>
>
>> [root@vs-inf-int-kvm-fr-301-210 ~]# virsh -r net-dhcp-leases default
>>  Expiry Time  MAC addressProtocol  IP address
>> HostnameClient ID or DUID
>>
>> ---
>>
>> [root@vs-inf-int-kvm-fr-301-210 ~]#
>>
>>
>>
>>
>> Guillaume Pavese
>> Ingénieur Système et Réseau
>> Interactiv-Group
>>
>>
>> On Tue, Feb 26, 2019 at 12:33 AM Simone Tiraboschi 
>> wrote:
>>
>>> OK, try this:
>>> temporary
>>> edit 
>>> /usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/02_create_local_vm.yml
>>> around line 120
>>> and edit tasks "Get local VM IP"
>>> changing from "retries: 50" to  "retries: 500" so that you have more
>>> time to debug it
>>>
>>>
>>>
>>> On Mon, Feb 25, 2019 at 4:20 PM Guillaume Pavese <
>>> guillaume.pav...@interactiv-group.com> wrote:
>>>
 I retried after killing the remaining qemu process and
 doing ovirt-hosted-engine-cleanup
 The new attempt failed again at the same step. Then after it fails, it
 cleans the temporary files (and vm disk) but *qemu still runs!* :

 [ INFO  ] TASK [ovirt.hosted_engine_setup : Get local VM IP]


[ovirt-users] Re: Ovirt Glusterfs

2019-02-25 Thread Strahil
Hi Sahina,
Thanks for your reply.

Let me share my test results with gluster v3.
I have a 3-node hyperconverged setup with 1 Gbit/s network and SSD (sata-based)
for LVM caching.
Testing the bricks directly showed throughput higher than the network bandwidth.
1. Tested ovirt 4.2.7/4.2.8 with FUSE mounts, using 'dd if=/dev/zero
of=<file on the gluster mount point used by oVirt> bs=1M count=5000'.
Results: 56MB/s directly on the mount point, 20+-2 MB/s from a VM.
Reads on fuse mount point ->  500+ MB/s
Disabling sharding increased performance from the FUSE mount point, nothing
beneficial from a VM.

Converting the bricks of a volume to 'tmpfs' does not bring any performance 
gain for FUSE mount.

2. Tested ovirt 4.2.7/4.2.8 with gfapi - performance in VM -> approx 30 MB/s

3. Gluster native NFS (now deprecated) on ovirt 4.2.7/4.2.8 -> 120MB/s on mount 
point, 100+ MB/s in VM

My current setup:
Storhaug + ctdb + nfs-ganesha (ovirt 4.2.7/4.2.8)  -> 80+-2 MB/s in the VM. 
Reads are around the same speed.

Sadly, I didn't have the time to test performance  on gluster v5 (ovirt 4.3.0)  
, but I haven't noticed any performance gain for the engine.

My suspicion with FUSE is that when a gluster node is also playing the role of 
a client -> it still uses network bandwidth to communicate to itself, but I 
could be wrong.
According to some people on the gluster lists, this FUSE performance is
expected, but my tests with sharding disabled show better performance.

Most of the time 'gtop' does not show any spikes and iftop shows that network 
usage is not going over 500 Mbit/s

As I hit some issues with the deployment  on  4.2.7 , I decided to stop my 
tests for now.

Best Regards,
Strahil Nikolov
On Feb 25, 2019 09:17, Sahina Bose  wrote:
>
> The options set on the gluster volume are tuned for data consistency and 
> reliability.
>
> Some of the changes that you can try
> 1. use gfapi - however this will not provide you HA if the server used to 
> access the gluster volume is down. (the backup-volfile-servers are not used 
> in case of gfapi). You can change this using the engine-config tool for your 
> cluster level.
> 2. Change the remote-dio:enable to turn on client side brick caching. Ensure 
> that you have a power backup in place, so that you don't end up with data 
> loss in case server goes down before data is flushed.
>
> If you're seeing issues with a particular version of glusterfs, please 
> provide gluster profile output for us to help identify bottleneck.  (check 
> https://docs.gluster.org/en/latest/Administrator%20Guide/Monitoring%20Workload/
>  on how to do this)
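
A rough illustration of applying both suggestions (a sketch only; the volume name
and cluster level are placeholders, so double-check the option names against your
oVirt/gluster versions):

  # on the engine machine, enable gfapi for the cluster compatibility level
  engine-config -s LibgfApiSupported=true --cver=4.2
  systemctl restart ovirt-engine
  # on a gluster node, allow client-side caching (only with reliable power backup)
  gluster volume set <volname> network.remote-dio enable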
>
> On Fri, Feb 22, 2019 at 1:39 PM Sandro Bonazzola  wrote:
>>
>> Adding Sahina
>>
>> Il giorno ven 22 feb 2019 alle ore 06:51 Strahil  ha 
>> scritto:
>>>
>>> I have done some testing and it seems that storhaug + ctdb + nfs-ganesha is 
>>> showing decent performance in a 3 node  hyperconverged setup.
>>> Fuse mounts are hitting some kind of limit when mounting gluster -3.12.15  
>>> volumes.
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>> ___
>>> Users mailing list -- users@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct: 
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives: 
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/7U5J4KYDJJS4W3BE2KEIR67NU3532XGY/
>>
>>
>>
>> -- 
>>
>> SANDRO BONAZZOLA
>>
>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>
>> Red Hat EMEA
>>
>> sbona...@redhat.com
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JFMZUW7BKOORQ5HGSWPF4UYWXYB3ZH7O/


[ovirt-users] Re: HC : JBOD or RAID5/6 for NVME SSD drives?

2019-02-25 Thread Darrell Budic
I do similar with ZFS. In fact, I have a mix of large multi-drive ZFS volumes 
as single bricks, and a few SSDs with xfs as single bricks in other volumes, 
based on use. 

From what I’ve gathered watching the lists for a while, people with lots of
single bricks (drives) per node encounter higher heal times, while people with
large single-volume bricks (mdadm, hardware raid, ZFS…) get better healing but
maybe suffer a small performance penalty. Seems people like to raid their
spinning disks and use SSDs or NVMes as single-drive bricks in most cases.

Obviously your hardware and use case will drive it, but with NMVes, I’d be 
tempted to use them as single bricks. Raid 1 with them would let you bail one 
and not have to heal gluster, so that would be a bonus, and might get you more 
IOPS to boot. I’d do it if I could afford it ;) The ultimate answer is to test 
it in both configs, including testing healing across them and see what works 
best for you.
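
If you do go the mdadm RAID1 route, the brick prep would look roughly like this
(a sketch with assumed device names, not a recommendation for your exact hardware):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
  pvcreate /dev/md0
  vgcreate gluster_vg_md0 /dev/md0
  lvcreate -l 100%FREE -n gluster_lv_data gluster_vg_md0
  mkfs.xfs -i size=512 /dev/gluster_vg_md0/gluster_lv_data
  mkdir -p /gluster_bricks/data
  mount /dev/gluster_vg_md0/gluster_lv_data /gluster_bricks/data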

> On Feb 25, 2019, at 6:35 AM, Guillaume Pavese 
>  wrote:
> 
> Thanks Jayme,
> 
> We currently use H730 PERC cards on our test cluster but we are not set on 
> anything yet for the production cluster.
> We are indeed worried about losing a drive in JBOD mode. Would setting up a 
> RAID1 of NVME drives with mdadm, and then use that as the JBOD drive for the 
> volume, be a *good* idea? Is that even possible/ something that people do?
> 
> 
> Guillaume Pavese
> Ingénieur Système et Réseau
> Interactiv-Group
> 
> 
> On Sat, Feb 23, 2019 at 2:51 AM Jayme  > wrote:
> Personally I feel like raid on top of GlusterFS is too wasteful.  It would 
> give you a few advantages such as being able to replace a failed drive at 
> raid level vs replacing bricks with Gluster.  
> 
> In my production HCI setup I have three Dell hosts each with two 2Tb SSDs in 
> JBOD.  I find this setup works well for me, but I have not yet run in to any 
> drive failure scenarios. 
> 
> What Perc card do you have in the dell machines?   Jbod is tough with most 
> Perc cards, in many cases to do Jbod you have to fake it using individual 
> raid 0 for each drive.  Only some perc controllers allow true jbod 
> passthrough. 
> 
> On Fri, Feb 22, 2019 at 12:30 PM Guillaume Pavese 
>  > wrote:
> Hi,
> 
> We have been evaluating oVirt HyperConverged for 9 month now with a test 
> cluster of 3 DELL Hosts with Hardware RAID5 on PERC card. 
> We were not impressed with the performance...
> No SSD for LV Cache on these hosts but I tried anyway with LV Cache on a ram 
> device. Perf were almost unchanged.
> 
> It seems that LV Cache is its own source of bugs and problems anyway, so we 
> are thinking going for full NVME drives when buying the production cluster.
> 
> What would the recommandation be in that case, JBOD or RAID?
> 
> Thanks
> 
> Guillaume Pavese
> Ingénieur Système et Réseau
> Interactiv-Group
> ___
> Users mailing list -- users@ovirt.org 
> To unsubscribe send an email to users-le...@ovirt.org 
> 
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/ 
> 
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/ 
> 
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/IODRDUEIZBPT2RMEPWCXBTJUU3LV3JUD/
>  
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KEVWLTZTSKX3AVVUXO46DD3U7DEUNUXE/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ASUDLMRN3GRTUWAUTTJYPKGYLYNF5KPS/


[ovirt-users] Re: Python-SDK4- Issue following links

2019-02-25 Thread Don Dupuis
Joey
I am still not quite getting it. I am trying the code below; where it is
commented out, I have tried different things, but I am not able to update
the name of the object that I have found.

networks_service = connection.system_service().networks_service()
network = networks_service.list(
search='name=ovirtmgmt and datacenter=%s-local' % HOSTNAME) [0]
print ("Network name is %s" % network.name)
print ("Network id is %s" % network.id)
vnics = connection.follow_link(network.vnic_profiles)
#vnicsprofile_service = connection.system_service().vnic_profile_service()
#vnicprofile_service = vnic_profiles_service.vnic_profile_service(vnics.id)
for dev in vnics:
    print ("Dev name is %s" % dev.name)
    #vnicprofile_service.update(types.VnicProfile(
    #   name='%s' % HOSTNAME,
    #   ),
    #)
connection.close()

./update-vnic.py
Network name is ovirtmgmt
Network id is 740cae1f-c49f-4563-877a-5ce173e83be4
Dev name is ovirtmgmt

Thanks
Don
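
In case it helps, a minimal sketch of the update itself (untested; it assumes the
top-level vnic_profiles_service is the right place to rename the profiles found
through network.vnic_profiles, and reuses the values from the script above):

  vnic_profiles_service = connection.system_service().vnic_profiles_service()
  for profile in connection.follow_link(network.vnic_profiles):
      # each profile gets its own service, which exposes update()
      profile_service = vnic_profiles_service.profile_service(profile.id)
      profile_service.update(types.VnicProfile(name='%s' % HOSTNAME))
  connection.close()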

On Mon, Feb 25, 2019 at 12:06 AM Joey Ma  wrote:

> Hi Don,
>
> Please using `network.vnic_profiles` instead of `network.vnicprofiles` as
> the parameter of  `connection.follow_link`.
>
> Regards,
> Joey
>
>
> On Mon, Feb 25, 2019 at 9:22 AM Don Dupuis  wrote:
>
>> Hi
>>
>> I am trying to write some code to update the names of existing
>> vnicprofiles in ovirt-4.2. The problem I am having is trying to follow the
>> links to the vnicprofiles. Below is web info that I am trying to get:
>>
>> > href="/ovirt-engine/api/networks/740cae1f-c49f-4563-877a-5ce173e83be4"
>> id="740cae1f-c49f-4563-877a-5ce173e83be4">ovirtmgmtLOOKING> href="/ovirt-engine/api/networks/740cae1f-c49f-4563-877a-5ce173e83be4/permissions"
>> rel="permissions"/>> href="/ovirt-engine/api/networks/740cae1f-c49f-4563-877a-5ce173e83be4/vnicprofiles"
>> rel="vnicprofiles"/>> href="/ovirt-engine/api/networks/740cae1f-c49f-4563-877a-5ce173e83be4/networklabels"
>> rel="networklabels"/>0falsevm> id="4050"/>> href="/ovirt-engine/api/datacenters/1d00d32b-abdc-43cd-b990-257aaf01d514"
>> id="1d00d32b-abdc-43cd-b990-257aaf01d514"/>
>>
>> Below is the code that I am trying to do the same thing and I want to
>> follow the vnicprofiles link to get to the actual data that I want to
>> change:
>> #!/usr/bin/env python
>>
>> import logging
>> import time
>> import string
>> import sys
>> import os
>> import MySQLdb
>>
>> import ovirtsdk4 as sdk
>> import ovirtsdk4.types as types
>>
>> #logging.basicConfig(level=logging.DEBUG, filename='/tmp/addhost.log')
>>
>> ### Variables to be used ###
>> #NUMANODE = 3
>> #MEM = 20
>> GB = 1024 * 1024 * 1024
>> #MEMORY = MEM * GB
>> GB = 1024 * 1024 * 1024
>> URL = 'https://host/ovirt-engine/api'
>> CAFILE = '/etc/pki/ovirt-engine/ca.pem'
>> USERNAME = 'admin@internal'
>> PASSWORD = 'password'
>> HOSTNAME = 'rvs06'
>>
>> connection = sdk.Connection(
>> url=URL,
>> username=USERNAME,
>> password=PASSWORD,
>> #ca_file='ca.pem',
>> debug='True',
>> insecure='True',
>> #log=logging.getLogger(),
>> )
>>
>> #dcs_service = connection.system_service().data_centers_service()
>> #dc = dcs_service.list(search='cluster=%s-local' % HOSTNAME)[0]
>> #network = dcs_service.service(dc.id).networks_service()
>> networks_service = connection.system_service().networks_service()
>> network = networks_service.list(
>> search='name=ovirtmgmt and datacenter=%s-local' % HOSTNAME) [0]
>> print ("Network name is %s" % network.name)
>> print ("Network id is %s" % network.id)
>> vnic = connection.follow_link(network.vnicprofiles)
>>
>> connection.close()
>>
>> Below is the output of my code:
>>
>> ./update-vnic.py
>> Network name is ovirtmgmt
>> Network id is 740cae1f-c49f-4563-877a-5ce173e83be4
>> Traceback (most recent call last):
>>   File "./update-vnic.py", line 46, in 
>> vnic = connection.follow_link(network.vnicprofiles)
>> AttributeError: 'Network' object has no attribute 'vnicprofiles'
>>
>> The network name and network id is correct. Any help would be appreciated
>> on what I am missing or what I am doing wrong. The actual updating of the
>> name with code isn't written yet as I can't get past this part.
>>
>> Thanks
>>
>> Don
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PRV7MA2X3IS5WSXEEYAY54PPXFIMNRM4/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/365DOLGJQ2OR43QESXLQOQKESAL4YQSB/


[ovirt-users] Re: [oVirt 4.3.1 Test Day] cmdline HE Deployment

2019-02-25 Thread Simone Tiraboschi
On Mon, Feb 25, 2019 at 5:50 PM Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:

> I did that but no success yet.
>
> I see that "Get local VM IP" task tries the following :
>
> virsh -r net-dhcp-leases default | grep -i {{ he_vm_mac_addr }} | awk '{
> print $5 }' | cut -f1 -d'/'
>
>
> However while the task is running, and vm running in qemu, "virsh -r
> net-dhcp-leases default" never returns anything :
>

Yes, I think that libvirt will never provide a DHCP lease since the
appliance OS never correctly completes the boot process.
I'd suggest to connect to the running VM via vnc DURING the boot process
and check what's wrong.
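
One possible way to reach the console (a sketch; HostedEngineLocal is the libvirt
domain name the bootstrap VM uses, and the tunnel assumes VNC is bound to localhost):

  virsh -r list
  virsh -r vncdisplay HostedEngineLocal    # e.g. 127.0.0.1:0 -> port 5900
  # from a workstation:
  ssh -L 5900:127.0.0.1:5900 root@<host>
  remote-viewer vnc://127.0.0.1:5900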


> [root@vs-inf-int-kvm-fr-301-210 ~]# virsh -r net-dhcp-leases default
>  Expiry Time  MAC addressProtocol  IP address
>   HostnameClient ID or DUID
>
> ---
>
> [root@vs-inf-int-kvm-fr-301-210 ~]#
>
>
>
>
> Guillaume Pavese
> Ingénieur Système et Réseau
> Interactiv-Group
>
>
> On Tue, Feb 26, 2019 at 12:33 AM Simone Tiraboschi 
> wrote:
>
>> OK, try this:
>> temporary
>> edit 
>> /usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/02_create_local_vm.yml
>> around line 120
>> and edit tasks "Get local VM IP"
>> changing from "retries: 50" to  "retries: 500" so that you have more time
>> to debug it
>>
>>
>>
>> On Mon, Feb 25, 2019 at 4:20 PM Guillaume Pavese <
>> guillaume.pav...@interactiv-group.com> wrote:
>>
>>> I retried after killing the remaining qemu process and
>>> doing ovirt-hosted-engine-cleanup
>>> The new attempt failed again at the same step. Then after it fails, it
>>> cleans the temporary files (and vm disk) but *qemu still runs!* :
>>>
>>> [ INFO  ] TASK [ovirt.hosted_engine_setup : Get local VM IP]
>>>
>>> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed":
>>> true, "cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:6c:e8:f9
>>> | awk '{ print $5 }' | cut -f1 -d'/'", "delta": "0:00:00.092436", "end":
>>> "2019-02-25 16:09:38.863263", "rc": 0, "start": "2019-02-25
>>> 16:09:38.770827", "stderr": "", "stderr_lines": [], "stdout": "",
>>> "stdout_lines": []}
>>> [ INFO  ] TASK [ovirt.hosted_engine_setup : include_tasks]
>>> [ INFO  ] ok: [localhost]
>>> [ INFO  ] TASK [ovirt.hosted_engine_setup : Remove local vm dir]
>>> [ INFO  ] changed: [localhost]
>>> [ INFO  ] TASK [ovirt.hosted_engine_setup : Remove temporary entry in
>>> /etc/hosts for the local VM]
>>> [ INFO  ] ok: [localhost]
>>> [ INFO  ] TASK [ovirt.hosted_engine_setup : Notify the user about a
>>> failure]
>>> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The
>>> system may not be provisioned according to the playbook results: please
>>> check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
>>> [ ERROR ] Failed to execute stage 'Closing up': Failed executing
>>> ansible-playbook
>>> [ INFO  ] Stage: Clean up
>>> [ INFO  ] Cleaning temporary resources
>>> ...
>>>
>>> [ INFO  ] TASK [ovirt.hosted_engine_setup : Remove local vm dir]
>>> [ INFO  ] ok: [localhost]
>>> [ INFO  ] TASK [ovirt.hosted_engine_setup : Remove temporary entry in
>>> /etc/hosts for the local VM]
>>> [ INFO  ] ok: [localhost]
>>> [ INFO  ] Generating answer file
>>> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20190225161011.conf'
>>> [ INFO  ] Stage: Pre-termination
>>> [ INFO  ] Stage: Termination
>>> [ ERROR ] Hosted Engine deployment failed: please check the logs for the
>>> issue, fix accordingly or re-deploy from scratch.
>>>
>>>
>>>
>>> [root@vs-inf-int-kvm-fr-301-210 ~]# ps aux | grep qemu
>>> root  4021  0.0  0.0  24844  1788 ?Ss   févr.22   0:00
>>> /usr/bin/qemu-ga --method=virtio-serial
>>> --path=/dev/virtio-ports/org.qemu.guest_agent.0
>>> --blacklist=guest-file-open,guest-file-close,guest-file-read,guest-file-write,guest-file-seek,guest-file-flush,guest-exec,guest-exec-status
>>> -F/etc/qemu-ga/fsfreeze-hook
>>> qemu 26463 22.9  4.8 17684512 1088844 ?Sl   16:01   3:09
>>> /usr/libexec/qemu-kvm -name guest=HostedEngineLocal,debug-threads=on -S
>>> -object
>>> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/master-key.aes
>>> -machine pc-i440fx-rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off -cpu
>>> Haswell-noTSX,+kvmclock -m 16384 -realtime mlock=off -smp
>>> 4,sockets=4,cores=1,threads=1 -uuid 316eca5f-81de-4a0b-af1f-58f910402a8e
>>> -no-user-config -nodefaults -chardev
>>> socket,id=charmonitor,fd=27,server,nowait -mon
>>> chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
>>> -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot
>>> menu=off,strict=on -device
>>> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
>>> file=/var/tmp/localvmdRIozH/images/15023c8a-e3a7-4851-a97d-3b90996b423b/07fdcff3-11ce-4f7c-af05-0a878593e78e,format=qcow2,if=none,id=drive-virtio-disk0
>>> 

[ovirt-users] Re: [oVirt 4.3.1 Test Day] cmdline HE Deployment

2019-02-25 Thread Guillaume Pavese
I did that but no success yet.

I see that "Get local VM IP" task tries the following :

virsh -r net-dhcp-leases default | grep -i {{ he_vm_mac_addr }} | awk '{
print $5 }' | cut -f1 -d'/'


However while the task is running, and vm running in qemu, "virsh -r
net-dhcp-leases default" never returns anything :

[root@vs-inf-int-kvm-fr-301-210 ~]# virsh -r net-dhcp-leases default
 Expiry Time  MAC addressProtocol  IP address
  HostnameClient ID or DUID
---

[root@vs-inf-int-kvm-fr-301-210 ~]#




Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group


On Tue, Feb 26, 2019 at 12:33 AM Simone Tiraboschi 
wrote:

> OK, try this:
> temporary
> edit 
> /usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/02_create_local_vm.yml
> around line 120
> and edit tasks "Get local VM IP"
> changing from "retries: 50" to  "retries: 500" so that you have more time
> to debug it
>
>
>
> On Mon, Feb 25, 2019 at 4:20 PM Guillaume Pavese <
> guillaume.pav...@interactiv-group.com> wrote:
>
>> I retried after killing the remaining qemu process and
>> doing ovirt-hosted-engine-cleanup
>> The new attempt failed again at the same step. Then after it fails, it
>> cleans the temporary files (and vm disk) but *qemu still runs!* :
>>
>> [ INFO  ] TASK [ovirt.hosted_engine_setup : Get local VM IP]
>>
>> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed":
>> true, "cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:6c:e8:f9
>> | awk '{ print $5 }' | cut -f1 -d'/'", "delta": "0:00:00.092436", "end":
>> "2019-02-25 16:09:38.863263", "rc": 0, "start": "2019-02-25
>> 16:09:38.770827", "stderr": "", "stderr_lines": [], "stdout": "",
>> "stdout_lines": []}
>> [ INFO  ] TASK [ovirt.hosted_engine_setup : include_tasks]
>> [ INFO  ] ok: [localhost]
>> [ INFO  ] TASK [ovirt.hosted_engine_setup : Remove local vm dir]
>> [ INFO  ] changed: [localhost]
>> [ INFO  ] TASK [ovirt.hosted_engine_setup : Remove temporary entry in
>> /etc/hosts for the local VM]
>> [ INFO  ] ok: [localhost]
>> [ INFO  ] TASK [ovirt.hosted_engine_setup : Notify the user about a
>> failure]
>> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The
>> system may not be provisioned according to the playbook results: please
>> check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
>> [ ERROR ] Failed to execute stage 'Closing up': Failed executing
>> ansible-playbook
>> [ INFO  ] Stage: Clean up
>> [ INFO  ] Cleaning temporary resources
>> ...
>>
>> [ INFO  ] TASK [ovirt.hosted_engine_setup : Remove local vm dir]
>> [ INFO  ] ok: [localhost]
>> [ INFO  ] TASK [ovirt.hosted_engine_setup : Remove temporary entry in
>> /etc/hosts for the local VM]
>> [ INFO  ] ok: [localhost]
>> [ INFO  ] Generating answer file
>> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20190225161011.conf'
>> [ INFO  ] Stage: Pre-termination
>> [ INFO  ] Stage: Termination
>> [ ERROR ] Hosted Engine deployment failed: please check the logs for the
>> issue, fix accordingly or re-deploy from scratch.
>>
>>
>>
>> [root@vs-inf-int-kvm-fr-301-210 ~]# ps aux | grep qemu
>> root  4021  0.0  0.0  24844  1788 ?Ss   févr.22   0:00
>> /usr/bin/qemu-ga --method=virtio-serial
>> --path=/dev/virtio-ports/org.qemu.guest_agent.0
>> --blacklist=guest-file-open,guest-file-close,guest-file-read,guest-file-write,guest-file-seek,guest-file-flush,guest-exec,guest-exec-status
>> -F/etc/qemu-ga/fsfreeze-hook
>> qemu 26463 22.9  4.8 17684512 1088844 ?Sl   16:01   3:09
>> /usr/libexec/qemu-kvm -name guest=HostedEngineLocal,debug-threads=on -S
>> -object
>> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/master-key.aes
>> -machine pc-i440fx-rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off -cpu
>> Haswell-noTSX,+kvmclock -m 16384 -realtime mlock=off -smp
>> 4,sockets=4,cores=1,threads=1 -uuid 316eca5f-81de-4a0b-af1f-58f910402a8e
>> -no-user-config -nodefaults -chardev
>> socket,id=charmonitor,fd=27,server,nowait -mon
>> chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
>> -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot
>> menu=off,strict=on -device
>> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
>> file=/var/tmp/localvmdRIozH/images/15023c8a-e3a7-4851-a97d-3b90996b423b/07fdcff3-11ce-4f7c-af05-0a878593e78e,format=qcow2,if=none,id=drive-virtio-disk0
>> -device
>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
>> -drive
>> file=/var/tmp/localvmdRIozH/seed.iso,format=raw,if=none,id=drive-ide0-0-0,readonly=on
>> -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev
>> tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device
>> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:6c:e8:f9,bus=pci.0,addr=0x3
>> -chardev pty,id=charserial0 

[ovirt-users] Re: VM poor iops

2019-02-25 Thread Leo David
Hi,
Is the performance.strict-o-direct=on a mandatory option to avoid data
inconsistency, although it has a pretty big impact on volume iops
performance?
Thank you !
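
For reference, toggling and verifying the option per volume looks roughly like this
(a sketch; replace 'data' with the actual volume name):

  gluster volume get data performance.strict-o-direct
  gluster volume set data performance.strict-o-direct on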




On Fri, Sep 14, 2018, 13:03 Paolo Margara  wrote:

> Hi,
>
> but performance.strict-o-direct is not one of the option enabled by
> gdeploy during installation because it's supposed to give some sort of
> benefit?
>
>
> Paolo
>
> Il 14/09/2018 11:34, Leo David ha scritto:
>
> performance.strict-o-direct:  on
> This was the bloody option that created the botleneck ! It was ON.
> So now i get an average of 17k random writes,  which is not bad at all.
> Below,  the volume options that worked for me:
>
> performance.strict-write-ordering: off
> performance.strict-o-direct: off
> server.event-threads: 4
> client.event-threads: 4
> performance.read-ahead: off
> network.ping-timeout: 30
> performance.quick-read: off
> cluster.eager-lock: enable
> performance.stat-prefetch: on
> performance.low-prio-threads: 32
> network.remote-dio: off
> user.cifs: off
> performance.io-cache: off
> server.allow-insecure: on
> features.shard: on
> transport.address-family: inet
> storage.owner-uid: 36
> storage.owner-gid: 36
> nfs.disable: on
>
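As an aside, a sketch of how such options are applied and verified from the CLI; the volume name "data" is only a placeholder:

gluster volume set data performance.strict-o-direct off
gluster volume set data network.remote-dio off
gluster volume get data performance.strict-o-direct   # check the current value of an option
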
> If any other tweaks can be done,  please let me know.
> Thank you !
>
> Leo
>
>
> On Fri, Sep 14, 2018 at 12:01 PM, Leo David  wrote:
>
>> Hi Everyone,
>> So I have decided to take out all of the gluster volume custom options,
>> and add them one by one while activating/deactivating the storage domain &
>> rebooting one vm after each added option :(
>>
>> The default options that give bad iops (~1-2k) performance are:
>>
>> performance.stat-prefetch on
>> cluster.eager-lock enable
>> performance.io-cache off
>> performance.read-ahead off
>> performance.quick-read off
>> user.cifs off
>> network.ping-timeout 30
>> network.remote-dio off
>> performance.strict-o-direct on
>> performance.low-prio-threads 32
>>
>> After adding only:
>>
>>
>> server.allow-insecure on
>> features.shard on
>> storage.owner-gid 36
>> storage.owner-uid 36
>> transport.address-family inet
>> nfs.disable on
>> The performance increased to 7k-10k iops.
>>
>> The problem is that I don't know if that's sufficient (maybe it can be
>> improved further), or even worse, there might be a chance of running into
>> different volume issues by taking out options that the volume really needs...
>>
>> If I had handy the default options that are applied to volumes as an
>> optimization in a 3-way replica, I think that might help...
>>
>> Any thoughts ?
>>
>> Thank you very much !
>>
>>
>> Leo
>>
>>
>>
>>
>>
>> On Fri, Sep 14, 2018 at 8:54 AM, Leo David  wrote:
>>
>>> Any thoughts on these? Is that UI optimization only a gluster volume
>>> custom configuration? If so, I guess it can be done from the cli, but I am not
>>> aware of the correct optimized parameters of the volume.
>>>
>>>
>>> On Thu, Sep 13, 2018, 18:25 Leo David  wrote:
>>>
 Thank you Jayme. I am trying to do this, but I am getting an error,
 since the volume is replica 1 distribute, and it seems that oVirt expects a
 replica 3 volume.
 Would there be another way to optimize the volume in this situation?


 On Thu, Sep 13, 2018, 17:49 Jayme  wrote:

> I had similar problems until I clicked "optimize volume for vmstore"
> in the admin GUI for each data volume.  I'm not sure if this is what is
> causing your problem here but I'd recommend trying that first.  It is
> supposed to be optimized by default but for some reason my ovirt 4.2
> cockpit deploy did not apply those settings automatically.
>
> On Thu, Sep 13, 2018 at 10:21 AM Leo David  wrote:
>
>> Hi Everyone,
>> I am encountering the following issue on a single instance
>> hyper-converged 4.2 setup.
>> The following fio test was done:
>>
>> fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1
>> --name=test --filename=test --bs=4k --iodepth=64 --size=4G
>> --readwrite=randwrite
>> The results are very poor when doing the test inside of a vm with a
>> preallocated disk on the ssd store: ~2k IOPS
>> Same test done on the oVirt node directly on the mounted ssd_lvm:
>> ~30k IOPS
>> Same test done, this time on the gluster mount path: ~20K IOPS
>>
>> What could be the issue that gives the vms such slow disk performance
>> (2k on ssd!!)?
>> Thank you very much !
>>
>>
>>
>>
>> --
>> Best regards, Leo David
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FCCIZFRWINWWLQSYWRPF6HNKPQMZB2XP/
>>
>
>>
>>
>> --
>> Best 

[ovirt-users] Re: Python-SDK4- Issue following links

2019-02-25 Thread Don Dupuis
Thanks for the clarification Joey.

Don

On Mon, Feb 25, 2019 at 12:06 AM Joey Ma  wrote:

> Hi Don,
>
> Please use `network.vnic_profiles` instead of `network.vnicprofiles` as
> the parameter of `connection.follow_link`.
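A minimal sketch of the corrected lookup, reusing the connection and network objects from the script quoted below (the print statement is illustrative only):

profiles = connection.follow_link(network.vnic_profiles)
for profile in profiles:
    print("vnic profile %s has id %s" % (profile.name, profile.id))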
>
> Regards,
> Joey
>
>
> On Mon, Feb 25, 2019 at 9:22 AM Don Dupuis  wrote:
>
>> Hi
>>
>> I am trying to write some code to update the names of existing
>> vnicprofiles in ovirt-4.2. The problem I am having is trying to follow the
>> links to the vnicprofiles. Below is web info that I am trying to get:
>>
>> > href="/ovirt-engine/api/networks/740cae1f-c49f-4563-877a-5ce173e83be4"
>> id="740cae1f-c49f-4563-877a-5ce173e83be4">ovirtmgmtLOOKING> href="/ovirt-engine/api/networks/740cae1f-c49f-4563-877a-5ce173e83be4/permissions"
>> rel="permissions"/>> href="/ovirt-engine/api/networks/740cae1f-c49f-4563-877a-5ce173e83be4/vnicprofiles"
>> rel="vnicprofiles"/>> href="/ovirt-engine/api/networks/740cae1f-c49f-4563-877a-5ce173e83be4/networklabels"
>> rel="networklabels"/>0falsevm> id="4050"/>> href="/ovirt-engine/api/datacenters/1d00d32b-abdc-43cd-b990-257aaf01d514"
>> id="1d00d32b-abdc-43cd-b990-257aaf01d514"/>
>>
>> Below is the code that I am trying to do the same thing and I want to
>> follow the vnicprofiles link to get to the actual data that I want to
>> change:
>> #!/usr/bin/env python
>>
>> import logging
>> import time
>> import string
>> import sys
>> import os
>> import MySQLdb
>>
>> import ovirtsdk4 as sdk
>> import ovirtsdk4.types as types
>>
>> #logging.basicConfig(level=logging.DEBUG, filename='/tmp/addhost.log')
>>
>> ### Variables to be used ###
>> #NUMANODE = 3
>> #MEM = 20
>> GB = 1024 * 1024 * 1024
>> #MEMORY = MEM * GB
>> GB = 1024 * 1024 * 1024
>> URL = 'https://host/ovirt-engine/api'
>> CAFILE = '/etc/pki/ovirt-engine/ca.pem'
>> USERNAME = 'admin@internal'
>> PASSWORD = 'password'
>> HOSTNAME = 'rvs06'
>>
>> connection = sdk.Connection(
>> url=URL,
>> username=USERNAME,
>> password=PASSWORD,
>> #ca_file='ca.pem',
>> debug='True',
>> insecure='True',
>> #log=logging.getLogger(),
>> )
>>
>> #dcs_service = connection.system_service().data_centers_service()
>> #dc = dcs_service.list(search='cluster=%s-local' % HOSTNAME)[0]
>> #network = dcs_service.service(dc.id).networks_service()
>> networks_service = connection.system_service().networks_service()
>> network = networks_service.list(
>> search='name=ovirtmgmt and datacenter=%s-local' % HOSTNAME) [0]
>> print ("Network name is %s" % network.name)
>> print ("Network id is %s" % network.id)
>> vnic = connection.follow_link(network.vnicprofiles)
>>
>> connection.close()
>>
>> Below is the output of my code:
>>
>> ./update-vnic.py
>> Network name is ovirtmgmt
>> Network id is 740cae1f-c49f-4563-877a-5ce173e83be4
>> Traceback (most recent call last):
>>   File "./update-vnic.py", line 46, in 
>> vnic = connection.follow_link(network.vnicprofiles)
>> AttributeError: 'Network' object has no attribute 'vnicprofiles'
>>
>> The network name and network id are correct. Any help would be appreciated
>> on what I am missing or what I am doing wrong. The actual updating of the
>> name with code isn't written yet as I can't get past this part.
>>
>> Thanks
>>
>> Don
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PRV7MA2X3IS5WSXEEYAY54PPXFIMNRM4/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/R3TBXIX4VJ2Q6UGDHNXRFV3CTFRTCR32/


[ovirt-users] Re: [oVirt 4.3.1 Test Day] cmdline HE Deployment

2019-02-25 Thread Simone Tiraboschi
OK, try this:
temporarily edit
/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/02_create_local_vm.yml
around line 120 and edit the task "Get local VM IP",
changing "retries: 50" to "retries: 500" so that you have more time
to debug it.
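For reference, the task stanza in that file looks roughly like the sketch below (reconstructed from the stock ovirt.hosted_engine_setup role; the exact variable name for the MAC address and the surrounding fields may differ in your version):

- name: Get local VM IP
  shell: virsh -r net-dhcp-leases default | grep -i {{ he_vm_mac_addr }} | awk '{ print $5 }' | cut -f1 -d'/'
  register: local_vm_ip
  until: local_vm_ip.stdout_lines | length >= 1
  retries: 500   # bumped from the default 50 to leave more time for debugging
  delay: 10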



On Mon, Feb 25, 2019 at 4:20 PM Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:

> I retried after killing the remaining qemu process and
> doing ovirt-hosted-engine-cleanup
> The new attempt failed again at the same step. Then after it fails, it
> cleans the temporary files (and vm disk) but *qemu still runs!* :
>
> [ INFO  ] TASK [ovirt.hosted_engine_setup : Get local VM IP]
>
> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true,
> "cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:6c:e8:f9 | awk
> '{ print $5 }' | cut -f1 -d'/'", "delta": "0:00:00.092436", "end":
> "2019-02-25 16:09:38.863263", "rc": 0, "start": "2019-02-25
> 16:09:38.770827", "stderr": "", "stderr_lines": [], "stdout": "",
> "stdout_lines": []}
> [ INFO  ] TASK [ovirt.hosted_engine_setup : include_tasks]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.hosted_engine_setup : Remove local vm dir]
> [ INFO  ] changed: [localhost]
> [ INFO  ] TASK [ovirt.hosted_engine_setup : Remove temporary entry in
> /etc/hosts for the local VM]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.hosted_engine_setup : Notify the user about a
> failure]
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The
> system may not be provisioned according to the playbook results: please
> check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
> [ ERROR ] Failed to execute stage 'Closing up': Failed executing
> ansible-playbook
> [ INFO  ] Stage: Clean up
> [ INFO  ] Cleaning temporary resources
> ...
>
> [ INFO  ] TASK [ovirt.hosted_engine_setup : Remove local vm dir]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [ovirt.hosted_engine_setup : Remove temporary entry in
> /etc/hosts for the local VM]
> [ INFO  ] ok: [localhost]
> [ INFO  ] Generating answer file
> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20190225161011.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
> [ ERROR ] Hosted Engine deployment failed: please check the logs for the
> issue, fix accordingly or re-deploy from scratch.
>
>
>
> [root@vs-inf-int-kvm-fr-301-210 ~]# ps aux | grep qemu
> root  4021  0.0  0.0  24844  1788 ?Ss   févr.22   0:00
> /usr/bin/qemu-ga --method=virtio-serial
> --path=/dev/virtio-ports/org.qemu.guest_agent.0
> --blacklist=guest-file-open,guest-file-close,guest-file-read,guest-file-write,guest-file-seek,guest-file-flush,guest-exec,guest-exec-status
> -F/etc/qemu-ga/fsfreeze-hook
> qemu 26463 22.9  4.8 17684512 1088844 ?Sl   16:01   3:09
> /usr/libexec/qemu-kvm -name guest=HostedEngineLocal,debug-threads=on -S
> -object
> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/master-key.aes
> -machine pc-i440fx-rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off -cpu
> Haswell-noTSX,+kvmclock -m 16384 -realtime mlock=off -smp
> 4,sockets=4,cores=1,threads=1 -uuid 316eca5f-81de-4a0b-af1f-58f910402a8e
> -no-user-config -nodefaults -chardev
> socket,id=charmonitor,fd=27,server,nowait -mon
> chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
> -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot
> menu=off,strict=on -device
> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
> file=/var/tmp/localvmdRIozH/images/15023c8a-e3a7-4851-a97d-3b90996b423b/07fdcff3-11ce-4f7c-af05-0a878593e78e,format=qcow2,if=none,id=drive-virtio-disk0
> -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> -drive
> file=/var/tmp/localvmdRIozH/seed.iso,format=raw,if=none,id=drive-ide0-0-0,readonly=on
> -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev
> tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:6c:e8:f9,bus=pci.0,addr=0x3
> -chardev pty,id=charserial0 -device
> isa-serial,chardev=charserial0,id=serial0 -chardev
> socket,id=charchannel0,fd=31,server,nowait -device
> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0
> -vnc 127.0.0.1:0 -device VGA,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2
> -object rng-random,id=objrng0,filename=/dev/random -device
> virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 -sandbox
> on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny
> -msg timestamp=on
> root 28416  0.0  0.0 112712   980 pts/3S+   16:14   0:00 grep
> --color=auto qemu
>
>
> Before the first Error, while the vm was running for sure and the disk was
> there, I also unsuccessfully tried to connect to it with VNC and got the
> same error I got before :
>
> [root@vs-inf-int-kvm-fr-301-210 ~]# debug1: Connection to port 

[ovirt-users] Re: [oVirt 4.3.1 Test Day] cmdline HE Deployment

2019-02-25 Thread Guillaume Pavese
I retried after killing the remaining qemu process and
doing ovirt-hosted-engine-cleanup
The new attempt failed again at the same step. Then after it fails, it
cleans the temporary files (and vm disk) but *qemu still runs!* :

[ INFO  ] TASK [ovirt.hosted_engine_setup : Get local VM IP]

[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true,
"cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:6c:e8:f9 | awk
'{ print $5 }' | cut -f1 -d'/'", "delta": "0:00:00.092436", "end":
"2019-02-25 16:09:38.863263", "rc": 0, "start": "2019-02-25
16:09:38.770827", "stderr": "", "stderr_lines": [], "stdout": "",
"stdout_lines": []}
[ INFO  ] TASK [ovirt.hosted_engine_setup : include_tasks]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Remove local vm dir]
[ INFO  ] changed: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Remove temporary entry in
/etc/hosts for the local VM]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Notify the user about a failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The
system may not be provisioned according to the playbook results: please
check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing
ansible-playbook
[ INFO  ] Stage: Clean up
[ INFO  ] Cleaning temporary resources
...

[ INFO  ] TASK [ovirt.hosted_engine_setup : Remove local vm dir]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Remove temporary entry in
/etc/hosts for the local VM]
[ INFO  ] ok: [localhost]
[ INFO  ] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20190225161011.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: please check the logs for the
issue, fix accordingly or re-deploy from scratch.



[root@vs-inf-int-kvm-fr-301-210 ~]# ps aux | grep qemu
root  4021  0.0  0.0  24844  1788 ?Ss   févr.22   0:00
/usr/bin/qemu-ga --method=virtio-serial
--path=/dev/virtio-ports/org.qemu.guest_agent.0
--blacklist=guest-file-open,guest-file-close,guest-file-read,guest-file-write,guest-file-seek,guest-file-flush,guest-exec,guest-exec-status
-F/etc/qemu-ga/fsfreeze-hook
qemu 26463 22.9  4.8 17684512 1088844 ?Sl   16:01   3:09
/usr/libexec/qemu-kvm -name guest=HostedEngineLocal,debug-threads=on -S
-object
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/master-key.aes
-machine pc-i440fx-rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off -cpu
Haswell-noTSX,+kvmclock -m 16384 -realtime mlock=off -smp
4,sockets=4,cores=1,threads=1 -uuid 316eca5f-81de-4a0b-af1f-58f910402a8e
-no-user-config -nodefaults -chardev
socket,id=charmonitor,fd=27,server,nowait -mon
chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
-global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot
menu=off,strict=on -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
file=/var/tmp/localvmdRIozH/images/15023c8a-e3a7-4851-a97d-3b90996b423b/07fdcff3-11ce-4f7c-af05-0a878593e78e,format=qcow2,if=none,id=drive-virtio-disk0
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-drive
file=/var/tmp/localvmdRIozH/seed.iso,format=raw,if=none,id=drive-ide0-0-0,readonly=on
-device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev
tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:6c:e8:f9,bus=pci.0,addr=0x3
-chardev pty,id=charserial0 -device
isa-serial,chardev=charserial0,id=serial0 -chardev
socket,id=charchannel0,fd=31,server,nowait -device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0
-vnc 127.0.0.1:0 -device VGA,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2
-object rng-random,id=objrng0,filename=/dev/random -device
virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 -sandbox
on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny
-msg timestamp=on
root 28416  0.0  0.0 112712   980 pts/3S+   16:14   0:00 grep
--color=auto qemu


Before the first Error, while the vm was running for sure and the disk was
there, I also unsuccessfully tried to connect to it with VNC and got the
same error I got before :

[root@vs-inf-int-kvm-fr-301-210 ~]# debug1: Connection to port 5900
forwarding to vs-inf-int-kvm-fr-301-210.hostics.fr port 5900 requested.
debug1: channel 3: new [direct-tcpip]
channel 3: open failed: connect failed: Connection refused
debug1: channel 3: free: direct-tcpip: listening port 5900 for
vs-inf-int-kvm-fr-301-210.hostics.fr port 5900, connect from 127.0.0.1 port
37002 to 127.0.0.1 port 5900, nchannels 4


Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group


On Mon, Feb 25, 2019 at 11:57 PM Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:

> Something was 

[ovirt-users] Re: [oVirt 4.3.1 Test Day] cmdline HE Deployment

2019-02-25 Thread Guillaume Pavese
Something was definitely wrong; as indicated, the qemu process
for guest=HostedEngineLocal was running but the disk file did not exist
anymore...
No surprise I could not connect.

I am retrying


Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group


On Mon, Feb 25, 2019 at 11:15 PM Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:

> It fails too :
> I made sure PermitTunnel=yes in sshd config but when I try to connect to
> the forwarded port I get the following error in the open host ssh
> session :
>
> [gpavese@sheepora-X230 ~]$ ssh -v -L 5900:
> vs-inf-int-kvm-fr-301-210.hostics.fr:5900
> r...@vs-inf-int-kvm-fr-301-210.hostics.fr
> ...
> [root@vs-inf-int-kvm-fr-301-210 ~]#
> debug1: channel 3: free: direct-tcpip: listening port 5900 for
> vs-inf-int-kvm-fr-301-210.hostics.fr port 5900, connect from ::1 port
> 42144 to ::1 port 5900, nchannels 4
> debug1: Connection to port 5900 forwarding to
> vs-inf-int-kvm-fr-301-210.hostics.fr port 5900 requested.
> debug1: channel 3: new [direct-tcpip]
> channel 3: open failed: connect failed: Connection refused
> debug1: channel 3: free: direct-tcpip: listening port 5900 for
> vs-inf-int-kvm-fr-301-210.hostics.fr port 5900, connect from 127.0.0.1
> port 32778 to 127.0.0.1 port 5900, nchannels 4
>
>
> and in journalctl :
>
> févr. 25 14:55:38 vs-inf-int-kvm-fr-301-210.hostics.fr sshd[19595]:
> error: connect_to vs-inf-int-kvm-fr-301-210.hostics.fr port 5900: failed.
>
>
> Guillaume Pavese
> Ingénieur Système et Réseau
> Interactiv-Group
>
>
> On Mon, Feb 25, 2019 at 10:44 PM Simone Tiraboschi 
> wrote:
>
>>
>>
>>
>> On Mon, Feb 25, 2019 at 2:35 PM Guillaume Pavese <
>> guillaume.pav...@interactiv-group.com> wrote:
>>
>>> I made sure of everything and even stopped firewalld but still can't
>>> connect :
>>>
>>> [root@vs-inf-int-kvm-fr-301-210 ~]# cat
>>> /var/run/libvirt/qemu/HostedEngineLocal.xml
>>>  >> *listen='127.0.0.1*'>
>>> >> autoGenerated='no'/>
>>>
>>> [root@vs-inf-int-kvm-fr-301-210 ~]# netstat -pan | grep 59
>>> tcp0  0 127.0.0.1:5900  0.0.0.0:*
>>>  LISTEN  13376/qemu-kvm
>>>
>>
>>
>> I suggest to try ssh tunneling, run
>> ssh -L 5900:vs-inf-int-kvm-fr-301-210.hostics.fr:5900
>> r...@vs-inf-int-kvm-fr-301-210.hostics.fr
>>
>> and then
>> remote-viewer vnc://localhost:5900
>>
>>
>>
>>>
>>> [root@vs-inf-int-kvm-fr-301-210 ~]# systemctl status firewalld.service
>>> ● firewalld.service - firewalld - dynamic firewall daemon
>>>Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled;
>>> vendor preset: enabled)
>>>*Active: inactive (dead)*
>>> *févr. 25 14:24:03 vs-inf-int-kvm-fr-301-210.hostics.fr
>>>  systemd[1]: Stopped firewalld
>>> - dynamic firewall daemon.*
>>>
>>> From my laptop :
>>> [gpavese@sheepora-X230 ~]$ telnet vs-inf-int-kvm-fr-301-210.hostics.fr
>>> *5900*
>>> Trying 10.199.210.11...
>>> [*nothing gets through...*]
>>> ^C
>>>
>>> For making sure :
>>> [gpavese@sheepora-X230 ~]$ telnet vs-inf-int-kvm-fr-301-210.hostics.fr
>>> *9090*
>>> Trying 10.199.210.11...
>>> *Connected* to vs-inf-int-kvm-fr-301-210.hostics.fr.
>>> Escape character is '^]'.
>>>
>>>
>>>
>>>
>>>
>>> Guillaume Pavese
>>> Ingénieur Système et Réseau
>>> Interactiv-Group
>>>
>>>
>>> On Mon, Feb 25, 2019 at 10:24 PM Parth Dhanjal 
>>> wrote:
>>>
 Hey!

 You can check under /var/run/libvirt/qemu/HostedEngine.xml
 Search for 'vnc'
 From there you can look up the port on which the HE VM is available and
 connect to the same.


 On Mon, Feb 25, 2019 at 6:47 PM Guillaume Pavese <
 guillaume.pav...@interactiv-group.com> wrote:

> 1) I am running in a Nested env, but under libvirt/kvm on remote
> Centos 7.4 Hosts
>
> Please advise how to connect with VNC to the local HE vm. I see it's
> running, but this is on a remote host, not my local machine :
> qemu 13376  100  3.7 17679424 845216 ? Sl   12:46  85:08
> /usr/libexec/qemu-kvm -name guest=HostedEngineLocal,debug-threads=on -S
> -object
> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/master-key.aes
> -machine pc-i440fx-rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off -cpu
> Haswell-noTSX,+kvmclock -m 16384 -realtime mlock=off -smp
> 4,sockets=4,cores=1,threads=1 -uuid 6fe7c1c3-ea93-4343-a385-0d9e14bb563a
> -no-user-config -nodefaults -chardev
> socket,id=charmonitor,fd=27,server,nowait -mon
> chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
> -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot
> menu=off,strict=on -device
> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
> file=/var/tmp/localvmgmyYik/images/15023c8a-e3a7-4851-a97d-3b90996b423b/07fdcff3-11ce-4f7c-af05-0a878593e78e,format=qcow2,if=none,id=drive-virtio-disk0
> -device
> 

[ovirt-users] Re: [oVirt 4.3.1 Test Day] cmdline HE Deployment

2019-02-25 Thread Guillaume Pavese
It fails too :
I made sure PermitTunnel=yes in sshd config but when I try to connect to
the forwarded port I get the following error in the open host ssh
session :

[gpavese@sheepora-X230 ~]$ ssh -v -L 5900:
vs-inf-int-kvm-fr-301-210.hostics.fr:5900
r...@vs-inf-int-kvm-fr-301-210.hostics.fr
...
[root@vs-inf-int-kvm-fr-301-210 ~]#
debug1: channel 3: free: direct-tcpip: listening port 5900 for
vs-inf-int-kvm-fr-301-210.hostics.fr port 5900, connect from ::1 port 42144
to ::1 port 5900, nchannels 4
debug1: Connection to port 5900 forwarding to
vs-inf-int-kvm-fr-301-210.hostics.fr port 5900 requested.
debug1: channel 3: new [direct-tcpip]
channel 3: open failed: connect failed: Connection refused
debug1: channel 3: free: direct-tcpip: listening port 5900 for
vs-inf-int-kvm-fr-301-210.hostics.fr port 5900, connect from 127.0.0.1 port
32778 to 127.0.0.1 port 5900, nchannels 4


and in journalctl :

févr. 25 14:55:38 vs-inf-int-kvm-fr-301-210.hostics.fr sshd[19595]: error:
connect_to vs-inf-int-kvm-fr-301-210.hostics.fr port 5900: failed.
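One hedged observation: the netstat output quoted further below shows qemu listening only on 127.0.0.1:5900 on the hypervisor, so a forward whose destination is the host's public name will be refused; pointing the tunnel at the loopback address instead may work, e.g.:

ssh -L 5900:127.0.0.1:5900 root@vs-inf-int-kvm-fr-301-210.hostics.fr
remote-viewer vnc://localhost:5900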


Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group


On Mon, Feb 25, 2019 at 10:44 PM Simone Tiraboschi 
wrote:

>
>
>
> On Mon, Feb 25, 2019 at 2:35 PM Guillaume Pavese <
> guillaume.pav...@interactiv-group.com> wrote:
>
>> I made sure of everything and even stopped firewalld but still can't
>> connect :
>>
>> [root@vs-inf-int-kvm-fr-301-210 ~]# cat
>> /var/run/libvirt/qemu/HostedEngineLocal.xml
>>  > *listen='127.0.0.1*'>
>> > autoGenerated='no'/>
>>
>> [root@vs-inf-int-kvm-fr-301-210 ~]# netstat -pan | grep 59
>> tcp0  0 127.0.0.1:5900  0.0.0.0:*
>>  LISTEN  13376/qemu-kvm
>>
>
>
> I suggest to try ssh tunneling, run
> ssh -L 5900:vs-inf-int-kvm-fr-301-210.hostics.fr:5900
> r...@vs-inf-int-kvm-fr-301-210.hostics.fr
>
> and then
> remote-viewer vnc://localhost:5900
>
>
>
>>
>> [root@vs-inf-int-kvm-fr-301-210 ~]# systemctl status firewalld.service
>> ● firewalld.service - firewalld - dynamic firewall daemon
>>Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled;
>> vendor preset: enabled)
>>*Active: inactive (dead)*
>> *févr. 25 14:24:03 vs-inf-int-kvm-fr-301-210.hostics.fr
>>  systemd[1]: Stopped firewalld
>> - dynamic firewall daemon.*
>>
>> From my laptop :
>> [gpavese@sheepora-X230 ~]$ telnet vs-inf-int-kvm-fr-301-210.hostics.fr
>> *5900*
>> Trying 10.199.210.11...
>> [*nothing gets through...*]
>> ^C
>>
>> For making sure :
>> [gpavese@sheepora-X230 ~]$ telnet vs-inf-int-kvm-fr-301-210.hostics.fr
>> *9090*
>> Trying 10.199.210.11...
>> *Connected* to vs-inf-int-kvm-fr-301-210.hostics.fr.
>> Escape character is '^]'.
>>
>>
>>
>>
>>
>> Guillaume Pavese
>> Ingénieur Système et Réseau
>> Interactiv-Group
>>
>>
>> On Mon, Feb 25, 2019 at 10:24 PM Parth Dhanjal  wrote:
>>
>>> Hey!
>>>
>>> You can check under /var/run/libvirt/qemu/HostedEngine.xml
>>> Search for 'vnc'
>>> From there you can look up the port on which the HE VM is available and
>>> connect to the same.
>>>
>>>
>>> On Mon, Feb 25, 2019 at 6:47 PM Guillaume Pavese <
>>> guillaume.pav...@interactiv-group.com> wrote:
>>>
 1) I am running in a Nested env, but under libvirt/kvm on remote Centos
 7.4 Hosts

 Please advise how to connect with VNC to the local HE vm. I see it's
 running, but this is on a remote host, not my local machine :
 qemu 13376  100  3.7 17679424 845216 ? Sl   12:46  85:08
 /usr/libexec/qemu-kvm -name guest=HostedEngineLocal,debug-threads=on -S
 -object
 secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/master-key.aes
 -machine pc-i440fx-rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off -cpu
 Haswell-noTSX,+kvmclock -m 16384 -realtime mlock=off -smp
 4,sockets=4,cores=1,threads=1 -uuid 6fe7c1c3-ea93-4343-a385-0d9e14bb563a
 -no-user-config -nodefaults -chardev
 socket,id=charmonitor,fd=27,server,nowait -mon
 chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
 -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot
 menu=off,strict=on -device
 virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
 file=/var/tmp/localvmgmyYik/images/15023c8a-e3a7-4851-a97d-3b90996b423b/07fdcff3-11ce-4f7c-af05-0a878593e78e,format=qcow2,if=none,id=drive-virtio-disk0
 -device
 virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
 -drive
 file=/var/tmp/localvmgmyYik/seed.iso,format=raw,if=none,id=drive-ide0-0-0,readonly=on
 -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev
 tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device
 virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:3e:fe:28,bus=pci.0,addr=0x3
 -chardev pty,id=charserial0 -device
 isa-serial,chardev=charserial0,id=serial0 -chardev
 socket,id=charchannel0,fd=31,server,nowait -device
 

[ovirt-users] Re: HostedEngine Unreachable

2019-02-25 Thread Simone Tiraboschi
On Mon, Feb 25, 2019 at 2:53 PM Sakhi Hadebe  wrote:

> Hi  Simone,
>
> The name resolution is working fine.
>
> Running:
> getent ahosts engine.sanren.ac.za
>
> gives the same output:
>
> 192.168.x.xSTREAM engine.sanren.ac.za
> 192.168.x.xDGRAM
> 192.168.x.xRAW
>  The IP address being the engine IP Address.
>
> The hosts are on the same subnet as the HostedEngine. I can ping the hosts
> from each other. There is a NATting device that NATs the private IP
> addresses to public IP addresses. From the hosts I can reach the internet,
> but I can't ping from outside (security measure). We are also using this
> device to VPN into the cluster.
>
> *[root@gohan ~]# virsh -r list*
> * IdName   State*
> **
> * 13HostedEngine   running*
>
> *[root@gohan ~]# virsh -r dumpxml HostedEngine | grep -i tlsPort
>  listen='192.168.x.x' passwdValidTo='1970-01-01T00:00:01'>*
>
> *[root@gohan ~]# virsh -r vncdisplay HostedEngine*
> *192.168.x.x:0*
>
> *[root@gohan ~]# hosted-engine --add-console-password*
> *Enter password:*
>
> *You can now connect the hosted-engine VM with VNC at 192.168.x.x:5900*
>
> VNCing with the above gives me a blank black screen and a cursor (see
> attached). I can't do anything on that screen.
>

OK, I can suggest to power off the engine VM with hosted-engine
--vm-poweroff and then start it again with hosted-engine --vm-start in
order to follow the whole boot process over VNC.
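As a sketch of the sequence, run on the host currently hosting the engine VM:

hosted-engine --vm-poweroff
hosted-engine --vm-start
hosted-engine --add-console-password   # then reconnect over VNC to watch the whole boot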


>
>
> Please help
>
>
>
> On Mon, Feb 25, 2019 at 11:34 AM Simone Tiraboschi 
> wrote:
>
>>
>>
>> On Mon, Feb 25, 2019 at 10:27 AM Sakhi Hadebe  wrote:
>>
>>> Hi Simone,
>>>
>>> Thank you for your response. Executing the command below, gives this:
>>>
>>> [root@ovirt-host]# curl http://$(grep fqdn 
>>> /etc/ovirt-hosted-engine/hosted-engine.conf
>>> | cut -d= -f2)/ovirt-engine/services/health
>>> curl: (7) Failed to connect to engine.sanren.ac.za:80; No route to host
>>>
>>> I tried to enable http traffic on the ovirt-host, but the error persists
>>>
>>
>> Run, on your hosts,
>> getent ahosts engine.sanren.ac.za
>>
>> and ensure that it got resolved as you wish.
>> Fix name resolution and routing on your hosts in a coherent manner.
>>
>>
>
> --
> Regards,
> Sakhi Hadebe
>
> Engineer: South African National Research Network (SANReN)Competency Area, 
> Meraka, CSIR
>
> Tel:   +27 12 841 2308 <+27128414213>
> Fax:   +27 12 841 4223 <+27128414223>
> Cell:  +27 71 331 9622 <+27823034657>
> Email: sa...@sanren.ac.za 
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DBF45KOFSIHT7ZGSX5HMDYIP5Y22MOYG/


[ovirt-users] Re: [oVirt 4.3.1 Test Day] cmdline HE Deployment

2019-02-25 Thread Parth Dhanjal
Hey!

You can check under /var/run/libvirt/qemu/HostedEngine.xml
Search for 'vnc'
From there you can look up the port on which the HE VM is available and
connect to the same.
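As a sketch, either of these pulls the VNC address and port out, assuming the domain is named HostedEngine:

grep -i "graphics type='vnc'" /var/run/libvirt/qemu/HostedEngine.xml
virsh -r vncdisplay HostedEngine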


On Mon, Feb 25, 2019 at 6:47 PM Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:

> 1) I am running in a Nested env, but under libvirt/kvm on remote Centos
> 7.4 Hosts
>
> Please advise how to connect with VNC to the local HE vm. I see it's
> running, but this is on a remote host, not my local machine :
> qemu 13376  100  3.7 17679424 845216 ? Sl   12:46  85:08
> /usr/libexec/qemu-kvm -name guest=HostedEngineLocal,debug-threads=on -S
> -object
> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/master-key.aes
> -machine pc-i440fx-rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off -cpu
> Haswell-noTSX,+kvmclock -m 16384 -realtime mlock=off -smp
> 4,sockets=4,cores=1,threads=1 -uuid 6fe7c1c3-ea93-4343-a385-0d9e14bb563a
> -no-user-config -nodefaults -chardev
> socket,id=charmonitor,fd=27,server,nowait -mon
> chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
> -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot
> menu=off,strict=on -device
> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
> file=/var/tmp/localvmgmyYik/images/15023c8a-e3a7-4851-a97d-3b90996b423b/07fdcff3-11ce-4f7c-af05-0a878593e78e,format=qcow2,if=none,id=drive-virtio-disk0
> -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> -drive
> file=/var/tmp/localvmgmyYik/seed.iso,format=raw,if=none,id=drive-ide0-0-0,readonly=on
> -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev
> tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:3e:fe:28,bus=pci.0,addr=0x3
> -chardev pty,id=charserial0 -device
> isa-serial,chardev=charserial0,id=serial0 -chardev
> socket,id=charchannel0,fd=31,server,nowait -device
> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0
> *-vnc 127.0.0.1:0  -device 
> VGA*,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2
> -object rng-random,id=objrng0,filename=/dev/random -device
> virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 -sandbox
> on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny
> -msg timestamp=on
>
>
> 2) [root@vs-inf-int-kvm-fr-301-210 ~]# cat
> /etc/libvirt/qemu/networks/default.xml
> 
>
> 
>   default
>   ba7bbfc8-28b8-459e-a42d-c2d6218e2cb6
>   
>   
>   
>   
> 
>   
> 
>   
> 
> You have new mail in /var/spool/mail/root
> [root@vs-inf-int-kvm-fr-301-210 ~]
>
>
>
> Guillaume Pavese
> Ingénieur Système et Réseau
> Interactiv-Group
>
>
> On Mon, Feb 25, 2019 at 9:57 PM Simone Tiraboschi 
> wrote:
>
>>
>>
>> On Mon, Feb 25, 2019 at 1:14 PM Guillaume Pavese <
>> guillaume.pav...@interactiv-group.com> wrote:
>>
>>> He deployment with "hosted-engine --deploy" fails at TASK
>>> [ovirt.hosted_engine_setup : Get local VM IP]
>>>
>>> See following Error :
>>>
>>> 2019-02-25 12:46:50,154+0100 INFO
>>> otopi.ovirt_hosted_engine_setup.ansible_utils
>>> ansible_utils._process_output:109 TASK [ovirt.hosted_engine_setup : Get
>>> local VM IP]
>>> 2019-02-25 12:55:26,823+0100 DEBUG
>>> otopi.ovirt_hosted_engine_setup.ansible_utils
>>> ansible_utils._process_output:103 {u'_ansible_parsed': True,
>>> u'stderr_lines': [], u'cmd': u"virsh -r net-dhcp-leases default | grep -i 00
>>> :16:3e:3e:fe:28 | awk '{ print $5 }' | cut -f1 -d'/'", u'end':
>>> u'2019-02-25 12:55:26.666925', u'_ansible_no_log': False, u'stdout': u'',
>>> u'changed': True, u'invocation': {u'module_args': {u'warn': True,
>>> u'executable':
>>> None, u'_uses_shell': True, u'_raw_params': u"virsh -r net-dhcp-leases
>>> default | grep -i 00:16:3e:3e:fe:28 | awk '{ print $5 }' | cut -f1 -d'/'",
>>> u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'std
>>> in': None}}, u'start': u'2019-02-25 12:55:26.584686', u'attempts': 50,
>>> u'stderr': u'', u'rc': 0, u'delta': u'0:00:00.082239', u'stdout_lines': []}
>>> 2019-02-25 12:55:26,924+0100 ERROR
>>> otopi.ovirt_hosted_engine_setup.ansible_utils
>>> ansible_utils._process_output:107 fatal: [localhost]: FAILED! =>
>>> {"attempts": 50, "changed": true, "cmd": "virsh -r net-dhcp-leases default
>>> | grep -i 00:16:3e:3e:fe:28 | awk '{ print $5 }' | cut -f1 -d'/'", "delta":
>>> "0:00:00.082239", "end": "2019-02-25 12:55:26.666925", "rc": 0, "start":
>>> "2019-02-25 12:55:26.584686", "stderr": "", "stderr_lines": [], "stdout":
>>> "", "stdout_lines": []}
>>>
>>
>> Here we are just waiting for the bootstrap engine VM to fetch an IP
>> address from the default libvirt network over DHCP, but in your case it never
>> happened.
>> Possible issues: something went wrong in the bootstrap process for the
>> engine VM or the default libvirt network is not correctly configured.
>>
>> 1. can you try to reach the engine VM 

[ovirt-users] Re: [oVirt 4.3.1 Test Day] cmdline HE Deployment

2019-02-25 Thread Simone Tiraboschi
On Mon, Feb 25, 2019 at 2:14 PM Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:

> 1) I am running in a Nested env, but under libvirt/kvm on remote Centos
> 7.4 Hosts
>
> Please advise how to connect with VNC to the local HE vm. I see it's
> running, but this is on a remote host, not my local machine :
> qemu 13376  100  3.7 17679424 845216 ? Sl   12:46  85:08
> /usr/libexec/qemu-kvm -name guest=HostedEngineLocal,debug-threads=on -S
> -object
> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/master-key.aes
> -machine pc-i440fx-rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off -cpu
> Haswell-noTSX,+kvmclock -m 16384 -realtime mlock=off -smp
> 4,sockets=4,cores=1,threads=1 -uuid 6fe7c1c3-ea93-4343-a385-0d9e14bb563a
> -no-user-config -nodefaults -chardev
> socket,id=charmonitor,fd=27,server,nowait -mon
> chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
> -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot
> menu=off,strict=on -device
> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
> file=/var/tmp/localvmgmyYik/images/15023c8a-e3a7-4851-a97d-3b90996b423b/07fdcff3-11ce-4f7c-af05-0a878593e78e,format=qcow2,if=none,id=drive-virtio-disk0
> -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> -drive
> file=/var/tmp/localvmgmyYik/seed.iso,format=raw,if=none,id=drive-ide0-0-0,readonly=on
> -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev
> tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:3e:fe:28,bus=pci.0,addr=0x3
> -chardev pty,id=charserial0 -device
> isa-serial,chardev=charserial0,id=serial0 -chardev
> socket,id=charchannel0,fd=31,server,nowait -device
> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0
> *-vnc 127.0.0.1:0  -device 
> VGA*,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2
> -object rng-random,id=objrng0,filename=/dev/random -device
> virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 -sandbox
> on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny
> -msg timestamp=on
>

From your laptop, run
remote-viewer vnc://:5900

You should eventually open the VNC port on firewalld if it is still not open.
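If the port does need opening, a minimal sketch on the host:

firewall-cmd --add-port=5900/tcp
firewall-cmd --add-port=5900/tcp --permanent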


>
>
> 2) [root@vs-inf-int-kvm-fr-301-210 ~]# cat
> /etc/libvirt/qemu/networks/default.xml
> 
>
> 
>   default
>   ba7bbfc8-28b8-459e-a42d-c2d6218e2cb6
>   
>   
>   
>   
> 
>   
> 
>   
> 
> You have new mail in /var/spool/mail/root
> [root@vs-inf-int-kvm-fr-301-210 ~]
>

OK, this looks fine.


>
>
>
> Guillaume Pavese
> Ingénieur Système et Réseau
> Interactiv-Group
>
>
> On Mon, Feb 25, 2019 at 9:57 PM Simone Tiraboschi 
> wrote:
>
>>
>>
>> On Mon, Feb 25, 2019 at 1:14 PM Guillaume Pavese <
>> guillaume.pav...@interactiv-group.com> wrote:
>>
>>> He deployment with "hosted-engine --deploy" fails at TASK
>>> [ovirt.hosted_engine_setup : Get local VM IP]
>>>
>>> See following Error :
>>>
>>> 2019-02-25 12:46:50,154+0100 INFO
>>> otopi.ovirt_hosted_engine_setup.ansible_utils
>>> ansible_utils._process_output:109 TASK [ovirt.hosted_engine_setup : Get
>>> local VM IP]
>>> 2019-02-25 12:55:26,823+0100 DEBUG
>>> otopi.ovirt_hosted_engine_setup.ansible_utils
>>> ansible_utils._process_output:103 {u'_ansible_parsed': True,
>>> u'stderr_lines': [], u'cmd': u"virsh -r net-dhcp-leases default | grep -i 00
>>> :16:3e:3e:fe:28 | awk '{ print $5 }' | cut -f1 -d'/'", u'end':
>>> u'2019-02-25 12:55:26.666925', u'_ansible_no_log': False, u'stdout': u'',
>>> u'changed': True, u'invocation': {u'module_args': {u'warn': True,
>>> u'executable':
>>> None, u'_uses_shell': True, u'_raw_params': u"virsh -r net-dhcp-leases
>>> default | grep -i 00:16:3e:3e:fe:28 | awk '{ print $5 }' | cut -f1 -d'/'",
>>> u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'std
>>> in': None}}, u'start': u'2019-02-25 12:55:26.584686', u'attempts': 50,
>>> u'stderr': u'', u'rc': 0, u'delta': u'0:00:00.082239', u'stdout_lines': []}
>>> 2019-02-25 12:55:26,924+0100 ERROR
>>> otopi.ovirt_hosted_engine_setup.ansible_utils
>>> ansible_utils._process_output:107 fatal: [localhost]: FAILED! =>
>>> {"attempts": 50, "changed": true, "cmd": "virsh -r net-dhcp-leases default
>>> | grep -i 00:16:3e:3e:fe:28 | awk '{ print $5 }' | cut -f1 -d'/'", "delta":
>>> "0:00:00.082239", "end": "2019-02-25 12:55:26.666925", "rc": 0, "start":
>>> "2019-02-25 12:55:26.584686", "stderr": "", "stderr_lines": [], "stdout":
>>> "", "stdout_lines": []}
>>>
>>
>> Here we are just waiting for the bootstrap engine VM to fetch an IP
>> address from the default libvirt network over DHCP, but in your case it never
>> happened.
>> Possible issues: something went wrong in the bootstrap process for the
>> engine VM or the default libvirt network is not correctly configured.
>>
>> 1. can you try to reach the engine VM via VNC and check what's 

[ovirt-users] Re: [oVirt 4.3.1 Test Day] cmdline HE Deployment

2019-02-25 Thread Guillaume Pavese
1) I am running in a Nested env, but under libvirt/kvm on remote Centos 7.4
Hosts

Please advise how to connect with VNC to the local HE vm. I see it's
running, but this is on a remote host, not my local machine :
qemu 13376  100  3.7 17679424 845216 ? Sl   12:46  85:08
/usr/libexec/qemu-kvm -name guest=HostedEngineLocal,debug-threads=on -S
-object
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/master-key.aes
-machine pc-i440fx-rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off -cpu
Haswell-noTSX,+kvmclock -m 16384 -realtime mlock=off -smp
4,sockets=4,cores=1,threads=1 -uuid 6fe7c1c3-ea93-4343-a385-0d9e14bb563a
-no-user-config -nodefaults -chardev
socket,id=charmonitor,fd=27,server,nowait -mon
chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
-global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot
menu=off,strict=on -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
file=/var/tmp/localvmgmyYik/images/15023c8a-e3a7-4851-a97d-3b90996b423b/07fdcff3-11ce-4f7c-af05-0a878593e78e,format=qcow2,if=none,id=drive-virtio-disk0
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-drive
file=/var/tmp/localvmgmyYik/seed.iso,format=raw,if=none,id=drive-ide0-0-0,readonly=on
-device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev
tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:3e:fe:28,bus=pci.0,addr=0x3
-chardev pty,id=charserial0 -device
isa-serial,chardev=charserial0,id=serial0 -chardev
socket,id=charchannel0,fd=31,server,nowait -device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0
*-vnc 127.0.0.1:0  -device
VGA*,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2
-object rng-random,id=objrng0,filename=/dev/random -device
virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 -sandbox
on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny
-msg timestamp=on


2) [root@vs-inf-int-kvm-fr-301-210 ~]# cat
/etc/libvirt/qemu/networks/default.xml



  default
  ba7bbfc8-28b8-459e-a42d-c2d6218e2cb6
  
  
  
  

  

  

You have new mail in /var/spool/mail/root
[root@vs-inf-int-kvm-fr-301-210 ~]



Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group


On Mon, Feb 25, 2019 at 9:57 PM Simone Tiraboschi 
wrote:

>
>
> On Mon, Feb 25, 2019 at 1:14 PM Guillaume Pavese <
> guillaume.pav...@interactiv-group.com> wrote:
>
>> He deployment with "hosted-engine --deploy" fails at TASK
>> [ovirt.hosted_engine_setup : Get local VM IP]
>>
>> See following Error :
>>
>> 2019-02-25 12:46:50,154+0100 INFO
>> otopi.ovirt_hosted_engine_setup.ansible_utils
>> ansible_utils._process_output:109 TASK [ovirt.hosted_engine_setup : Get
>> local VM IP]
>> 2019-02-25 12:55:26,823+0100 DEBUG
>> otopi.ovirt_hosted_engine_setup.ansible_utils
>> ansible_utils._process_output:103 {u'_ansible_parsed': True,
>> u'stderr_lines': [], u'cmd': u"virsh -r net-dhcp-leases default | grep -i 00
>> :16:3e:3e:fe:28 | awk '{ print $5 }' | cut -f1 -d'/'", u'end':
>> u'2019-02-25 12:55:26.666925', u'_ansible_no_log': False, u'stdout': u'',
>> u'changed': True, u'invocation': {u'module_args': {u'warn': True,
>> u'executable':
>> None, u'_uses_shell': True, u'_raw_params': u"virsh -r net-dhcp-leases
>> default | grep -i 00:16:3e:3e:fe:28 | awk '{ print $5 }' | cut -f1 -d'/'",
>> u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'std
>> in': None}}, u'start': u'2019-02-25 12:55:26.584686', u'attempts': 50,
>> u'stderr': u'', u'rc': 0, u'delta': u'0:00:00.082239', u'stdout_lines': []}
>> 2019-02-25 12:55:26,924+0100 ERROR
>> otopi.ovirt_hosted_engine_setup.ansible_utils
>> ansible_utils._process_output:107 fatal: [localhost]: FAILED! =>
>> {"attempts": 50, "changed": true, "cmd": "virsh -r net-dhcp-leases default
>> | grep -i 00:16:3e:3e:fe:28 | awk '{ print $5 }' | cut -f1 -d'/'", "delta":
>> "0:00:00.082239", "end": "2019-02-25 12:55:26.666925", "rc": 0, "start":
>> "2019-02-25 12:55:26.584686", "stderr": "", "stderr_lines": [], "stdout":
>> "", "stdout_lines": []}
>>
>
> Here we are just waiting for the bootstrap engine VM to fetch an IP
> address from the default libvirt network over DHCP, but in your case it never
> happened.
> Possible issues: something went wrong in the bootstrap process for the
> engine VM or the default libvirt network is not correctly configured.
>
> 1. can you try to reach the engine VM via VNC and check what's happening
> there? (another question, are you running it nested? AFAIK it will not work
> if nested over ESXi)
> 2. can you please share the output of
> cat /etc/libvirt/qemu/networks/default.xml
>
>
>>
>> Guillaume Pavese
>> Ingénieur Système et Réseau
>> Interactiv-Group
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org

[ovirt-users] Re: Gluster - performance.strict-o-direct and other performance tuning in different storage backends

2019-02-25 Thread Leo David
Thank you Krutika,
Does it mean that by turning that setting off I have a chance of running into data
corruption?
It seems to have a pretty big impact on vm performance...

On Mon, Feb 25, 2019, 12:40 Krutika Dhananjay  wrote:

> Gluster's write-behind translator by default buffers writes for flushing
> to disk later, *even* when the file is opened with O_DIRECT flag. Not
> honoring O_DIRECT could mean a reader from another client could be READing
> stale data from bricks because some WRITEs may not yet be flushed to disk.
> performance.strict-o-direct=on is one of the options needed to truly honor
> O_DIRECT behavior which is what qemu uses by virtue of cache=none option
> being set (the other being network.remote-dio=off) on the vm(s)
>
> -Krutika
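As a sketch, the pair of settings described above would be applied like this, with "data" as a placeholder volume name:

gluster volume set data performance.strict-o-direct on
gluster volume set data network.remote-dio off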
>
>
> On Mon, Feb 25, 2019 at 2:50 PM Leo David  wrote:
>
>> Hello Everyone,
>> As per some previous posts,  this "performance.strict-o-direct=on"
>> setting caused trouble or poor vm iops.  I've noticed that this option is
>> still part of default setup or automatically configured with
>> "Optimize for virt. store" button.
>> In the end... is this setting a good or a bad practice for setting the vm
>> storage volume ?
>> Does it depend (like maybe other gluster performance options) on the
>> storage backend:
>> - raid type /  jbod
>> - raid controller cache size
>> I am usually using jbod disks attached to an lsi hba card (no cache). Any
>> gluster recommendations regarding this setup ?
>> Is there any documentation for best practices on configuring ovirt's
>> gluster for different types of storage backends ?
>> Thank you very much !
>>
>> Have a great week,
>>
>> Leo
>>
>> --
>> Best regards, Leo David
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/7FKL42JSHIKPMKLLMDPKYM4XT4V5GT4W/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VSZPLEKPHVMJHVDKLU4FPJR4TPVWJYIN/


[ovirt-users] Re: [oVirt 4.3.1 Test Day] cmdline HE Deployment

2019-02-25 Thread Simone Tiraboschi
On Mon, Feb 25, 2019 at 1:14 PM Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:

> He deployment with "hosted-engine --deploy" fails at TASK
> [ovirt.hosted_engine_setup : Get local VM IP]
>
> See following Error :
>
> 2019-02-25 12:46:50,154+0100 INFO
> otopi.ovirt_hosted_engine_setup.ansible_utils
> ansible_utils._process_output:109 TASK [ovirt.hosted_engine_setup : Get
> local VM IP]
> 2019-02-25 12:55:26,823+0100 DEBUG
> otopi.ovirt_hosted_engine_setup.ansible_utils
> ansible_utils._process_output:103 {u'_ansible_parsed': True,
> u'stderr_lines': [], u'cmd': u"virsh -r net-dhcp-leases default | grep -i 00
> :16:3e:3e:fe:28 | awk '{ print $5 }' | cut -f1 -d'/'", u'end':
> u'2019-02-25 12:55:26.666925', u'_ansible_no_log': False, u'stdout': u'',
> u'changed': True, u'invocation': {u'module_args': {u'warn': True,
> u'executable':
> None, u'_uses_shell': True, u'_raw_params': u"virsh -r net-dhcp-leases
> default | grep -i 00:16:3e:3e:fe:28 | awk '{ print $5 }' | cut -f1 -d'/'",
> u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'std
> in': None}}, u'start': u'2019-02-25 12:55:26.584686', u'attempts': 50,
> u'stderr': u'', u'rc': 0, u'delta': u'0:00:00.082239', u'stdout_lines': []}
> 2019-02-25 12:55:26,924+0100 ERROR
> otopi.ovirt_hosted_engine_setup.ansible_utils
> ansible_utils._process_output:107 fatal: [localhost]: FAILED! =>
> {"attempts": 50, "changed": true, "cmd": "virsh -r net-dhcp-leases default
> | grep -i 00:16:3e:3e:fe:28 | awk '{ print $5 }' | cut -f1 -d'/'", "delta":
> "0:00:00.082239", "end": "2019-02-25 12:55:26.666925", "rc": 0, "start":
> "2019-02-25 12:55:26.584686", "stderr": "", "stderr_lines": [], "stdout":
> "", "stdout_lines": []}
>

Here we are just waiting for the bootstrap engine VM to fetch an IP address
from the default libvirt network over DHCP, but in your case it never happened.
Possible issues: something went wrong in the bootstrap process for the
engine VM or the default libvirt network is not correctly configured.

1. can you try to reach the engine VM via VNC and check what's happening
there? (another question, are you running it nested? AFAIK it will not work
if nested over ESXi)
2. can you please share the output of
cat /etc/libvirt/qemu/networks/default.xml


>
> Guillaume Pavese
> Ingénieur Système et Réseau
> Interactiv-Group
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VXRMU3SQWTMB2YYNMOMD7I5NX7RZQ2IW/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/J7YKJ6DLDTUBLWZVC5IT3CXHKXTJZ3BE/


[ovirt-users] Re: Problem deploying self-hosted engine on ovirt 4.3.0

2019-02-25 Thread matteo fedeli
How does the ovirt-engine-appliance differ?

But if I do "ls -l" it shows me that the total size of the files in /var/tmp is only some MBs.
Is that normal? Meanwhile "df -ah" says the /var folder is 96% occupied (15GB
total)... Very strange... The other nodes have 1% of occupied space... Is there
any way to clean up? How can I solve this problem?
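When df and ls/du disagree like this, the space is often held either by a subdirectory you are not looking at or by deleted files that a process still keeps open; a diagnostic sketch, run as root on the affected node:

du -xh --max-depth=2 /var 2>/dev/null | sort -h | tail -20    # largest directories under /var
lsof +L1 2>/dev/null | grep /var                              # deleted-but-open files still consuming space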
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RI7LG3JZ7ZD6NZALEVGQVUPPA6PN34WS/


[ovirt-users] Re: Problem deploying self-hosted engine on ovirt 4.3.0

2019-02-25 Thread Sahina Bose
On Mon, Feb 25, 2019 at 2:51 PM matteo fedeli  wrote:
>
> oh, ovirt-engine-appliance where I can found?

ovirt-engine-appliance rpm is present in the oVirt repo
(https://resources.ovirt.org/pub/ovirt-4.3/rpm/el7/x86_64/ for 4.3)
>
> In the end I waited 3 hours in total (isn't that too much?) and the deploy got
> almost to the end, but in the last step I got this error:
> http://oi64.tinypic.com/312yfev.jpg. To retry I ran
> /usr/sbin/ovirt-hosted-engine-cleanup and cleaned the engine folder... I should add that
> I saw various steps being skipped...
>
> But now I get this: http://oi64.tinypic.com/kbpxzl.jpg. Before the second
> deploy the node status and gluster were all ok...

The error seems to indicate that you do not have enough space in /var/tmp

> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/67G4NISLNTKKW6QIJADAZ5GXH2ZHZXG2/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JQCDSOVJLWQUPSSD5WYKKHQTLDOYSMAP/


[ovirt-users] Re: HC : JBOD or RAID5/6 for NVME SSD drives?

2019-02-25 Thread Guillaume Pavese
Thanks Jayme,

We currently use H730 PERC cards on our test cluster but we are not set on
anything yet for the production cluster.
We are indeed worried about losing a drive in JBOD mode. Would setting up a
RAID1 of NVME drives with mdadm, and then using that as the JBOD drive for
the volume, be a *good* idea? Is that even possible / something that people
do?
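Purely as an illustration of the idea (not a recommendation), a software RAID1 of two NVMe drives would be assembled roughly like this, with /dev/nvme0n1 and /dev/nvme1n1 as placeholder device names; the resulting /dev/md0 would then be handed to the gluster deployment as if it were a single JBOD disk:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
mdadm --detail --scan >> /etc/mdadm.conf   # persist the array definition across reboots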


Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group


On Sat, Feb 23, 2019 at 2:51 AM Jayme  wrote:

> Personally I feel like raid on top of GlusterFS is too wasteful.  It would
> give you a few advantages such as being able to replace a failed drive at
> raid level vs replacing bricks with Gluster.
>
> In my production HCI setup I have three Dell hosts each with two 2Tb SSDs
> in JBOD.  I find this setup works well for me, but I have not yet run in to
> any drive failure scenarios.
>
> What PERC card do you have in the Dell machines? JBOD is tough with most
> PERC cards; in many cases to do JBOD you have to fake it using individual
> RAID 0 for each drive. Only some PERC controllers allow true JBOD
> passthrough.
>
> On Fri, Feb 22, 2019 at 12:30 PM Guillaume Pavese <
> guillaume.pav...@interactiv-group.com> wrote:
>
>> Hi,
>>
>> We have been evaluating oVirt HyperConverged for 9 months now with a test
>> cluster of 3 DELL hosts with hardware RAID5 on a PERC card.
>> We were not impressed with the performance...
>> No SSD for LV Cache on these hosts, but I tried anyway with LV Cache on a
>> ram device. Performance was almost unchanged.
>>
>> It seems that LV Cache is its own source of bugs and problems anyway, so
>> we are thinking of going for full NVME drives when buying the production
>> cluster.
>>
>> What would the recommendation be in that case, JBOD or RAID?
>>
>> Thanks
>>
>> Guillaume Pavese
>> Ingénieur Système et Réseau
>> Interactiv-Group
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/IODRDUEIZBPT2RMEPWCXBTJUU3LV3JUD/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KEVWLTZTSKX3AVVUXO46DD3U7DEUNUXE/


[ovirt-users] [oVirt 4.3.1 Test Day] cmdline HE Deployment

2019-02-25 Thread Guillaume Pavese
He deployment with "hosted-engine --deploy" fails at TASK
[ovirt.hosted_engine_setup : Get local VM IP]

See following Error :

2019-02-25 12:46:50,154+0100 INFO
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:109 TASK [ovirt.hosted_engine_setup : Get
local VM IP]
2019-02-25 12:55:26,823+0100 DEBUG
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:103 {u'_ansible_parsed': True,
u'stderr_lines': [], u'cmd': u"virsh -r net-dhcp-leases default | grep -i 00
:16:3e:3e:fe:28 | awk '{ print $5 }' | cut -f1 -d'/'", u'end': u'2019-02-25
12:55:26.666925', u'_ansible_no_log': False, u'stdout': u'', u'changed':
True, u'invocation': {u'module_args': {u'warn': True, u'executable':
None, u'_uses_shell': True, u'_raw_params': u"virsh -r net-dhcp-leases
default | grep -i 00:16:3e:3e:fe:28 | awk '{ print $5 }' | cut -f1 -d'/'",
u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'std
in': None}}, u'start': u'2019-02-25 12:55:26.584686', u'attempts': 50,
u'stderr': u'', u'rc': 0, u'delta': u'0:00:00.082239', u'stdout_lines': []}
2019-02-25 12:55:26,924+0100 ERROR
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:107 fatal: [localhost]: FAILED! =>
{"attempts": 50, "changed": true, "cmd": "virsh -r net-dhcp-leases default
| grep -i 00:16:3e:3e:fe:28 | awk '{ print $5 }' | cut -f1 -d'/'", "delta":
"0:00:00.082239", "end": "2019-02-25 12:55:26.666925", "rc": 0, "start":
"2019-02-25 12:55:26.584686", "stderr": "", "stderr_lines": [], "stdout":
"", "stdout_lines": []}

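In case it helps, the failing check can be reproduced by hand on the host
while the local bootstrap VM is running; the MAC address is the one from the
log above, the rest is simply the task's own command:

  # list the DHCP leases handed out on the libvirt 'default' network
  virsh -r net-dhcp-leases default
  # what the deploy task runs: filter for the engine VM's MAC and extract the IP
  virsh -r net-dhcp-leases default | grep -i 00:16:3e:3e:fe:28 | awk '{ print $5 }' | cut -f1 -d'/'

An empty result after 50 attempts, as in the log, means the local VM never got
a lease on the 'default' network.
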
Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VXRMU3SQWTMB2YYNMOMD7I5NX7RZQ2IW/


[ovirt-users] Re: Unable to change cluster and data center compatibility version

2019-02-25 Thread Staniforth, Paul
Hi Jonathan,

It's difficult to say as it's been so long since I ran that version.


I notice on 4.2 nodes that they have

Cluster Compatibility Version:  3.6,4.0,4.1,4.2


Would it be better to update the hosts first?



Regards,
Paul S.








From: Jonathan Mathews 
Sent: 25 February 2019 11:11
To: Staniforth, Paul
Cc: Shani Leviim; Ovirt Users
Subject: Re: [ovirt-users] Re: Unable to change cluster and data center 
compatibility version

Hi Paul

Thank you for bringing this to my attention.

It does look like I will need to upgrade all my hosts to CentOS 7.

But would that prevent me from changing my cluster compatibility version to 3.6?

Thanks

On Wed, Feb 20, 2019 at 11:59 AM Staniforth, Paul 
mailto:p.stanifo...@leedsbeckett.ac.uk>> wrote:

Does oVirt 4.0 support CentOS 6 hosts?


Regards,

 Paul S.


From: Jonathan Mathews mailto:jm3185...@gmail.com>>
Sent: 20 February 2019 06:25
To: Shani Leviim
Cc: Ovirt Users
Subject: [ovirt-users] Re: Unable to change cluster and data center 
compatibility version

Hi Shani

Yes, I did try and change the cluster compatibility first, and that's when I 
got the error: Ovirt: Some of the hosts still use legacy protocol which is not 
supported by cluster 3.6 or higher. In order to change it a host needs to be 
put to maintenance and edited in advanced options section.

I did go through the article you suggested, however, I did not see anything 
that would help.

Thanks


On Tue, Feb 19, 2019 at 3:46 PM Shani Leviim 
mailto:slev...@redhat.com>> wrote:
Hi Jonathan,

Did you try first to change the compatibility of all clusters and then change 
the data center's compatibility?
This one seems related: https://bugzilla.redhat.com/show_bug.cgi?id=1375567

Regards,
Shani Leviim


On Tue, Feb 19, 2019 at 11:01 AM Jonathan Mathews 
mailto:jm3185...@gmail.com>> wrote:
Good Day

I have been trying to upgrade a client's oVirt from 3.6 to 4.0 but have run into 
an issue where I am unable to change the cluster and data center compatibility 
version.

I get the following error in the GUI:

Ovirt: Some of the hosts still use legacy protocol which is not supported by 
cluster 3.6 or higher. In order to change it a host needs to be put to 
maintenance and edited in advanced options section.

This error was received with all VM's off and all hosts in maintenance.

The environment has the following currently installed:

Engine - CentOS 7.4 - Ovirt Engine 3.6.7.5
Host1 - CentOS 6.9 - VDSM 4.16.30
Host2 - CentOS 6.9 - VDSM 4.16.30
Host3 - CentOS 6.9 - VDSM 4.16.30

I also have the following from engine.log

[root@ovengine ~]# tail -f /var/log/ovirt-engine/engine.log
2018-09-22 07:11:33,920 INFO  
[org.ovirt.engine.core.vdsbroker.VmsStatisticsFetcher] 
(DefaultQuartzScheduler_Worker-93) [7533985f] Fetched 0 VMs from VDS 
'd82a026c-31b4-4efc-8567-c4a6bdcaa826'
2018-09-22 07:11:34,685 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStoragePoolVDSCommand] 
(DefaultQuartzScheduler_Worker-99) [4b7e3710] FINISH, 
DisconnectStoragePoolVDSCommand, log id: 1ae6f0a9
2018-09-22 07:11:34,687 INFO  
[org.ovirt.engine.core.bll.storage.DisconnectHostFromStoragePoolServersCommand] 
(DefaultQuartzScheduler_Worker-99) [2a6aa6f6] Running command: 
DisconnectHostFromStoragePoolServersCommand internal: true. Entities affected : 
 ID: 5849b030-626e-47cb-ad90-3ce782d831b3 Type: StoragePool
2018-09-22 07:11:34,706 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand] 
(DefaultQuartzScheduler_Worker-99) [2a6aa6f6] START, 
DisconnectStorageServerVDSCommand(HostName = ovhost3, 
StorageServerConnectionManagementVDSParameters:{runAsync='true', 
hostId='d82a026c-31b4-4efc-8567-c4a6bdcaa826', 
storagePoolId='5849b030-626e-47cb-ad90-3ce782d831b3', storageType='NFS', 
connectionList='[StorageServerConnections:{id='3fdffb4c-250b-4a4e-b914-e0da1243550e',
 connection='172.16.0.10:/raid0/data/_NAS_NFS_Exports_/STORAGE1', iqn='null', 
vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', 
nfsTimeo='null', iface='null', netIfaceName='null'}, 
StorageServerConnections:{id='4d95c8ca-435a-4e44-86a5-bc7f3a0cd606', 
connection='172.16.0.20:/data/ov-export', iqn='null', vfsType='null', 
mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', 
iface='null', netIfaceName='null'}, 
StorageServerConnections:{id='82ecbc89-bdf3-4597-9a93-b16f3a6ac117', 
connection='172.16.0.11:/raid1/data/_NAS_NFS_Exports_/4TB', iqn='null', 
vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', 
nfsTimeo='null', iface='null', netIfaceName='null'}, 
StorageServerConnections:{id='29bb3394-fb61-41c0-bb5a-1fa693ec2fe2', 
connection='172.16.0.11:/raid1/data/_NAS_NFS_Exports_/iso', iqn='null', 
vfsType='null', mountOptions='null', nfsVersion='V3', nfsRetrans='null', 
nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 48c5ffd6

[ovirt-users] [oVirt 4.3.1 Test Day] Cockpit HE Deployment

2019-02-25 Thread Guillaume Pavese
As indicated on Trello,

HE deployment through Cockpit is stuck at the beginning with "Please correct
errors before moving to next step", but no error is explicitly shown or
highlighted.




Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AVKFP664XYR6QT7OZ5NHUJ2LLJP7N4WC/


[ovirt-users] Re: Problem with snapshots in illegal status

2019-02-25 Thread Bruno Rodriguez
Thank you very much, Shani

I've got what you asked for, and some good news as well. I'll start with the
good news: the snapshot status changes to OK if I live-migrate the
disk to another storage domain in the same storage cabinet. Then I can delete
them. I just tried with two illegal snapshots and it worked; I'll send you the
engine logs as soon as I can check them.

If I run the following query against the database, I get 0 rows

# select * from images where vm_snapshot_id IN
('f649d9c1-563e-49d4-9fad-6bc94abc279b',
'5734df23-de67-41a8-88a1-423cecfe7260',
'f649d9c1-563e-49d4-9fad-6bc94abc279b',
'2929df28-eae8-4f27-afee-a984fe0b07e7',
'4bd4360e-e0f4-4629-ab38-2f0d80d3ae0f',
'fbaff53b-30ce-4b20-8f10-80e70becb48c',
'c628386a-da6c-4a0d-ae7d-3e6ecda27d6d',
'e9ddaa5c-007d-49e6-8384-efefebb00aa6',
'5b6db52a-bfe3-45f8-b7bb-d878c4e63cb4',
'7efe2e7e-ca24-4b27-b512-b42795c79ea4');

BTW, the dump-volume-chains output is here. I left out the disks with no
snapshots in illegal status

Images volume chains (base volume first)

   image:4dab82be-7068-4234-8478-45ec51a29bc0

 - 04f7bafd-aa56-4940-9c4e-e4055972284e
   status: OK, voltype: INTERNAL, format: COW, legality: LEGAL,
type: SPARSE

 - 0fe7e3bf-24a9-45b3-9f33-9d93349b2ac1
   status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE


   image:7428f6fd-ae2e-4691-b718-1422b1150798

 - a81b454e-17b1-4c6a-a7bc-803a0e3b2b03
   status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE


   image:c111cb15-5465-4056-b0bb-bd94565d6060

 - 2443c7f2-9ee3-4a92-a80e-c3ee1a481311
   status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE


   image:2900bc6b-3345-42a4-9533-667fb4009deb

 - e9ddaa5c-007d-49e6-8384-efefebb00aa6
   status: OK, voltype: INTERNAL, format: COW, legality: LEGAL,
type: SPARSE

 - 8be46861-3d44-4dd3-9216-a2817f9d0db9
   status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE

   image:79663e1f-c6a7-4193-91fa-d5cbe02f81e3

 - f649d9c1-563e-49d4-9fad-6bc94abc279b
   status: OK, voltype: INTERNAL, format: COW, legality: LEGAL,
type: SPARSE

 - f0a381bb-4110-43de-999c-99ae9cb03a5a
   status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE

   image:e1f3dafb-64be-46a9-a3dc-1f3f7fcd1960



 - 4bd4360e-e0f4-4629-ab38-2f0d80d3ae0f
   status: OK, voltype: INTERNAL, format: COW, legality: LEGAL,
type: SPARSE

 - 0fd199eb-ab26-46de-b0b4-b1be58d635c2
   status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE

   image:fcdeeff4-1f1a-4ced-b6e9-0746c54a7fe9

 - 5b6db52a-bfe3-45f8-b7bb-d878c4e63cb4
   status: OK, voltype: INTERNAL, format: COW, legality: LEGAL,
type: SPARSE

 - c133af2a-6642-4809-b2cc-267d4cbadec9
   status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE

   image:06216874-7c66-4e8c-ab3b-4dd4ae8281c1

 - 32459132-88f9-460d-8fb5-17ac41413cff
   status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE


   image:4b87c897-062c-4e71-874b-ce48376cc463

 - 16ac155d-cf61-4eb2-8227-6503ee6d3414
   status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE


   image:24acdba6-f600-464d-a662-22e42bbcdf1c

 - b04cc993-164e-49f1-9d46-e013bf3e66f3
   status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE


   image:562bf24c-d69c-4ec5-a670-e5eba5064fab

 - c628386a-da6c-4a0d-ae7d-3e6ecda27d6d
   status: OK, voltype: INTERNAL, format: COW, legality: LEGAL,
type: SPARSE

 - ecab3f58-0ae3-4d2a-b0df-dd712b3d5a70
   status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE


   image:0cc6d13d-5878-4fd2-bf54-5aef95017bc5

 - 05aa8ec3-674b-4a8e-be39-2bddc18369e4
   status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE

   image:30a88bec-4262-47ce-a329-59ed5350e10a

 - a96de756-275e-4e64-a9c9-c8cc221f2bac
   status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE

   image:818a3dbe-9cfa-4355-b4b6-c2dda51beef5

 - ba422faf-25df-45d0-80aa-c4cad166fcbd
   status: OK, voltype: INTERNAL, format: COW, legality: LEGAL,
type: SPARSE

 - df419136-ebcc-4087-8b5c-2c5a29f0dcfd
   status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE

   image:c5cc464e-eb71-4edf-a780-60180c592a6f

 - 5734df23-de67-41a8-88a1-423cecfe7260
   status: OK, voltype: INTERNAL, format: COW, legality: LEGAL,
type: SPARSE

 - 

[ovirt-users] Re: Gluster - performance.strict-o-direct and other performance tuning in different storage backends

2019-02-25 Thread Krutika Dhananjay
Gluster's write-behind translator by default buffers writes for flushing to
disk later, *even* when the file is opened with the O_DIRECT flag. Not honoring
O_DIRECT could mean a reader from another client could be READing stale
data from bricks because some WRITEs may not yet be flushed to disk.
performance.strict-o-direct=on is one of the options needed to truly honor
O_DIRECT behavior, which is what qemu uses by virtue of the cache=none option
being set on the VM(s) (the other option being network.remote-dio=off).
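
For reference, both options can be checked and changed per volume from any of
the gluster nodes; the volume name "data" below is only an example:

  # current values
  gluster volume get data performance.strict-o-direct
  gluster volume get data network.remote-dio
  # the combination described above for truly honoring O_DIRECT
  gluster volume set data performance.strict-o-direct on
  gluster volume set data network.remote-dio off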

-Krutika


On Mon, Feb 25, 2019 at 2:50 PM Leo David  wrote:

> Hello Everyone,
> As per some previous posts,  this "performance.strict-o-direct=on"
> setting caused trouble or poor vm iops.  I've noticed that this option is
> still part of default setup or automatically configured with
> "Optimize for virt. store" button.
> In the end... is this setting a good or a bad practice for setting the vm
> storage volume ?
> Does it depends ( like maybe other gluster performance options ) on the
> storage backend:
> - raid type /  jbod
> - raid controller cache size
> I am usually using jbod disks attached to lsi hba card ( no cache ). Any
> gluster recommendations regarding this setup ?
> Is there any documentation for best practices on configurating ovirt's
> gluster for different types of storage backends ?
> Thank you very much !
>
> Have a great week,
>
> Leo
>
> --
> Best regards, Leo David
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/7FKL42JSHIKPMKLLMDPKYM4XT4V5GT4W/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VS764WDBR2PLGGDZVRGBEM6OJCAFEM3R/


[ovirt-users] VM creation using python api specifying an appropriate timezone

2019-02-25 Thread ckennedy288
Hi, I'm looking for a way to accurately specify the timezone of a VM at VM 
creation time.
At the moment all VMs that get created using a simple in-house Python script 
are set with a GMT timezone, e.g.:

if "windows" in OS:
    vm_tz = "GMT Standard Time"
else:
    vm_tz = "Etc/GMT"


Is there a better way to choose a more appropriate timezone based on the VM's 
global location?
Is there a way to get a list of oVirt-supported time zones for Windows and 
Linux, compare that with the VM's location and OS type, and then choose an 
appropriate timezone?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OUFR773BF4SSBMXBZMYQIUDCUZS6Y2HU/


[ovirt-users] Re: Problem with snapshots in illegal status

2019-02-25 Thread Shani Leviim
Hi Bruno,
Can you please share the output of:
vdsm-tool dump-volume-chains 

Also, can you see those images in the 'images' table?
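
For example, something along these lines on the engine machine; the column
names and the imagestatus codes (1=OK, 2=LOCKED, 4=ILLEGAL) are from my
understanding of the engine schema, so please double-check on your version:

  # list images the engine currently marks as ILLEGAL
  sudo -u postgres psql engine -c \
    "select image_guid, image_group_id, vm_snapshot_id, imagestatus
     from images where imagestatus = 4;"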


*Regards,*

*Shani Leviim*


On Mon, Feb 25, 2019 at 9:32 AM Bruno Rodriguez  wrote:

> Good morning, Shani
>
> I'm not trying to deactivate any disk because the VM using it is working.
> I can't turn it off because I'm pretty sure that if I do I won't be able to
> turn it on again; in fact the web interface is telling me that if I turn it
> off it possibly won't restart :(
>
> From what I can check, I have no information in the database about any of
> the snapshots I provided
>
> engine=# select * from snapshots where snapshot_id IN
> ('f649d9c1-563e-49d4-9fad-6bc94abc279b',
> '5734df23-de67-41a8-88a1-423cecfe7260',
> 'f649d9c1-563e-49d4-9fad-6bc94abc279b',
> '2929df28-eae8-4f27-afee-a984fe0b07e7',
> '4bd4360e-e0f4-4629-ab38-2f0d80d3ae0f',
> 'fbaff53b-30ce-4b20-8f10-80e70becb48c',
> 'c628386a-da6c-4a0d-ae7d-3e6ecda27d6d',
> 'e9ddaa5c-007d-49e6-8384-efefebb00aa6',
> '5b6db52a-bfe3-45f8-b7bb-d878c4e63cb4',
> '7efe2e7e-ca24-4b27-b512-b42795c79ea4');
>  snapshot_id | vm_id | snapshot_type | status | description |
> creation_date | app_list | vm_configuration | _create_date | _update_date |
> memory_volume | memory_metadata_disk_id | memory_dump_disk_id | vm_conf
> iguration_broken
>
> -+---+---++-+---+--+--+--+--+---+-+-+
> -
> (0 rows)
>
>
> Thank you
>
>
> On Sun, Feb 24, 2019 at 12:16 PM Shani Leviim  wrote:
>
>> Hi Bruno,
>>
>> It seems that the disk you're trying to deactivate is in use ( Logical
>> volume
>> e655abce-c5e8-44f3-8d50-9fd76edf05cb/fa154782-0dbb-45b5-ba62-d6937259f097
>> in use).
>> Is there any task that uses that disk?
>>
>> Also, did you try to verify the snapshot's creation date with the DB?
>> ( select * from snapshots; )
>>
>>
>> *Regards*
>>
>> *Shani Leviim*
>>
>>
>> On Fri, Feb 22, 2019 at 6:08 PM Bruno Rodriguez  wrote:
>>
>>> Hello,
>>>
>>> We are experiencing some problems with some snapshots in illegal status
>>> generated with the Python API. I think I'm not the only one, and that is
>>> not a relief, but I hope someone can help with it.
>>>
>>> I'm a bit scared because, from what I see, the creation date in the
>>> engine for every snapshot is way different from the date when it was really
>>> created. The name of the snapshot is in the format
>>> backup_snapshot_YYYYMMDD-HHMMSS, but as you can see in the following
>>> examples, the stored date is totally random...
>>>
>>> Size
>>> Creation Date
>>> Snapshot Description
>>> Status
>>> Disk Snapshot ID
>>>
>>> 33 GiB
>>> Mar 2, 2018, 5:03:57 PM
>>> backup_snapshot_20190217-011645
>>> Illegal
>>> 5734df23-de67-41a8-88a1-423cecfe7260
>>>
>>> 33 GiB
>>> May 8, 2018, 10:02:56 AM
>>> backup_snapshot_20190216-013047
>>> Illegal
>>> f649d9c1-563e-49d4-9fad-6bc94abc279b
>>>
>>> 10 GiB
>>> Feb 21, 2018, 11:10:17 AM
>>> backup_snapshot_20190217-010004
>>> Illegal
>>> 2929df28-eae8-4f27-afee-a984fe0b07e7
>>>
>>> 43 GiB
>>> Feb 2, 2018, 12:55:51 PM
>>> backup_snapshot_20190216-015544
>>> Illegal
>>> 4bd4360e-e0f4-4629-ab38-2f0d80d3ae0f
>>>
>>> 11 GiB
>>> Feb 13, 2018, 12:51:08 PM
>>> backup_snapshot_20190217-010541
>>> Illegal
>>> fbaff53b-30ce-4b20-8f10-80e70becb48c
>>>
>>> 11 GiB
>>> Feb 13, 2018, 4:05:39 PM
>>> backup_snapshot_20190217-011207
>>> Illegal
>>> c628386a-da6c-4a0d-ae7d-3e6ecda27d6d
>>>
>>> 11 GiB
>>> Feb 13, 2018, 4:38:25 PM
>>> backup_snapshot_20190216-012058
>>> Illegal
>>> e9ddaa5c-007d-49e6-8384-efefebb00aa6
>>>
>>> 11 GiB
>>> Feb 13, 2018, 10:52:09 AM
>>> backup_snapshot_20190216-012550
>>> Illegal
>>> 5b6db52a-bfe3-45f8-b7bb-d878c4e63cb4
>>>
>>> 55 GiB
>>> Jan 22, 2018, 5:02:29 PM
>>> backup_snapshot_20190217-012659
>>> Illegal
>>> 7efe2e7e-ca24-4b27-b512-b42795c79ea4
>>>
>>>
>>> When I'm getting the logs for the first one, to check what happened to
>>> it, I get the following
>>>
>>> 2019-02-17 01:16:45,839+01 INFO
>>> [org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand] (default
>>> task-100) [96944daa-c90a-4ad7-a556-c98e66550f87] START,
>>> CreateVolumeVDSCommand(
>>> CreateVolumeVDSCommandParameters:{storagePoolId='fa64792e-73b3-4da2-9d0b-f334422aaccf',
>>> ignoreFailoverLimit='false',
>>> storageDomainId='e655abce-c5e8-44f3-8d50-9fd76edf05cb',
>>> imageGroupId='c5cc464e-eb71-4edf-a780-60180c592a6f',
>>> imageSizeInBytes='32212254720', volumeFormat='COW',
>>> newImageId='fa154782-0dbb-45b5-ba62-d6937259f097', imageType='Sparse',
>>> newImageDescription='', imageInitialSizeInBytes='0',
>>> imageId='5734df23-de67-41a8-88a1-423cecfe7260',
>>> sourceImageGroupId='c5cc464e-eb71-4edf-a780-60180c592a6f'}), log id:
>>> 497c168a
>>> 2019-02-17 01:18:26,506+01 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand]
>>> (default task-212) [19f00d3e-5159-48aa-b3a0-615a085b62d9] 

[ovirt-users] Re: Ovirt 4.3 RC missing glusterfs-gnfs

2019-02-25 Thread Niels de Vos
On Mon, Feb 25, 2019 at 10:05:52AM +0100, Sandro Bonazzola wrote:
> Il giorno gio 21 feb 2019 alle ore 08:48 Sandro Bonazzola <
> sbona...@redhat.com> ha scritto:
> 
> >
> >
> > Il giorno mer 20 feb 2019 alle ore 21:02 Strahil Nikolov <
> > hunter86...@yahoo.com> ha scritto:
> >
> >> Hi Sahina, Sandro,
> >>
> >> can you guide me through the bugzilla.redhat.com in order to open a bug
> >> for the missing package. ovirt-4.3-centos-gluster5 still lacks package
> >> for the 'glusterfs-gnfs' (which is a dependency for vdsm-gluster):
> >>
> >
> >
> > The issue is being tracked in
> > https://bugzilla.redhat.com/show_bug.cgi?id=1672711
> > it's still missing a fix on gluster side for CentOS Storage SIG.
> > Niels, Yaniv, we are escalating this, can you please help getting this
> > fixed?
> >
> 
> Update for users: a fix has been merged for Gluster 6 and the backport to
> Gluster 5 has been verified, reviewed and pending merge (
> https://review.gluster.org/#/c/glusterfs/+/22258/).

I plan to push an update with only this change later today. That means
the updated glusterfs-5 version is expected to hit the CentOS mirrors
tomorrow.

Niels


> 
> 
> 
> >
> >
> >
> >
> >>
> >> [root@ovirt2 ~]# yum --disablerepo=*
> >> --enablerepo=ovirt-4.3-centos-gluster5 list available --show-duplicates |
> >> grep gluster-gnfs
> >> Repository centos-sclo-rh-release is listed more than once in the
> >> configuration
> >> Repository centos-sclo-rh-release is listed more than once in the
> >> configuration
> >> Cannot upload enabled repos report, is this client registered?
> >>
> >>
> >> This leads to the following (output truncated):
> >> --> Finished Dependency Resolution
> >> Error: Package: glusterfs-gnfs-3.12.15-1.el7.x86_64
> >> (@ovirt-4.2-centos-gluster312)
> >>Requires: glusterfs(x86-64) = 3.12.15-1.el7
> >>Removing: glusterfs-3.12.15-1.el7.x86_64
> >> (@ovirt-4.2-centos-gluster312)
> >>glusterfs(x86-64) = 3.12.15-1.el7
> >>Updated By: glusterfs-5.3-1.el7.x86_64
> >> (ovirt-4.3-centos-gluster5)
> >>glusterfs(x86-64) = 5.3-1.el7
> >>Available: glusterfs-3.12.0-1.el7.x86_64
> >> (ovirt-4.2-centos-gluster312)
> >>glusterfs(x86-64) = 3.12.0-1.el7
> >>
> >>Available: glusterfs-3.12.1-1.el7.x86_64
> >> (ovirt-4.2-centos-gluster312)
> >>glusterfs(x86-64) = 3.12.1-1.el7
> >>Available: glusterfs-3.12.1-2.el7.x86_64
> >> (ovirt-4.2-centos-gluster312)
> >>glusterfs(x86-64) = 3.12.1-2.el7
> >>Available: glusterfs-3.12.2-18.el7.x86_64 (base)
> >>glusterfs(x86-64) = 3.12.2-18.el7
> >>Available: glusterfs-3.12.3-1.el7.x86_64
> >> (ovirt-4.2-centos-gluster312)
> >>glusterfs(x86-64) = 3.12.3-1.el7
> >>Available: glusterfs-3.12.4-1.el7.x86_64
> >> (ovirt-4.2-centos-gluster312)
> >>glusterfs(x86-64) = 3.12.4-1.el7
> >>Available: glusterfs-3.12.5-2.el7.x86_64
> >> (ovirt-4.2-centos-gluster312)
> >>glusterfs(x86-64) = 3.12.5-2.el7
> >>Available: glusterfs-3.12.6-1.el7.x86_64
> >> (ovirt-4.2-centos-gluster312)
> >>glusterfs(x86-64) = 3.12.6-1.el7
> >>Available: glusterfs-3.12.8-1.el7.x86_64
> >> (ovirt-4.2-centos-gluster312)
> >>glusterfs(x86-64) = 3.12.8-1.el7
> >>Available: glusterfs-3.12.9-1.el7.x86_64
> >> (ovirt-4.2-centos-gluster312)
> >>glusterfs(x86-64) = 3.12.9-1.el7
> >>Available: glusterfs-3.12.11-1.el7.x86_64
> >> (ovirt-4.2-centos-gluster312)
> >>glusterfs(x86-64) = 3.12.11-1.el7
> >>Available: glusterfs-3.12.13-1.el7.x86_64
> >> (ovirt-4.2-centos-gluster312)
> >>glusterfs(x86-64) = 3.12.13-1.el7
> >>Available: glusterfs-3.12.14-1.el7.x86_64
> >> (ovirt-4.2-centos-gluster312)
> >>glusterfs(x86-64) = 3.12.14-1.el7
> >>Available: glusterfs-5.0-1.el7.x86_64
> >> (ovirt-4.3-centos-gluster5)
> >>glusterfs(x86-64) = 5.0-1.el7
> >>Available: glusterfs-5.1-1.el7.x86_64
> >> (ovirt-4.3-centos-gluster5)
> >>glusterfs(x86-64) = 5.1-1.el7
> >>Available: glusterfs-5.2-1.el7.x86_64
> >> (ovirt-4.3-centos-gluster5)
> >>glusterfs(x86-64) = 5.2-1.el7
> >> Error: Package: glusterfs-gnfs-3.12.15-1.el7.x86_64
> >> (@ovirt-4.2-centos-gluster312)
> >>Requires: glusterfs-client-xlators(x86-64) = 3.12.15-1.el7
> >>Removing: glusterfs-client-xlators-3.12.15-1.el7.x86_64
> >> (@ovirt-4.2-centos-gluster312)
> >>glusterfs-client-xlators(x86-64) = 3.12.15-1.el7
> >>
> >>Updated By: glusterfs-client-xlators-5.3-1.el7.x86_64
> >> (ovirt-4.3-centos-gluster5)
> >>glusterfs-client-xlators(x86-64) = 5.3-1.el7
> >>Available: glusterfs-client-xlators-3.12.0-1.el7.x86_64
> >> 

[ovirt-users] oVirt-4.2: HE rebooting continuously and 2 hosts non-operatinal

2019-02-25 Thread Parth Dhanjal
Hey!

I'm facing a problem with oVirt-4.2 HE. The HE is rebooting continuously. I
tried accessing the VM through the VNC console, but it is not responding. Two
hosts are also Non-Operational. I tried debugging but couldn't resolve it,
though I found a few errors. Error in the agent log for one of the
non-operational hosts: http://pastebin.test.redhat.com/721991 Error in the
broker log: http://pastebin.test.redhat.com/721990 Also, I couldn't see
the brick mounted on the non-op hosts, even though the volume status seems
fine.
Does anyone know why this issue is occurring?

Regards
Parth Dhanjal
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/S6ND46ZYHK4HJWFIV4FFHRCKDVCKJLRO/


[ovirt-users] Re: Info about AD integration and Root CA

2019-02-25 Thread Gianluca Cecchi
On Sat, Feb 23, 2019 at 5:33 PM Ondra Machacek  wrote:

> Hi,
>
> Sorry, but this seems to be Active directory specific issue. I would
> suggest to ask on some Microsoft AD specific forum for such issue.
>
>
I'm far from being an AD expert, but digging a bit it seems that the
question is actually wider.
In the sense that deploying certificates and opening LDAP services for bind
and authentication is an optional thing on a Windows domain.
And in my case the domain I have to join doesn't have them deployed.
I found an interesting blog here:
https://social.technet.microsoft.com/wiki/contents/articles/2980.ldap-over-ssl-ldaps-certificate.aspx

Some extract about LDAP activation notes:
"

By default, LDAP communications between client and server applications are
not encrypted. This means that it would be possible to use a network
monitoring  device or software and view the communications traveling
between LDAP client and server computers. This is especially problematic
when an LDAP simple bind is used because credentials (username and
password) is passed over the network unencrypted. This could quickly lead
to the compromise of credentials.
. . .
Note:
Only LDAP data transfers are exposed. Other authentication or authorization
data using Kerberos, SASL, and even NTLM have their own encryption systems.
The Microsoft Management Console (mmc) snap-ins, since Windows 2000 SP4
have used LDAP sign and seal  or Simple Authentication and Security Layer
(SASL)  and replication between domain controllers is encrypted using
Kerberos  .
"

So the situation is that oVirt/RHV can currently interact with AD only
through an LDAP bind, which travels in the clear by default on AD, hence the
need to enroll a certificate on AD and enable LDAPS or StartTLS.
It could be interesting to enable other means of AD integration, like
vSphere already does: joining the AD domain and so using native encrypted
SSO communications.
An interesting article here from Nakivo:
https://www.nakivo.com/blog/vmware-vsphere-active-directory-integration/

Is there any ongoing effort to go in this direction? Samba could join a
Windows domain with minimal effort, I think...
Thanks,
Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AZNZQ6LCSSCBTMRYCRHWD3EUDK7GFZO4/


[ovirt-users] Re: HostedEngine Unreachable

2019-02-25 Thread Simone Tiraboschi
On Mon, Feb 25, 2019 at 10:27 AM Sakhi Hadebe  wrote:

> Hi Simone,
>
> Thank you for your response. Executing the command below, gives this:
>
> [root@ovirt-host]# curl http://$(grep fqdn 
> /etc/ovirt-hosted-engine/hosted-engine.conf
> | cut -d= -f2)/ovirt-engine/services/health
> curl: (7) Failed to connect to engine.sanren.ac.za:80; No route to host
>
> I tried to enable http traffic on the ovirt-host, but the error persists
>

Run, on your hosts,
getent ahosts engine.sanren.ac.za

and ensure that it gets resolved as you expect.
Fix name resolution and routing on your hosts in a coherent manner.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/K5FK56WDIDXFRPEIZTELV4BSFL7D4L6R/


[ovirt-users] Re: HostedEngine Unreachable

2019-02-25 Thread Sakhi Hadebe
Hi Simone,

Thank you for your response. Executing the command below, gives this:

[root@ovirt-host]# curl http://$(grep fqdn
/etc/ovirt-hosted-engine/hosted-engine.conf
| cut -d= -f2)/ovirt-engine/services/health
curl: (7) Failed to connect to engine.sanren.ac.za:80; No route to host

I tried to enable http traffic on the ovirt-host, but the error persists
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LK2FBU7PODOKTQONBUAX3UGQ5ZDRI5NA/


[ovirt-users] Infiniband Usage / Support

2019-02-25 Thread Andrey Rusakov
Hi,

I have a config with IB storage (IPoIB).
I would like to extend IB usage to the OVN network, but trying to do this I run 
into some problems ...

Problem #1 (IPoIB storage network)
I configured IPoIB before adding the host to the oVirt cluster (I did it using 
Cockpit): created the bond, assigned the IP, changed the MTU.
At that point everything was fine...
But I am not able to assign the IP/MTU using the oVirt cluster,
and I can't perform any changes using Cockpit, as the IB interfaces are 
"Unmanaged Interfaces".

Problem #1.1
It would be great to prevent users from adding IPoIB networks using brctl, as 
it will not work...

Problem #2
What is the correct way to move OVN to the IPoIB network?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/354DAJTA4ZJXM2WKIKWBQ4JAJVBKCVHT/


[ovirt-users] Re: Problem deploying self-hosted engine on ovirt 4.3.0

2019-02-25 Thread matteo fedeli
Oh, where can I find ovirt-engine-appliance?

In the end I waited 3 hours in total (isn't that too much?) and the deploy got 
almost to the end, but in the last step I got this error: 
http://oi64.tinypic.com/312yfev.jpg. To retry I ran 
/usr/sbin/ovirt-hosted-engine-cleanup and cleaned the engine folder... I should 
add that I saw various tasks being skipped...

But now I get this: http://oi64.tinypic.com/kbpxzl.jpg. Before the second 
deploy the node status and gluster were all OK...
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/67G4NISLNTKKW6QIJADAZ5GXH2ZHZXG2/


[ovirt-users] Gluster - performance.strict-o-direct and other performance tuning in different storage backends

2019-02-25 Thread Leo David
Hello Everyone,
As per some previous posts,  this "performance.strict-o-direct=on" setting
caused trouble or poor vm iops.  I've noticed that this option is still
part of default setup or automatically configured with
"Optimize for virt. store" button.
In the end... is this setting a good or a bad practice for setting the vm
storage volume ?
Does it depend ( like maybe other gluster performance options ) on the
storage backend:
- raid type /  jbod
- raid controller cache size
I am usually using jbod disks attached to lsi hba card ( no cache ). Any
gluster recommendations regarding this setup ?
Is there any documentation for best practices on configuring ovirt's
gluster for different types of storage backends ?
Thank you very much !

Have a great week,

Leo

-- 
Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7FKL42JSHIKPMKLLMDPKYM4XT4V5GT4W/


[ovirt-users] Re: Ovirt 4.3 RC missing glusterfs-gnfs

2019-02-25 Thread Sandro Bonazzola
Il giorno gio 21 feb 2019 alle ore 08:48 Sandro Bonazzola <
sbona...@redhat.com> ha scritto:

>
>
> Il giorno mer 20 feb 2019 alle ore 21:02 Strahil Nikolov <
> hunter86...@yahoo.com> ha scritto:
>
>> Hi Sahina, Sandro,
>>
>> can you guide me through the bugzilla.redhat.com in order to open a bug
>> for the missing package. ovirt-4.3-centos-gluster5 still lacks package
>> for the 'glusterfs-gnfs' (which is a dependency for vdsm-gluster):
>>
>
>
> The issue is being tracked in
> https://bugzilla.redhat.com/show_bug.cgi?id=1672711
> it's still missing a fix on gluster side for CentOS Storage SIG.
> Niels, Yaniv, we are escalating this, can you please help getting this
> fixed?
>

Update for users: a fix has been merged for Gluster 6 and the backport to
Gluster 5 has been verified, reviewed and pending merge (
https://review.gluster.org/#/c/glusterfs/+/22258/).



>
>
>
>
>>
>> [root@ovirt2 ~]# yum --disablerepo=*
>> --enablerepo=ovirt-4.3-centos-gluster5 list available --show-duplicates |
>> grep gluster-gnfs
>> Repository centos-sclo-rh-release is listed more than once in the
>> configuration
>> Repository centos-sclo-rh-release is listed more than once in the
>> configuration
>> Cannot upload enabled repos report, is this client registered?
>>
>>
>> This leads to the following (output truncated):
>> --> Finished Dependency Resolution
>> Error: Package: glusterfs-gnfs-3.12.15-1.el7.x86_64
>> (@ovirt-4.2-centos-gluster312)
>>Requires: glusterfs(x86-64) = 3.12.15-1.el7
>>Removing: glusterfs-3.12.15-1.el7.x86_64
>> (@ovirt-4.2-centos-gluster312)
>>glusterfs(x86-64) = 3.12.15-1.el7
>>Updated By: glusterfs-5.3-1.el7.x86_64
>> (ovirt-4.3-centos-gluster5)
>>glusterfs(x86-64) = 5.3-1.el7
>>Available: glusterfs-3.12.0-1.el7.x86_64
>> (ovirt-4.2-centos-gluster312)
>>glusterfs(x86-64) = 3.12.0-1.el7
>>
>>Available: glusterfs-3.12.1-1.el7.x86_64
>> (ovirt-4.2-centos-gluster312)
>>glusterfs(x86-64) = 3.12.1-1.el7
>>Available: glusterfs-3.12.1-2.el7.x86_64
>> (ovirt-4.2-centos-gluster312)
>>glusterfs(x86-64) = 3.12.1-2.el7
>>Available: glusterfs-3.12.2-18.el7.x86_64 (base)
>>glusterfs(x86-64) = 3.12.2-18.el7
>>Available: glusterfs-3.12.3-1.el7.x86_64
>> (ovirt-4.2-centos-gluster312)
>>glusterfs(x86-64) = 3.12.3-1.el7
>>Available: glusterfs-3.12.4-1.el7.x86_64
>> (ovirt-4.2-centos-gluster312)
>>glusterfs(x86-64) = 3.12.4-1.el7
>>Available: glusterfs-3.12.5-2.el7.x86_64
>> (ovirt-4.2-centos-gluster312)
>>glusterfs(x86-64) = 3.12.5-2.el7
>>Available: glusterfs-3.12.6-1.el7.x86_64
>> (ovirt-4.2-centos-gluster312)
>>glusterfs(x86-64) = 3.12.6-1.el7
>>Available: glusterfs-3.12.8-1.el7.x86_64
>> (ovirt-4.2-centos-gluster312)
>>glusterfs(x86-64) = 3.12.8-1.el7
>>Available: glusterfs-3.12.9-1.el7.x86_64
>> (ovirt-4.2-centos-gluster312)
>>glusterfs(x86-64) = 3.12.9-1.el7
>>Available: glusterfs-3.12.11-1.el7.x86_64
>> (ovirt-4.2-centos-gluster312)
>>glusterfs(x86-64) = 3.12.11-1.el7
>>Available: glusterfs-3.12.13-1.el7.x86_64
>> (ovirt-4.2-centos-gluster312)
>>glusterfs(x86-64) = 3.12.13-1.el7
>>Available: glusterfs-3.12.14-1.el7.x86_64
>> (ovirt-4.2-centos-gluster312)
>>glusterfs(x86-64) = 3.12.14-1.el7
>>Available: glusterfs-5.0-1.el7.x86_64
>> (ovirt-4.3-centos-gluster5)
>>glusterfs(x86-64) = 5.0-1.el7
>>Available: glusterfs-5.1-1.el7.x86_64
>> (ovirt-4.3-centos-gluster5)
>>glusterfs(x86-64) = 5.1-1.el7
>>Available: glusterfs-5.2-1.el7.x86_64
>> (ovirt-4.3-centos-gluster5)
>>glusterfs(x86-64) = 5.2-1.el7
>> Error: Package: glusterfs-gnfs-3.12.15-1.el7.x86_64
>> (@ovirt-4.2-centos-gluster312)
>>Requires: glusterfs-client-xlators(x86-64) = 3.12.15-1.el7
>>Removing: glusterfs-client-xlators-3.12.15-1.el7.x86_64
>> (@ovirt-4.2-centos-gluster312)
>>glusterfs-client-xlators(x86-64) = 3.12.15-1.el7
>>
>>Updated By: glusterfs-client-xlators-5.3-1.el7.x86_64
>> (ovirt-4.3-centos-gluster5)
>>glusterfs-client-xlators(x86-64) = 5.3-1.el7
>>Available: glusterfs-client-xlators-3.12.0-1.el7.x86_64
>> (ovirt-4.2-centos-gluster312)
>>glusterfs-client-xlators(x86-64) = 3.12.0-1.el7
>>Available: glusterfs-client-xlators-3.12.1-1.el7.x86_64
>> (ovirt-4.2-centos-gluster312)
>>glusterfs-client-xlators(x86-64) = 3.12.1-1.el7
>>Available: glusterfs-client-xlators-3.12.1-2.el7.x86_64
>> (ovirt-4.2-centos-gluster312)
>>glusterfs-client-xlators(x86-64) = 3.12.1-2.el7
>>Available: 

[ovirt-users] Re: HostedEngine Unreachable

2019-02-25 Thread Simone Tiraboschi
On Mon, Feb 25, 2019 at 8:07 AM Sakhi Hadebe  wrote:

> Hi,
>
>  Our cluster was running fine, until we moved it to the new network.
> Looking at the agent.log file, it still pings the old gateway. Not sure if
> this is the reason it's failing the liveliness check.
>

You can update the gateway address used in network test with something
like:
hosted-engine --set-shared-config gateway 192.168.1.1 --type=he_local
hosted-engine --set-shared-config gateway 192.168.1.1 --type=he_shared

The first command will update the value used on the current host; the
second one will update the master copy on the shared storage, used by default
on the hosts you are going to deploy in the future.

But the engine health check doesn't really depend on that: it's a kind of
application-level ping sent over HTTP to the engine.
You can manually check it with:
curl http://$(grep fqdn /etc/ovirt-hosted-engine/hosted-engine.conf | cut
-d= -f2)/ovirt-engine/services/health

I'd suggest to check name resolution.


>
> Please help.
>
> On Thu, Feb 21, 2019 at 4:39 PM Sakhi Hadebe  wrote:
>
>> Hi,
>>
>> I need some help. We had a working ovirt cluster in the testing
>> environment. We have just moved it to the production environment with the
>> same network settings. The only thing we changed is the public VLAN. In
>> production we're using a different subnet.
>>
>> The problem is we can't get the HostedEngine up. It does come up, but it
>> fails the LIVELINESS CHECK and its health status is bad. We can't even
>> ping it. It is on the same subnet as the host machines: 192.168.x.x/24:
>>
>> *HostedEngine VM status:*
>>
>> [root@garlic qemu]# hosted-engine --vm-status
>>
>>
>> --== Host 1 status ==--
>>
>> conf_on_shared_storage : True
>> Status up-to-date  : True
>> Hostname   : goku.sanren.ac.za
>> Host ID: 1
>> Engine status  : {"reason": "vm not running on this
>> host", "health": "bad", "vm": "down", "detail": "unknown"}
>> Score  : 3400
>> stopped: False
>> Local maintenance  : False
>> crc32  : 57b2ece9
>> local_conf_timestamp   : 8463
>> Host timestamp : 8463
>> Extra metadata (valid at timestamp):
>> metadata_parse_version=1
>> metadata_feature_version=1
>> timestamp=8463 (Thu Feb 21 16:32:29 2019)
>> host-id=1
>> score=3400
>> vm_conf_refresh_time=8463 (Thu Feb 21 16:32:29 2019)
>> conf_on_shared_storage=True
>> maintenance=False
>> state=EngineDown
>> stopped=False
>>
>>
>> --== Host 2 status ==--
>>
>> conf_on_shared_storage : True
>> Status up-to-date  : True
>> Hostname   : garlic.sanren.ac.za
>> Host ID: 2
>> Engine status  : {"reason": "failed liveliness
>> check", "health": "bad", "vm": "up", "detail": "Powering down"}
>> Score  : 3400
>> stopped: False
>> Local maintenance  : False
>> crc32  : 71dc3daf
>> local_conf_timestamp   : 8540
>> Host timestamp : 8540
>> Extra metadata (valid at timestamp):
>> metadata_parse_version=1
>> metadata_feature_version=1
>> timestamp=8540 (Thu Feb 21 16:32:31 2019)
>> host-id=2
>> score=3400
>> vm_conf_refresh_time=8540 (Thu Feb 21 16:32:31 2019)
>> conf_on_shared_storage=True
>> maintenance=False
>> state=EngineStop
>> stopped=False
>> timeout=Thu Jan  1 04:24:29 1970
>>
>>
>> --== Host 3 status ==--
>>
>> conf_on_shared_storage : True
>> Status up-to-date  : True
>> Hostname   : gohan.sanren.ac.za
>> Host ID: 3
>> Engine status  : {"reason": "vm not running on this
>> host", "health": "bad", "vm": "down", "detail": "unknown"}
>> Score  : 3400
>> stopped: False
>> Local maintenance  : False
>> crc32  : 49645620
>> local_conf_timestamp   : 5480
>> Host timestamp : 5480
>> Extra metadata (valid at timestamp):
>> metadata_parse_version=1
>> metadata_feature_version=1
>> timestamp=5480 (Thu Feb 21 16:32:22 2019)
>> host-id=3
>> score=3400
>> vm_conf_refresh_time=5480 (Thu Feb 21 16:32:22 2019)
>> conf_on_shared_storage=True
>> maintenance=False
>> state=EngineDown
>> stopped=False
>> You have new mail in /var/spool/mail/root
>>
>> The service are running but with errors:
>> *vdsmd.service:*
>> [root@garlic qemu]# systemctl status vdsmd
>> ● vdsmd.service - Virtual 

[ovirt-users] Re: Expand existing gluster storage in ovirt 4.2/4.3

2019-02-25 Thread Sahina Bose
On Thu, Feb 21, 2019 at 8:47 PM  wrote:
>
> Hello,
> I have a 3-node oVirt 4.3 cluster that I've set up using Gluster 
> (hyperconverged setup).
> I need to increase the amount of storage and compute, so I added a 4th host 
> (server4.example.com). Is it possible to expand the number of bricks 
> (storage) in the "data" volume?
>
> I did some research, and the following old post caught my eye: 
> "https://medium.com/@tumballi/scale-your-gluster-cluster-1-node-at-a-time-62dd6614194e"
> So the question is, is taking that approach feasible? Is it even possible 
> from an oVirt point of view?
>

Expanding by 1 node is only possible if you have sufficient space on
the existing nodes in order to create a replica 2 + arbiter volume.
The post talks about how you can create your volumes in a way that lets you
move the bricks around so that each of the bricks in a replica
set resides on a separate server. We do not yet have an automatic way to
provision and rebalance the bricks in order to expand by 1 node.
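
For reference, growing that 1 x (2 + 1) volume means adding bricks in
multiples of the replica set (two data bricks plus one arbiter), roughly along
these lines; the brick paths and host placement are only an example, and the
exact add-brick syntax for arbiter volumes should be checked against the
gluster docs for your version:

  # add a second replica-2+arbiter subvolume to the "data" volume
  gluster volume add-brick data \
    server4.example.com:/gluster_bricks/data2/data \
    server1.example1.com:/gluster_bricks/data2/data \
    server2.example.com:/gluster_bricks/arbiter2/data
  # spread existing data over the new bricks
  gluster volume rebalance data start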

>
>
> ---
> My gluster volume:
> ---
> Volume Name: data
> Type: Replicate
> Volume ID: 003ffea0-b441-43cb-a38f-ccdf6ffb77f8
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: server1.example1.com:/gluster_bricks/data/data
> Brick2: server2.example.com:/gluster_bricks/data/data
> Brick3: server3.example.com:/gluster_bricks/data/data (arbiter)
> Options Reconfigured:
> cluster.granular-entry-heal: enable
> performance.strict-o-direct: on
> network.ping-timeout: 30
> storage.owner-gid: 36
> storage.owner-uid: 36
> cluster.choose-local: off
> user.cifs: off
> features.shard: on
> cluster.shd-wait-qlength: 1
> cluster.shd-max-threads: 8
> cluster.locking-scheme: granular
> cluster.data-self-heal-algorithm: full
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> cluster.eager-lock: enable
> network.remote-dio: enable
> performance.low-prio-threads: 32
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
>
>
> Thanks.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/HKPAM65CSDJ7LQTZTAUQSBDOFDZM7RQS/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4NNGYOLCFJ4SN2QEO34TSWYGSKBVGEL2/


[ovirt-users] Re: Stuck completing last step of 4.3 upgrade

2019-02-25 Thread Sahina Bose
On Thu, Feb 21, 2019 at 11:11 PM Jason P. Thomas  wrote:
>
> On 2/20/19 5:33 PM, Darrell Budic wrote:
>
> I was just helping Tristam on #ovirt with a similar problem, we found that 
> his two upgraded nodes were running multiple glusterfsd processes per brick 
> (but not all bricks). His volume & brick files in /var/lib/gluster looked 
> normal, but starting glusterd would often spawn extra fsd processes per 
> brick, seemed random. Gluster bug? Maybe related to  
> https://bugzilla.redhat.com/show_bug.cgi?id=1651246, but I’m helping debug 
> this one second hand… Possibly related to the brick crashes? We wound up 
> stopping glusterd, killing off all the fsds, restarting glusterd, and 
> repeating until it only spawned one fsd per brick. Did that to each updated 
> server, then restarted glusterd on the not-yet-updated server to get it 
> talking to the right bricks. That seemed to get to a mostly stable gluster 
> environment, but he’s still seeing 1-2 files listed as needing healing on the 
> upgraded bricks (but not the 3.12 brick). Mainly the DIRECT_IO_TEST and one 
> of the dom/ids files, but he can probably update that. Did manage to get his 
> engine going again, waiting to see if he’s stable now.
>
> Anyway, figured it was worth posting about so people could check for multiple 
> brick processes (glusterfsd) if they hit this stability issue as well, maybe 
> find common ground.
>
> Note: also encountered https://bugzilla.redhat.com/show_bug.cgi?id=1348434 
> trying to get his engine back up, restarting libvirtd let us get it going 
> again. Maybe un-needed if he’d been able to complete his third node upgrades, 
> but he got stuck before then, so...
>
>   -Darrell
>
> Stable is a relative term.  My unsynced entries total for each of my 4 
> volumes changes drastically (with the exception of the engine volume, it 
> pretty much bounces between 1 and 4).  The cluster has been "healing" for 18 
> hours or so and only the unupgraded HC node has healed bricks.  I did have 
> the problem that some files/directories were owned by root:root.  These VMs 
> did not boot until I changed ownership to 36:36.  Even after 18 hours, 
> there's anywhere from 20-386 entries in vol heal info for my 3 non engine 
> bricks.  Overnight I had one brick on one volume go down on one HC node.  
> When I bounced glusterd, it brought up a new fsd process for that brick.  I 
> killed the old one and now vol status reports the right pid on each of the 
> nodes.  This is quite the debacle.  If I can provide any info that might help 
> get this debacle moving in the right direction, let me know.

Can you provide the gluster brick logs and glusterd logs from the
servers (from /var/log/glusterfs/). Since you mention that heal seems
to be stuck, could you also provide the heal logs from
/var/log/glusterfs/glustershd.log.
If you can file a bug with these logs, that would be great - please use
https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS to log the
bug.
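
For anyone else hitting the duplicate-glusterfsd symptom described above, a
quick way to spot it is to compare the brick PIDs gluster reports with what is
actually running (the volume name is just an example):

  # PIDs gluster thinks each brick has
  gluster volume status engine
  # what is actually running; more than one glusterfsd per brick path
  # indicates the problem
  pgrep -af glusterfsd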


>
> Jason aka Tristam
>
>
> On Feb 14, 2019, at 1:12 AM, Sahina Bose  wrote:
>
> On Thu, Feb 14, 2019 at 2:39 AM Ron Jerome  wrote:
>
>
>
>
> Can you be more specific? What things did you see, and did you report bugs?
>
>
> I've got this one: https://bugzilla.redhat.com/show_bug.cgi?id=1649054
> and this one: https://bugzilla.redhat.com/show_bug.cgi?id=1651246
> and I've got bricks randomly going offline and getting out of sync with the 
> others at which point I've had to manually stop and start the volume to get 
> things back in sync.
>
>
> Thanks for reporting these. Will follow up on the bugs to ensure
> they're addressed.
> Regarding brciks going offline - are the brick processes crashing? Can
> you provide logs of glusterd and bricks. Or is this to do with
> ovirt-engine and brick status not being in sync?
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/3RVMLCRK4BWCSBTWVXU2JTIDBWU7WEOP/
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/4PKJSVDIH3V4H7Q2RKS2C4ZUMWDODQY6/
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: 
> 

[ovirt-users] Re: Add compute nodes offline

2019-02-25 Thread Yedidyah Bar David
On Mon, Feb 25, 2019 at 9:42 AM  wrote:
>
> Hi, I intend to add compute nodes to the engine without them being connected 
> to the external network, but when I do I get:
> An error has occurred during installation of Host node5.test.com: Failed to 
> execute stage 'Environment customization': Cannot retrieve metalink for 
> repository:
> Please verify its path and try again. Do I need to manually download some 
> offline packages in this case?

Are you using ovirt-node, or plain os (el7)?

The above error seems to be a result of bad yum repo file(s).

You can simply remove it/them.

If you use node, you should be set.

If plain OS, you can try installing beforehand 'ovirt-host', I think
this should be enough. If it's not, please check/share host-deploy
logs - you should find there which packages are missing.
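
A minimal sketch of that on the host, assuming its packages come from a local
mirror or are copied over by hand (the repo file name below is only an example
of the kind of file producing the metalink error):

  # drop the repo file(s) pointing at unreachable metalink URLs
  rm /etc/yum.repos.d/ovirt-4.3-dependencies.repo
  # pre-install the host packages from whatever repo the host can actually reach
  yum install ovirt-host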

Obviously, the engine will then not be able to know if there are updates,
so upgrades will be up to you to handle.

Best regards,
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LSE6CZMH6JOPSNA2U3UZ3YYXC77ZCOG3/


[ovirt-users] Re: Move infrastructure ,how to change FQDN

2019-02-25 Thread Yedidyah Bar David
On Sun, Feb 24, 2019 at 9:01 PM Fabrice SOLER <
fabrice.so...@ac-guadeloupe.fr> wrote:

> Hello,
>
> I need to move all the physical oVirt infrastructure to another site (for
> student education).
>
> The node's FQDN and the hosted engine's FQDN must change.
>
> The version is ovirt 4.2.8 for the hosted engine and nodes.
>
> Is there someone who knows how to do this?
>

To change the engine FQDN, check these:

https://www.ovirt.org/documentation/admin-guide/chap-Utilities.html#the-ovirt-engine-rename-tool

https://www.ovirt.org/develop/networking/changing-engine-hostname.html

If you think it looks as if the first contains several workarounds, you are
right. We have several open bugs about this tool. But the warning tone (and
text) of the latter page will still stand, even after we fix them.
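
For reference, the rename tool from the first link is run on the engine
machine roughly like this (it prompts for the new FQDN; read the warnings in
the linked pages first):

  /usr/share/ovirt-engine/setup/bin/ovirt-engine-rename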

Not sure how to change a node FQDN - perhaps we have docs for that. You can
always (if you have more than one node) move it to maintenance, remove it,
and re-add it with a new name.

Best regards,
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7OI4EDCEM7O6527GFF6L2EXMWLWY3CXY/