[ovirt-users] Re: Disk move fails - Invalid parameter: 'initial size=

2018-10-09 Thread Eyal Shenitzky
Can you please share the VDSM and the Engine version?

On Tue, Oct 9, 2018 at 5:34 PM Simon Vincent  wrote:

> I am trying to move a disk to another data domain but it always fails with
> the following error.
>
> ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engineScheduled-Thread-77) [] EVENT_ID:
> VDS_BROKER_COMMAND_FAILURE(10,802), VDSM blade15.crt.lan command
> HSMGetAllTasksStatusesVDS failed: Error creating a new volume: (u"Volume
> creation e6171aae-2c5b-4c91-84fc-506c0e835928 failed: Invalid parameter:
> 'initial size=122016117'",)
>
> It sounds a bit like this bug
> https://bugzilla.redhat.com/show_bug.cgi?id=1625240
>
> Does anyone know how to work around this problem?
>
> Thanks
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZQY7JRIAU6IKH6IOIQFCMCIGKPGBTP3L/
>


-- 
Regards,
Eyal Shenitzky


[ovirt-users] Re: Diary of hosted engine install woes

2018-10-09 Thread Brendan Holmes
Hi Simone,

 

Yes, the MAC address in answers.conf (OVEHOSTED_VM/vmMACAddr=)
is added as a reservation on the DHCP server, so in theory 10.0.0.109 should be 
assigned.  

 

However, perhaps DHCP is not working.  I have just changed to a static IP 
instead:

OVEHOSTED_VM/cloudinitVMStaticCIDR=str:10.0.0.109/24

(let me know if this isn’t the correct way)
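
For reference, these OVEHOSTED_* keys live in the otopi answer file; a minimal
sketch of the relevant section (the MAC and gateway values here are
placeholders, not taken from this thread):

```ini
# Hypothetical fragment of a hosted-engine answer file; only the CIDR
# line is the one quoted above, the other values are illustrative.
[environment:default]
OVEHOSTED_VM/vmMACAddr=str:00:16:3e:00:00:01
OVEHOSTED_VM/cloudinitVMStaticCIDR=str:10.0.0.109/24
OVEHOSTED_NETWORK/gateway=str:10.0.0.1
```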

 

My host fails to get an IP automatically from this DHCP server, so it is quite 
possible the engine's DHCP has been failing too.  Each time the host boots, I must 
run dhclient in order to receive an IP address.  Anyway, after changing this 
and re-running hosted-engine --deploy, it failed due to:

 

[ INFO  ] TASK [Copy local VM disk to shared storage]

[ INFO  ] changed: [localhost]

[ INFO  ] TASK [show local_vm_ip.std_out_lines[0] that will be written to etc 
hosts]

[ INFO  ] ok: [localhost]

[ INFO  ] TASK [show FQDN]

[ INFO  ] ok: [localhost]

[ INFO  ] TASK [Clean /etc/hosts on the host]

[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an option 
with an undefined variable. The error was: list object has no element 0\n\nThe 
error appears to have been in 
'/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml': line 400, 
column 5, but may\nbe elsewhere in the file depending on the exact syntax 
problem.\n\nThe offending line appears to be:\n\ndebug: var=FQDN\n  - name: 
Clean /etc/hosts on the host\n^ here\n"}

 

I have just tried deploying using the webui, same error.  I suspect the 
“undefined variable” is local_vm_ip.std_out_lines[0].  My new debug task that 
tries to output this is:

  - name: show local_vm_ip.std_out_lines[0] that will be written to etc hosts

debug: var=local_vm_ip.stdout_lines[0]

 

You can see the output of this above.  I think I was mistaken to suggest the 
value of this is localhost; localhost is just the machine this task ran on.  I 
don't think the list local_vm_ip.std_out_lines is defined.  Any more ideas?
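
For what it's worth, the error text suggests the list exists but is empty
rather than misnamed: indexing [0] on an empty stdout_lines gives exactly
"list object has no element 0". A small Python sketch of that failure mode
(the dict is an illustrative stand-in for Ansible's registered result, not
real setup internals):

```python
# Illustrative stand-in for Ansible's registered result when the task
# that captures the local VM's IP produced no output lines.
local_vm_ip = {"stdout_lines": []}

def first_line(result):
    """Mimic the template lookup local_vm_ip.stdout_lines[0]."""
    lines = result.get("stdout_lines", [])
    if not lines:
        # Same symptom Ansible reports for an empty list
        raise IndexError("list object has no element 0")
    return lines[0]

try:
    print(first_line(local_vm_ip))
except IndexError as exc:
    print(exc)  # prints: list object has no element 0
```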

 

Many thanks

 

From: Simone Tiraboschi  
Sent: 09 October 2018 16:51
To: B Holmes 
Cc: users 
Subject: Re: [ovirt-users] Re: Diary of hosted engine install woes

 

 

On Tue, Oct 9, 2018 at 4:54 PM m...@brendanh.com wrote:

I've added a record to the DNS server here:
ovirt-engine.example.com  10.0.0.109

 

OK, and how will the engine VM get that address?

Are you using DHCP? Do you have a DHCP reservation for the MAC address you are 
using on the engine VM?

Are you configuring it with a static IP?

 


This IP address is on the physical network that the host is on (host is on 
10.0.0.171).  I trust this is correct and I should not resolve to a natted IP 
instead.  I notice that regardless of this record, the name 
ovirt-engine.example.com resolves to a 
natted IP, 192.168.124.51, because the ansible script adds an entry to 
/etc/hosts:
192.168.124.51  ovirt-engine.example.com
While the script is running, I can successfully ping 
ovirt-engine.example.com; it responds on 
192.168.124.51.  So as you say: "host can correctly resolve the name of the 
engine VM", but it's not the DNS record's IP.  If I remove the DNS record and 
run hosted-engine --deploy, I get the error:
[ ERROR ] Host name is not valid: ovirt-engine.example.com 
did not resolve into an IP address

Anyway, I added back the DNS record and ran hosted-engine --deploy command, it 
failed at:
[ INFO  ] TASK [Clean /etc/hosts on the host]
[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an option 
with an undefined variable. The error was: list object has no element 0\n\nThe 
error appears to have been in 
'/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml': line 396, 
column 5, but may\nbe elsewhere in the file depending on the exact syntax 
problem.\n\nThe offending line appears to be:\n\nchanged_when: True\n  - 
name: Clean /etc/hosts on the host\n^ here\n"}

To debug, I added tasks to create_target_vm.yml that output the values of 
local_vm_ip.std_out_lines[0] and FQDN that are used in this task, then ran the 
usual deploy command again.  They are both localhost:
[ INFO  ] TASK [show local_vm_ip.std_out_lines[0] that will be written to etc 
hosts]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [show FQDN]
[ INFO  ] ok: [localhost]

This time, it gets past [Clean /etc/hosts on the host], but hangs at [ INFO  ] 
TASK [Check engine VM health] same as before.

 

This is fine, the bootstrap local VM runs over a natted network then, once 
ready it will be shutdown and moved to the shared storage. At that point it 
will be restarted on your management network.

 

  I catted /etc/hosts while it was hanging and it contains:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

The ovirt-engi

[ovirt-users] Power Management connection to Citrix Studio

2018-10-09 Thread jtxtm25
Hello,

Making a transition into oVirt from XenServer. Does anyone know of any 
resources around configuring oVirt with Citrix Studio?

Referencing:
Studio > Configuration > Hosting > Add Connection and Resources
Connection Types
   Citrix XenServer
   Microsoft System Center Virtual Machine Manager
   VMware vSphere
   CloudPlatform
   Microsoft Azure
   Microsoft Azure Classic
   Amazon EC2
   Microsoft Configuration Manager Wake on LAN

Mapping to the hypervisor from Studio is required to take advantage of power 
management features. Has anyone found a way to map directly to oVirt? Thanks,

Jake


[ovirt-users] Re: IPoIB broken with ovirt 4.2.6

2018-10-09 Thread Giulio Casella
> Thanks for the heads up! We are preparing oVirt 4.2.7 RC2 today,
> I'll issue a oVirt Node 4.2.6 Async 2 in parallel, should both go
> out tomorrow.
> 
> 
> Released

You rock!
Upgraded and working fine.

Thanks,
gc


[ovirt-users] Re: Network configuration for self-hosted engine deployement oVirt node 4.2

2018-10-09 Thread Simone Tiraboschi
On Tue, Oct 9, 2018 at 4:47 PM Arnaud DEBEC  wrote:

> I try to deploy the engine today and got the following error: "A Network
> interface is required" like in my previous test.
>

Can you please attach the output of
  ansible localhost -m setup
executed on that host?
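
For context, "A Network interface is required" typically means the setup
filtered out every candidate NIC; bonds in modes unsupported for VM networks
(such as the Round-Robin iSCSI bond described below) are excluded. A rough
Python sketch of that filtering idea; the set of valid modes is an assumption
based on oVirt's documentation, not the actual setup code:

```python
# Assumption (from oVirt docs, not the setup code): bond modes 1-4 are
# valid for VM networks, while mode 0 (round-robin), 5 and 6 are not.
VALID_BOND_MODES = {1, 2, 3, 4}

def usable_interfaces(nics):
    """nics: iterable of (name, kind, bond_mode or None) tuples."""
    return [name for name, kind, mode in nics
            if not (kind == "bond" and mode not in VALID_BOND_MODES)]

# An active-backup bond qualifies; a round-robin iSCSI bond does not.
print(usable_interfaces([("bond0", "bond", 1), ("bond1", "bond", 0)]))
# → ['bond0']
```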


>
> Node version: virt-node-ng-installer-ovirt-4.2-2018062610
> 1. Install the oVirt node with DNS configured
> 2. Configure bond0 (profile name: bond0, device: bond0) with eno1 and eno5
> with Active Backup policy (to be the ovirtmgmt)
> 3. Configure bond1 (profile name: bond1, device: bond1) with eno2, eno3,
> eno6 and eno7 with Round-Robin policy (iSCSI)
>
> Do you have any idea? Here is the part of the log related to the error:
>
> 2018-10-09 15:20:20,545+0200 DEBUG otopi.context
> context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN
> 2018-10-09 15:20:20,545+0200 DEBUG otopi.context
> context.dumpEnvironment:869 ENV OVEHOSTED_NETWORK/gateway=str:'172.16.51.1'
> 2018-10-09 15:20:20,546+0200 DEBUG otopi.context
> context.dumpEnvironment:869 ENV
> QUESTION/1/OVEHOSTED_GATEWAY=str:'172.16.51.1'
> 2018-10-09 15:20:20,547+0200 DEBUG otopi.context
> context.dumpEnvironment:873 ENVIRONMENT DUMP - END
> 2018-10-09 15:20:20,549+0200 DEBUG otopi.context
> context._executeMethod:128 Stage customization METHOD
> otopi.plugins.gr_he_common.network.bridge.Plugin._customization
> 2018-10-09 15:20:20,551+0200 DEBUG
> otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:153
> ansible-playbook: cmd: ['/bin/ansible-playbook',
> '--module-path=/usr/share/ovirt-hosted-engine-setup/ansible',
> '--inventory=localhost,', '--extra-vars=@/tmp/tmpnaI1ZJ',
> '/usr/share/ovirt-hosted-engine-setup/ansible/get_network_interfaces.yml']
> 2018-10-09 15:20:20,551+0200 DEBUG
> otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:154
> ansible-playbook: out_path: /tmp/tmpBI6O9s
> 2018-10-09 15:20:20,551+0200 DEBUG
> otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:155
> ansible-playbook: vars_path: /tmp/tmpnaI1ZJ
> 2018-10-09 15:20:20,551+0200 DEBUG
> otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:156
> ansible-playbook: env: {'HE_ANSIBLE_LOG_PATH':
> '/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-ansible-get_network_interfaces-20181009152020-xbd9mm.log',
> 'LESSOPEN': '||/usr/bin/lesspipe.sh %s', 'SSH_CLIENT': '172.16.51.44 53872
> 22', 'SELINUX_USE_CURRENT_RANGE': '', 'LOGNAME': 'root', 'USER': 'root',
> 'HOME': '/root', 'PATH':
> '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin',
> 'GUESTFISH_RESTORE': '\\e[0m', 'GUESTFISH_INIT': '\\e[1;34m', 'LANG':
> 'en_US.UTF-8', 'TERM': 'screen', 'SHELL': '/bin/bash', 'LANGUAGE': '',
> 'SHLVL': '2', 'PWD': '/root', 'HISTSIZE': '1000', 'OTOPI_CALLBACK_OF':
> '/tmp/tmpBI6O9s', 'XMODIFIERS': '@im=none', 'XDG_RUNTIME_DIR':
> '/run/user/0', 'GUESTFISH_PS1': '\\[\\e[1;32m\\]>\\[\\e[0;31m\\] ',
> 'ANSIBLE_STDOUT_CALLBACK': '1_otopi_json', 'PYTHONPATH':
> '/usr/share/ovirt-hosted-engine-setup/scripts/..:',
> 'SELINUX_ROLE_REQUESTED': '', 'MAIL
>  ': '/var/spool/mail/root', 'ANSIBLE_CALLBACK_WHITELIST':
> '1_otopi_json,2_ovirt_logger', 'XDG_SESSION_ID': '2346', 'STY':
> '21717.pts-0.OB-PMO-VSR01', 'TERMCAP': 'SC|screen|VT 100/ANSI X3.64 virtual
> terminal:\\\n\t:DO=\\E[%dB:LE=\\E[%dD:RI=\\E[%dC:UP=\\E[%dA:bs:bt=\\E[Z:\\\n\t:cd=\\E[J:ce=\\E[K:cl=\\E[H\\E[J:cm=\\E[%i%d;%dH:ct=\\E[3g:\\\n\t:do=^J:nd=\\E[C:pt:rc=\\E8:rs=\\Ec:sc=\\E7:st=\\EH:up=\\EM:\\\n\t:le=^H:bl=^G:cr=^M:it#8:ho=\\E[H:nw=\\EE:ta=^I:is=\\E)0:\\\n\t:li#61:co#106:am:xn:xv:LP:sr=\\EM:al=\\E[L:AL=\\E[%dL:\\\n\t:cs=\\E[%i%d;%dr:dl=\\E[M:DL=\\E[%dM:dc=\\E[P:DC=\\E[%dP:\\\n\t:im=\\E[4h:ei=\\E[4l:mi:IC=\\E[%d@
> :ks=\\E[?1h\\E=:\\\n\t:ke=\\E[?1l\\E>:vi=\\E[?25l:ve=\\E[34h\\E[?25h:vs=\\E[34l:\\\n\t:ti=\\E[?1049h:te=\\E[?1049l:us=\\E[4m:ue=\\E[24m:so=\\E[3m:\\\n\t:se=\\E[23m:mb=\\E[5m:md=\\E[1m:mr=\\E[7m:me=\\E[m:ms:\\\n\t:Co#8:pa#64:AF=\\E[3%dm:AB=\\E[4%dm:op=\\E[39;49m:AX:\\\n\t:vb=\\Eg:G0:as=\\E(0:ae=\\E(B:\\\n\t:ac=\\140\\140aaffggjjkkllmmnnooppqqrrssttuuvvwwxxyyzz{{||}}~~..--
>
>  
> ++,,hhII00:\\\n\t:po=\\E[5i:pf=\\E[4i:Km=\\E[M:k0=\\E[10~:k1=\\EOP:k2=\\EOQ:\\\n\t:k3=\\EOR:k4=\\EOS:k5=\\E[15~:k6=\\E[17~:k7=\\E[18~:\\\n\t:k8=\\E[19~:k9=\\E[20~:k;=\\E[21~:F1=\\E[23~:F2=\\E[24~:\\\n\t:F3=\\E[1;2P:F4=\\E[1;2Q:F5=\\E[1;2R:F6=\\E[1;2S:\\\n\t:F7=\\E[15;2~:F8=\\E[17;2~:F9=\\E[18;2~:FA=\\E[19;2~:kb=\x7f:\\\n\t:K2=\\EOE:kB=\\E[Z:kF=\\E[1;2B:kR=\\E[1;2A:*4=\\E[3;2~:\\\n\t:*7=\\E[1;2F:#2=\\E[1;2H:#3=\\E[2;2~:#4=\\E[1;2D:%c=\\E[6;2~:\\\n\t:%e=\\E[5;2~:%i=\\E[1;2C:kh=\\E[1~:@1=\\E[1~:kH=\\E[4~:\\\n\t:@7=\\E[4~:kN=\\E[6~:kP=\\E[5~:kI=\\E[2~:kD=\\E[3~:ku=\\EOA:\\\n\t:kd=\\EOB:kr=\\EOC:kl=\\EOD:km:',
> 'LS_COLORS':
> 'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:

[ovirt-users] Re: Diary of hosted engine install woes

2018-10-09 Thread Simone Tiraboschi
On Tue, Oct 9, 2018 at 4:54 PM  wrote:

> I've added a record to the DNS server here:
> ovirt-engine.example.com  10.0.0.109
>

OK, and how will the engine VM get that address?
Are you using DHCP? Do you have a DHCP reservation for the MAC address you
are using on the engine VM?
Are you configuring it with a static IP?


>
> This IP address is on the physical network that the host is on (host is on
> 10.0.0.171).  I trust this is correct and I should not resolve to a natted
> IP instead.  I notice that regardless of this record, the name
> ovirt-engine.example.com resolves to a natted IP: 192.168.124.51 because
> the ansible script adds an entry to /etc/hosts:
> 192.168.124.51  ovirt-engine.example.com
> While the script is running, I can successfully ping
> ovirt-engine.example.com; it responds on 192.168.124.51.  So as you say:
> "host can correctly resolve the name of the engine VM", but it's not the
> DNS record's IP.  If I remove the DNS record and run hosted-engine
> --deploy, I get error:
> [ ERROR ] Host name is not valid: ovirt-engine.example.com did not
> resolve into an IP address
>
> Anyway, I added back the DNS record and ran hosted-engine --deploy
> command, it failed at:
> [ INFO  ] TASK [Clean /etc/hosts on the host]
> [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an
> option with an undefined variable. The error was: list object has no
> element 0\n\nThe error appears to have been in
> '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml': line
> 396, column 5, but may\nbe elsewhere in the file depending on the exact
> syntax problem.\n\nThe offending line appears to be:\n\nchanged_when:
> True\n  - name: Clean /etc/hosts on the host\n^ here\n"}
>
> To debug, I added tasks to create_target_vm.yml that output the values of
> local_vm_ip.std_out_lines[0] and FQDN that are used in this task, then ran
> the usual deploy command again.  They are both localhost:
> [ INFO  ] TASK [show local_vm_ip.std_out_lines[0] that will be written to
> etc hosts]
> [ INFO  ] ok: [localhost]
> [ INFO  ] TASK [show FQDN]
> [ INFO  ] ok: [localhost]
>
> This time, it gets past [Clean /etc/hosts on the host], but hangs at [
> INFO  ] TASK [Check engine VM health] same as before.


This is fine, the bootstrap local VM runs over a natted network then, once
ready it will be shutdown and moved to the shared storage. At that point it
will be restarted on your management network.


>   I catted /etc/hosts while it was hanging and it contains:
> 127.0.0.1   localhost localhost.localdomain localhost4
> localhost4.localdomain4
> ::1 localhost localhost.localdomain localhost6
> localhost6.localdomain6
>
> The ovirt-engine.example.com has been deleted!  I pinged
> ovirt-engine.example.com and it now resolves to its IP on the physical
> network: 10.0.0.109.  So I added back this /etc/hosts entry:
> 192.168.124.51  ovirt-engine.example.com


Please avoid this.


>
> It subsequently errored:
> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed":
> true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta":
> "0:00:00.167559", "end": "2018-10-09 15:43:41.947274", "rc": 0, "start":
> "2018-10-09 15:43:41.779715", "stderr": "", "stderr_lines": [], "stdout":
> "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=6810
> (Tue Oct  9 15:43:36
> 2018)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=6810 (Tue Oct  9
> 15:43:37
> 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"host\", \"host-id\": 1, \"engine-status\": {\"reason\":
> \"failed liveliness check\", \"health\": \"bad\", \"vm\": \"up\",
> \"detail\": \"Up\"}, \"score\": 3400, \"stopped\": false, \"maintenance\":
> false, \"crc32\": \"c5d76f8b\", \"local_conf_timestamp\": 6810,
> \"host-ts\": 6810}, \"global_maintenance\": false}", "stdout_lines":
> ["{\"1\": {\"conf_
>  on_shared_storage\": true, \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=6810
> (Tue Oct  9 15:43:36
> 2018)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=6810 (Tue Oct  9
> 15:43:37
> 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"host\", \"host-id\": 1, \"engine-status\": {\"reason\":
> \"failed liveliness check\", \"health\": \"bad\", \"vm\": \"up\",
> \"detail\": \"Up\"}, \"score\": 3400, \"stopped\": false, \"maintenance\":
> false, \"crc32\": \"c5d76f8b\", \"local_conf_timestamp\": 6810,
> \"host-ts\": 6810}, \"global_maintenance\": false}"]}
>
> How can I check the hosted-engine's IP address to ensure name resolution
> is correct?
>

You can connect to that VM with VNC and check the IP there.
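
Independent of VNC, a quick scripted check that the engine FQDN resolves where
expected (and not to a leftover natted /etc/hosts entry) could look like this;
"localhost" is only a demo hostname:

```python
import socket

def resolve(fqdn):
    """Return the sorted set of addresses fqdn resolves to, [] on failure."""
    try:
        return sorted({info[4][0] for info in socket.getaddrinfo(fqdn, None)})
    except socket.gaierror:
        return []

# e.g. resolve("ovirt-engine.example.com") should list only 10.0.0.109 once
# the natted /etc/hosts entry is gone; "localhost" used here as a demo:
print(resolve("localhost"))
```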



[ovirt-users] Re: Moving to new storage questions

2018-10-09 Thread Mark Steele
Eyal,

Thank you very much - that worked.

Next task - delete disk snapshots - unfortunately a couple of them require
the VM to be shutdown first.

***
*Mark Steele*
CIO / VP Technical Operations | TelVue Corporation
TelVue - We Share Your Vision
16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054
800.885.8886 x128 | mste...@telvue.com | http://www.telvue.com
twitter: http://twitter.com/telvue | facebook:
https://www.facebook.com/telvue


On Tue, Oct 9, 2018 at 7:29 AM Eyal Shenitzky  wrote:

> Hey Mark,
>
> Yes, after the disks copied successfully you can remove the original disks.
>
>
>
>
> On Tue, Oct 9, 2018 at 1:49 PM Mark Steele  wrote:
>
>> Good morning,
>>
>> We are in the process of moving our oVirt installation to a new storage
>> solution. We have been 'move'-ing VM disks to the new storage without
>> issue. We have several templates that use the old storage device - is there
>> an equivalent process for moving template disks to the new storage unit?
>>
>> I see there is a 'copy' function in the template disk screen which
>> creates a disk in the new storage domain. Can I then remove the original
>> disk image from the original domain?
>>
>> Best regards,
>>
>> Mark
>>
>> ***
>> *Mark Steele*
>> CIO / VP Technical Operations | TelVue Corporation
>> TelVue - We Share Your Vision
>> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054
>> 800.885.8886 x128 | mste...@telvue.com | http://www.telvue.com
>> twitter: http://twitter.com/telvue | facebook:
>> https://www.facebook.com/telvue
>
>
> --
> Regards,
> Eyal Shenitzky
>


[ovirt-users] Re: Out-of-sync networks can only be detached

2018-10-09 Thread Dominik Holler
On Tue, 9 Oct 2018 13:24:51 +0200
Sakhi Hadebe  wrote:

> Hi,
> 
> I have a 3-node oVirt cluster. I have configured 2 logical networks:
> ovirtmgmt and public. Public logical network is attached in only 2 nodes
> and failing to attach on the 3rd node with the below error
> Invalid operation, out-of-sync network 'public' can only be detached.
> 
> Please help, I have been stuck on this for almost the whole day now. How do I
> fix this error?
> 

The error message in the UI might include the wrong network name.
I guess the network ovirtmgmt is out-of-sync on the 3rd node.
Why the network is out of sync is shown as a tooltip when you hover the
mouse pointer over ovirtmgmt in
"Compute > Hosts > 3rd host > Network Interfaces > Setup Host Networks".
If the shown information does not help you, please share a screenshot
of this dialog.

If there is a line like:
The following Network definitions on the Network Interface are different than 
those on the Logical Network. Please synchronize the Network Interface before 
editing network ${NETWORK_NOT_IN_SYNC}. The non-synchronized values are\: 
${OUT_OF_SYNC_VALUES}.
in engine.log, please share this line, too.
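
To pull the interesting fields out of such a line, something like the
following could be used; the sample line is fabricated to match the template
above, and your engine.log will contain the real network name and values:

```python
import re

# Fabricated engine.log line shaped like the message template above.
line = ("Please synchronize the Network Interface before editing "
        "network ovirtmgmt. The non-synchronized values are: [BRIDGED]")

match = re.search(r"editing\s+network\s+(\S+?)\.\s+The non-synchronized "
                  r"values are:?\s*(.+)", line)
if match:
    print("network:", match.group(1), "values:", match.group(2))
```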


[ovirt-users] Re: Diary of hosted engine install woes

2018-10-09 Thread me
I've added a record to the DNS server here:
ovirt-engine.example.com  10.0.0.109

This IP address is on the physical network that the host is on (host is on 
10.0.0.171).  I trust this is correct and I should not resolve to a natted IP 
instead.  I notice that regardless of this record, the name 
ovirt-engine.example.com resolves to a natted IP: 192.168.124.51 because the 
ansible script adds an entry to /etc/hosts:
192.168.124.51  ovirt-engine.example.com
While the script is running, I can successfully ping 
ovirt-engine.example.com; it responds on 192.168.124.51.  So as you say: "host 
can correctly resolve the name of the engine VM", but it's not the DNS record's 
IP.  If I remove the DNS record and run hosted-engine --deploy, I get error:
[ ERROR ] Host name is not valid: ovirt-engine.example.com did not resolve into 
an IP address

Anyway, I added back the DNS record and ran hosted-engine --deploy command, it 
failed at:
[ INFO  ] TASK [Clean /etc/hosts on the host]
[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an option 
with an undefined variable. The error was: list object has no element 0\n\nThe 
error appears to have been in 
'/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml': line 396, 
column 5, but may\nbe elsewhere in the file depending on the exact syntax 
problem.\n\nThe offending line appears to be:\n\nchanged_when: True\n  - 
name: Clean /etc/hosts on the host\n^ here\n"}

To debug, I added tasks to create_target_vm.yml that output the values of 
local_vm_ip.std_out_lines[0] and FQDN that are used in this task, then ran the 
usual deploy command again.  They are both localhost:
[ INFO  ] TASK [show local_vm_ip.std_out_lines[0] that will be written to etc 
hosts]
[ INFO  ] ok: [localhost]
[ INFO  ] TASK [show FQDN]
[ INFO  ] ok: [localhost]

This time, it gets past [Clean /etc/hosts on the host], but hangs at [ INFO  ] 
TASK [Check engine VM health] same as before.  I catted /etc/hosts while it was 
hanging and it contains:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

The ovirt-engine.example.com has been deleted!  I pinged 
ovirt-engine.example.com and it now resolves to its IP on the physical network: 
10.0.0.109.  So I added back this /etc/hosts entry:
192.168.124.51  ovirt-engine.example.com
It subsequently errored:
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed": true, 
"cmd": ["hosted-engine", "--vm-status", "--json"], "delta": "0:00:00.167559", 
"end": "2018-10-09 15:43:41.947274", "rc": 0, "start": "2018-10-09 
15:43:41.779715", "stderr": "", "stderr_lines": [], "stdout": "{\"1\": 
{\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\": 
\"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=6810 (Tue 
Oct  9 15:43:36 2018)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=6810 (Tue 
Oct  9 15:43:37 
2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
 \"hostname\": \"host\", \"host-id\": 1, \"engine-status\": {\"reason\": 
\"failed liveliness check\", \"health\": \"bad\", \"vm\": \"up\", \"detail\": 
\"Up\"}, \"score\": 3400, \"stopped\": false, \"maintenance\": false, 
\"crc32\": \"c5d76f8b\", \"local_conf_timestamp\": 6810, \"host-ts\": 6810}, 
\"global_maintenance\": false}", "stdout_lines": ["{\"1\": {\"conf_
 on_shared_storage\": true, \"live-data\": true, \"extra\": 
\"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=6810 (Tue 
Oct  9 15:43:36 2018)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=6810 (Tue 
Oct  9 15:43:37 
2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
 \"hostname\": \"host\", \"host-id\": 1, \"engine-status\": {\"reason\": 
\"failed liveliness check\", \"health\": \"bad\", \"vm\": \"up\", \"detail\": 
\"Up\"}, \"score\": 3400, \"stopped\": false, \"maintenance\": false, 
\"crc32\": \"c5d76f8b\", \"local_conf_timestamp\": 6810, \"host-ts\": 6810}, 
\"global_maintenance\": false}"]}

How can I check the hosted-engine's IP address to ensure name resolution is 
correct?


[ovirt-users] Re: ovirt 4.2.6 and cockpit with plain hosts

2018-10-09 Thread Gianluca Cecchi
On Tue, Oct 9, 2018 at 4:10 PM Gianluca Cecchi 
wrote:

> Is cockpit supposed to run on plain CentOS hosts?
> I didn't have it installed, possibly because it started as a hypervisor
> host in version 3.x, when cockpit was not available.
> During updates it was not pulled in as a required package.
>
> I manually installed it and then enabled/started the cockpit.socket
> systemd unit.
> It seems started and I can correctly connect to https at tcp port 9090,
> but when doing a status of the cockpit.socket unit I see
> Oct 09 15:59:01 ov300 update-motd[22990]:
> /usr/share/cockpit/motd/update-motd: line 24: /run/cockpit/active.motd: No
> such file or directory
> Oct 09 15:59:01 ov300 ln[22998]: /bin/ln: failed to create symbolic link
> ‘/run/cockpit/motd’: No such file or directory
> Oct 09 15:59:01 ov300 systemd[1]: Listening on Cockpit Web Service Socket.
>
> Line 24 of /usr/share/cockpit/motd/update-motd :
> printf 'Web console: %s%s\n\n' "${hostname_url}" "${ip_url}"  >
> /run/cockpit/active.motd
>
> and on the host
> [root@ov300 ~]# ll -d /run/c*
> drwxr-x---. 2 chrony chrony 60 May 24 16:18 /run/chrony
> -rw-r--r--. 1 root   root5 May 24 16:18 /run/chronyd.pid
> drwxr-xr-x. 2 root   root   40 May 24 16:18 /run/console
> -rw-r--r--. 1 root   root5 May 24 16:18 /run/crond.pid
> --. 1 root   root0 May 24 16:18 /run/cron.reboot
> [root@ov300 ~]#
>
> Is this a bug? Can I manually create the " /run/cockpit" directory?
>
> Thanks,
> Gianluca
>

It seems that a reboot of the node is "sufficient" to solve the situation.
The cockpit.socket unit is active and the /run/cockpit directory has been
generated...

[root@ov300 ~]# uptime
 16:44:48 up 3 min,  1 user,  load average: 0.04, 0.08, 0.05
[root@ov300 ~]#

[root@ov300 ~]# ll -d /run/cockpit/
drwxr-xr-x. 2 root root 80 Oct  9 16:41 /run/cockpit/
[root@ov300 ~]#

 [root@ov300 ~]# ll /run/cockpit/
total 4
-rw-r--r--. 1 root root 84 Oct  9 16:42 active.motd
lrwxrwxrwx. 1 root root 11 Oct  9 16:41 motd -> active.motd
[root@ov300 ~]#

[root@ov300 ~]# systemctl status cockpit.socket
● cockpit.socket - Cockpit Web Service Socket
   Loaded: loaded (/usr/lib/systemd/system/cockpit.socket; enabled; vendor
preset: disabled)
   Active: active (listening) since Tue 2018-10-09 16:41:39 CEST; 1min 22s
ago
 Docs: man:cockpit-ws(8)
   Listen: [::]:9090 (Stream)
  Process: 995 ExecStartPost=/bin/ln -snf active.motd /run/cockpit/motd
(code=exited, status=0/SUCCESS)
  Process: 986 ExecStartPost=/usr/share/cockpit/motd/update-motd  localhost
(code=exited, status=0/SUCCESS)
Tasks: 0

Oct 09 16:41:39 ov300 systemd[1]: Starting Cockpit Web Service Socket.
Oct 09 16:41:39 ov300 systemd[1]: Listening on Cockpit Web Service Socket.
[root@ov300 ~]#
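
As an alternative to rebooting, a tmpfiles.d drop-in should make systemd
recreate the directory at every boot; this is only a sketch (the file name is
arbitrary, and newer cockpit packages may already ship an equivalent entry):

```
# Hypothetical /etc/tmpfiles.d/cockpit-run.conf: have systemd-tmpfiles
# create /run/cockpit at boot so update-motd can write active.motd there.
d /run/cockpit 0755 root root -
```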

Gianluca


[ovirt-users] Re: Network configuration for self-hosted engine deployement oVirt node 4.2

2018-10-09 Thread Arnaud DEBEC
I try to deploy the engine today and got the following error: "A Network 
interface is required" like in my previous test.

Node version: virt-node-ng-installer-ovirt-4.2-2018062610
1. Install the oVirt node with DNS configured
2. Configure bond0 (profile name: bond0, device: bond0) with eno1 and eno5 with 
Active Backup policy (to be the ovirtmgmt)
3. Configure bond1 (profile name: bond1, device: bond1) with eno2, eno3, eno6 
and eno7 with Round-Robin policy (iSCSI)

Do you have any idea? Here is the part of the log related to the error:

2018-10-09 15:20:20,545+0200 DEBUG otopi.context context.dumpEnvironment:859 
ENVIRONMENT DUMP - BEGIN
2018-10-09 15:20:20,545+0200 DEBUG otopi.context context.dumpEnvironment:869 
ENV OVEHOSTED_NETWORK/gateway=str:'172.16.51.1'
2018-10-09 15:20:20,546+0200 DEBUG otopi.context context.dumpEnvironment:869 
ENV QUESTION/1/OVEHOSTED_GATEWAY=str:'172.16.51.1'
2018-10-09 15:20:20,547+0200 DEBUG otopi.context context.dumpEnvironment:873 
ENVIRONMENT DUMP - END
2018-10-09 15:20:20,549+0200 DEBUG otopi.context context._executeMethod:128 
Stage customization METHOD 
otopi.plugins.gr_he_common.network.bridge.Plugin._customization
2018-10-09 15:20:20,551+0200 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:153 
ansible-playbook: cmd: ['/bin/ansible-playbook', 
'--module-path=/usr/share/ovirt-hosted-engine-setup/ansible', 
'--inventory=localhost,', '--extra-vars=@/tmp/tmpnaI1ZJ', 
'/usr/share/ovirt-hosted-engine-setup/ansible/get_network_interfaces.yml']
2018-10-09 15:20:20,551+0200 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:154 
ansible-playbook: out_path: /tmp/tmpBI6O9s
2018-10-09 15:20:20,551+0200 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:155 
ansible-playbook: vars_path: /tmp/tmpnaI1ZJ
2018-10-09 15:20:20,551+0200 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:156 
ansible-playbook: env: {'HE_ANSIBLE_LOG_PATH': 
'/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-ansible-get_network_interfaces-20181009152020-xbd9mm.log',
 'LESSOPEN': '||/usr/bin/lesspipe.sh %s', 'SSH_CLIENT': '172.16.51.44 53872 
22', 'SELINUX_USE_CURRENT_RANGE': '', 'LOGNAME': 'root', 'USER': 'root', 
'HOME': '/root', 'PATH': 
'/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin', 
'GUESTFISH_RESTORE': '\\e[0m', 'GUESTFISH_INIT': '\\e[1;34m', 'LANG': 
'en_US.UTF-8', 'TERM': 'screen', 'SHELL': '/bin/bash', 'LANGUAGE': '', 'SHLVL': 
'2', 'PWD': '/root', 'HISTSIZE': '1000', 'OTOPI_CALLBACK_OF': '/tmp/tmpBI6O9s', 
'XMODIFIERS': '@im=none', 'XDG_RUNTIME_DIR': '/run/user/0', 'GUESTFISH_PS1': 
'\\[\\e[1;32m\\]>\\[\\e[0;31m\\] ', 'ANSIBLE_STDOUT_CALLBACK': 
'1_otopi_json', 'PYTHONPATH': 
'/usr/share/ovirt-hosted-engine-setup/scripts/..:', 'SELINUX_ROLE_REQUESTED': 
'', 'MAIL
 ': '/var/spool/mail/root', 'ANSIBLE_CALLBACK_WHITELIST': 
'1_otopi_json,2_ovirt_logger', 'XDG_SESSION_ID': '2346', 'STY': 
'21717.pts-0.OB-PMO-VSR01', 'TERMCAP': '...', 'LS_COLORS': '...'}
[TERMCAP and LS_COLORS values elided: terminal-capability debris with no bearing on the error; the log excerpt is truncated at this point in the archive]

[ovirt-users] Disk move fails - Invalid parameter: 'initial size=

2018-10-09 Thread Simon Vincent
I am trying to move a disk to another data domain but it always fails with
the following error.

ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engineScheduled-Thread-77) [] EVENT_ID:
VDS_BROKER_COMMAND_FAILURE(10,802), VDSM blade15.crt.lan command
HSMGetAllTasksStatusesVDS failed: Error creating a new volume: (u"Volume
creation e6171aae-2c5b-4c91-84fc-506c0e835928 failed: Invalid parameter:
'initial size=122016117'",)

It sounds a bit like this bug
https://bugzilla.redhat.com/show_bug.cgi?id=1625240

Does anyone know how to work around this problem?

Thanks
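A first diagnostic step (not from this thread, just a hedged sketch) is to compare the source volume's virtual and actual size with qemu-img. The volume path below is an invented example of a block-domain layout; adapt it to your storage, and qemu-img may not be installed where this runs.

```shell
# Hypothetical volume path on a block storage domain; qemu-img may be
# missing on this machine, so the sketch degrades to a message.
VOL=/dev/vg-example/e6171aae-2c5b-4c91-84fc-506c0e835928
if command -v qemu-img >/dev/null 2>&1 && [ -e "$VOL" ]; then
    info=$(qemu-img info "$VOL")
else
    info="qemu-img or volume not available here"
fi
echo "$info"
```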
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZQY7JRIAU6IKH6IOIQFCMCIGKPGBTP3L/


[ovirt-users] ovirt 4.2.6 and cockpit with plain hosts

2018-10-09 Thread Gianluca Cecchi
Is cockpit supposed to run on plain CentOS hosts?
I didn't have it installed, possibly because this machine started as a
hypervisor host in version 3.x, when cockpit was not yet available, and
later updates never pulled it in as a required package.

I manually installed it and then enabled/started the cockpit.socket systemd
unit. It seems to have started and I can connect over HTTPS on TCP port 9090,
but when checking the status of the cockpit.socket unit I see
Oct 09 15:59:01 ov300 update-motd[22990]:
/usr/share/cockpit/motd/update-motd: line 24: /run/cockpit/active.motd: No
such file or directory
Oct 09 15:59:01 ov300 ln[22998]: /bin/ln: failed to create symbolic link
‘/run/cockpit/motd’: No such file or directory
Oct 09 15:59:01 ov300 systemd[1]: Listening on Cockpit Web Service Socket.

Line 24 of /usr/share/cockpit/motd/update-motd :
printf 'Web console: %s%s\n\n' "${hostname_url}" "${ip_url}"  >
/run/cockpit/active.motd

and on the host
[root@ov300 ~]# ll -d /run/c*
drwxr-x---. 2 chrony chrony 60 May 24 16:18 /run/chrony
-rw-r--r--. 1 root   root5 May 24 16:18 /run/chronyd.pid
drwxr-xr-x. 2 root   root   40 May 24 16:18 /run/console
-rw-r--r--. 1 root   root5 May 24 16:18 /run/crond.pid
--. 1 root   root0 May 24 16:18 /run/cron.reboot
[root@ov300 ~]#

Is this a bug? Can I manually create the "/run/cockpit" directory?

Thanks,
Gianluca
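Creating the directory by hand works only until the next reboot, since /run is a tmpfs. A hedged sketch of a persistent fix via systemd-tmpfiles (the file path is illustrative):

```
# /etc/tmpfiles.d/cockpit-run.conf  (illustrative path)
# Recreate cockpit's runtime directory at every boot.
d /run/cockpit 0755 root root -
```

Running `systemd-tmpfiles --create` applies the entry immediately; a plain `mkdir /run/cockpit` also works until the host reboots.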
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JSI6WZ62EPNAM45FVI7IUWO7556EG2NF/


[ovirt-users] Re: removal of 3 vdsm hooks

2018-10-09 Thread Gianluca Cecchi
On Thu, Oct 4, 2018 at 12:37 PM Dan Kenigsberg  wrote:

> I've identified 3 ancient vdsm-hooks that have been obsoleted by
> proper oVirt features.
>
> vdsm-hook-isolatedvlan: obsoleted in ovirt-4.2.6  by
> clean-traffic-gateway filter
> https://gerrit.ovirt.org/#/q/I396243e1943eca245ab4da64bb286da19f9b47ec
>
>
>
Hello Dan,
you mention the "clean-traffic-gateway" filter and the "clean-traffic" filter.
They are to be considered different, correct?
Because both in my upstream oVirt engine 4.2.6.4-1.el7 and in my RHV engine
4.2.6.4-0.1.el7ev I only see the "clean-traffic" one.
Perhaps it is "filtered out" because the hosts are not on 7.6? From
https://bugzilla.redhat.com/show_bug.cgi?id=1603115 it looks like a libvirt
feature to be backported to 7.6, so perhaps it is not available on 7.5.

If so, since RHEL 7.6 is not out yet, and it will take at least some weeks
or months before CentOS 7.6 is released and people update their
hypervisors, I think it is not yet time to remove the old
vdsm-hook-isolatedvlan.

Please correct me if I am wrong.
Thanks,
Gianluca
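A quick way to see which filters libvirt on a given host actually ships (hedged: virsh may be absent where this runs, and the filter names are the ones from the thread):

```shell
# Count libvirt network filters whose name contains "clean-traffic";
# clean-traffic and clean-traffic-gateway are distinct filters when both
# are present.
if command -v virsh >/dev/null 2>&1; then
    result=$(virsh nwfilter-list 2>/dev/null | grep -c 'clean-traffic' || true)
else
    result="virsh-not-available"
fi
echo "$result"
```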
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ATAY2JPSIDKTYCYPC2AGZSGJKUCDYGO7/


[ovirt-users] Re: Migrating to Gluster HCI - Export/Import

2018-10-09 Thread Oliver Riesener
Hi Vincent,

> Am 09.10.2018 um 10:11 schrieb Vincent Royer :
> 
> I'm upgrading from 2 hosts w/ attached storage to 3-host HCI w/ Gluster.  What 
> is the procedure for this?  Is the export domain deprecated?
> 
> 1. Shut down and export all VMs -

Yes 

> Do I use export domain?  Or is the "Backup Domain" the newer method?

* An Export Domain holds your VM disks **and** the VM configuration.
  - It's a safe place; you can have more than one export device, like an
external USB 3.0 disk (not simultaneously, but successively).
  - Avoid saving snapshots and it's nearly failsafe.

* Export to OVA should work similarly, with a storage location of your own choosing.
  - I didn't test it.

* A Data Domain holds your VM disk data **only**.
  - It can be detached, so it's inactive on the hosts but still registered in the
database.
  - In that case you have to transfer your database, with its configuration, to the
new setup.
  - Then you are able to import your data domain later.

* Backup Domain
  - please send references ...

> 2. Detach domains from cluster and wipe hosts

Yes

> 
> 3. Build HCI cluster, setup gluster bricks/vols

Do you have enough network ports for:
* ovirtmgmt 1G
* IPMI 1G

* Gluster Sync 10G
* VM Migration 10G

* iSCSI SAN 10G
* Display Network 

> 4. Attach backup domain and import VMs
> 

VMs only, from the export domain
- or -
config and disks, from the engine backup plus imported data domains
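The engine backup Oliver mentions can be sketched with the engine-backup tool (the file names are invented; run it on the engine machine before wiping anything):

```shell
# Hedged sketch: engine-backup is oVirt's backup tool; if it is not
# installed on this machine, fall back to a message instead of failing.
if command -v engine-backup >/dev/null 2>&1; then
    engine-backup --mode=backup --file=engine-backup.tar.gz \
        --log=engine-backup.log && backup_status=done || backup_status=failed
else
    backup_status="engine-backup not installed on this machine"
fi
echo "$backup_status"
```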

> Thanks!
> 
> Vincent
> 
Cheers
Oliver
> 
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/7NWC77GLRUJCEVNFJJWNKZC42BDTDRDI/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/POITIJQR57465MCWHBGKP545XYGH5A6C/


[ovirt-users] Re: Adding a new Host to Cluster via oVirt Manager fails with "No Route to Host"

2018-10-09 Thread Markus Frei
Additionally here is the corresponding snippet from the engine.log:

https://paste.simplylinux.ch/view/c62f0f7d
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3W4AIGPCRNEUU7P34XAO7UXXAQ56TTFI/


[ovirt-users] Re: Adding a new Host to Cluster via oVirt Manager fails with "No Route to Host"

2018-10-09 Thread Markus Frei
Here are the requested logs.

mom.log:
https://paste.simplylinux.ch/view/a873a0bd

supervdsm.log:
https://paste.simplylinux.ch/view/dc8bd4c9

vdsm.log:
https://paste.simplylinux.ch/view/5e38d65d

I hope you can help!
Thank you very much in advance again!

Kind regards,
Chris
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NDOKA6IDM6D23YC2OIN6Q5WUBAH24CYF/


[ovirt-users] Re: oVirt guest tools

2018-10-09 Thread Sandro Bonazzola
On Tue, Oct 9, 2018 at 1:27 PM Alex K  wrote:

> Hi all,
>
> I am running ovirt 4.2 with hosts based on CentOS 7.
> I see that the latest guest tools are the following:
>
>
> https://resources.ovirt.org/pub/ovirt-4.2/iso/oVirt-toolsSetup/4.2-3.el7/oVirt-toolsSetup-4.2-3.el7.iso
>
>
> https://resources.ovirt.org/pub/ovirt-4.2/iso/oVirt-toolsSetup/4.2-1.el7.centos/oVirt-toolsSetup-4.2-1.el7.centos.iso
>
> 4.2-3.el7 seems to be a newer version than 4.2-1.el7.centos.
> Can I use the 4.2-3.el7 ISO?
>

Yes, you can



>
> Thanx,
> Alex
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/AUYMBHXI2CFTYUX5BJYIHWCGLWF2XICY/
>


-- 

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/H5ZSXM5QYICBYYFYFJ3DO5QQ63BIRG23/


[ovirt-users] Re: Moving to new storage questions

2018-10-09 Thread Eyal Shenitzky
Hey Mark,

Yes, after the disks copied successfully you can remove the original disks.




On Tue, Oct 9, 2018 at 1:49 PM Mark Steele  wrote:

> Good morning,
>
> We are in the process of moving our oVirt installation to a new storage
> solution. We have been 'move'-ing VM disks to the new storage without
> issue. We have several templates that use the old storage device - is there
> an equivalent process for moving template disks to the new storage unit?
>
> I see there is a 'copy' function in the template disk screen which creates
> a disk in the new storage domain. Can I then remove the original disk image
> from the original domain?
>
> Best regards,
>
> Mark
>
> ***
> *Mark Steele*
> CIO / VP Technical Operations | TelVue Corporation
> TelVue - We Share Your Vision
> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054
> 800.885.8886 x128 | mste...@telvue.com | http://www.telvue.com
> twitter: http://twitter.com/telvue | facebook:
> https://www.facebook.com/telvue
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/3M5TF4LIEPFXRSRL7SG7PBGT5YRZORQS/
>


-- 
Regards,
Eyal Shenitzky
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UEHYYIIC44T5X7YJ45O6B2CTNOMXU7SU/


[ovirt-users] oVirt guest tools

2018-10-09 Thread Alex K
Hi all,

I am running ovirt 4.2 with hosts based on CentOS 7.
I see that the latest guest tools are the following:

https://resources.ovirt.org/pub/ovirt-4.2/iso/oVirt-toolsSetup/4.2-3.el7/oVirt-toolsSetup-4.2-3.el7.iso

https://resources.ovirt.org/pub/ovirt-4.2/iso/oVirt-toolsSetup/4.2-1.el7.centos/oVirt-toolsSetup-4.2-1.el7.centos.iso

4.2-3.el7 seems to be a newer version than 4.2-1.el7.centos.
Can I use the 4.2-3.el7 ISO?

Thanx,
Alex
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AUYMBHXI2CFTYUX5BJYIHWCGLWF2XICY/


[ovirt-users] Out-of-sync networks can only be detached

2018-10-09 Thread Sakhi Hadebe
Hi,

I have a 3-node oVirt cluster. I have configured 2 logical networks:
ovirtmgmt and public. The public logical network is attached on only 2 nodes
and fails to attach on the 3rd node with the error below:
Invalid operation, out-of-sync network 'public' can only be detached.

Please help; I have been stuck on this for almost the whole day now. How do I
fix this error?

-- 
Regards,
Sakhi Hadebe
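For the error above: besides re-editing the host's network setup, oVirt exposes a "Sync All Networks" action per host (UI: Hosts > Network Interfaces). A hedged sketch of the equivalent REST call follows; the URL, credentials, host id, and even the action name are placeholders to verify against your engine's API version, so the command is only printed, not executed:

```shell
# Placeholders throughout; confirm /hosts/{id}/syncallnetworks exists in
# your engine's REST API before running the printed command.
ENGINE=https://engine.example.com/ovirt-engine/api
HOST_ID=00000000-0000-0000-0000-000000000000
request="curl -k -u admin@internal:PASSWORD -H 'Content-Type: application/xml' -d '<action/>' $ENGINE/hosts/$HOST_ID/syncallnetworks"
echo "$request"
```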
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PP2NFQXYOVRQG7WMDTP2NK4FSWPQQCOQ/


[ovirt-users] Moving to new storage questions

2018-10-09 Thread Mark Steele
Good morning,

We are in the process of moving our oVirt installation to a new storage
solution. We have been 'move'-ing VM disks to the new storage without
issue. We have several templates that use the old storage device - is there
an equivalent process for moving template disks to the new storage unit?

I see there is a 'copy' function in the template disk screen which creates
a disk in the new storage domain. Can I then remove the original disk image
from the original domain?

Best regards,

Mark

***
*Mark Steele*
CIO / VP Technical Operations | TelVue Corporation
TelVue - We Share Your Vision
16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054
800.885.8886 x128 | mste...@telvue.com | http://www.telvue.com
twitter: http://twitter.com/telvue | facebook:
https://www.facebook.com/telvue
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3M5TF4LIEPFXRSRL7SG7PBGT5YRZORQS/


[ovirt-users] Re: Storage advice

2018-10-09 Thread Brain Recursion
Also, is it best to run the hosted engine on a dedicated server or as a
self-hosted VM?

Thanks

On Fri, 5 Oct 2018 at 10:57, Brain Recursion 
wrote:

> I have a small oVirt cluster running but I have been having problems with
> the storage and would like to completely start again with the storage
> infrastructure. Currently for storage I have a single server running
> windows storage server serving oVirt via iSCSI.  I also have a smaller
> storage server which is currently not used. The oVirt cluster is not
> running a production environment but ideally I do not want to have to power
> it all off to patch the storage servers and oVirt cluster.
>
> 1x storage server, raid 10 24TB usable, 2x 10Gb ethernet
> 1x storage server, raid 10 4TB usable, 4x 1Gb ethernet
> 8x oVirt hosts, 1x10Gb ethernet on each
> 1x 24port 10Gb switch
>
> What would be the best way to utilise the storage servers?
> I was thinking about sticking CentOS on both servers and running Gluster
> with a 4TB replicated volume across both servers for the hosted engine and
> other critical VMs and then a 20TB non-replicated Gluster volume running on
> just the larger storage server for non critical VMs. I have another spare
> server which i could potentially use as a arbiter node. Would this work or
> would I have huge problems as the hardware performance of each storage
> server is so different?
>
> Any advice appreciated.
>
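The replicated layout described above can be sketched as a replica-3 arbiter volume, so the weaker third box stores only metadata (hostnames and brick paths are invented; the command is echoed rather than run, since volume creation is destructive):

```shell
# Invented hostnames and brick paths; adapt before use.
cmd="gluster volume create engine replica 3 arbiter 1 \
stor-big:/gluster/engine/brick \
stor-small:/gluster/engine/brick \
arbiter:/gluster/engine/brick"
if command -v gluster >/dev/null 2>&1; then
    echo "would run: $cmd"
else
    echo "gluster CLI not installed; for illustration: $cmd"
fi
```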
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MP54SL5SJ7C4G3OVNMPEQRYA3FSAMSHD/


[ovirt-users] Re: Adding a new Host to Cluster via oVirt Manager fails with "No Route to Host"

2018-10-09 Thread Miguel Duarte de Mora Barroso
After a while I managed to access that.

I asked for it, since you said the 'installation process failed'.

Please also get us the vdsm.log and supervdsm.log of the failed host.

On Mon, Oct 8, 2018 at 12:46 PM, Markus Frei  wrote:
> Hi Miguel
>
> Thanks for your reply.
> I hope this is how you wanted it.
> It's the ovirt-host-deploy log, but it doesn't contain any errors.
>
> https://paste.simplylinux.ch/view/a16a3a99
>
> Kind regards,
> Chris
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PV7VMUFL5GPFED5KB2GB5PZNXVFGYQ4Y/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/B522YABRCANOXNXJUYZULTSHEC3Z4UXX/


[ovirt-users] ovirt - in docker

2018-10-09 Thread ReSearchIT Eng
Hello!
I am interested in running oVirt in a Docker container.
I noticed that there is an official repo for it:
https://github.com/oVirt/ovirt-container-engine
Unfortunately it has not been updated for 2 years (4.1).

Can anyone help with the required answers/entrypoint/patch files for
the new 4.2?

Thanks!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/C66CSW7CY7RCTC56V5YNSZ6KQKHLADIS/


[ovirt-users] Re: ovirt-guest-agent running on Debian vm, but data doesn't show in web-gui

2018-10-09 Thread Oliver Riesener
Up and running here: Debian 8 / 9

See 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/B4WKYJKYR323EHT3ZNFZ3QBMYOCIYSNN/

Put the downloaded ovirt-guest-agent into the right place for the script (lib/).

> Am 09.10.2018 um 08:55 schrieb Arild Ringøy :
> 
> 
> Ok. I wasn't sure if I should report a bug already reported. Great if you can 
> come up with something. Highly appreciated.
> 
> Regards
> Arild
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5YXBHOCHP5HN3X7NYMG5H6E3E32VDQFU/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UTQWVO7SM2WC5ZBLG4ZKIMCCQBFPHHGK/


[ovirt-users] Migrating to Gluster HCI - Export/Import

2018-10-09 Thread Vincent Royer
I'm upgrading from 2 hosts w/ attached storage to 3-host HCI w/ Gluster.
What is the procedure for this?  Is the export domain deprecated?

1. Shut down and export all VMs - Do I use export domain?  Or is the
"Backup Domain" the newer method?

2. Detach domains from cluster and wipe hosts

3. Build HCI cluster, setup gluster bricks/vols

4. Attach backup domain and import VMs

Thanks!

Vincent
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7NWC77GLRUJCEVNFJJWNKZC42BDTDRDI/


[ovirt-users] Re: Diary of hosted engine install woes

2018-10-09 Thread Simone Tiraboschi
On Tue, Oct 9, 2018 at 1:21 AM  wrote:

> Okay, I went back to using a bond (instead of an individual NIC).  Above
> network problem is fixed and now proceeds as far as ever.  Hangs for around
> 10 minutes at:
>
> [ INFO  ] TASK [Check engine VM health]
> The hosted-engine-setup-ansible-create_target_vm log has:
> 2018-10-08 23:42:01,664+0100 INFO ansible task start {'status': 'OK',
> 'ansible_task': u'Check engine VM health', 'ansible_playbook':
> u'/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml',
> 'ansible_type': 'task'}
>
> Then repeats the following line for around 10 minutes:
> 2018-10-08 23:42:01,866+0100 DEBUG ansible on_any args
>  kwargs
>
> Before eventually, the console outputs the following error:
> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed":
> true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta":
> "0:00:00.167677", "end": "2018-10-08 23:53:11.112436", "rc": 0, "start":
> "2018-10-08 23:53:10.944759", "stderr": "", "stderr_lines": [], "stdout":
> "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=49491
> (Mon Oct  8 23:53:03
> 2018)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=49491 (Mon Oct  8
> 23:53:03
> 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"host\", \"host-id\": 1, \"engine-status\": {\"reason\":
> \"failed liveliness check\", \"health\": \"bad\", \"vm\": \"up\",
> \"detail\": \"Up\"}, \"score\": 3400, \"stopped\": false, \"maintenance\":
> false, \"crc32\": \"75452be7\", \"local_conf_timestamp\": 49491,
> \"host-ts\": 49491}, \"global_maintenance\": false}", "stdout_lines":
> ["{\"1\": {\"c
>  onf_on_shared_storage\": true, \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=49491
> (Mon Oct  8 23:53:03
> 2018)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=49491 (Mon Oct  8
> 23:53:03
> 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"host\", \"host-id\": 1, \"engine-status\": {\"reason\":
> \"failed liveliness check\", \"health\": \"bad\", \"vm\": \"up\",
> \"detail\": \"Up\"}, \"score\": 3400, \"stopped\": false, \"maintenance\":
> false, \"crc32\": \"75452be7\", \"local_conf_timestamp\": 49491,
> \"host-ts\": 49491}, \"global_maintenance\": false}"]}
> [ INFO  ] TASK [Check VM status at virt level]
>
> The hosted-engine-setup-ansible-create_target_vm log shows the following
> when this error occurs:
>
> 2018-10-08 23:53:11,812+0100 DEBUG var changed: host "localhost" var
> "ansible_failed_result" type "" value: "{
> "_ansible_no_log": false,
> "_ansible_parsed": true,
> "attempts": 120,
> "changed": true,
> "cmd": [
> "hosted-engine",
> "--vm-status",
> "--json"
> ],
> "delta": "0:00:00.167677",
> "end": "2018-10-08 23:53:11.112436",
> "failed": true,
> "invocation": {
> "module_args": {
> "_raw_params": "hosted-engine --vm-status --json",
> "_uses_shell": false,
> "argv": null,
> "chdir": null,
> "creates": null,
> "executable": null,
> "removes": null,
> "stdin": null,
> "warn": true
> }
> },
> "rc": 0,
> "start": "2018-10-08 23:53:10.944759",
> "stderr": "",
> "stderr_lines": [],
> "stdout": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\":
> true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=49491
> (Mon Oct  8 23:53:03
> 2018)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=49491 (Mon Oct  8
> 23:53:03
> 2018)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"host\", \"host-id\": 1, \"engine-status\": {\"reason\":
> \"failed liveliness check\", \"health\": \"bad\", \"vm\": \"up\",
> \"detail\": \"Up\"}, \"score\": 3400, \"stopped\": false, \"maintenance\":
> false, \"crc32\": \"75452be7\", \"local_conf_timestamp\": 49491,
> \"host-ts\": 49491}, \"global_maintenance\": false}",
>

This is usually a name resolution issue:
  "vm": "up" - this is checked at virt level
  "reason": "failed liveliness check", "health": "bad" - this is checked
from the host over http

I'd suggest double-checking that the host can correctly resolve the name of
the engine VM and that the engine VM actually got the address its
FQDN resolves to.
Do you have a properly working DHCP server with a reservation for the engine
VM? Or did you set a static IP on the engine VM?
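The two checks above can be sketched in shell; the engine FQDN is a placeholder, and the health URL is an assumption about the servlet the liveliness check probes:

```shell
# Run from the host. ENGINE_FQDN is a placeholder for your engine VM name.
ENGINE_FQDN=engine.example.com
# 1. Can the host resolve the engine name?
if getent hosts "$ENGINE_FQDN" >/dev/null 2>&1; then
    resolve_status=ok
else
    resolve_status=failed
fi
# 2. Does the engine answer over HTTP? (assumed health-servlet URL)
health_status=$(curl -s -o /dev/null -w '%{http_code}' \
    "http://$ENGINE_FQDN/ovirt-engine/services/health" 2>/dev/null) \
    || health_status=unreachable
echo "resolve=$resolve_status health=$health_status"
```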


> "stdout_lines": [
> "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true,
> \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=49491
> (Mon Oct  8 23:53:03
> 2018)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=49491 (Mo