[ovirt-users] oVirt keeps crashing

2022-06-08 Thread Reznikov Alexei

Hello Paul.

I have the same issue with "ovirt-engine: ERROR run:542 Error: process 
terminated with status code 1".

How did you resolve it?

--

AlexR
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/M4IQAGOCQYG6K3J5WFIQRIWONIX5JAE5/


[ovirt-users] Re: Broker fails to start after upgrade 4.1 to 4.2 metadata_image_UUID can't be ''

2018-06-25 Thread Reznikov Alexei

On 25.06.2018 15:12, Martin Sivak wrote:

Hi,

yes there is a solution described directly in the bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1521011#c20

The provided script worked only for cases that had the necessary
disks but where the UUIDs were not written to the config files.

You need to follow the procedure from comment 20 when no disks for
lockspace and metadata exist at all.

Best regards

Martin Sivak



On Mon, Jun 25, 2018 at 9:52 AM, Reznikov Alexei  wrote:

On 21.06.2018 20:15, reznikov...@soskol.com wrote:

Hi list!

After upgrading my cluster from 4.1.9 to 4.2.2, the agent and broker can't
start on the host...

cat /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::ERROR::2018-06-21
03:25:34,603::hosted_engine::538::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker)
Failed to start necessary monitors
MainThread::ERROR::2018-06-21
03:25:34,604::agent::144::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Traceback (most recent call last)

cat /var/log/ovirt-hosted-engine-ha/broker.log
MainThread::INFO::2018-06-21
03:25:40,406::monitor::50::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors)
Finished loading submonitors
MainThread::WARNING::2018-06-21
03:25:40,406::storage_broker::97::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(__init__)
Can't connect vdsm storage: 'metadata_image_UUID can't be ''

cat /etc/ovirt-hosted-engine/hosted-engine.conf | grep metadata_image
metadata_image_UUID=

Also is:
cat /etc/ovirt-hosted-engine/hosted-engine.conf | grep lock
lockspace_image_UUID=
lockspace_volume_UUID=

This looks very much like this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1521011 (my cluster started
with version 3.3...)

But I can't resolve this bug correctly.

Gurus, please help me!

Thanx, Alex!
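A quick way to list every blank *_UUID key in hosted-engine.conf in one pass is a small sketch like the following (the helper name is hypothetical; the standard key=value format of the file is assumed):

```python
def blank_uuid_keys(conf_text):
    """Return the names of *_UUID keys whose value is empty in a
    key=value style config such as hosted-engine.conf."""
    keys = []
    for line in conf_text.splitlines():
        key, sep, value = line.partition("=")
        if sep and key.strip().endswith("_UUID") and not value.strip():
            keys.append(key.strip())
    return keys

sample = (
    "metadata_image_UUID=\n"
    "lockspace_image_UUID=\n"
    "conf_image_UUID=b5f353f5-9357-4aad-b1a3-751d411e6278\n"
)
print(blank_uuid_keys(sample))  # ['metadata_image_UUID', 'lockspace_image_UUID']
```

Any key this prints is one the broker will refuse to start with, as in the "metadata_image_UUID can't be ''" error above.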


Thanks for the answer, Martin.

I hit a problem at step 5 of the procedure described in comment 20:

[root@h4 /]# sanlock direct init -s hosted-engine:0:/rhev/data-center/mnt/ssd.lan\:_ovirt/8905c9ac-d892-478d-8346-63b8fa1c5763/images/badd5883-ef71-45bb-9073-a573f46a3b44/e4408917-fe00-4567-8db0-bf464472ec01.lockspace

init done -19

What does "init done -19" mean, and why don't I see any changes?
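sanlock's direct commands typically report failures as negative errno values; if that convention holds here (an assumption, not confirmed in the thread), -19 would be ENODEV ("No such device"), i.e. the lockspace path could not be opened. A small decoder sketch:

```python
import errno
import os

def explain_sanlock_rc(rc):
    """Decode a sanlock 'direct' return code, assuming the common
    convention that failures are reported as negative errno values."""
    if rc >= 0:
        return "success"
    name = errno.errorcode.get(-rc, "UNKNOWN")
    return "%s: %s" % (name, os.strerror(-rc))

print(explain_sanlock_rc(-19))  # e.g. "ENODEV: No such device"
```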
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4KPMQ5KY7V2KK6WXHAR6SKFEG4GZ4D57/


[ovirt-users] Re: Broker fails to start after upgrade 4.1 to 4.2 metadata_image_UUID can't be ''

2018-06-25 Thread Reznikov Alexei

On 25.06.2018 13:05, Simone Tiraboschi wrote:



On Mon, Jun 25, 2018 at 10:19 AM Reznikov Alexei 
<reznikov...@soskol.com> wrote:


On 21.06.2018 20:15, reznikov...@soskol.com wrote:
> Hi list!
>
> After upgrading my cluster from 4.1.9 to 4.2.2, the agent and broker can't start on the host...
>
> cat /var/log/ovirt-hosted-engine-ha/agent.log
> MainThread::ERROR::2018-06-21 03:25:34,603::hosted_engine::538::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker) Failed to start necessary monitors
> MainThread::ERROR::2018-06-21 03:25:34,604::agent::144::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Traceback (most recent call last)
>
> cat /var/log/ovirt-hosted-engine-ha/broker.log
> MainThread::INFO::2018-06-21 03:25:40,406::monitor::50::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Finished loading submonitors
> MainThread::WARNING::2018-06-21 03:25:40,406::storage_broker::97::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(__init__) Can't connect vdsm storage: 'metadata_image_UUID can't be ''
>
> cat /etc/ovirt-hosted-engine/hosted-engine.conf | grep metadata_image
> metadata_image_UUID=
>
> Also:
> cat /etc/ovirt-hosted-engine/hosted-engine.conf | grep lock
> lockspace_image_UUID=
> lockspace_volume_UUID=
>
> This looks very much like this bug:
> https://bugzilla.redhat.com/show_bug.cgi?id=1521011 (my cluster started with version 3.3...)
>
> But I can't resolve this bug correctly.
>
> Gurus, please help me!
>
> Thanx, Alex!
Bump.

I tried running the workaround script from Simone Tiraboschi, but it does
not work properly for me.

I don't see a hosted-engine.lockspace volume, and hosted-engine.metadata
is null.

[root@h4 ~]# ./workaround_1521011.sh
+ source /etc/ovirt-hosted-engine/hosted-engine.conf
++ fqdn=eng.lan
++ vm_disk_id=e9d7a377-e109-4b28-9a43-7a8c8b603749
++ vm_disk_vol_id=cd12a59e-7d84-4b4e-98c7-4c68e83ecd7b
++ vmid=ccdd675a-a58b-495a-9502-3e6a4b7e5228
++ storage=ssd:/ovirt
++ mnt_options=
++ conf=/var/run/ovirt-hosted-engine-ha/vm.conf
++ host_id=4
++ console=vnc
++ domainType=nfs3
++ spUUID=----
++ sdUUID=8905c9ac-d892-478d-8346-63b8fa1c5763
++ connectionUUID=ce84071b-86a2-4e82-b4d9-06abf23dfbc4
++ ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
++ ca_subject='C=EN, L=Test, O=Test, CN=Test'
++ vdsm_use_ssl=true
++ gateway=10.245.183.1
++ bridge=ovirtmgmt
++ lockspace_volume_UUID=
++ lockspace_image_UUID=
++ metadata_volume_UUID=
++ metadata_image_UUID=
++ conf_volume_UUID=a20d9700-1b9a-41d8-bb4b-f2b7c168104f
++ conf_image_UUID=b5f353f5-9357-4aad-b1a3-751d411e6278
++ iqn=
++ portal=
++ user=
++ password=
++ port=
++ vdsm-client StorageDomain getImages
storagedomainID=8905c9ac-d892-478d-8346-63b8fa1c5763
storagepoolID=----
++ grep -
++ tr -d ,
++ xargs
+ for i in '$(vdsm-client StorageDomain getImages
storagedomainID=${sdUUID} storagepoolID=${spUUID} | grep - | tr -d
'\'','\'' | xargs)'
++ vdsm-client StorageDomain getVolumes
storagedomainID=8905c9ac-d892-478d-8346-63b8fa1c5763
storagepoolID=----
imageID=83e0550b-0fc3-40b1-955d-b07ebfbb3994
++ grep -
++ tr -d ,
++ xargs
+ for v in '$(vdsm-client StorageDomain getVolumes
storagedomainID=${sdUUID} storagepoolID=${spUUID} imageID=${i} |
grep -
| tr -d '\'','\'' | xargs)'
++ vdsm-client Volume getInfo
storagedomainID=8905c9ac-d892-478d-8346-63b8fa1c5763
storagepoolID=----
imageID=83e0550b-0fc3-40b1-955d-b07ebfbb3994
volumeID=5a26be32-6c5b-4dcc-ac67-5c442f24df55
++ jq '. | select(.description=="hosted-engine.lockspace") | .uuid'
++ xargs
+ lockspace_vol_uuid=
+ [[ ! -z '' ]]
++ vdsm-client Volume getInfo
storagedomainID=8905c9ac-d892-478d-8346-63b8fa1c5763
storagepoolID=----
imageID=83e0550b-0fc3-40b1-955d-b07ebfbb3994
volumeID=5a26be32-6c5b-4dcc-ac67-5c442f24df55
++ jq '. | select(.description=="hosted-engine.lockspace") | .image'
++ xargs
+ lockspace_img_uuid=
+ [[ ! -z '' ]]
++ vdsm-client Volume getInfo
storagedomainID=8905c9ac-d892-478d-8346-63b8fa1c5763
storagepoolID=----
imageID=83e0550b-0fc3-40b1-955d-b07ebfbb3994
volumeID=5a26be32-6c5b-4dcc-ac67-5c442f24df55
++ jq '. 

[ovirt-users] Re: Broker fails to start after upgrade 4.1 to 4.2 metadata_image_UUID can't be ''

2018-06-25 Thread Reznikov Alexei

On 21.06.2018 20:15, reznikov...@soskol.com wrote:

Hi list!

After upgrading my cluster from 4.1.9 to 4.2.2, the agent and broker can't 
start on the host...


cat /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::ERROR::2018-06-21 
03:25:34,603::hosted_engine::538::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker) 
Failed to start necessary monitors
MainThread::ERROR::2018-06-21 
03:25:34,604::agent::144::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) 
Traceback (most recent call last)


cat /var/log/ovirt-hosted-engine-ha/broker.log
MainThread::INFO::2018-06-21 
03:25:40,406::monitor::50::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) 
Finished loading submonitors
MainThread::WARNING::2018-06-21 
03:25:40,406::storage_broker::97::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(__init__) 
Can't connect vdsm storage: 'metadata_image_UUID can't be ''


cat /etc/ovirt-hosted-engine/hosted-engine.conf | grep metadata_image
metadata_image_UUID=

Also is:
cat /etc/ovirt-hosted-engine/hosted-engine.conf | grep lock
lockspace_image_UUID=
lockspace_volume_UUID=

This looks very much like this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1521011 (my cluster started 
with version 3.3...)


But I can't resolve this bug correctly.

Gurus, please help me!

Thanx, Alex!


Bump.

I tried running the workaround script from Simone Tiraboschi, but it does 
not work properly for me.


I don't see a hosted-engine.lockspace volume, and hosted-engine.metadata 
is null.


[root@h4 ~]# ./workaround_1521011.sh
+ source /etc/ovirt-hosted-engine/hosted-engine.conf
++ fqdn=eng.lan
++ vm_disk_id=e9d7a377-e109-4b28-9a43-7a8c8b603749
++ vm_disk_vol_id=cd12a59e-7d84-4b4e-98c7-4c68e83ecd7b
++ vmid=ccdd675a-a58b-495a-9502-3e6a4b7e5228
++ storage=ssd:/ovirt
++ mnt_options=
++ conf=/var/run/ovirt-hosted-engine-ha/vm.conf
++ host_id=4
++ console=vnc
++ domainType=nfs3
++ spUUID=----
++ sdUUID=8905c9ac-d892-478d-8346-63b8fa1c5763
++ connectionUUID=ce84071b-86a2-4e82-b4d9-06abf23dfbc4
++ ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
++ ca_subject='C=EN, L=Test, O=Test, CN=Test'
++ vdsm_use_ssl=true
++ gateway=10.245.183.1
++ bridge=ovirtmgmt
++ lockspace_volume_UUID=
++ lockspace_image_UUID=
++ metadata_volume_UUID=
++ metadata_image_UUID=
++ conf_volume_UUID=a20d9700-1b9a-41d8-bb4b-f2b7c168104f
++ conf_image_UUID=b5f353f5-9357-4aad-b1a3-751d411e6278
++ iqn=
++ portal=
++ user=
++ password=
++ port=
++ vdsm-client StorageDomain getImages 
storagedomainID=8905c9ac-d892-478d-8346-63b8fa1c5763 
storagepoolID=----

++ grep -
++ tr -d ,
++ xargs
+ for i in '$(vdsm-client StorageDomain getImages 
storagedomainID=${sdUUID} storagepoolID=${spUUID} | grep - | tr -d 
'\'','\'' | xargs)'
++ vdsm-client StorageDomain getVolumes 
storagedomainID=8905c9ac-d892-478d-8346-63b8fa1c5763 
storagepoolID=---- 
imageID=83e0550b-0fc3-40b1-955d-b07ebfbb3994

++ grep -
++ tr -d ,
++ xargs
+ for v in '$(vdsm-client StorageDomain getVolumes 
storagedomainID=${sdUUID} storagepoolID=${spUUID} imageID=${i} | grep - 
| tr -d '\'','\'' | xargs)'
++ vdsm-client Volume getInfo 
storagedomainID=8905c9ac-d892-478d-8346-63b8fa1c5763 
storagepoolID=---- 
imageID=83e0550b-0fc3-40b1-955d-b07ebfbb3994 
volumeID=5a26be32-6c5b-4dcc-ac67-5c442f24df55

++ jq '. | select(.description=="hosted-engine.lockspace") | .uuid'
++ xargs
+ lockspace_vol_uuid=
+ [[ ! -z '' ]]
++ vdsm-client Volume getInfo 
storagedomainID=8905c9ac-d892-478d-8346-63b8fa1c5763 
storagepoolID=---- 
imageID=83e0550b-0fc3-40b1-955d-b07ebfbb3994 
volumeID=5a26be32-6c5b-4dcc-ac67-5c442f24df55

++ jq '. | select(.description=="hosted-engine.lockspace") | .image'
++ xargs
+ lockspace_img_uuid=
+ [[ ! -z '' ]]
++ vdsm-client Volume getInfo 
storagedomainID=8905c9ac-d892-478d-8346-63b8fa1c5763 
storagepoolID=---- 
imageID=83e0550b-0fc3-40b1-955d-b07ebfbb3994 
volumeID=5a26be32-6c5b-4dcc-ac67-5c442f24df55

++ jq '. | select(.description=="hosted-engine.metadata") | .uuid'
++ xargs
+ metadata_vol_uuid=
+ [[ ! -z '' ]]
++ vdsm-client Volume getInfo 
storagedomainID=8905c9ac-d892-478d-8346-63b8fa1c5763 
storagepoolID=---- 
imageID=83e0550b-0fc3-40b1-955d-b07ebfbb3994 
volumeID=5a26be32-6c5b-4dcc-ac67-5c442f24df55

++ jq '. | select(.description=="hosted-engine.metadata") | .image'
++ xargs
+ metadata_img_uuid=
+ [[ ! -z '' ]]
+ for i in '$(vdsm-client StorageDomain getImages 
storagedomainID=${sdUUID} storagepoolID=${spUUID} | grep - | tr -d 
'\'','\'' | xargs)'
++ vdsm-client StorageDomain getVolumes 
storagedomainID=8905c9ac-d892-478d-8346-63b8fa1c5763 
storagepoolID=---- 
imageID=3abe2f7b-02b9-40a3-8feb-f2809c22c0fb

++ grep -
++ tr -d ,
++ xargs
+ for v in '$(vdsm-client StorageDomain getVolumes 
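The jq 'select(.description==...)' filters in the trace above are simply searching volume descriptions. The same matching logic can be sketched in Python (the function name is hypothetical; the vdsm-client Volume getInfo output is assumed to be available as parsed JSON dicts, and the IDs below are borrowed from the trace for illustration only):

```python
def find_volume_by_description(volume_infos, description):
    """Return (image_uuid, volume_uuid) for the first volume whose
    description matches, mirroring the jq select() in the script."""
    for info in volume_infos:
        if info.get("description") == description:
            return info.get("image"), info.get("uuid")
    return None, None

volumes = [
    {"description": "hosted-engine.metadata",
     "image": "83e0550b-0fc3-40b1-955d-b07ebfbb3994",
     "uuid": "5a26be32-6c5b-4dcc-ac67-5c442f24df55"},
]
print(find_volume_by_description(volumes, "hosted-engine.metadata"))
```

When no volume carries the hosted-engine.lockspace or hosted-engine.metadata description, as in the trace above, this lookup returns nothing, which is why the script's variables stay empty.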

[ovirt-users] trouble with hosted-engine vm.conf after upgrade to 4.2

2018-06-13 Thread Reznikov Alexei

Hi list!

I upgraded my oVirt test lab from 4.1.9 to 4.2.2, and I have trouble with 
the hosted engine...


I can not start the hosted engine with my vm.conf:

hosted-engine --vm-start --vm-conf=/root/vm.conf

Command VM.getStats with args {'vmID': 
'a19b9120-4f87-4a04-8cb6-c3e192a98052'} failed:
(code=1, message=Virtual machine does not exist: {'vmId': 
u'a19b9120-4f87-4a04-8cb6-c3e192a98052'})

Starting the HE VM works, but without my parameter: I changed 
emulated_machine to pc-i440fx-rhel7.5.0 in my vm.conf, but I see this:


ps ax | grep Hosted

...

r-key.aes -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu 
Nehalem -m size=2097152k,slots=16,maxmem=8388608k -realtime mlock=off 
-smp 2,maxcpus=16,sockets=16,cores=1,

...

My ovirt packages:

vdsm-4.20.30-1.el7.x86_64

libvirt-3.9.0-14.el7_5.5.x86_64

qemu-kvm-ev-2.10.0-21.el7_5.3.1.x86_64

Centos 7.5.1804


This is critical for me, please help fix this!


Thanks, Alex.
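To check which machine type a running HE VM actually got, one can pull the value of -machine out of the qemu command line shown by ps. A minimal sketch (the helper name is hypothetical):

```python
import shlex

def machine_type(qemu_cmdline):
    """Return the machine type, i.e. the value after '-machine' up to
    its first option comma, or '' if the flag is absent."""
    args = shlex.split(qemu_cmdline)
    for flag, value in zip(args, args[1:]):
        if flag == "-machine":
            return value.split(",")[0]
    return ""

cmdline = ("qemu-kvm -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off"
           " -cpu Nehalem -smp 2,maxcpus=16")
print(machine_type(cmdline))  # pc-i440fx-rhel7.3.0
```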

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5Q4GN4UCAZTFOR7MDEM4F3JIMEEAWMEF/


Re: [ovirt-users] Unable to put Host into Maintenance mode

2018-02-15 Thread Reznikov Alexei

On 15.02.2018 14:06, Mark Steele wrote:

Consider manual intervention:

vdsClient -s 0 list table

on your host, and then:

vdsClient -s 0 destroy <vmID>

Regards,

Alex.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt 4.1 unable deploy HostedEngine on next host Configuration value not found: file=/etc/.../hosted-engine.conf

2018-02-14 Thread Reznikov Alexei

On 13.02.2018 13:42, Simone Tiraboschi wrote:


Yes, unfortunately you are absolutely right on that: there is a bug there.
As a side effect, hosted-engine --set-shared-config and hosted-engine 
--get-shared-config always refresh the local copy of the hosted-engine 
configuration files with the copy on the shared storage, so you will 
always end up with host_id=1 in 
/etc/ovirt-hosted-engine/hosted-engine.conf, which can lead to SPM 
conflicts.
I'd suggest manually fixing the host_id parameter in 
/etc/ovirt-hosted-engine/hosted-engine.conf to its original value 
(double-check against the engine DB with 'sudo -u postgres psql engine -c 
"SELECT vds_spm_id, vds.vds_name FROM vds"' on the engine VM) to avoid 
that.

https://bugzilla.redhat.com/1543988
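The manual cross-check described above can be sketched as a small helper (names are hypothetical; it assumes the psql rows have been exported as (vds_spm_id, vds_name) tuples):

```python
def host_id_matches(conf_text, spm_rows, host_name):
    """Compare host_id from hosted-engine.conf against this host's
    vds_spm_id from the engine DB (rows as (vds_spm_id, vds_name))."""
    host_id = None
    for line in conf_text.splitlines():
        key, sep, value = line.partition("=")
        if sep and key.strip() == "host_id":
            host_id = int(value.strip())
    expected = {name: spm_id for spm_id, name in spm_rows}
    return host_id is not None and expected.get(host_name) == host_id

conf = "host_id=3\nconsole=vnc\n"
rows = [(1, "h1"), (3, "h3")]
print(host_id_matches(conf, rows, "h3"))  # True
```

A mismatch here is exactly the situation that can lead to the SPM conflicts mentioned above.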

Simone, I'm trying to set the right values... but unfortunately I'm failing.

[root@h3 ovirt-hosted-engine]# cat hosted-engine.conf | grep conf_
conf_volume_UUID=a20d9700-1b9a-41d8-bb4b-f2b7c168104f
conf_image_UUID=b5f353f5-9357-4aad-b1a3-751d411e6278


[root@h3 ~]# hosted-engine --set-shared-config conf_image_UUID 
b5f353f5-9357-4aad-b1a3-751d411e6278 --type he_conf

Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
  .
  File 
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py", 
line 226, in get

    key
KeyError: 'Configuration value not found: 
file=/etc/ovirt-hosted-engine/hosted-engine.conf, key=conf_volume_UUID'


How can I fix this? Or is there another way to edit hosted-engine.conf on 
the shared storage?



Regards,

Alex.


Re: [ovirt-users] ovirt 4.1 unable deploy HostedEngine on next host Configuration value not found: file=/etc/.../hosted-engine.conf

2018-02-12 Thread Reznikov Alexei

On 10.02.2018 00:48, reznikov...@soskol.com wrote:

Simone Tiraboschi wrote on 2018-02-09 15:17:


It shouldn't happen.
I suspect that something went wrong creating the configuration volume
on the shared storage at the end of the deployment.

Alexei, can both of you attach your hosted-engine-setup logs?
Can you please check what happens on
  hosted-engine --get-shared-config gateway

Thanks



Simone, my oVirt cluster was upgraded from 3.4... and my logs are too old.

I'm confused by the result of running hosted-engine --get-shared-config 
gateway...
I get the output "gateway: 10.245.183.1, type: he_conf", but my 
current hosted-engine.conf is overwritten by the other 
hosted-engine.conf.


old file:

fqdn = eng.lan
vm_disk_id = e9d7a377-e109-4b28-9a43-7a8c8b603749
vmid = ccdd675a-a58b-495a-9502-3e6a4b7e5228
storage = ssd.lan:/ovirt
service_start_time = 0
host_id = 3
console = vnc
domainType = nfs3
sdUUID = 8905c9ac-d892-478d-8346-63b8fa1c5763
connectionUUID = ce84071b-86a2-4e82-b4d9-06abf23dfbc4
ca_cert =/etc/pki/vdsm/libvirt-spice/ca-cert.pem
ca_subject = "C = EN, L = Test, O = Test, CN = Test"
vdsm_use_ssl = true
gateway = 10.245.183.1
bridge = ovirtmgmt
metadata_volume_UUID =
metadata_image_UUID =
lockspace_volume_UUID =
lockspace_image_UUID =

The following are used only for iSCSI storage
iqn =
portal =
user =
password =
port =

conf_volume_UUID = a20d9700-1b9a-41d8-bb4b-f2b7c168104f
conf_image_UUID = b5f353f5-9357-4aad-b1a3-751d411e6278
conf = /var/run/ovirt-hosted-engine-ha/vm.conf
vm_disk_vol_id = cd12a59e-7d84-4b4e-98c7-4c68e83ecd7b
spUUID = ----

The new, rewritten file:

fqdn = eng.lan
vm_disk_id = e9d7a377-e109-4b28-9a43-7a8c8b603749
vmid = ccdd675a-a58b-495a-9502-3e6a4b7e5228
storage = ssd.lan:/ovirt
conf = /etc/ovirt-hosted-engine/vm.conf
service_start_time = 0
host_id = 3
console = vnc
domainType = nfs3
spUUID = 036f83d7-39f7-48fd-a73a-3c9ffb3dbe6a
sdUUID = 8905c9ac-d892-478d-8346-63b8fa1c5763
connectionUUID = ce84071b-86a2-4e82-b4d9-06abf23dfbc4
ca_cert =/etc/pki/vdsm/libvirt-spice/ca-cert.pem
ca_subject = "C = EN, L = Test, O = Test, CN = Test"
vdsm_use_ssl = true
gateway = 10.245.183.1
bridge = ovirtmgmt
metadata_volume_UUID =
metadata_image_UUID =
lockspace_volume_UUID =
lockspace_image_UUID =

The following are used only for iSCSI storage
iqn =
portal =
user =
password =
port =

And this is on all hosts in the cluster!
It seems to me that these are remnants of versions 3.4, 3.5...


BUMP

I resolved the error "KeyError: 'Configuration value not found: 
file=/etc/ovirt-hosted-engine/hosted-engine.conf, key=gateway'".


That error was caused by "VDSGenericException: VDSErrorException: 
received downloaded data size is wrong (requested 20480, received 
10240)"; the solution is here: https://access.redhat.com/solutions/3106231


But in my case there is still a problem with the incorrect 
parameters in hosted-engine.conf... I think I should use "hosted-engine 
--set-shared-config" to change the values on the shared storage. Is this 
right?


Gurus, help me solve this.

Regards,

Alex.



Re: [ovirt-users] ovirt 4.1 unable deploy HostedEngine on next host Configuration value not found: file=/etc/.../hosted-engine.conf

2018-02-12 Thread Reznikov Alexei

On 09.02.2018 21:13, Alex K wrote:

Hi,

did you select "Deploy" when adding the new host?

See attached.


Thanx,
Alex

On Fri, Feb 9, 2018 at 9:53 AM, Reznikov Alexei 
<reznikov...@soskol.com> wrote:


Hi all!

After upgrading from oVirt 4.0 to 4.1, I have trouble adding the next
HostedEngine host to my cluster via the web UI... the host is added
successfully and comes up, but HE is not active on this host.

Logs from the troubled host:
# cat agent.log
> KeyError: 'Configuration value not found:
file=/etc/ovirt-hosted-engine/hosted-engine.conf, key=gateway'

# cat /etc/ovirt-hosted-engine/hosted-engine.conf
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
host_id=2

The deploy log from the engine is attached.

trouble host:
ovirt-hosted-engine-setup-2.1.4-1.el7.centos.noarch
ovirt-host-deploy-1.6.7-1.el7.centos.noarch
vdsm-4.19.45-1.el7.centos.x86_64
CentOS Linux release 7.4.1708 (Core)

engine host:
ovirt-release41-4.1.9-1.el7.centos.noarch
ovirt-engine-4.1.9.1-1.el7.centos.noarch
CentOS Linux release 7.4.1708 (Core)

Please help me fix it.

Thanx, Alex.





Yes, of course, I did this.

Thanx, Alex.





Re: [ovirt-users] hosted-engine --deploy Waiting for VDSM hardware info error Failed to read hardware information

2017-11-03 Thread Reznikov Alexei

On 02.11.2017 18:28, Yaniv Kaul wrote:



On Thu, Nov 2, 2017 at 3:17 PM, Reznikov Alexei 
<reznikov...@soskol.com> wrote:


Hi list, good day to all!

When I try to run hosted-engine --deploy, I get a VDSM error:

[root@h4 yum.repos.d]# hosted-engine --deploy
[INFO] Stage: Initializing
[INFO] Generating a temporary VNC password.
[INFO] Stage: Environment setup
  Continuing will configure this host for serving as
hypervisor and create a VM where you have to install the engine
afterwards.
  Are you sure you want to continue? (Yes, No) [Yes]:
  Configuration files: []
  Log file:

/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20171102160614-39if1h.log
  Version: otopi-1.4.2 (otopi-1.4.2-1.el7.centos)
[INFO] Hardware supports virtualization
[INFO] Stage: Environment packages setup
[INFO] Stage: Programs detection
[INFO] Stage: Environment setup
[INFO] Waiting for VDSM hardware info
...
[INFO] Waiting for VDSM hardware info
[ERROR] Failed to execute stage 'Environment setup': VDSM did not
start within 120 seconds
[INFO] Stage: Clean up
[INFO] Generating answer file
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20171102160113.conf'
[INFO] Stage: Pre-termination
[INFO] Stage: Termination
[ERROR] Hosted Engine deployment failed: this system is not
reliable, please check the issue, fix and redeploy
  Log file is located at

/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20171102155902-2g61gd.log

and I also tried:
[root@h4 ~]# vdsClient -s 0 getVdsHardwareInfo
Failed to read hardware information

My system...
ovirt-release36-3.6.7-3.el7
CentOS Linux release 7.4.1708 (Core)


Fixed in https://gerrit.ovirt.org/#/c/79020/ - but really you should 
use oVirt 4.1.

Y.

vdsm-4.17.32-1.el7.noarch
dmidecode-3.0-5.el7.x86_64
python-dmidecode-3.12.2-1.el7.x86_64


How do I get around this error on my CentOS? Please help me...

logs in attach.


Alex.






Ok, I applied the patch and it worked!

Thanks very much Yaniv!

Alex.