[ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days [newbie & frustrated]

2020-10-24 Thread Simone Tiraboschi
Hi Henni,
your issue is just here:
2020-10-16 11:20:59,445+0200 DEBUG var changed: host "localhost" var
"hostname_resolution_output" type "" value: "{
"changed": true,
"cmd": "getent ahosts node01.xyz.co.za | grep STREAM",
"delta": "0:00:00.004671",
"end": "2020-10-16 11:20:59.179399",
"failed": false,
"rc": 0,
"start": "2020-10-16 11:20:59.174728",
"stderr": "",
"stderr_lines": [],
"stdout": "156.38.192.226  STREAM node01.xyz.co.za",
"stdout_lines": [
"156.38.192.226  STREAM node01.xyz.co.za"
]
}"

but then...

2020-10-16 12:15:43,079+0200 DEBUG var changed: host "localhost" var
"he_vm_ip_addr" type "" value: ""156.38.192.226""
2020-10-16 12:15:43,079+0200 DEBUG var changed: host "localhost" var
"he_vm_ip_prefix" type "" value: "29"
...
2020-10-16 12:15:43,079+0200 DEBUG var changed: host "localhost" var
"he_cloud_init_host_name" type "" value: ""engine01""
2020-10-16 12:15:43,079+0200 DEBUG var changed: host "localhost" var
"he_cloud_init_domain_name" type "" value: ""xyz.co.za""

So,
your host is named node01.xyz.co.za and it resolves to 156.38.192.226;
you are then trying to create a VM named engine01.xyz.co.za and to
configure it with a statically set IPv4 address that is still
156.38.192.226, the same address as the host.
This is enough to explain all the subsequent networking issues.

Please try again using two distinct IP addresses for the node and the
engine VM.
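A quick pre-flight check along these lines can catch the conflict before deployment starts. This is only a sketch: the two FQDNs are the example names from this thread, and the extraction mimics the installer's `getent ahosts ... | grep STREAM` probe shown in the log above.

```shell
# Return the first IPv4 address a name resolves to, the same way the
# installer probes it with `getent ahosts ... | grep STREAM`.
first_ipv4() {
    getent ahosts "$1" | awk '/STREAM/ && $1 ~ /^[0-9]+\./ {print $1; exit}'
}

host_fqdn="node01.xyz.co.za"     # hypervisor host (example from this thread)
engine_fqdn="engine01.xyz.co.za" # engine VM (must resolve to a DIFFERENT IP)

host_ip=$(first_ipv4 "$host_fqdn")
engine_ip=$(first_ipv4 "$engine_fqdn")

if [ -n "$host_ip" ] && [ "$host_ip" = "$engine_ip" ]; then
    echo "CONFLICT: $host_fqdn and $engine_fqdn both resolve to $host_ip" >&2
    exit 1
fi
```

If the check prints a conflict, fix DNS (or the static IP you plan to assign to the engine VM) before re-running the deployment.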

ciao,
Simone


On Sat, Oct 24, 2020 at 8:35 AM  wrote:

> File 1
>
>
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20201024072802-qzvr7t.log
>
>
>
> 2020-10-24 08:10:14,990+0200 ERROR otopi.plugins.gr_he_common.core.misc
> misc._terminate:167 Hosted Engine deployment failed: please check the logs
> for the issue, fix accordingly or re-deploy from scratch.
>
> 2020-10-24 08:10:14,990+0200 DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:204 DIALOG:SEND \
>
>
>
> Yours Sincerely,
>
>
>
> *Henni *
>
>
>
> *From:* i...@worldhostess.com 
> *Sent:* Saturday, 24 October 2020 14:03
> *To:* 'Yedidyah Bar David' 
> *Cc:* 'Edward Berger' ; 'users' 
> *Subject:* [ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days
> [newbie & frustrated]
>
>
>
> Can anyone explain to me how to use the “screen -d -r” option?
>
>
>
> Start the deployment script:
>
> # hosted-engine --deploy
>
> To escape the script at any time, use the Ctrl+D keyboard combination to
> abort deployment. In the event of session timeout or connection disruption,
> run screen -d -r to recover the deployment session.
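For reference, the usual pattern looks like this. It is a sketch that assumes the `screen` package is installed on the host; `he-deploy` is just an arbitrary session name chosen for illustration.

```shell
# Start the deployment inside a named screen session so that it survives an
# SSH disconnect (the deployment keeps running on the host either way).
start_deploy() {
    screen -S he-deploy hosted-engine --deploy
}

# After logging back in following a timeout or dropped connection:
# -d detaches the session from the old (dead) terminal, -r reattaches it here.
reattach_deploy() {
    screen -ls           # list sessions if you forgot the name
    screen -d -r he-deploy
}
```

The same idea works with tmux (`tmux new -s he-deploy`, then `tmux attach -t he-deploy` after reconnecting), which is what is suggested later in this thread.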
>
>
>
>
>
> Yours Sincerely,
>
>
>
> *Henni *
>
>
>
> *From:* Yedidyah Bar David 
> *Sent:* Wednesday, 21 October 2020 15:04
> *To:* i...@worldhostess.com
> *Cc:* Edward Berger ; users 
> *Subject:* [ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days
> [newbie & frustrated]
>
>
>
> On Wed, Oct 21, 2020 at 4:19 AM  wrote:
>
> Did you try to ssh to the engine VM?
> ssh disconnects within 1 to 30 seconds, making it impossible to
> perform anything.
>
>
>
> ssh to the host? Or to the engine vm?
>
>
>
> If to the host, then you have some severe networking issues; I suggest
> handling this first.
>
>
>
>
> Command line install "hosted-engine --deploy": it gets to this point (see
> below) and disconnects, and thereafter ssh and http://FQDN:9090 keep
> disconnecting
>
> "[ INFO  ] TASK [ovirt.hosted_engine_setup : Check engine VM health]"
>
> https "Certificate invalid"
>
> after I run " /usr/sbin/ovirt-hosted-engine-cleanup" I am able to ssh
>
> Yours Sincerely,
>
> Henni
>
> -Original Message-
> From: Yedidyah Bar David 
> Sent: Tuesday, 20 October 2020 14:05
> To: i...@worldhostess.com
> Cc: Edward Berger ; users 
> Subject: [ovirt-users] Re: 20+ Fresh Installs Failing in 20+ days [newbie
> & frustrated]
>
> On Mon, Oct 19, 2020 at 6:00 PM  wrote:
> >
> > I used Cockpit web interface to do the install and I crashed again. I
> think it did not do the final part of the install. There are no other files
> such as "engine-side logs"
>
> Did you try to ssh to the engine VM?
>
> Do you see it running on the host?
>
> >
> > It keep disconnecting from the Cockpit
>
> Due to env issues (communication etc.)? Or oVirt-specific ones (bugs)?
>
> We do have an open bug about allowing reconnection to cockpit in such
> cases, but it has been in NEW state for several years now:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1422544
>
> >
> > This is my problem for the last few weeks, it will not complete the
> install and I have no idea why.
>
> If this is your only problem, I suggest trying to work around it somehow:
>
> Either run the browser through which you connect to cockpit on a machine
> closer (network-wise) to the host you are installing on (and connect to
> that machine from your laptop by some means that allows reconnection, e.g.
> a remote desktop), or use the command line tool/guide, which you can and
> should run inside tmux (and thus easily reconnect if needed).
>
> Good luck 

[ovirt-users] Re: Issues deploying 4.4 with HE on new EPYC hosts

2020-05-29 Thread Simone Tiraboschi
On Fri, May 29, 2020 at 11:39 AM Gianluca Cecchi 
wrote:

> On Fri, May 29, 2020 at 9:34 AM Simone Tiraboschi 
> wrote:
>
>>
>>
>> On Thu, May 28, 2020 at 11:56 PM Gianluca Cecchi <
>> gianluca.cec...@gmail.com> wrote:
>>
>>> On Thu, May 28, 2020 at 3:09 PM Gianluca Cecchi <
>>> gianluca.cec...@gmail.com> wrote:
>>>
>>> [snip]
>>>
>>>>
>>>>
>>>> for the cluster type in the mean time I was able to change it to "Intel
>>>> Cascadelake Server Family" from web admin gui and now I have to try
>>>> these steps and see if engine starts automatically without manual 
>>>> operations
>>>>
>>>> 1) set global maintenance
>>>> 2) shutdown engine
>>>> 3) exit maintenance
>>>> 4) see if the engine vm starts without the cpu flag
>>>>
>>>>
>>> I confirm that point 4) was successful and engine vm was able to
>>> autostart, after changing cluster type.
>>>
>>
>> As expected,
>> in my opinion now the point is just about understanding why the engine
>> detected your host with the wrong CPU features set.
>>
>> To be fully honest, as you can see in
>> https://github.com/oVirt/ovirt-ansible-hosted-engine-setup/blob/master/README.md#L46
>> , we already have a variable (he_cluster_cpu_type) to force a cluster CPU
>> type from the ansible role but I don't think is exposed in the interactive
>> installer.
>>
>>
>
> Can I artificially set it into a playbook, just to verify correct
> completion of setup workflow or do you think that it will be any way
> overwritten at run time by what detected?
>

The interactive installer does not pass it, and the default behaviour is to
omit the parameter when the variable is unset, letting the engine detect and
choose:
https://github.com/oVirt/ovirt-ansible-hosted-engine-setup/blob/master/tasks/bootstrap_local_vm/05_add_host.yml#L62
So yes, you can simply inject a custom value.
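As a sketch, one way to inject it is an extra-vars file passed to your own playbook run of the role. Only the `he_cluster_cpu_type` variable name comes from the role's README; the playbook name below is hypothetical.

```shell
# Write the override; the value must match one of the cluster CPU type
# names the engine knows about.
vars_file=$(mktemp)
cat > "$vars_file" <<'EOF'
he_cluster_cpu_type: "Intel Cascadelake Server Family"
EOF

# Illustrative invocation: my_he_deploy.yml stands for a playbook of yours
# that includes the ovirt.hosted_engine_setup role.
echo "ansible-playbook my_he_deploy.yml -e @$vars_file"
```

Since the role omits the parameter when the variable is unset, setting it this way should simply override the engine's auto-detection for the cluster.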

In my opinion it is not clear what this sentence means: "cluster CPU
> type to be used in hosted-engine cluster (the same as HE host or lower)"
> Does "as HE host" mean what is reported by vdsm capabilities, or something else?
>

Read this as "as the first HE host". This parameter can be useful if you plan
to add older hosts in the future and prefer to start from the beginning with
a cluster CPU type less demanding than what would be detected from the first
host.
I tend to think that in the past the set of CPU features grew monotonically
from one CPU family to the next; this is no longer a safe assumption with all
the different security patches.
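You can at least see which of the security-related flags a given host exposes before deciding on a cluster CPU type. A rough sketch; the flag list is illustrative, taken from the qemu command lines quoted in this thread rather than from any authoritative engine table.

```shell
# List the TSX/MDS-related CPU flags this host exposes; their presence or
# absence is what drives the "secure" vs plain cluster CPU type choice.
grep -m1 '^flags' /proc/cpuinfo \
    | tr ' ' '\n' \
    | grep -Ex 'md_clear|hle|rtm|tsx_ctrl|arch_capabilities' \
    || echo "none of the listed flags present"
```

Comparing this output across all hosts you plan to add shows whether the first host's detected type would exclude the others.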


>
>
>> That one is just a leftover from the install process.
>> It's normally automatically cleaned up as one of the latest actions in
>> the ansible role used for the deployment.
>> I suspect that, due to the wrongly detected CPU type, in your case
>> something failed really close to the end of the deployment and so
>> the leftover: you can safely manually delete it.
>>
>>
>> Yes, the deploy failed because it was not able to detect the final engine
> as up.
>
> As asked by Lucia, the Origin of the VM was "External"
> The VM had no disks and no network interfaces. I was able to remove it
> without problems at the moment.
> Thanks,
> Gianluca
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DMQU4SPGXJ5ZAOOS2GY2MTTEJI2THQW7/


[ovirt-users] Re: Issues deploying 4.4 with HE on new EPYC hosts

2020-05-29 Thread Simone Tiraboschi
On Thu, May 28, 2020 at 11:56 PM Gianluca Cecchi 
wrote:

> On Thu, May 28, 2020 at 3:09 PM Gianluca Cecchi 
> wrote:
>
> [snip]
>
>>
>>
>> for the cluster type in the mean time I was able to change it to "Intel
>> Cascadelake Server Family" from web admin gui and now I have to try
>> these steps and see if engine starts automatically without manual operations
>>
>> 1) set global maintenance
>> 2) shutdown engine
>> 3) exit maintenance
>> 4) see if the engine vm starts without the cpu flag
>>
>>
> I confirm that point 4) was successful and engine vm was able to
> autostart, after changing cluster type.
>

As expected,
in my opinion now the point is just about understanding why the engine
detected your host with the wrong CPU features set.

To be fully honest, as you can see in
https://github.com/oVirt/ovirt-ansible-hosted-engine-setup/blob/master/README.md#L46
, we already have a variable (he_cluster_cpu_type) to force a cluster CPU
type from the ansible role but I don't think is exposed in the interactive
installer.


> I'm also able to connect to its console from web admin gui
>
> The command line generated now is:
>
> qemu 29450 1 43 23:38 ?00:03:09 /usr/libexec/qemu-kvm
> -name guest=HostedEngine,debug-threads=on -S -object
> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-10-HostedEngine/master-key.aes
> -machine pc-q35-rhel8.1.0,accel=kvm,usb=off,dump-guest-core=off -cpu
> Cascadelake-Server,hle=off,rtm=off,arch-capabilities=on -m
> size=16777216k,slots=16,maxmem=67108864k -overcommit mem-lock=off -smp
> 2,maxcpus=32,sockets=16,cores=2,threads=1 -object iothread,id=iothread1
> -numa node,nodeid=0,cpus=0-31,mem=16384 -uuid
> b572d924-b278-41c7-a9da-52c4f590aac1 -smbios
> type=1,manufacturer=oVirt,product=RHEL,version=8-1.1911.0.9.el8,serial=d584e962-5461-4fa5-affa-db413e17590c,uuid=b572d924-b278-41c7-a9da-52c4f590aac1,family=oVirt
> -no-user-config -nodefaults -device sga -chardev
> socket,id=charmonitor,fd=40,server,nowait -mon
> chardev=charmonitor,id=monitor,mode=control -rtc
> base=2020-05-28T21:38:21,driftfix=slew -global
> kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -global
> ICH9-LPC.disable_s3=1 -global ICH9-LPC.disable_s4=1 -boot strict=on -device
> pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2
> -device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1
> -device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2
> -device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3
> -device pcie-root-port,port=0x14,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4
> -device pcie-root-port,port=0x15,chassis=6,id=pci.6,bus=pcie.0,addr=0x2.0x5
> -device pcie-root-port,port=0x16,chassis=7,id=pci.7,bus=pcie.0,addr=0x2.0x6
> -device pcie-root-port,port=0x17,chassis=8,id=pci.8,bus=pcie.0,addr=0x2.0x7
> -device
> pcie-root-port,port=0x18,chassis=9,id=pci.9,bus=pcie.0,multifunction=on,addr=0x3
> -device
> pcie-root-port,port=0x19,chassis=10,id=pci.10,bus=pcie.0,addr=0x3.0x1
> -device
> pcie-root-port,port=0x1a,chassis=11,id=pci.11,bus=pcie.0,addr=0x3.0x2
> -device
> pcie-root-port,port=0x1b,chassis=12,id=pci.12,bus=pcie.0,addr=0x3.0x3
> -device
> pcie-root-port,port=0x1c,chassis=13,id=pci.13,bus=pcie.0,addr=0x3.0x4
> -device
> pcie-root-port,port=0x1d,chassis=14,id=pci.14,bus=pcie.0,addr=0x3.0x5
> -device
> pcie-root-port,port=0x1e,chassis=15,id=pci.15,bus=pcie.0,addr=0x3.0x6
> -device
> pcie-root-port,port=0x1f,chassis=16,id=pci.16,bus=pcie.0,addr=0x3.0x7
> -device pcie-root-port,port=0x20,chassis=17,id=pci.17,bus=pcie.0,addr=0x4
> -device pcie-pci-bridge,id=pci.18,bus=pci.1,addr=0x0 -device
> qemu-xhci,p2=8,p3=8,id=ua-b630a65c-8156-4542-b8e8-98b4d2c48f67,bus=pci.4,addr=0x0
> -device
> virtio-scsi-pci,iothread=iothread1,id=ua-b7696ce2-fd8c-4856-8c38-197fc520271b,bus=pci.5,addr=0x0
> -device
> virtio-serial-pci,id=ua-608f9599-30b2-4ee6-a0d3-d5fb588583ad,max_ports=16,bus=pci.3,addr=0x0
> -drive if=none,id=drive-ua-fa671f6c-dc42-4c59-a66d-ccfa3d5d422b,readonly=on
> -device
> ide-cd,bus=ide.2,drive=drive-ua-fa671f6c-dc42-4c59-a66d-ccfa3d5d422b,id=ua-fa671f6c-dc42-4c59-a66d-ccfa3d5d422b,werror=report,rerror=report
> -drive
> file=/var/run/vdsm/storage/3df8f6d4-d572-4d2b-9ab2-8abc456a396f/df02bff9-2c4b-4e14-a0a3-591a84ccaed9/bf435645-2999-4fb2-8d0e-5becab5cf389,format=raw,if=none,id=drive-ua-df02bff9-2c4b-4e14-a0a3-591a84ccaed9,cache=none,aio=threads
> -device
> virtio-blk-pci,iothread=iothread1,scsi=off,bus=pci.6,addr=0x0,drive=drive-ua-df02bff9-2c4b-4e14-a0a3-591a84ccaed9,id=ua-df02bff9-2c4b-4e14-a0a3-591a84ccaed9,bootindex=1,write-cache=on,serial=df02bff9-2c4b-4e14-a0a3-591a84ccaed9,werror=stop,rerror=stop
> -netdev
> tap,fds=43:44,id=hostua-b29ca99f-a53e-4de7-8655-b65ef4ba5dc4,vhost=on,vhostfds=45:46
> -device
> 

[ovirt-users] Re: Issues deploying 4.4 with HE on new EPYC hosts

2020-05-28 Thread Simone Tiraboschi
On Thu, May 28, 2020 at 12:02 PM Lucia Jelinkova 
wrote:

> I think you have the same problem as Mark - the cluster is set to use the
> secure variant of the CPU type but your host does not support all the
> necessary flags.
>
> Intel Cascadelake Server Family - the VM is run with
> Cascadelake-Server,-hle,-rtm,+arch-capabilities
> Secure Intel Cascadelake Server Family - the VM is run with
> Cascadelake-Server,+md-clear,+mds-no,-hle,-rtm,+tsx-ctrl,+arch-capabilities
>
> The cluster should be set to Intel Cascadelake Server Family. I am not
> familiar with the HE setup process - have you specified the cluster CPU
> type manually or is it auto-assigned?
>

No, the user is not allowed to force a specific CPU type for the cluster:
the hosted-engine-setup process simply registers the first host into the
Default cluster, and the engine detects the CPU type and configures the
cluster according to the detected CPU.
I tend to think that this is a bug in the code that chooses the CPU
type for the cluster according to what the host reports.


>
> Lucia
>
> On Thu, May 28, 2020 at 11:32 AM Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> On Thu, May 28, 2020 at 11:00 AM Gianluca Cecchi <
>> gianluca.cec...@gmail.com> wrote:
>>
>>>
>>>
>>> any input to solve and at least try some features of 4.4 on this hw env?
>>>
>>> Thanks,
>>> Gianluca
>>>
>>>
>> it seems I was able to get it recognized this way:
>>
>> strip these lines from he.xml
>>
>> > 1
>> > unsupported configuration: unknown CPU
>> feature: tsx-ctrl
>> > 1
>> 13a17
>> > > type="float">1590653346.0201075
>> 246a251,264
>> > > passwdValidTo='1970-01-01T00:00:01'>
>> >   
>> > 
>> > > passwdValidTo='1970-01-01T00:00:01'>
>> >   
>> >   
>> >   
>> >   
>> >   
>> >   
>> >   
>> >   
>> >   
>> > 
>>
>> I have to understand what to use for graphics but it is not important at
>> the moment
>> After the deployment failure, the link to the engine storage domain under
>> /var/run/vdsm/storage/3df8f6d4-d572-4d2b-9ab2-8abc456a396f/ had been
>> deleted.
>> So I go there and create it:
>>
>> [root@novirt2 ~]# cd
>> /var/run/vdsm/storage/3df8f6d4-d572-4d2b-9ab2-8abc456a396f/
>>
>> [root@novirt2 ~]# ln -s
>> /rhev/data-center/mnt/glusterSD/novirt2st.storage.local\:_engine/3df8f6d4-d572-4d2b-9ab2-8abc456a396f/images/df02bff9-2c4b-4e14-a0a3-591a84ccaed9
>> df02bff9-2c4b-4e14-a0a3-591a84ccaed9
>>
>> Start VM
>> [root@novirt2 ~]# virsh -c
>> qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf create
>> he.xml
>> Domain HostedEngine created from he.xml
>>
>> Access its webadmin gui and now all is as expected, with only one engine
>> vm...
>> Volumes for data and vmstore are there but not the storage domains;
>> I can create them without problems
>>
>> Also if I exit the global maintenance the state passes from
>> ReinitializeFSM to EngineUp
>>
>> [root@novirt2 ~]# hosted-engine --vm-status
>>
>>
>> --== Host novirt2.example.net (id: 1) status ==--
>>
>> Host ID: 1
>> Host timestamp : 38553
>> Score  : 3400
>> Engine status  : {"vm": "up", "health": "good",
>> "detail": "Up"}
>> Hostname   : novirt2.example.net
>> Local maintenance  : False
>> stopped: False
>> crc32  : 5a3b40e1
>> conf_on_shared_storage : True
>> local_conf_timestamp   : 38553
>> Status up-to-date  : True
>> Extra metadata (valid at timestamp):
>> metadata_parse_version=1
>> metadata_feature_version=1
>> timestamp=38553 (Thu May 28 11:30:51 2020)
>> host-id=1
>> score=3400
>> vm_conf_refresh_time=38553 (Thu May 28 11:30:51 2020)
>> conf_on_shared_storage=True
>> maintenance=False
>> state=EngineUp
>> stopped=False
>> [root@novirt2 ~]#
>>
>> Now go to test further functionalities.
>>
>>
>> Gianluca
>>


[ovirt-users] Re: Getting the same bug in 4.4 as I did in 4.3.. brand new install 100% repeatable for me.

2020-05-22 Thread Simone Tiraboschi
On Fri, May 22, 2020 at 6:13 PM  wrote:

> MainThread::WARNING::2020-05-21
> 14:22:55,067::storage_broker::100::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(__init__)
> Can't connect vdsm storage: 'NoneType' object has no attribute
> 'close_connections'
> MainThread::ERROR::2020-05-21
> 14:22:55,067::broker::69::ovirt_hosted_engine_ha.broker.broker.Broker::(run)
> Failed initializing the broker: 'NoneType' object has no attribute
> 'close_connections'
>

This is probably a side effect of a previous error, so self._listener is
None, but I'm pretty sure that something failed before it was initialized.
Can you please share the whole log file?


> MainThread::ERROR::2020-05-21
> 14:22:55,078::broker::71::ovirt_hosted_engine_ha.broker.broker.Broker::(run)
> Traceback (most recent call last):
>   File
> "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/broker.py",
> line 64, in run
> self._storage_broker_instance = self._get_storage_broker()
>   File
> "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/broker.py",
> line 143, in _get_storage_broker
> return storage_broker.StorageBroker()
>   File
> "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py",
> line 97, in __init__
> self._backend.connect()
>   File
> "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py",
> line 375, in connect
> sserver.connect_storage_server()
>   File
> "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/storage_server.py",
> line 356, in connect_storage_server
> conList, storageType = self._get_conlist(cli, normalize_path=True)
>   File
> "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/storage_server.py",
> line 306, in _get_conlist
> self._validate_pre_connected_path(cli, path)
>   File
> "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/storage_server.py",
> line 155, in _validate_pre_connected_path
> cli.StorageDomain.getInfo(storagedomainID=self._sdUUID)
>   File "/usr/lib/python3.6/site-packages/vdsm/client.py", line 289, in
> _call
> req, timeout=timeout, flow_id=self._flow_id)
>   File "/usr/lib/python3.6/site-packages/yajsonrpc/jsonrpcclient.py", line
> 91, in call
> call.wait(kwargs.get('timeout', CALL_TIMEOUT))
>   File "/usr/lib/python3.6/site-packages/yajsonrpc/jsonrpcclient.py", line
> 290, in wait
> self._ev.wait(timeout)
>   File "/usr/lib64/python3.6/threading.py", line 551, in wait
> signaled = self._cond.wait(timeout)
>   File "/usr/lib64/python3.6/threading.py", line 299, in wait
> gotit = waiter.acquire(True, timeout)
>   File
> "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/broker.py",
> line 114, in _handle_quit
> down_thread = threading.Thread(target=self._listener.close_connections)
> AttributeError: 'NoneType' object has no attribute 'close_connections'


[ovirt-users] Re: Getting the same bug in 4.4 as I did in 4.3.. brand new install 100% repeatable for me.

2020-05-22 Thread Simone Tiraboschi
On Fri, May 22, 2020 at 5:28 PM  wrote:

> Hmm... I wonder why I am so special, as this is 100% repeatable every time
> for me, and I can't get past this. Anyone? I could really use some help!
>

Hi,
can you please check if you have something strange under
/var/log/ovirt-hosted-engine-ha/broker.log ?
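To pull the interesting part out of that log, something along these lines usually works. The path is the one named above; the rest is a sketch.

```shell
# Show the first error/traceback lines from the HA broker log, if present.
log=/var/log/ovirt-hosted-engine-ha/broker.log
if [ -r "$log" ]; then
    grep -nE 'ERROR|Traceback' "$log" | head -n 20
else
    echo "no readable $log on this machine"
fi
```

The line numbers from `grep -n` make it easy to open the log at the first failure and read backwards for the real cause.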




[ovirt-users] Re: HostedEngine Deployment fails activating NFS Storage Domain hosted_storage via GUI Step 4

2019-10-28 Thread Simone Tiraboschi
On Sun, Oct 27, 2019 at 8:44 AM Michael  wrote:

> Hi, I just encountered the same error in my lab:
>
> completely new NFS server, new oVirt Node installation, and first
> deployment of the Engine.
>
>
> [ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[]".
> HTTP response code is 400.
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault
> reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code
> is 400."}
>
> My logs says the same thing
>

Can you please check in engine.log or vdsm.log if you have more detail
about this failure?
Please double check it with:
https://www.ovirt.org/develop/troubleshooting-nfs-storage-issues.html
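The manual check from that guide boils down to mounting the export and writing to it, roughly as vdsm would. Sketched here as a function; the server name and export path in the example are placeholders.

```shell
# Manually verify an NFS export is mountable and writable, roughly what the
# linked troubleshooting guide walks through. Requires root and the NFS
# client utilities on the host.
check_nfs_export() {
    server=$1 export_path=$2 mnt=$(mktemp -d)
    showmount -e "$server" || return 1
    mount -t nfs "$server:$export_path" "$mnt" || return 1
    touch "$mnt/.ovirt-write-test" && rm -f "$mnt/.ovirt-write-test"
    rc=$?
    umount "$mnt" && rmdir "$mnt"
    return $rc
}

# Example (placeholder names):
# check_nfs_export nfs.example.com /exports/hosted_storage
```

Note this sketch does not test ownership: the guide also covers the export needing to be owned by vdsm:kvm (36:36), which is a common cause of activation failures.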



>
>
> - Michael H
> - nerdaler...@gmail.com


[ovirt-users] Re: HE deployment failing - FAILED! => {"changed": false, "msg": "network default not found"}

2019-10-18 Thread Simone Tiraboschi
On Fri, Oct 18, 2019 at 3:46 PM Parth Dhanjal  wrote:

> Hey!
>
> I am trying a static IP deployment.
> But the HE deployment fails during the VM preparation step and throws the
> following error -
>
> [ INFO ] TASK [ovirt.hosted_engine_setup : Parse libvirt default network
> configuration]
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg":
> "network default not found"}
>
> I tried restarting the network service, but the error is still persisting.
>
> Upon checking the logs, I can find the following
> https://pastebin.com/MB6GrLKA
>
> Has anyone else faced this issue?
>

Maybe in the past you destroyed/removed by mistake the default libvirt
network on your host.
Can you please try reinstalling libvirt-daemon-config-network?
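A sketch of the recovery, assuming a dnf-based oVirt node. The reinstall should restore the "default" network definition; the virsh steps just make sure it is active and starts on boot.

```shell
restore_default_libvirt_net() {
    dnf -y reinstall libvirt-daemon-config-network
    # Confirm the "default" network definition is back, then activate it.
    virsh net-list --all
    virsh net-start default 2>/dev/null || true
    virsh net-autostart default
}
```

After this, `virsh net-list` should show `default` as active, and the "Parse libvirt default network configuration" task should no longer fail.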


>
> Regards
> Parth Dhanjal


[ovirt-users] Re: [oVirt HC] Gluster traffic still flows on mgmt even after choosing a different Gluster nw

2019-10-16 Thread Simone Tiraboschi
On Wed, Oct 16, 2019 at 2:16 PM Stefano Stagnaro <
stefa...@prismatelecomtesting.com> wrote:

> Hi,
>
> I've deployed an oVirt HC starting with latest oVirt Node 4.3.6; this is
> my simple network plan (FQDNs only resolves the front-end addresses):
>
> front-end   back-end
> engine.ovirt192.168.110.10
> node1.ovirt 192.168.110.11  192.168.210.11
> node2.ovirt 192.168.110.12  192.168.210.12
> node3.ovirt 192.168.110.13  192.168.210.13
>
>
The storage traffic allocation over multiple subnets is implicitly set by
name resolution and routing rules.

Please use two distinct hostnames for each host: the first one should
resolve only as an address on the management network and the second one as
an address on the storage network.

In the cockpit wizard for the hyperconverged deployment you will be
prompted twice about the name of the three hosts: on the first tab (named
'Hosts') use the three host-names that resolves on the storage network.
On the second tab ('Additional Hosts') please use the hostnames that are
going to be resolved over the management network.
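Concretely, that means giving every host a second, storage-only name. A hypothetical /etc/hosts sketch using the subnets from this thread; the `-storage` names are invented for illustration.

```shell
# Written to a temp file here; on a real deployment this goes into DNS or
# into /etc/hosts on every node.
hosts_example=$(mktemp)
cat > "$hosts_example" <<'EOF'
# front-end (management) names -- used in the "Additional Hosts" tab
192.168.110.11  node1.ovirt
192.168.110.12  node2.ovirt
192.168.110.13  node3.ovirt
# back-end (gluster) names -- used in the "Hosts" tab
192.168.210.11  node1-storage.ovirt
192.168.210.12  node2-storage.ovirt
192.168.210.13  node3-storage.ovirt
EOF
cat "$hosts_example"
```

With this scheme the Gluster peers address each other by the back-end names, so brick traffic stays on the 192.168.210.0/24 subnet instead of ovirtmgmt.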


In the end I followed the RHHI-V 1.6 Deployment Guide, whose chapter 9
> [1] suggests creating a logical network for Gluster traffic. Now I can
> see, indeed, the back-end addresses added to the address pool:
>
> [root@node1 ~]# gluster peer status
> Number of Peers: 2
>
> Hostname: node3.ovirt
> Uuid: 3fe33e8b-d073-4d7a-8bda-441c42317c92
> State: Peer in Cluster (Connected)
> Other names:
> 192.168.210.13
>
> Hostname: node2.ovirt
> Uuid: a95a9233-203d-4280-92b9-04217fa338d8
> State: Peer in Cluster (Connected)
> Other names:
> 192.168.210.12
>
> The problem is that the Gluster traffic seems still to flow on the
> management interfaces:
>
> [root@node1 ~]# tcpdump -i ovirtmgmt portrange 49152-49664
>
>
> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
> listening on ovirtmgmt, link-type EN10MB (Ethernet), capture size 262144
> bytes
> 14:04:58.746574 IP node2.ovirt.49129 > node1.ovirt.49153: Flags [.], ack
> 484303246, win 18338, options [nop,nop,TS val 6760049 ecr 6760932], length 0
> 14:04:58.753050 IP node2.ovirt.49131 > node1.ovirt.49152: Flags [P.], seq
> 2507489191:2507489347, ack 2889633200, win 20874, options [nop,nop,TS val
> 6760055 ecr 6757892], length 156
> 14:04:58.753131 IP node2.ovirt.49131 > node1.ovirt.49152: Flags [P.], seq
> 156:312, ack 1, win 20874, options [nop,nop,TS val 6760055 ecr 6757892],
> length 156
> 14:04:58.753142 IP node2.ovirt.49131 > node1.ovirt.49152: Flags [P.], seq
> 312:468, ack 1, win 20874, options [nop,nop,TS val 6760055 ecr 6757892],
> length 156
> 14:04:58.753148 IP node2.ovirt.49131 > node1.ovirt.49152: Flags [P.], seq
> 468:624, ack 1, win 20874, options [nop,nop,TS val 6760055 ecr 6757892],
> length 156
> 14:04:58.753203 IP node2.ovirt.49131 > node1.ovirt.49152: Flags [P.], seq
> 624:780, ack 1, win 20874, options [nop,nop,TS val 6760055 ecr 6757892],
> length 156
> 14:04:58.753216 IP node2.ovirt.49131 > node1.ovirt.49152: Flags [P.], seq
> 780:936, ack 1, win 20874, options [nop,nop,TS val 6760055 ecr 6757892],
> length 156
> 14:04:58.753231 IP node1.ovirt.49152 > node2.ovirt.49131: Flags [.], ack
> 936, win 15566, options [nop,nop,TS val 6760978 ecr 6760055], length 0
> ...
>
> and not yet on the eth1 I dedicated to Gluster:
>
> [root@node1 ~]# tcpdump -i eth1 portrange 49152-49664
> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
> listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
>
> What am I missing here? What can I do to force the Gluster traffic to
> really flow on dedicated Gluster network?
>
> Thank you,
> Stefano.
>
> [1] https://red.ht/2MiZ4Ge


[ovirt-users] Re: Ovirt 4.3.5/6 automated install fails

2019-10-11 Thread Simone Tiraboschi
On Fri, Oct 11, 2019 at 9:01 AM  wrote:

> I think I finally found the issue: it was related to
> tasks/add_hosts_storage_domains.yml, where hosts: localhost should be set to
> hosts: host1.example.other.com
>
> As mentioned all worked except for one warning:
>
> TASK [ovirt.hosted_engine_setup : Always revoke the SSO token]
>
> ***
> fatal: [host1.example.other.com]: FAILED! => {"changed": false, "msg":
> "You must specify either 'url' or 'hostname'."}
> ...ignoring
>
> host1.example.other.com : ok=418  changed=150  unreachable=0failed=0
>   skipped=220  rescued=0ignored=1
>
> I researched a bit about this error and found
> https://github.com/ansible/ansible/issues/53379 but not sure if this is
> still the case.
>
>
Yes, it's that one.

AFAIK it's not going to fail the whole deployment, because we added an
ignore over that kind of error.
AFAIK it's not systematic.


> any feedback is welcome.
>

> thank you,
>
> Adrian


[ovirt-users] Re: is it possible to add host on which ovirt is installed ?

2019-10-09 Thread Simone Tiraboschi
On Tue, Oct 8, 2019 at 12:48 PM  wrote:

> Hi.
>
> I am confused now.
>
> removed all previous configuration and followed the instructions:
>
>
> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Single_node_hyperconverged.html
>
> Then I read:
>
> Deploying on oVirt Node based Hosts
> oVirt Node contains all the required packages to set up the hyperconverged
> environment. Refer to oVirt Nodes for instructions on installing oVirt Node
> on the host. You can proceed to setting up the hyperconverged environment
> if you have an oVirt Node based host.
>
> What I found, is that oVirt Node is a separate installation for separate
> machine.
>
> So now I am confused :(
>
>
> What exact steps do I need to perform to have a GUI in which I can create
> virtual machines, having only one server available :) ?
>

Normally in oVirt you are going to have a central manager, the engine, plus
N virtualization hosts running VMs whose storage lives on external storage.
Using it with a single host and no external storage is a kind of
degenerate case.

I'd still suggest the single node hyper-converged hosted-engine deployment,
which is designed exactly for your use case.

You are going to install oVirt Node on your bare metal machine: oVirt Node
is a minimal CentOS-based OS image built for oVirt purposes.
The hosted-engine deployment will then create a first VM there from a
ready-to-use appliance image, and oVirt Engine will run inside that VM.
The storage will be exposed by the host itself via Gluster.

The easiest deployment path is:
1. install oVirt node on your bare metal server from an ISO:
https://resources.ovirt.org/pub/ovirt-4.3/iso/ovirt-node-ng-installer/4.3.6-2019092614/el7/ovirt-node-ng-installer-4.3.6-2019092614.el7.iso
2. connect to its Cockpit UI at https://host_address:9090/
3. run the single host hyper-converged deployment from the hosted-engine
tab there



>
> Thanks in advance.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OVLMMA2JJYP3HIU4W6KDSYMKE6WFYRTP/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QCAGRKORTB4ODY6QWTTWRYWKLEEBHTUM/


[ovirt-users] Re: Ovirt 4.3.5/6 automated install fails

2019-10-09 Thread Simone Tiraboschi
On Tue, Oct 8, 2019 at 6:12 PM  wrote:

> I am having issues while trying to deploy an automated HC install
> 
> ansible 2.8.2
>   config file = /etc/ansible/ansible.cfg
>   configured module search path = [u'/root/.ansible/plugins/modules',
> u'/usr/share/ansible/plugins/modules']
>   ansible python module location = /usr/lib/python2.7/site-packages/ansible
>   executable location = /usr/bin/ansible
>   python version = 2.7.5 (default, Jun 20 2019, 20:27:34) [GCC 4.8.5
> 20150623 (Red Hat 4.8.5-36)]
>
>
> Ovirt node = 4.3.5 or 4.3.6
>
>
> 1.- Followed the procedure listed in the following link:
>
> https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.6/html/automating_rhhi_for_virtualization_deployment/setting-deployment-variables
>
> 2.-Ran the playbook within Path:
> /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment:
> ansible-playbook -i gluster_inventory.yml hc_deployment.yml
> --extra-vars='@he_gluster_vars.json'
>
> 3.-Json file
> [root@host1 hc-ansible-deployment]# cat he_gluster_vars.json
> {
>   "he_appliance_password": "changeme",
>   "he_admin_password": "changeme",
>   "he_domain_type": "glusterfs",
>   "he_fqdn": "ovirt-engine.example.com",
>   "he_vm_mac_addr": "00:16:fe:05:e1:ee",
>   "he_default_gateway": "10.10.10.1",
>
>   "he_mgmt_network": "ovirtmgmt",
>   "he_ansible_host_name": "host1.example.com",
>   "he_storage_domain_name": "HostedEngine",
>   "he_storage_domain_path": "/engine",
>   "he_storage_domain_addr": "vmm10.virt.aid3p",
>   "he_mount_options": "backup-volfile-servers=host2.example.com:h
> ost3.example.com",
>   "he_bridge_if": "eno49",
>   "he_enable_hc_gluster_service": true,
>   "he_mem_size_MB": "32768",
>   "he_cluster": "Default"
> }
>
> Error:
> task path:
> /usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/03_engine_initial_tasks.yml:8
> fatal: [localhost]: FAILED! => {
> "msg": "The task includes an option with an undefined variable. The
> error was: 'ansible.vars.hostvars.HostVarsVars object' has no attribute
> 'he_host_ip'\n\nThe error appears to be in
> '/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/03_engine_initial_tasks.yml':
> line 8, column 5, but may\nbe elsewhere in the file depending on the exact
> syntax problem.\n\nThe offending line appears to be:\n\n  timeout:
> 180\n  - name: Add an entry for this host on /etc/hosts on the local VM\n
>   ^ here\n"
> }
>
>
> task path:
> /usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/03_engine_initial_tasks.yml:8
> fatal: [localhost]: FAILED! => {
> "msg": "The task includes an option with an undefined variable. The
> error was: 'ansible.vars.hostvars.HostVarsVars object' has no attribute
> 'he_host_ip'\n\nThe error appears to be in
> '/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/03_engine_initial_tasks.yml':
> line 8, column 5, but may\nbe elsewhere in the file depending on the exact
> syntax problem.\n\nThe offending line appears to be:\n\n  timeout:
> 180\n  - name: Add an entry for this host on /etc/hosts on the local VM\n
>   ^ here\n"
> }
>
>
> task path:
> /usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/03_engine_initial_tasks.yml:57
> fatal: [localhost]: FAILED! => {
> "msg": "The task includes an option with an undefined variable. The
> error was: 'ansible.vars.hostvars.HostVarsVars object' has no attribute
> 'he_local_vm_dir'\n\nThe error appears to be in
> '/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/03_engine_initial_tasks.yml':
> line 57, column 7, but may\nbe elsewhere in the file depending on the exact
> syntax problem.\n\nThe offending line appears to be:\n\n  delegate_to:
> \"{{ he_ansible_host_name }}\"\n- name: Get local VM dir path\n  ^
> here\n"
> }
> PLAY RECAP
> **
> localhost  : ok=184  changed=56   unreachable=0
> failed=2skipped=65   rescued=0ignored=0
> host1.example.com   : ok=34   changed=17   unreachable=0
> failed=0skipped=70   rescued=0ignored=0
> host2.example.com   : ok=37   changed=20   unreachable=0
> failed=0skipped=100  rescued=0ignored=0
> host3.example.com   : ok=34   changed=17   unreachable=0
> failed=0skipped=70   rescued=0ignored=0
>
> So it seems that for some strange reason it is not getting the correct
> he_host_ip and  he_local_vm_dir.
>
> has anybody else encountered this error?
>

Hi Adrian,
you want to configure your engine VM to get an IP address from DHCP, right?

Are you sure that the value of he_ansible_host_name exactly matches the
address of your first host as used in your inventory file?
In {{ 
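Independently of the variable lookup issue, vars files like the one quoted earlier can fail in confusing ways when they are not strictly valid JSON (strict parsers reject things like a trailing comma after the last key). A quick sanity check before running the playbook; this is just a sketch, not part of the role, and the keys it warns about are only the ones used in this thread (the required set varies by role version):

```python
import json
import sys

def check_vars_file(path):
    """Fail fast with a clear message if the extra-vars file is not valid JSON."""
    try:
        with open(path) as f:
            data = json.load(f)
    except json.JSONDecodeError as e:
        sys.exit(f"{path}: invalid JSON at line {e.lineno}, column {e.colno}: {e.msg}")
    # Keys checked here are just the ones seen in the thread above;
    # adjust for your role version.
    for key in ("he_fqdn", "he_ansible_host_name", "he_storage_domain_name"):
        if key not in data:
            print(f"warning: {key} missing from {path}")
    return data

if __name__ == "__main__" and len(sys.argv) > 1:
    check_vars_file(sys.argv[1])
```

Run it as `python check_vars.py he_gluster_vars.json` before invoking ansible-playbook.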

[ovirt-users] Re: is it possible to add host on which ovirt is installed ?

2019-10-02 Thread Simone Tiraboschi
On Wed, Oct 2, 2019 at 11:14 PM  wrote:

> Hi.
>
> I am trying to do some lab.
>
> I have one Unix server, on which I installed oVirt package. I have already
> access to console.
>
> I would like to create some virtual servers, but to do so, I need to have
> datacenter up. I checked in docummentation, that to have it running, I need
> to have:
>
> - cluster - which I already made
> - host - here I have problem, because I am trying to add the same host on
> which ovirt is running, but I get error without any notification
> - storage domain -  which I guess is created automatically during ovirt
> installation.
>
> Any ideas :) ?
>

I'd suggest deploying it as a single host hyper-converged Gluster
deployment. You can easily do that by connecting to the host's Cockpit web
UI.


> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/N3CR3YBKKZGP7WGKADUYSRRIIHPDHPLT/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VJ2SPIZRSSKF2AVOWTYWAIOTIHV5FIZZ/


[ovirt-users] Re: Linked Clones?

2019-09-19 Thread Simone Tiraboschi
On Thu, Sep 19, 2019 at 9:24 AM  wrote:

> Hi,
>
> I am approaching and studying oVirt in order to propose the solution to a
> customer as a replacement for a commercial solution they have now.
> They only need Desktop virtualization.
> Sorry for the silly question, but I can't find a way to deploy a VM
> (template) to users as a "linked-clone", meaning that the users' image
> still refers to the original image but modifications are written (and
> afterwards read) from a new location. This technique is called
> Copy-on-write.
> Can this be achieved with oVirt?
>

Hi,
oVirt provides VM Pools:
https://www.ovirt.org/documentation/admin-guide/chap-Pools.html

but the VMs in a pool are designed to be stateless, so in that use case
the user-specific data should be stored on an external storage area (not
on the VM disk) like an NFS or SMB/CIFS share mounted inside the VM at
user login time.
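For the stateless-pool approach, such a share can be mounted inside the guest at boot; a minimal sketch via /etc/fstab inside the pool VM, where the server name, share name, and mount point are all hypothetical placeholders:

```
# /etc/fstab entry inside the pool VM (illustrative names only; the
# credentials file must exist and be readable by root only)
//fileserver.example.com/userdata  /home/shared  cifs  credentials=/etc/cifs-creds,_netdev  0  0
```

Per-user mounts at login time would instead need something like pam_mount or a login script; the fstab line above only covers a share common to all users of the VM.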

Probably you can achieve something closer to your initial idea relying on
oVirt-CinderLib integration over a capable SAN or a Ceph storage.
https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
Please be aware that in 4.3 cinderlib integration is still a
"tech-preview" grade feature, and the design of the whole solution will
probably require a significant integration/configuration effort on your
side to correctly achieve copy-on-write behaviour on the storage side.



>
> Fabio
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/3XITFLEJDICOFY3KQTBHFASBF7KANJCV/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2KFQCPLGDL6NEBLVLMO6WG4KMXURNJEI/


[ovirt-users] Re: Cannot Increase Hosted Engine VM Memory

2019-09-04 Thread Simone Tiraboschi
On Wed, Sep 4, 2019 at 6:15 PM Dionysis K  wrote:

> if you see the engine zip log then
>
> i need to add that whatever value i am entering at the memory size field
> does not stick; it still reverts to 5120 for some reason
>

And indeed in the logs I see that:
2019-09-04 18:04:56,535+03 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-8) [564cc40c] EVENT_ID: HOT_SET_MEMORY(2,039), Hotset memory:
changed the amount of memory on VM HostedEngine from 5120 to 5120
2019-09-04 18:04:56,585+03 INFO
 [org.ovirt.engine.core.bll.HotSetAmountOfMemoryCommand] (default task-8)
[6426f726] Running command: HotSetAmountOfMemoryCommand internal: true.
Entities affected :  ID: f058c188-43c5-4685-88e0-c88b3c9abd01 Type:
VMAction group EDIT_VM_PROPERTIES with role type USER
2019-09-04 18:04:56,587+03 INFO
 [org.ovirt.engine.core.vdsbroker.SetAmountOfMemoryVDSCommand] (default
task-8) [6426f726] START, SetAmountOfMemoryVDSCommand(HostName = Deimos,
Params:{hostId='0fafebe7-14e8-4e4f-916c-d56b7b5150f8',
vmId='f058c188-43c5-4685-88e0-c88b3c9abd01',
memoryDevice='VmDevice:{id='VmDeviceId:{deviceId='0c7de7ff-e68e-42d1-a807-4e3ee92201e8',
vmId='f058c188-43c5-4685-88e0-c88b3c9abd01'}', device='memory',
type='MEMORY', specParams='[node=0, size=2944]', address='',
managed='true', plugged='true', readOnly='false',
deviceAlias='ua-0c7de7ff-e68e-42d1-a807-4e3ee92201e8',
customProperties='null', snapshotId='null', logicalName='null',
hostDevice='null'}', minAllocatedMem='5461'}), log id: 4d62125f
2019-09-04 18:04:56,641+03 INFO
 [org.ovirt.engine.core.vdsbroker.SetAmountOfMemoryVDSCommand] (default
task-8) [6426f726] FINISH, SetAmountOfMemoryVDSCommand, return: , log id:
4d62125f
2019-09-04 18:04:56,669+03 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-8) [6426f726] EVENT_ID: HOT_SET_MEMORY(2,039), Hotset memory:
changed the amount of memory on VM HostedEngine from 5120 to 5120

On the other side I also see:
2019-09-04 18:04:17,549+03 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engineScheduled-Thread-66) [] EVENT_ID:
VM_MEMORY_UNDER_GUARANTEED_VALUE(148), VM HostedEngine on host Deimos was
guaranteed 5461 MB but currently has 5120 MB

Can you please try also updating the guaranteed memory value in the UI?
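To double-check what the engine currently has stored for that VM, a query against the engine database can help. This is a sketch: the column names below match the 4.x schema as far as I know, but verify them against your version, and depending on the engine release you may need to wrap psql in an `scl enable` call:

```
# On the engine VM; vm_static holds the configured memory values
sudo -u postgres psql -d engine -c \
  "select vm_name, mem_size_mb, min_allocated_mem from vm_static where vm_name='HostedEngine';"
```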



> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/WH253DT22NZL43KJAGMTDOUUCOGSSXEG/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/N42BQZMDOHIR425NNWNYVVQLPJIEMS5Y/


[ovirt-users] Re: Cannot Increase Hosted Engine VM Memory

2019-09-04 Thread Simone Tiraboschi
On Wed, Sep 4, 2019 at 4:30 PM Dionysis K  wrote:

> Hello, I am having the same problem: i cannot update the engine memory
> configuration
>
> i even updated the engine from 4.2.8 to 4.3.5 and the problem is still
> persisting!
>
> how can we find out what is going on ?
>

Hi,
can you please attach engine.log ?


> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NDG22BXGTD2EPHVLTH7XNBBSTYRB7IML/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z4VGETHV2ZZKUGSPOYBBHXZURUGLOUHK/


[ovirt-users] Re: hosted engine setup, iSCSI no LUNs shown

2019-08-21 Thread Simone Tiraboschi
Hi,
can you please share /var/log/vdsm/vdsm.log from the deployment time?

On Tue, Aug 20, 2019 at 6:35 PM  wrote:

> I'm trying to setup the hosted engine on top of iSCSI storage. It
> successfully logs in and gets the target, however the process errors out
> claiming there are no LUNs. But if you look on the host, the disks were
> added to the system.
>
> [ INFO  ] TASK [ovirt.hosted_engine_setup : iSCSI discover with REST API]
> [ INFO  ] ok: [localhost]
>   The following targets have been found:
> [1] iqn.2001-04.com.billdurr.durrnet.vm-int:vmdata
> TPGT: 1, portals:
> 192.168.47.10:3260
>
>   Please select a target (1) [1]: 1
> [ INFO  ] Getting iSCSI LUNs list
> ...
> [ INFO  ] TASK [ovirt.hosted_engine_setup : Get iSCSI LUNs]
> [ INFO  ] ok: [localhost]
> [ ERROR ] Cannot find any LUN on the selected target
> [ ERROR ] Unable to get target list
>
> Here's what the config in targetcli looks like
> [root@vm1 ~]# targetcli ls
> o- / .
> [...]
>   o- backstores ..
> [...]
>   | o- block .. [Storage
> Objects: 2]
>   | | o- p_iscsi_lun1 .. [/dev/drbd0 (62.0GiB) write-thru
> activated]
>   | | | o- alua ... [ALUA
> Groups: 1]
>   | | |   o- default_tg_pt_gp ... [ALUA state:
> Active/optimized]
>   | | o- p_iscsi_lun2 . [/dev/drbd1 (310.6GiB) write-thru
> activated]
>   | |   o- alua ... [ALUA
> Groups: 1]
>   | | o- default_tg_pt_gp ... [ALUA state:
> Active/optimized]
>   | o- fileio . [Storage
> Objects: 0]
>   | o- pscsi .. [Storage
> Objects: 0]
>   | o- ramdisk  [Storage
> Objects: 0]
>   o- iscsi 
> [Targets: 1]
>   | o- iqn.2001-04.com.billdurr.durrnet.vm-int:vmdata 
> [TPGs: 1]
>   |   o- tpg1 .. [gen-acls,
> no-auth]
>   | o- acls ..
> [ACLs: 0]
>   | o- luns ..
> [LUNs: 2]
>   | | o- lun0 . [block/p_iscsi_lun1 (/dev/drbd0)
> (default_tg_pt_gp)]
>   | | o- lun1 . [block/p_iscsi_lun2 (/dev/drbd1)
> (default_tg_pt_gp)]
>   | o- portals 
> [Portals: 1]
>   |   o- 192.168.47.10:3260
> ... [OK]
>   o- loopback .
> [Targets: 0]
>   o- srpt .
> [Targets: 0]
>
> The two LUNs show up on the host after the hosted engine setup tries to
> enumerate the LUNs for the target
> [root@vm1 ~]# lsscsi
> [0:0:0:0]storage HP   P420i8.32  -
> [0:1:0:0]diskHP   LOGICAL VOLUME   8.32  /dev/sda
> [0:1:0:1]diskHP   LOGICAL VOLUME   8.32  /dev/sdb
> [0:1:0:2]diskHP   LOGICAL VOLUME   8.32  /dev/sdc
> [11:0:0:0]   diskLIO-ORG  p_iscsi_lun1 4.0   /dev/sdd
> [11:0:0:1]   diskLIO-ORG  p_iscsi_lun2 4.0   /dev/sde
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/MGPCIAT7QTH7A7EHIC2RBDTZTH6HB4IH/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/G67SIUEDKDJUR7Z3XW2AM4GFZFQZ4GW4/


[ovirt-users] Re: Hosted Engine on seperate L2 network from nodes?

2019-08-19 Thread Simone Tiraboschi
On Fri, Aug 16, 2019 at 6:12 PM Dan Poltawski 
wrote:

> For some security requirements, I’ve been asked if it’s possible to
> segregate the hosted engine from the physical nodes, with specific
> firewalling for access to do node/ storage operations (I’m using managed
> block storage).
>
>
>
> Is this an approach others use, or is it better practice to just ensure
> the nodes and engine are all sharing the same network?
>

The hosted-engine VM needs to communicate with the hosts for management
operations; this happens over a logical network called the management
network.
The hosts have to communicate with the storage; you can create an
additional logical network with different addressing for that. The engine
doesn't need any direct access to the storage, which is always mediated by
the hosts.

For simplicity, if feasible in your environment, I'd suggest creating an
additional storage-dedicated logical network on your hosts instead of
playing with manually injected firewall rules over the management one.



>
>
> Thanks,
>
>
>
> dan
> --
>
> The Networking People (TNP) Limited. Registered office: Network House,
> Caton Rd, Lancaster, LA1 3PE. Registered in England & Wales with company
> number: 07667393
>
> This email and any files transmitted with it are confidential and intended
> solely for the use of the individual or entity to whom they are addressed.
> If you have received this email in error please notify the system manager.
> This message contains confidential information and is intended only for the
> individual named. If you are not the named addressee you should not
> disseminate, distribute or copy this e-mail. Please notify the sender
> immediately by e-mail if you have received this e-mail by mistake and
> delete this e-mail from your system. If you are not the intended recipient
> you are notified that disclosing, copying, distributing or taking any
> action in reliance on the contents of this information is strictly
> prohibited.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/C4SAO7NAQGEA326YL4FRQQJHRO2NMFTK/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QKDP3JOG27CFKETBZUTDARXQLFVUBMTG/


[ovirt-users] Re: hosted engine installation / multipath / iscsi

2019-08-07 Thread Simone Tiraboschi
On Wed, Jul 31, 2019 at 4:41 PM Michael Frank  wrote:

> Hi,
>
>  for several days i have been trying to install the hosted engine onto an
> iscsi multipath device, without success.
> Some information on the environment:
> - Version 4.3.3
> - using two 10gbe interfaces as single bond for the ovirtmgmt interface
> - using two 10gbe storage interfaces on each hypervisor for iscsi storage
> -- each storage interface is configured without any bonding, etc
> -- each storage interface lives in a separate vlan were also the iscsi
> Portals/target are available, the iscsi portals have 4x10ge interfaces
> each, (2 in vlan xx and 2 interfaces in vlan yy )
> -- so; each storage interface is connected to two iscsi Portals via 4
> interfaces
>
> The documentation here is for me unclear:
>
> https://ovirt.org/documentation/self-hosted/chap-Deploying_Self-Hosted_Engine.html
> >Note: To specify more than one iSCSI target, you must enable multipathing
> before deploying the self-hosted engine. There is also a Multipath >Helper
> tool that generates a script to install and configure multipath with
> different options.
>
> This indicates to me that it should be possible to install the HE
> directly on the /dev/mapper/mpath device which is availibale when I have
> prepared the host accordingly before installing the HE (log in to multiple
> iscsi targets, create a proper multipath.conf, etc.) - right ?
>
> I login to the two iscsi portals and get in sum 8 pathes, 4 from each
> interface and iscsi target.
> Basically I have then the mpath device on the hypervisor available and i
> can  mount the mpath device and put data on it.
> In the cockpit interface the mount can also be activated and is recognized
> correctly.
> multipathd -ll and lsblk looks good. Everything seems to be fine.
>
> But when I run the "hosted-engine" --deploy, the last option while running
> the assistant is to enter the iscsi data.
> So, basically i just want to define my mpath device - when entering the
> data (ip, port)for the iscsi Portal I can see the 4 pathes of the single
> hosted Engine target,
> and when i choose the path where the "lun" is finally available it fails.
> I think in general this option is not that what i want to have
> here for using the multipath device.
>
> I'm lost - what is the usual way to install the HE on a multipath device ?
>

Sorry for the delay, I missed this thread.

From ovirt-hosted-engine-setup you can configure the iSCSI storage domain,
exposed by a single iSCSI target, to be accessed over multiple portals in a
single portal group.
Once you have a running engine you can then complete the configuration by
creating an iSCSI bond from there.
Follow this guide for that:
https://ovirt.org/documentation/admin-guide/chap-Storage.html#configuring-iscsi-multipathing

Let's now focus on the first part.
On your SAN you should create more than one iSCSI portal.
Then you should group them in a single iSCSI target portal group.
Your iSCSI target should be configured to be exposed over the whole iSCSI
target portal group.
Then you have to create a LUN for the hosted-engine storage domain and
associate it with that iSCSI target.

Now, in hosted-engine-setup (via CLI or via Cockpit) you should enter the
IP address of one of the iSCSI portals (and optionally a username/password
pair for the iSCSI discovery).
The iSCSI discovery process will report back your iSCSI targets and the
addresses of the other iSCSI portals in the same iSCSI target portal group;
each of them will be a path, so if you have more than one iSCSI portal in a
single iSCSI target portal group you will have multipath.

The next step is choosing one of the listed iSCSI targets; then
hosted-engine-setup will list the LUNs there and you will be able to choose
one of them to be used for the hosted-engine storage domain.
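The portal group configuration can be verified before running the deployment: a manual discovery against a single portal should already report every portal in the group. A sketch with a placeholder portal address:

```
# Each returned line has the form "portal:port,tpgt target-iqn"; multiple
# lines for the same IQN with different portal addresses indicate the
# portal group is set up correctly (each will become a path).
iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260
```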



>
> Do i have to change the configuration of the storage interfaces or the
> iscsi network design?
> Did I missed something obvious ?
> Can I put in my multipath data into the answerfile to get rid of the last
> step of the assistant ?
> Can I use the related ansible role for specify the Mpath device which is
> available when activating the multipath service ?
>
> Is it not possible in general ?? :
> https://bugzilla.redhat.com/show_bug.cgi?id=1193961
>
> Sorry in advance for the long mail! ^^
>
> br,
> michael
>
> Sent from a mobile device
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QRL2FYUD66C5J2RKC4UJZP4OQJWXWSB5/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code 

[ovirt-users] Re: Moving Hosted Engine Storage

2019-07-16 Thread Simone Tiraboschi
On Tue, Jul 16, 2019 at 11:43 AM Dan Poltawski 
wrote:

> Hello,
>
> I've read various posts[1] on this list, but I must confess that I am
> still not entirely clear on the process for moving a hosted engine to a
> new storage domain (in my case I want to move from an existing NFS
> server to a new iSCSI target). Some of what i've read make me slightly
> concerned it's a risky operation and in my situation I am just
> prototyping and retaining the existing engine will be saving some time
> rather than mission critical.
>
> Is anyone able to outline the steps to me?
>

The flow is:
1. set hosted-engine global maintenance mode
2. choose one of the existing hosts and set it to maintenance mode from the
engine
3. take a backup of your current engine with engine-backup
4. copy the backup file to your host
5. on the host in maintenance mode run:
hosted-engine --deploy --restore-from-file=backup.tar.gz
6. at the end you will have a new engine VM created from your backup file;
the previous hosted-engine storage domain will still be visible (although
renamed) so that you can migrate away any additional VMs created there by
mistake, and then remove it
7. one host at a time, set the other hosted-engine hosts to maintenance
mode and reinstall them, making sure to choose to deploy hosted-engine

That's all.
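Condensed into commands, the flow above looks roughly like this (file names, paths, and the host name are placeholders; the UI steps for host maintenance and reinstall are not shown):

```
# 1. enter global maintenance (run on one of the hosted-engine hosts)
hosted-engine --set-maintenance --mode=global
# 3. on the current engine VM, take a backup
engine-backup --mode=backup --file=engine-backup.tar.gz --log=backup.log
# 4. copy the backup to the host that was set to maintenance in the UI
scp engine-backup.tar.gz host1:/root/
# 5. on that host, redeploy restoring from the backup
hosted-engine --deploy --restore-from-file=/root/engine-backup.tar.gz
```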


>
> thanks,
>
> Dan
>
>
>
> [1] https://lists.ovirt.org/pipermail/users/2017-June/082466.html
>
> 
>
> The Networking People (TNP) Limited. Registered office: Network House,
> Caton Rd, Lancaster, LA1 3PE. Registered in England & Wales with company
> number: 07667393
>
> This email and any files transmitted with it are confidential and intended
> solely for the use of the individual or entity to whom they are addressed.
> If you have received this email in error please notify the system manager.
> This message contains confidential information and is intended only for the
> individual named. If you are not the named addressee you should not
> disseminate, distribute or copy this e-mail. Please notify the sender
> immediately by e-mail if you have received this e-mail by mistake and
> delete this e-mail from your system. If you are not the intended recipient
> you are notified that disclosing, copying, distributing or taking any
> action in reliance on the contents of this information is strictly
> prohibited.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/BDUFMYADHI4KCXMXO72VZKPW6D5K7ZOR/
>


-- 

Simone Tiraboschi

He / Him / His

Principal Software Engineer

Red Hat <https://www.redhat.com/>

stira...@redhat.com
@redhatjobs <https://twitter.com/redhatjobs>   redhatjobs
<https://www.facebook.com/redhatjobs> @redhatjobs
<https://instagram.com/redhatjobs>
<https://red.ht/sig>
<https://redhat.com/summit>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AQHFEW6XT7IHXA2BWCHOIJ6P4HA25IBK/


[ovirt-users] Re: HE deployment failing

2019-07-05 Thread Simone Tiraboschi
On Fri, Jul 5, 2019 at 4:12 PM Parth Dhanjal  wrote:

> Hey!
>
> I'm trying to deploy a 3 node cluster with gluster storage.
> After the gluster deployment is completed successfully, the creation of
> storage domain fails during HE deployment giving the error:
>
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Error:
> the target storage domain contains only 46.0GiB of available space while a
> minimum of 61.0GiB is required If you wish to use the current target
> storage domain by extending it, make sure it contains nothing before adding
> it."}
> I have tried to increase the disk size(provided in storage tab) to 90GiB.
> But the deployment still fails. A 50GiB storage domain is created by
> default even if some other size is provided.
>

The issue is with the size of the gluster volume, not the size of the
hosted-engine VM disk.
Please try extending the gluster volume.



>
> Has anyone faced a similar issue?
>
> Regards
> Parth Dhanjal




[ovirt-users] Re: 4.3 won't upgrade engine in standalone KVM VM: low_custom_compatibility_version 'agent03'

2019-07-05 Thread Simone Tiraboschi
On Fri, Jul 5, 2019 at 3:15 PM Simone Tiraboschi 
wrote:

>
>
> On Fri, Jul 5, 2019 at 3:06 PM Richard Chan 
> wrote:
>
>> In libvirt/virsh  the VM is called "ovirt7"; here is the domain
>> definition. This VM has been upgraded from
>> 3.6->4.0->4.1->4.2. This is the first time engine-setup has complained
>> about the "level" of the VM.
>>
>> libvirt domain definition:
>>
>
> oVirt VMs contain something like:
>
>   <metadata xmlns:ovirt-tune="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
>     <ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
>       <ovirt-vm:clusterVersion>4.2</ovirt-vm:clusterVersion>
>
> in the XML for libvirt so you are checking the wrong VM.
>

Please try executing this on your engine VM:

sudo -u postgres scl enable rh-postgresql95 -- psql -d engine -c "select *
from vms where vm_name='agent03'"


>
>
>>
>> [libvirt domain XML for "ovirt7" elided: the list archive stripped the
>> markup beyond recovery. Recoverable details: uuid
>> cdb0f3fd-42c7-4a16-bcd1-5d65f19aca19, 8388608 KiB of memory, 2 vCPUs,
>> OVMF (UEFI) firmware, Haswell-noTSX CPU model.]
>>
>> On Fri, Jul 5, 2019 at 8:53 PM Simone Tiraboschi 
>> wrote:
>>
>>>
>>>
>>> On Fri, Jul 5, 2019 at 2:48 PM Richard Chan <
>>> rich...@treeboxsolutions.com> wrote:
>>>
>>>> Hi,
>>>>
>>>>  I am running 4.2 engine in a standalone libvirt KVM VM (running on a
>>>> hypervisor that is not part of the oVirt infrastructure); i.e., I am not
>>>> using hosted engine but still running engine in a VM.
>>>>
>>>> Now 4.3 won't upgrade inside this libvirt KVM VM:
>>>>
>>>> Failed to execute stage 'Setup validation': Cannot upgrade the Engine
>>>> due

[ovirt-users] Re: 4.3 won't upgrade engine in standalone KVM VM: low_custom_compatibility_version 'agent03'

2019-07-05 Thread Simone Tiraboschi
On Fri, Jul 5, 2019 at 3:06 PM Richard Chan 
wrote:

> In libvirt/virsh  the VM is called "ovirt7"; here is the domain
> definition. This VM has been upgraded from
> 3.6->4.0->4.1->4.2. This is the first time engine-setup has complained
> about the "level" of the VM.
>
> libvirt domain definition:
>

oVirt VMs contain something like:

  <metadata xmlns:ovirt-tune="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
    <ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
      <ovirt-vm:clusterVersion>4.2</ovirt-vm:clusterVersion>

in the XML for libvirt so you are checking the wrong VM.


>
> [libvirt domain XML for "ovirt7" elided: the list archive stripped the
> markup beyond recovery. Recoverable details: uuid
> cdb0f3fd-42c7-4a16-bcd1-5d65f19aca19, 8388608 KiB of memory, 2 vCPUs,
> OVMF (UEFI) firmware, Haswell-noTSX CPU model.]
>
> On Fri, Jul 5, 2019 at 8:53 PM Simone Tiraboschi 
> wrote:
>
>>
>>
>> On Fri, Jul 5, 2019 at 2:48 PM Richard Chan 
>> wrote:
>>
>>> Hi,
>>>
>>>  I am running 4.2 engine in a standalone libvirt KVM VM (running on a
>>> hypervisor that is not part of the oVirt infrastructure); i.e., I am not
>>> using hosted engine but still running engine in a VM.
>>>
>>> Now 4.3 won't upgrade inside this libvirt KVM VM:
>>>
>>> Failed to execute stage 'Setup validation': Cannot upgrade the Engine
>>> due to low custom_compatibility_version for virtual
>>>  machines: ['agent03']. Please edit this virtual machines, in edit VM
>>> dialog go to System->Advanced Parameters -> Custom Compatibility Version
>>> and either reset to empty (cluster default) or set a value supported by the
>>> new installation: 4.1, 4.2, 4.3.
>>>
>>> engine-setup has probably detected that it is running inside a VM:
>>> 1. Where did it get 'agent03' string from
>>> 2. Any suggestions how to fool engine-setup to think this is 4.3
>>>
>>
>> The engine is not going to check anything on hosts that are unknown to
>> it.
>> I think you simply have a VM called 'agent03' in the engine.
>> Please check it.
>>
>>
>>>
>>> Here is the dmidecode of the standalone engine VM:
>>>
>>> # dmidecode 3.1
>>> Getting SMBIOS data from sysfs.
>>> SMBIOS 2.8 present.
>>> 11 structures occupying 591 bytes.
>>> Table at 0xBFED7000.
>>>
>>> Handle 0x0100, DMI type 1, 27 bytes
>>> System Information
>&

[ovirt-users] Re: 4.3 won't upgrade engine in standalone KVM VM: low_custom_compatibility_version 'agent03'

2019-07-05 Thread Simone Tiraboschi
> System Boot Information
> Status: No errors detected
>
> Handle 0x, DMI type 0, 24 bytes
> BIOS Information
> Vendor: EFI Development Kit II / OVMF
> Version: 0.0.0
> Release Date: 02/06/2015
> Address: 0xE8000
> Runtime Size: 96 kB
> ROM Size: 64 kB
> Characteristics:
> BIOS characteristics not supported
> Targeted content distribution is supported
> UEFI is supported
> System is a virtual machine
> BIOS Revision: 0.0
>
> Handle 0xFEFF, DMI type 127, 4 bytes
> End Of Table
>
> Cheers
>
> --
> Richard Chan
>




[ovirt-users] Re: Hosted Engine Deploy - Error at the end... [NFS ?]

2019-07-03 Thread Simone Tiraboschi
On Wed, Jul 3, 2019 at 11:52 AM  wrote:

> Hi, thanks for that
>
> All looks fine.
>
> What else can be an issue here ? Do I have another way for installing the
> Mgmt Engine ? [or other solution for my NFS issue during H"E Deployment]
>

Can you please try mounting that NFS share directly on your host and ensure
that you can write there as the vdsm user?
You can try something like:
  sudo -u vdsm mkdir test
but after that please clean up the directory.
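For reference, the same write test can be dry-run locally with a temporary directory standing in for the real mount point (this sketch cannot assume the share is mounted; on the actual host you would mount the share first and prefix the mkdir with `sudo -u vdsm`, as above):

```shell
# Dry-run of the write test; mktemp provides a stand-in for the NFS mount
# point. On the real host, mount the share and use: sudo -u vdsm mkdir ...
mnt=$(mktemp -d)
mkdir "$mnt/test" && echo "writable"
rmdir "$mnt/test"   # clean up the test directory afterwards
rmdir "$mnt"
```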


>
> Thanks Team




[ovirt-users] Re: Hosted Engine Deploy - Error at the end... [NFS ?]

2019-07-03 Thread Simone Tiraboschi
On Wed, Jul 3, 2019 at 11:06 AM  wrote:

> Hi,
> While Trying to Deploy Hosted Engine, I'm stuck on Stage 4. Here is the
> error :
>
> [ INFO ] TASK [ovirt.hosted_engine_setup : Fetch Datacenter name]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [ovirt.hosted_engine_setup : Add NFS storage domain]
> [ ERROR ] Verify permission settings on the specified storage path.]".
> HTTP response code is 400.
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault
> reason is \"Operation Failed\". Fault detail is \"[Permission settings on
> the specified path do not allow access to the storage.\nVerify permission
> settings on the specified storage path.]\". HTTP response code is 400."}
>
> But, it's very strange, because, when I'm trying to mount this NFS share
> manually, It's working
> So what am I doing wrong here ?
>

Please check this:
https://www.ovirt.org/develop/troubleshooting-nfs-storage-issues.html
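A common culprit covered by that page is export ownership: oVirt hosts access the share as user vdsm (uid 36) and group kvm (gid 36), so the exported directory must be owned 36:36 and writable by that user. A minimal sketch of preparing the export on the NFS server (the real path is an assumption, and the chown needs root, so it is left commented here):

```shell
# Sketch: prepare an export directory the way oVirt expects. Assumption: run
# on the NFS server; vdsm is uid 36 and kvm is gid 36 on oVirt hosts.
export_dir=$(mktemp -d)          # stand-in for the real export path
# chown 36:36 "$export_dir"      # required on the real server (needs root)
chmod 0755 "$export_dir"
stat -c '%a' "$export_dir"       # prints 755
rmdir "$export_dir"
```

With anonuid/anongid squashing or wrong ownership, a manual root mount can still succeed while vdsm is denied write access, which matches the symptom described above.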


>
> Here is the "Storage Connection" - 8.45.119.16:/ovirt_engine
> Mount option is blanked
> NFS Version is "v3"
>
> Thanks in advance !




[ovirt-users] Re: Failed on HE-Storage migration

2019-07-02 Thread Simone Tiraboschi
on old_engine and
> > restore_file.
> >
> > Any ideas?
> >
> > regards,
> > Chris
> >




[ovirt-users] Re: Failed on HE-Storage migration

2019-07-02 Thread Simone Tiraboschi
On Mon, Jul 1, 2019 at 6:06 PM Strahil Nikolov 
wrote:

> Do you really use Cinderlib ?
>
> If so , I think that someone of the DEv should take a look at:
> [ ERROR ] Failed to
> execute stage 'Environment setup': Cannot connect to ovirt cinderlib
> database using existing credentials: ovirt_cinderlib@localhost:5432
>

Yes, there is an open bug tracked at
https://bugzilla.redhat.com/show_bug.cgi?id=1707225 about handling
cinderlib integration in the backup.
It's targeted for 4.3.6.

Honestly, I don't see any workaround other than manually backing up and
restoring the cinderlib DB.



>
> Best Regards,
> Strahil Nikolov
>
> В понеделник, 1 юли 2019 г., 9:42:18 ч. Гринуич-4, Christoph Köhler <
> koeh...@luis.uni-hannover.de> написа:
>
>
> Hello,
>
> we tried to migrate the hosted engine to a new storage but we ran into
> an error - in any attempt the same.
>
> What we did is:
>
> ° Version 4.3.4.3-1.el7
> ° in the engine vm: systemctl stop  ovirt-engine
> ° took backup with scope=all
> ° hosted-engine --set-maintenance --mode=global
> ° hosted-engine --vm-shutdown
> ° on a new host (not in the inventory of engineDB): hosted-engine
> ---deploy --restore-from-file=backup-file
>
> It ran fine until:
>
> //
> ERROR ] fatal: [localhost -> newhost]: FAILED! => {"changed": true,
> "cmd": ["engine-setup", "--accept-defaults",
> "--config-append=/root/ovirt-engine-answers", "--offline"], "delta":
> "0:00:00.801074", "end": "2019-06-27 12:13:49.157140", "msg": "non-zero
> return code", "rc": 1, "start": "2019-06-27 12:13:48.356066", "stderr":
> "", "stderr_lines": [], "stdout": "[ INFO  ] Stage: Initializing\n[ INFO
>   ] Stage: Environment setup\n  Configuration files:
> ['/etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf',
> '/etc/ovirt-engine-setup.conf.d/10-packaging.conf',
> '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf',
> '/root/ovirt-engine-answers']\n  Log file:
> /var/log/ovirt-engine/setup/ovirt-engine-setup-20190627121348-48b6rw.log\n
>   Version: otopi-1.8.2 (otopi-1.8.2-1.el7)\n[ ERROR ] Failed to
> execute stage 'Environment setup': Cannot connect to ovirt cinderlib
> database using existing credentials: ovirt_cinderlib@localhost:5432\n[
> INFO  ] Stage: Clean up\n  Log file is located at
> /var/log/ovirt-engine/setup/ovirt-engine-setup-20190627121348-48b6rw.log\n[
>
> INFO  ] Generating answer file
> '/var/lib/ovirt-engine/setup/answers/20190627121349-setup.conf'\n[ INFO
> ] Stage: Pre-termination\n[ INFO  ] Stage: Termination\n[ ERROR ]
> Execution of setup failed", "stdout_lines": ["[ INFO  ] Stage:
> Initializing", "[ INFO  ] Stage: Environment setup", "
> Configuration files:
> ['/etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf',
> '/etc/ovirt-engine-setup.conf.d/10-packaging.conf',
> '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf',
> '/root/ovirt-engine-answers']", "  Log file:
> /var/log/ovirt-engine/setup/ovirt-engine-setup-20190627121348-48b6rw.log",
> "  Version: otopi-1.8.2 (otopi-1.8.2-1.el7)", "[ ERROR ] Failed
> to execute stage 'Environment setup': Cannot connect to ovirt cinderlib
> database using existing credentials: ovirt_cinderlib@localhost:5432", "[
> INFO  ] Stage: Clean up", "  Log file is located at
> /var/log/ovirt-engine/setup/ovirt-engine-setup-20190627121348-48b6rw.log",
> "[ INFO  ] Generating answer file
> '/var/lib/ovirt-engine/setup/answers/20190627121349-setup.conf'", "[
> INFO  ] Stage: Pre-termination", "[ INFO  ] Stage: Termination", "[
> ERROR ] Execution of setup failed"]}
> //
>
> The user and password hashes are okay and the same on old_engine and
> restore_file.
>
> Any ideas?
>
> regards,
> Chris
>

[ovirt-users] Re: oVirt node 4.3.4 stable can't deploy hosted engine to iscsi target all available LUNs used

2019-07-02 Thread Simone Tiraboschi
On Mon, Jul 1, 2019 at 6:56 PM Mitja Pirih  wrote:

> Hi,
>
> I am new to oVirt, coming from XenServer platform, so please be patient
> with me.
>
> After a successful install of latest stable oVirt node
> (ovirt-node-ng-installer-4.3.4-2019061016.el7.iso) I am having troubles
> deploying hosted engine to an iscsi target over Cockpit. I have a couple of
> iscsi targets available on two different storage platforms (Synology DS418,
> Lenovo DE2000H). After retrieving iscsi Target list all LUNs get
> automatically "connected" by "Target list scan" and shown as used on LUNs
> list. I can reproduce this every single time. On storage system I can also
> see, that all LUNs get connected by the initial scan. As a result I an
> unable to continue the deployment as there are no LUNs available.
>
> Is this a normal behavior? What would you suggest me to do?
>

Hi,
please clean up one of those LUNs before trying again.


>
> Thanks.




[ovirt-users] Re: hosted engine not getting up

2019-07-01 Thread Simone Tiraboschi
On Mon, Jul 1, 2019 at 2:33 PM Crazy Ayansh 
wrote:

> PFA.
>

I see that we have a lot of errors trying to update protected devices on
the hosted-engine VM:

2019-07-01 13:43:45,611+05 INFO
 [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-5)
[fe33db64-7c7a-4e5d-bdee-3e0e020df6e5] Lock Acquired to object
'EngineLock:{exclusiveLocks='[HostedEngine=VM_NAME]',
sharedLocks='[edfd300f-1ab5-44a0-ab61-22e18a528fc4=VM]'}'
2019-07-01 13:43:45,618+05 WARN
 [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-5)
[fe33db64-7c7a-4e5d-bdee-3e0e020df6e5] Validation of action 'UpdateVm'
failed for user admin@internal-authz. Reasons:
VAR__ACTION__UPDATE,VAR__TYPE__VM,VM_CANNOT_UPDATE_HOSTED_ENGINE_FIELD
2019-07-01 13:43:45,618+05 INFO
 [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-5)
[fe33db64-7c7a-4e5d-bdee-3e0e020df6e5] Lock freed to object
'EngineLock:{exclusiveLocks='[HostedEngine=VM_NAME]',
sharedLocks='[edfd300f-1ab5-44a0-ab61-22e18a528fc4=VM]'}'
2019-07-01 13:44:34,079+05 INFO
 [org.ovirt.engine.core.bll.SetVmTicketCommand] (default task-5) [1addd960]
Running command: SetVmTicketCommand internal: false. Entities affected :
 ID: edfd300f-1ab5-44a0-ab61-22e18a528fc4 Type: VMAction group
CONNECT_TO_VM with role type USER
2019-07-01 13:44:34,092+05 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand] (default
task-5) [1addd960] START, SetVmTicketVDSCommand(HostName =
iondelsvr49.iontrading.com,
SetVmTicketVDSCommandParameters:{hostId='4b56f069-d9d8-42d9-a0d8-2da529c9e0b7',
vmId='edfd300f-1ab5-44a0-ab61-22e18a528fc4', protocol='VNC',
ticket='8CSP39ykMO4D', validTime='120', userName='admin',
userId='5ab52ac1-01ba-020f-00da-0331',
disconnectAction='LOCK_SCREEN'}), log id: 3a7bb24d

but I don't see any specific error trying to generate the OVF_STORE volume.


>
> On Mon, Jul 1, 2019 at 5:24 PM Simone Tiraboschi 
> wrote:
>
>> The hosted-engine VM configuration is written by the engine on a special
>> volume called OVF_STORE.
>> According to the logs ovirt-ha-agent correctly extracted it:
>>
>> MainThread::INFO::2019-07-01
>> 07:39:06,480::config::416::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
>> Trying to get a fresher copy of vm configuration from the OVF_STORE
>> MainThread::INFO::2019-07-01
>> 07:39:06,481::ovf_store::132::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
>> Extracting Engine VM OVF from the OVF_STORE
>> MainThread::INFO::2019-07-01
>> 07:39:06,489::ovf_store::134::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
>> OVF_STORE volume path:
>> /var/run/vdsm/storage/d1153bec-a29f-4196-bef2-f7c8d88d4e31/4f5084e0-10f4-4532-a747-18d568ef8d40/f02ef729-5d33-4c50-9f3c-29b6cd5a4f81
>>
>> MainThread::INFO::2019-07-01
>> 07:39:06,506::config::435::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
>> Found an OVF for HE VM, trying to convert
>> MainThread::INFO::2019-07-01
>> 07:39:06,509::config::440::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
>> Got vm.conf from OVF_STORE
>>
>> Can you please check engine.log on your engine VM for errors there
>> generating the OVF_STORE?
>>
>> On Mon, Jul 1, 2019 at 1:46 PM Crazy Ayansh 
>> wrote:
>>
>>> Hi Simone,
>>>
>>> Few minutes ago i have done below changes.
>>>
>>> I edited /var/run/ovirt-hosted-engine-ha/vm.conf where memory was very
>>> low so i increased it upto 16 gb and hence hosted engine is up but i am not
>>> sure why it was in that state.
>>> I have attached agen.log of the host server (hosted engine is also
>>> running on this server)
>>>
>>> Thanks
>>> Shashank
>>>
>>>
>>>
>>> On Mon, Jul 1, 2019 at 4:49 PM Simone Tiraboschi 
>>> wrote:
>>>
>>>> Can you please attach agent.log from one of your hosts?
>>>>
>>>> On Mon, Jul 1, 2019 at 9:12 AM Crazy Ayansh <
>>>> shashank123rast...@gmail.com> wrote:
>>>>
>>>>> Hi Simon, It seems to be a memory issue but why it's not showing
>>>>> correct memory there i mean in hosted engine i have given below memory :
>>>>> [image: image.png]
>>>>>
>>>>> whereas it is showing on virsh -r *** command is bit lesser why ?
>>>>> [image: image.png]
>>>>>
>>>>> why both are different ?
>>>>>
>>>>> On Mon, Jul 1, 2019 at 12:31 PM Crazy Ayansh <
>>>>> shashank123rast...@gmail.com> 

[ovirt-users] Re: hosted engine not getting up

2019-07-01 Thread Simone Tiraboschi
The hosted-engine VM configuration is written by the engine on a special
volume called OVF_STORE.
According to the logs ovirt-ha-agent correctly extracted it:

MainThread::INFO::2019-07-01
07:39:06,480::config::416::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
Trying to get a fresher copy of vm configuration from the OVF_STORE
MainThread::INFO::2019-07-01
07:39:06,481::ovf_store::132::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
Extracting Engine VM OVF from the OVF_STORE
MainThread::INFO::2019-07-01
07:39:06,489::ovf_store::134::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
OVF_STORE volume path:
/var/run/vdsm/storage/d1153bec-a29f-4196-bef2-f7c8d88d4e31/4f5084e0-10f4-4532-a747-18d568ef8d40/f02ef729-5d33-4c50-9f3c-29b6cd5a4f81

MainThread::INFO::2019-07-01
07:39:06,506::config::435::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
Found an OVF for HE VM, trying to convert
MainThread::INFO::2019-07-01
07:39:06,509::config::440::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
Got vm.conf from OVF_STORE

Can you please check engine.log on your engine VM for errors there
generating the OVF_STORE?

On Mon, Jul 1, 2019 at 1:46 PM Crazy Ayansh 
wrote:

> Hi Simone,
>
> Few minutes ago i have done below changes.
>
> I edited /var/run/ovirt-hosted-engine-ha/vm.conf where memory was very low
> so i increased it upto 16 gb and hence hosted engine is up but i am not
> sure why it was in that state.
> I have attached agen.log of the host server (hosted engine is also running
> on this server)
>
> Thanks
> Shashank
>
>
>
> On Mon, Jul 1, 2019 at 4:49 PM Simone Tiraboschi 
> wrote:
>
>> Can you please attach agent.log from one of your hosts?
>>
>> On Mon, Jul 1, 2019 at 9:12 AM Crazy Ayansh 
>> wrote:
>>
>>> Hi Simon, It seems to be a memory issue but why it's not showing correct
>>> memory there i mean in hosted engine i have given below memory :
>>> [image: image.png]
>>>
>>> whereas it is showing on virsh -r *** command is bit lesser why ?
>>> [image: image.png]
>>>
>>> why both are different ?
>>>
>>> On Mon, Jul 1, 2019 at 12:31 PM Crazy Ayansh <
>>> shashank123rast...@gmail.com> wrote:
>>>
>>>> Hi Simon, It seems to be a memory issue but why it's not showing
>>>> correct memory there i mean in hosted engine i have given below memory :
>>>> [image: image.png]
>>>>
>>>> whereas it is showing on virsh -r *** command is bit lesser why ?
>>>> [image: image.png]
>>>>
>>>> why both are different ?
>>>>
>>>> Thanks
>>>>
>>>>
>>>>
>>>> On Fri, Jun 28, 2019 at 7:11 PM Simone Tiraboschi 
>>>> wrote:
>>>>
>>>>> Can you please check how much memory it got in the output of virsh -r
>>>>> dumpxml HostedEngine ?
>>>>>
>>>>>
>>>>> On Fri, Jun 28, 2019 at 3:27 PM Crazy Ayansh <
>>>>> shashank123rast...@gmail.com> wrote:
>>>>>
>>>>>> Hi Team,
>>>>>>
>>>>>> Today i rebooted my hosted engine and found it was not getting up.
>>>>>> After connecting through remote viewer i found the error "error:cannot
>>>>>> allocate kernel buffer" and i am not able to start hosted engine.any
>>>>>> suggestions ?

[ovirt-users] Re: hosted engine not getting up

2019-07-01 Thread Simone Tiraboschi
Can you please attach agent.log from one of your hosts?

On Mon, Jul 1, 2019 at 9:12 AM Crazy Ayansh 
wrote:

> Hi Simon, It seems to be a memory issue but why it's not showing correct
> memory there i mean in hosted engine i have given below memory :
> [image: image.png]
>
> whereas it is showing on virsh -r *** command is bit lesser why ?
> [image: image.png]
>
> why both are different ?
>
> On Mon, Jul 1, 2019 at 12:31 PM Crazy Ayansh 
> wrote:
>
>> Hi Simon, It seems to be a memory issue but why it's not showing correct
>> memory there i mean in hosted engine i have given below memory :
>> [image: image.png]
>>
>> whereas it is showing on virsh -r *** command is bit lesser why ?
>> [image: image.png]
>>
>> why both are different ?
>>
>> Thanks
>>
>>
>>
>> On Fri, Jun 28, 2019 at 7:11 PM Simone Tiraboschi 
>> wrote:
>>
>>> Can you please check how much memory it got in the output of virsh -r
>>> dumpxml HostedEngine ?
>>>
>>>
>>> On Fri, Jun 28, 2019 at 3:27 PM Crazy Ayansh <
>>> shashank123rast...@gmail.com> wrote:
>>>
>>>> Hi Team,
>>>>
>>>> Today i rebooted my hosted engine and found it was not getting up.
>>>> After connecting through remote viewer i found the error "error:cannot
>>>> allocate kernel buffer" and i am not able to start hosted engine.any
>>>> suggestions ?

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/L55IIMZ6KMQVTHWOEJ4UNZ6OVBZ2OY5G/


[ovirt-users] Re: ovirt-engine

2019-06-28 Thread Simone Tiraboschi
On Fri, Jun 28, 2019 at 3:02 PM  wrote:

> Hi, I am installing an oVirt 4.2 self-hosted engine on Gluster
> hyperconverged (3 CentOS nodes).
> When, after the Gluster configuration, I start the deploy, the
> hosted-engine tool returns:
>
>
> [ ERROR ] ERROR! 'delegate_to' is not a valid attribute for a TaskInclude
>
> [ ERROR ]
>
> [ ERROR ] The error appears to be in
> '/usr/share/ovirt-hosted-engine-setup/ansible/bootstrap_local_vm.yml': line
> 308, column 11, but may
>
> [ ERROR ] be elsewhere in the file depending on the exact syntax problem.
>
> [ ERROR ]
>
> [ ERROR ] The offending line appears to be:
>
> [ ERROR ]
>
> [ ERROR ] LOCAL_VM_DIR={{
> hostvars['localhost']['LOCAL_VM_DIR'] }}
>
> [ ERROR ] - name: Clean bootstrap VM
>
> [ ERROR ]   ^ here
>
> [ ERROR ] Failed to execute stage 'Closing up': Failed executing
> ansible-playbook
>
>
> Does anyone have the same issue?
>

This happens when trying to deploy oVirt 4.2 with Ansible 2.8.
We fixed it in oVirt 4.3, which is the stable release (4.3.4 as of today),
but never backported the fix to the older oVirt 4.2.
I'd strongly suggest starting with oVirt 4.3.z; if you really need oVirt
4.2, you can try downgrading Ansible to 2.7.
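As a quick sanity check before a 4.2 deployment, the installed Ansible version can be tested up front. A minimal sketch (the function name and the commented downgrade command are illustrative; the exact package version depends on your repositories):

```shell
# Minimal sketch: flag Ansible 2.8.x, which breaks the oVirt 4.2
# hosted-engine playbooks ("'delegate_to' is not a valid attribute").
check_ansible_version() {
  case "$1" in
    2.8*) echo "downgrade to 2.7" ;;
    *)    echo "ok" ;;
  esac
}

# On a deployment host you would feed it the installed version:
#   installed="$(ansible --version | head -n1 | awk '{print $2}')"
#   check_ansible_version "$installed"
# A possible downgrade command (adjust to your repositories):
#   yum downgrade 'ansible-2.7*'
```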


> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/L75W3SWFMSTOLKYNIWAJNTO3AUV7YYS3/
>


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AZPD2H5HVULXQXIYXNXGZT32KJQ7FUR4/


[ovirt-users] Re: hosted engine not getting up

2019-06-28 Thread Simone Tiraboschi
Can you please check how much memory it got in the output of virsh -r
dumpxml HostedEngine ?
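In libvirt domain XML, `<memory>` is the maximum allocation and `<currentMemory>` is what the guest currently has (ballooning can make it smaller), which is typically why the two tools show different numbers. A small sketch of pulling those fields out of a dump; the sample XML below stands in for real `virsh -r dumpxml HostedEngine` output and its values are made up:

```shell
# Sketch: extract the memory elements from a libvirt domain XML dump.
extract_memory() {
  grep -E '<(memory|currentMemory)' | sed 's/^[[:space:]]*//'
}

# Sample XML standing in for real virsh output (values are made up):
cat <<'EOF' | extract_memory
<domain type='kvm'>
  <name>HostedEngine</name>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
</domain>
EOF
```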


On Fri, Jun 28, 2019 at 3:27 PM Crazy Ayansh 
wrote:

> Hi Team,
>
> Today I rebooted my hosted engine and found it was not coming up. After
> connecting through the remote viewer I found the error "error: cannot allocate
> kernel buffer" and I am not able to start the hosted engine. Any suggestions?
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/MTDRNK2DZVW4UNTRRQDLFKNX3ZLDGTPI/
>


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5AEEXJFNL7HVQPGUP5NUNWZSK2TAM3VZ/


[ovirt-users] Re: Ovirt Hyperconverged Cluster

2019-06-26 Thread Simone Tiraboschi
On Wed, Jun 26, 2019 at 10:58 AM Simone Tiraboschi 
wrote:

> Your issue is here:
> 2019-06-20 11:25:53,200+02 WARN
>  [org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand]
> (EE-ManagedThreadFactory-engine-Thread-1) [2598128e] Validation of action
> 'HostSetupNetworks' failed for user admin@internal-authz. Reasons:
> VAR__ACTION__SETUP,VAR__TYPE__NETWORKS,INVALID_BOND_MODE_FOR_BOND_WITH_VM_NETWORK,$BondName
> bond0,$networkName ovirtmgmt
>
> Please use a valid bond mode for the bond selected on the management
> network (I'll try to understand why the setup tool didn't detect it
> before).
>

Ok, the issue is here:
2019-06-17 13:50:04,014+0200 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:204 DIALOG:SEND Please indicate a nic to
set ovirtmgmt bridge on: (team0, team0.13, team0.19) [team0.13]:
2019-06-17 13:50:14,978+0200 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:204 DIALOG:RECEIVEteam0.19

oVirt does not support teamed devices at all, only bonds; please see:
https://www.ovirt.org/documentation/admin-guide/chap-Logical_Networks.html#bonding-modes

Unfortunately, the Ansible facts module (see
https://github.com/ansible/ansible/issues/43129) also fails to discriminate a
teamed interface from a plain one, and so you were able to select it.
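One way to tell a team device from a bond or plain NIC on a host is the DEVTYPE field in the interface's sysfs uevent file; this is a hedged sketch (the sysfs root is a parameter so the logic can be exercised against a fake tree, and the DEVTYPE convention is an assumption worth verifying on your distribution):

```shell
# Sketch: read DEVTYPE from /sys/class/net/<dev>/uevent.
# Team devices are expected to report DEVTYPE=team, bonds DEVTYPE=bond;
# plain NICs usually have no DEVTYPE line at all.
net_devtype() {
  local dev="$1" sysroot="${2:-/sys/class/net}"
  awk -F= '$1 == "DEVTYPE" { print $2 }' "$sysroot/$dev/uevent" 2>/dev/null
}

# Usage on a real host (illustrative):
#   [ "$(net_devtype team0)" = "team" ] && echo "team0 is a team device: not supported by oVirt"
```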


>
> On Wed, Jun 26, 2019 at 10:46 AM  wrote:
>
>> Yes, I am using the same two interfaces in a bond configuration, with one
>> VLAN for the storage and another one for the mgmt.
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/56UBS4RALC2QQ6SVYSEKLSYV5NOFN5XO/
>>
>
>


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MDRZWDUVFRLCMKX2CZU7RW3CBFUAE3JQ/


[ovirt-users] Re: Ovirt Hyperconverged Cluster

2019-06-26 Thread Simone Tiraboschi
Your issue is here:
2019-06-20 11:25:53,200+02 WARN
 [org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand]
(EE-ManagedThreadFactory-engine-Thread-1) [2598128e] Validation of action
'HostSetupNetworks' failed for user admin@internal-authz. Reasons:
VAR__ACTION__SETUP,VAR__TYPE__NETWORKS,INVALID_BOND_MODE_FOR_BOND_WITH_VM_NETWORK,$BondName
bond0,$networkName ovirtmgmt

Please use a valid bond mode for the bond selected on the management
network (I'll try to understand why the setup tool didn't detect it
before).

On Wed, Jun 26, 2019 at 10:46 AM  wrote:

> Yes, I am using the same two interfaces in a bond configuration, with one
> VLAN for the storage and another one for the mgmt.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/56UBS4RALC2QQ6SVYSEKLSYV5NOFN5XO/
>


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MWQI2YNGLGPMGOM7ULGOH362KR63NFMP/


[ovirt-users] Re: 4.3 live migration creates wrong image permissions.

2019-06-13 Thread Simone Tiraboschi
On Thu, Jun 13, 2019 at 11:18 AM Alex McWhirter  wrote:

> After upgrading from 4.2 to 4.3, once a VM live migrates, its disk
> images become owned by root:root. Live migration succeeds and the VM
> stays up, but after shutting down the VM from this point, starting it up
> again will cause it to fail. At that point I have to go in and change
> the permissions on the images back to vdsm:kvm, and the VM will boot
> again.
>

We had an old bug about that:
https://bugzilla.redhat.com/show_bug.cgi?id=1666795
but it's reported as fixed.

Can you please detail the exact version of ovirt-engine and vdsm you are
using on all of your hosts?
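As a stop-gap for the symptom described above, affected images can be found before repairing ownership. A hedged sketch, not part of oVirt itself: the scan root is a parameter, /rhev/data-center is the usual vdsm mount root but should be verified on your own hosts, and the repair command is left commented out on purpose:

```shell
# Sketch: list files whose owner:group is not the expected one
# (vdsm:kvm on an oVirt host).
list_files_not_owned_by() {
  local owner="$1" root="${2:-/rhev/data-center}" f
  find "$root" -type f 2>/dev/null | while IFS= read -r f; do
    [ "$(stat -c '%U:%G' "$f" 2>/dev/null)" != "$owner" ] && printf '%s\n' "$f"
  done
}

# On an affected host (review the list before repairing anything):
#   list_files_not_owned_by vdsm:kvm /rhev/data-center
# A possible repair, as root, after double-checking the list:
#   list_files_not_owned_by vdsm:kvm /rhev/data-center | xargs -r chown vdsm:kvm
```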


> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TSWRTC2E7XZSGSLA7NC5YGP7BIWQKMM3/
>


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4T47UL2TDVGO3UEGQKZAHRAD5IOFTVDC/


[ovirt-users] Re: Ovirt hiperconverged setup error

2019-06-12 Thread Simone Tiraboschi
On Wed, Jun 12, 2019 at 10:00 AM PS Kazi  wrote:

> ovirt Node version 4.3.3.1
> I am trying to configure 3 node Gluster storage and oVirt hosted engine
> but getting the following error:
>
> TASK [gluster.features/roles/gluster_hci : Check if valid FQDN is
> provided] 
> failed: [ov-node-2 -> localhost] (item=ov-node-2) => {"changed": true,
> "cmd": ["dig", "ov-node-2", "+short"], "delta": "0:00:00.041003", "end":
> "2019-06-12 12:52:34.158688", "failed_when_result": true, "item":
> "ov-node-2", "rc": 0, "start": "2019-06-12 12:52:34.117685", "stderr": "",
> "stderr_lines": [], "stdout": "", "stdout_lines": []}
> failed: [ov-node-2 -> localhost] (item=ov-node-3) => {"changed": true,
> "cmd": ["dig", "ov-node-3", "+short"], "delta": "0:00:00.038688", "end":
> "2019-06-12 12:52:34.459176", "failed_when_result": true, "item":
> "ov-node-3", "rc": 0, "start": "2019-06-12 12:52:34.420488", "stderr": "",
> "stderr_lines": [], "stdout": "", "stdout_lines": []}
> failed: [ov-node-2 -> localhost] (item=ov-node-1) => {"changed": true,
> "cmd": ["dig", "ov-node-1", "+short"], "delta": "0:00:00.047938", "end":
> "2019-06-12 12:52:34.768149", "failed_when_result": true, "item":
> "ov-node-1", "rc": 0, "start": "2019-06-12 12:52:34.720211", "stderr": "",
> "stderr_lines": [], "stdout": "", "stdout_lines": []}
>
>
> Please help
>

Hi,
it's this one: https://bugzilla.redhat.com/1692671
<https://bugzilla.redhat.com/show_bug.cgi?id=1692671>

Adding gluster_features_fqdn_check: false to your inventory should be
enough to avoid it.
It's fixed upstream as of:
https://github.com/gluster/gluster-ansible-features/pull/24
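For context, a sketch of what the gluster_hci FQDN check effectively does: `dig +short` must print something for every node name, so names resolvable only through /etc/hosts yield empty dig output and trip the check, hence the gluster_features_fqdn_check: false workaround. The resolver command is injectable here purely so the logic can be exercised without DNS:

```shell
# Sketch of the failing FQDN check, with an injectable resolver command.
fqdn_check() {
  # $1: hostname; $2: resolver command (defaults to dig)
  local out
  out="$(${2:-dig} "$1" +short 2>/dev/null)"
  if [ -n "$out" ]; then
    echo "ok: $1 -> $out"
  else
    echo "FAILED: $1 (empty dig output)"
  fi
}

# Usage on a node (illustrative):
#   for h in ov-node-1 ov-node-2 ov-node-3; do fqdn_check "$h"; done
```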


> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/BXMOTKHGI5TNP5OYWVGINBVUYNVFOGDO/
>


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XMRAF7UTQQNWUWBQL4OEVRTUDFNNNA5P/


[ovirt-users] Re: RFE: HostedEngine to use boom by default

2019-06-12 Thread Simone Tiraboschi
On Tue, Jun 11, 2019 at 11:44 PM Strahil Nikolov 
wrote:

> Hello All,
>
> I have seen a lot of cases where the HostedEngine gets corrupted/broken
> and beyond repair.
>
> I think that BOOM is a good option for our HostedEngine appliances due to
> the fact that it supports booting from LVM snapshots and thus being able to
> easily recover after upgrades or other outstanding situations.
>
> Sadly, BOOM has 1 drawback - that everything should be under a single
> snapshot - thus no separation of /var /log or /audit.
>
> Do you think that changing the appliance layout is worth it ?
>

That idea would work at the LVM level inside the VM, but in the end the
hosted-engine VM is just a VM, so taking a snapshot at the VM level is
potentially a better option.
Currently this does not work because the hosted-engine VM disk is
protected against split brain by a volume lease (VM leases weren't available
when we started hosted-engine), and this inhibits snapshots, and so live
storage migration and so on.
We already have an open RFE to implement it for 4.4:
https://bugzilla.redhat.com/1670788
<https://bugzilla.redhat.com/show_bug.cgi?id=1670788>


>
> Note: I might have an unsupported layout that could cause my confusion.Is
> your layout a single root LV ?
>
> Best Regards,
> Strahil Nikolov
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5OTOIAI4BXMVRFN5MCDGXNZHYB46XWLF/
>


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VRRL7PRMLW6MQJNOP5OVICWFKQ6Q3QJD/


[ovirt-users] Re: oVirt Hosted-Engine upgrade filed

2019-06-05 Thread Simone Tiraboschi
On Wed, Jun 5, 2019 at 11:44 AM Mail SET Inc. Group  wrote:
>
> The full log is attached

Thanks,
we are trying to execute
  SELECT SUM(pg_database_size(datname)) As dbms_size FROM pg_database

but your postgres instance fails on that query.

Can you please try executing on the engine VM:
  sudo -u postgres scl enable rh-postgresql95 -- psql -d engine -c
"SELECT SUM(pg_database_size(datname)) As dbms_size FROM pg_database"

and if it fails, share the output of:
  ls -lZ /var/opt/rh/rh-postgresql95/lib/pgsql/data/base/13699369/1259_fsm
(or of whichever file it fails on).

> On 3 June 2019, at 10:13, Simone Tiraboschi  wrote:
>
> Hi
> Can you please share the whole log file?
>
> On Sun, Jun 2, 2019 at 8:43 PM  wrote:
>>
>> Hello! I am having problems upgrading oVirt Hosted-Engine from 4.2.8 to 4.3.3.
>> After installing http://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
>> on the engine VM I ran yum update, then engine-setup, and got this error:
>>
>>   --== DATABASE CONFIGURATION ==--
>>
>> [WARNING] This release requires PostgreSQL server 10.6 but the engine 
>> database is currently hosted on PostgreSQL server 9.5.14.
>> [ INFO  ] Verifying PostgreSQL SELinux file context rules
>> [ ERROR ] Failed to execute stage 'Environment customization': could not 
>> stat file "base/13699369/1259_fsm": Permission denied
>>
>> [ INFO  ] Stage: Clean up
>>   Log file is located at 
>> /var/log/ovirt-engine/setup/ovirt-engine-setup-20190602110956-iqr6bc.log
>> [ INFO  ] Generating answer file 
>> '/var/lib/ovirt-engine/setup/answers/2019060220-setup.conf'
>> [ INFO  ] Stage: Pre-termination
>> [ INFO  ] Stage: Termination
>> [ ERROR ] Execution of setup failed
>> But I can't find any solution for this issue. Am I doing
>> something wrong?
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QYAI3CP2IJMP45P4HCCX5ES7JFG3SG7U/
>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/F6IRBTRNIXXWNCQTL5WOF3PMZRUVVMO4/


[ovirt-users] Re: Westmere CPU Family

2019-06-03 Thread Simone Tiraboschi
On Mon, Jun 3, 2019 at 4:22 PM Vrgotic, Marko 
wrote:

> Dear oVirt,
>
>
>
> For some reason I am convinced I have read somewhere that, with oVirt 4.3,
> the Westmere CPU family would no longer be supported.
>
> Still, I see it in the dropdown list after upgrading from 4.2 to 4.3, and
> also after installing a fresh 4.3.
>
>
>
> I also see it in oVirt documentation.
>

Please check:

https://ovirt.org/release/4.3.0/ :
BZ 1540921 [RFE] Deprecate and remove support for Conroe and Penryn CPUs

Westmere is still there.



>
>
> Can someone tell me if it's actually supported, and if it's going to be
> “discontinued” in 4.4 maybe?
>
>
>
> Thank you.
>
>
>
> Marko Vrgotic
>
> ActiveVideo
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OF4GC2IPIKU575PAAE6DZY6JLNSN3DYX/
>


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Y7TV5APUPZRYBYYIWFNVQZGMTY2OYLBS/


[ovirt-users] Re: oVirt Hosted-Engine upgrade filed

2019-06-03 Thread Simone Tiraboschi
Hi
Can you please share the whole log file?

On Sun, Jun 2, 2019 at 8:43 PM  wrote:

> Hello! I am having problems upgrading oVirt Hosted-Engine from 4.2.8 to 4.3.3.
> After installing
> http://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm on the engine VM
> I ran yum update, then engine-setup, and got this error:
>
>   --== DATABASE CONFIGURATION ==--
>
> [WARNING] This release requires PostgreSQL server 10.6 but the engine
> database is currently hosted on PostgreSQL server 9.5.14.
> [ INFO  ] Verifying PostgreSQL SELinux file context rules
> [ ERROR ] Failed to execute stage 'Environment customization': could not
> stat file "base/13699369/1259_fsm": Permission denied
>
> [ INFO  ] Stage: Clean up
>   Log file is located at
> /var/log/ovirt-engine/setup/ovirt-engine-setup-20190602110956-iqr6bc.log
> [ INFO  ] Generating answer file
> '/var/lib/ovirt-engine/setup/answers/2019060220-setup.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
> [ ERROR ] Execution of setup failed
> But I can't find any solution for this issue. Am I doing
> something wrong?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QYAI3CP2IJMP45P4HCCX5ES7JFG3SG7U/
>


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DOE5OSH5AGB65DAK6BWTCLWT4WRFFX5F/


[ovirt-users] Re: Storage HA for manager on DR environment

2019-05-31 Thread Simone Tiraboschi
On Fri, May 31, 2019 at 1:54 PM  wrote:

> OK, but so, what is the meaning of "Configure all virtual machines that
> need to
> failover as highly available, and ensure that the virtual machine has a
> lease on the
> target storage domain." Is it assuming that the VMs are in another storage
> domain (no sync)?
>

You have to configure all the VMs as highly available, enabling a VM lease on
each; the engine will take care of restarting them.

VM leases are also written to the relevant storage domain, and so they will
be in sync if you are correctly syncing the storage between the two
sites.


> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/W7C2IHYFK6WVAC3K6UVTPHRM5NXWHTA7/
>


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ISZI7UZWO46YBKSNE5MS45DZVC4T57QD/


[ovirt-users] Re: Storage HA for manager on DR environment

2019-05-31 Thread Simone Tiraboschi
On Fri, May 31, 2019 at 11:35 AM  wrote:

> > On Fri, May 31, 2019 at 10:46 AM  wrote:
> >
> >
> >
> > You cannot create a VM lease for the hosted-engine VM because the
> > hosted-engine VM is always already protected by a volume lease.
> Sorry, I don't understand this. If the storage where my manager is placed
> goes down, will it start up on the other storage?
>

The HA mechanism for the engine VM is provided by the ovirt-ha-agent service
running on all the hosted-engine configured hosts (at least a couple on
each site).
The hosted-engine configured hosts communicate via a whiteboard written to
the hosted-engine storage domain, so if the storage devices on the two
sites are in sync (this requires a latency < 7 ms), the hosted-engine hosts
can also see the status of the other site and eventually take over.
The volume lease enforces, at the storage level, that only one host at a
time is able to run the engine VM (regardless of which site it is on, since
the lock is in sync as well).
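The whiteboard state described above can be inspected on any hosted-engine host with `hosted-engine --vm-status`. A tiny sketch of parsing the engine health field from that output; the sample text below is abridged and illustrative, not verbatim tool output:

```shell
# Sketch: extract the "health" value from hosted-engine --vm-status output.
engine_health() {
  sed -n 's/.*"health": "\([^"]*\)".*/\1/p' | head -n1
}

# Sample, abridged and illustrative:
cat <<'EOF' | engine_health
--== Host 1 status ==--
Engine status  : {"health": "good", "vm": "up", "detail": "Up"}
EOF
```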



> >
> >
> > Did you read
> >
> https://ovirt.org/documentation/disaster-recovery-guide/active_active_ove.
> ..
> >  ?
> Yes, and it's only say "Configure all virtual machines that need to
> failover as highly available, and ensure that the virtual machine has a
> lease on the target storage domain."
> But is the manager configured for storage HA by default? I have made a
> lab with 1 host and 2 replicated storage domains and, when I pull down the
> engine storage, it doesn't start up again.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/DMZYJB33D5KI5NHJFKCJ4SDNXFMHYST3/
>


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZHTOAYOG4LQNPF7SC6H3W7KQPAXIGHMJ/


[ovirt-users] Re: Storage HA for manager on DR environment

2019-05-31 Thread Simone Tiraboschi
On Fri, May 31, 2019 at 10:46 AM  wrote:

> Hi,
> I'm reading this guide to provide Active-Active DR for my environment (2
> sites):
>
> https://ovirt.org/documentation/disaster-recovery-guide/active_active_overview.html
>
> I have a self-hosted environment with a storage domain per site and
> synchronous replication. I can give all my VMs a storage lease on the
> other storage site, but I can't put the lease on the manager (the option is
> disabled).


You cannot create a VM lease for the hosted-engine VM because the
hosted-engine VM is always already protected by a volume lease.


> How do I configure storage HA for my manager?
>

Did you read
https://ovirt.org/documentation/disaster-recovery-guide/active_active_overview.html#configure-a-self-hosted-engine-stretch-cluster-environment
 ?


>
> Regards,
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/BYCFJQDTHDMT26GVXQNCA46MHTVPTN6Y/
>


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/46PUSBDZHL4PLQXWHB5EGC5BOLAETVC6/


[ovirt-users] Re: oVirt Node Blocks VirtViewer/SPICE connections (Did Not Auto-Configure Firewall?)

2019-05-31 Thread Simone Tiraboschi
On Thu, May 30, 2019 at 2:56 PM Zachary Winter <
zachary.win...@witsconsult.com> wrote:

> I am unable to connect via SPICE (Windows VirtViewer) to VM's running on
> my compute node.  It appears the node did not auto-configure the firewall
> because the .vv files appear to point to the correct IP address and common
> ports.  Is there a way to re-run/re-execute the firewall auto-configuration
> now that the node has already been installed?
>

From the Web UI, you can set the host to maintenance mode and then select
Reinstall: this will also configure the firewall.
But are you really sure that the issue is on the host side?
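Regarding the request for firewall-cmd commands in the quoted message: a hedged sketch that only builds the commands for the console port range. 5900-6923/tcp is the range oVirt hosts normally use for SPICE/VNC consoles, but verify the ports against your own .vv files before applying anything:

```shell
# Sketch: generate (not execute) the firewall-cmd calls that would open
# the console port range on the public zone.
console_fw_cmds() {
  local range="${1:-5900-6923}"
  printf 'firewall-cmd --zone=public --add-port=%s/tcp\n' "$range"
  printf 'firewall-cmd --permanent --zone=public --add-port=%s/tcp\n' "$range"
}

console_fw_cmds   # prints the commands; run them only after review
```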


> If not, does anyone happen to have firewall-cmd commands handy that I can
> run to resolve this quickly?  Which ports need to be opened?
>
> The specs on the node are as follows:
> OS Version:
> RHEL - 7 - 6.1810.2.el7.centos
> OS Description:
> oVirt Node 4.3.3.1
> Kernel Version:
> 3.10.0 - 957.10.1.el7.x86_64
> KVM Version:
> 2.12.0 - 18.el7_6.3.1
> LIBVIRT Version:
> libvirt-4.5.0-10.el7_6.6
> VDSM Version:
> vdsm-4.30.13-1.el7
> SPICE Version:
> 0.14.0 - 6.el7_6.1
> GlusterFS Version:
> glusterfs-5.5-1.el7
> CEPH Version:
> librbd1-10.2.5-4.el7
> Open vSwitch Version:
> openvswitch-2.10.1-3.el7
> Kernel Features:
> PTI: 1, IBRS: 0, RETP: 1, SSBD: 3
> VNC Encryption:
> Enabled
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XIP6HNQVJXNW55YBXUL273CEH2YSHOA5/
>


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OGEHUUPPYJEDS6UNVIPP4EY3HH7DE3GS/


[ovirt-users] Re: 4.3 hosted-engine setup & yum-utils RPM installation

2019-05-31 Thread Simone Tiraboschi
On Thu, May 30, 2019 at 5:22 PM Simon Coter  wrote:

> Hi,
>
>
Ciao Simon,


> is there any particular reason to get “yum-utils” (and its dependency) RPM
> installed during the hosted-engine deployment ?
> I mean, why don’t we get yum-utils RPM part of the hosted-engine image ?
> This “yum” process, executed during the deployment, could fail (or wait
> forever) if the host/engine is behind a proxy — while trying to install the
> RPMs.
>

Honestly, I'm not aware of that; can you please provide more details?
Where does it happen: on the host, or inside the engine virtual machine?
Does it happen before the engine virtual machine starts, or during the
host-deploy process when the engine configures the host?


> I see two options:
>
>
>- get all the required RPMs part of the hosted-engine image
>
>
Do you mean inside the ovirt-engine-appliance image?
If you mean on the host side instead, the ovirt-host rpm should already
require all the rpms needed for the deployment (except
ovirt-engine-appliance, which is about 1 GB).


>
>- add the option to supply a proxy for yum during the hosted-engine
>setup
>
Configuring a proxy with the proxy directive in /etc/yum.conf, or with
http_proxy at the system level, is absolutely supported.
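A minimal sketch of that system-level proxy configuration; the proxy URL is a placeholder, and the snippet writes to a temporary file so it can be inspected before merging the `[main]` section into the real /etc/yum.conf:

```shell
# proxy.example.com:3128 is a hypothetical endpoint - replace with your own.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[main]
proxy=http://proxy.example.com:3128
EOF

# The equivalent at the environment level, picked up by most tools:
export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128
cat "$conf"
```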


>
> Could this be a request for enhancement ?
>

hosted-engine-setup is already designed to also work in disconnected mode,
assuming that all the required rpms have been installed upfront.
If it fails in that use case, and all the rpms are there, it's definitely
a bug.
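For the disconnected case, the host-side packages can be staged upfront along with the appliance image; the package names below are per oVirt 4.3 and should be checked for other releases. The command is printed for review (on the host you would run it with network access, or from a local mirror, before starting the deployment):

```shell
# ovirt-hosted-engine-setup pulls in the deployment tooling;
# ovirt-engine-appliance is the ~1 GB engine VM image.
pkgs="ovirt-hosted-engine-setup ovirt-engine-appliance"
echo "yum install -y $pkgs"
```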


> Thanks
>
> Simon
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QEFE7S35AMTABZQIXACD6ZXOWSTKJKP3/
>


-- 

Simone Tiraboschi

He / Him / His

Principal Software Engineer

Red Hat <https://www.redhat.com/>

stira...@redhat.com
@redhatjobs <https://twitter.com/redhatjobs>   redhatjobs
<https://www.facebook.com/redhatjobs> @redhatjobs
<https://instagram.com/redhatjobs>
<https://red.ht/sig>
<https://redhat.com/summit>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives:


[ovirt-users] Re: change he vm memory size

2019-05-30 Thread Simone Tiraboschi
On Thu, May 30, 2019 at 3:39 PM Strahil Nikolov 
wrote:

> Hi Alexey,
>
> better open a bug for that. If the Description is updated, but after a
> reboot the engine is still using the old values - it seems that it is a bug.
>

Yes, absolutely.
Thanks


>
> Best Regards,
> Strahil Nikolov
>
> On Thursday, May 30, 2019, 9:26:51 AM GMT-4, Valkov, Alexey <
> valkov.ale...@knauf.ru> wrote:
>
>
> Indeed, after editing the HE VM settings via the manager UI, the ovf
> update is triggered immediately (checked in
> /var/log/ovirt-engine/engine.log).
> I dumped the HE ovf_store and untarred the .ovf from it.
> And I checked that all the changes I made for Description, MaxMemorySizeMb
> and minGuaranteedMemoryMb were applied (written to the ovf) and remain
> after reboot. That works as expected.
> But not for memory or Memory Size - these settings remained at their
> initial values and were not written to the ovf.
> Well, memory hotplug works - via adding new memory devices - but after a
> reboot these memory devices are detached and Memory Size is not increased.
>
> --
> Best regards
> Alexey
>
> Actually, you need to untar the OVF from the shared storage and check the
> configuration from the tar.
> Just keep it like that (running ) and tomorrow power down and then up the
> HostedEngine.
>
> Best Regards,
> Strahil Nikolov
> On May 30, 2019 12:06, "Valkov, Alexey"  wrote:
>
> Hello, Strahil. I've just tried with *engine-config -s
> OvfUpdateIntervalInMinutes=1* followed by *systemctl restart
> ovirt-engine.service*. After that, I changed Memory Size in the manager UI
> and waited about 30 minutes. Then I checked memSize in
> /var/run/ovirt-hosted-engine-ha/vm.conf (which, if I understand correctly,
> is synchronized with the ovf every minute) and saw that memSize had not
> changed. The Memory Size property (in the manager UI) also remains at the
> initial value, so I think the ovf doesn't change. I returned
> OvfUpdateIntervalInMinutes=60 and will wait till tomorrow; maybe the
> setting will be magically applied.
>
> --
> Best regards
> Alexey
>
> Hi Alexey,
> How much time did you wait before rebooting?
> I have noticed that, despite the default OVF update interval of 1 hour, it
> takes 5-6 hours for the engine to update the OVF.
>
> Best Regards,
> Strahil Nikolov
> On May 30, 2019 10:30, "Valkov, Alexey"  wrote:
>
> I am trying to increase the memory of the HE VM (oVirt 4.2.8). If I do it
> from the manager UI, I see that hot plug works - new memory devices appear
> and a corresponding memory increase appears inside the engine guest. But
> the 'Memory Size' property of the hosted engine (in the manager UI) does
> not reflect the new amount of memory. Also, after a reboot of the engine
> VM, the memory size changes back to its initial value. Is it possible to
> change the memory size of the HE VM (as far as I know the settings are
> stored in the ovf on the HE domain), and how can I make the change
> persistent?
>
> --
> Best regards
> Alexey
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/IKID3B2TH3VR273KZNQB4QC66WYC4PCQ/
>
> _______
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PKCLZCNLA2U7VXEXQFJCOTVMXBM53FA5/
>


-- 

Simone Tiraboschi

He / Him / His

Principal Software Engineer

Red Hat <https://www.redhat.com/>

stira...@redhat.com
@redhatjobs <https://twitter.com/redhatjobs>   redhatjobs
<https://www.facebook.com/redhatjobs> @redhatjobs
<https://instagram.com/redhatjobs>
<https://red.ht/sig>
<https://redhat.com/summit>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/57DCCV4WPKHAT4RCM5JADK6N26TOBD72/


[ovirt-users] Re: change he vm memory size

2019-05-30 Thread Simone Tiraboschi
On Thu, May 30, 2019 at 11:08 AM Valkov, Alexey 
wrote:

> Hello, Strahil. I've just tried with *engine-config -s
> OvfUpdateIntervalInMinutes=1* followed by *systemctl restart
> ovirt-engine.service*. After that, I changed Memory Size in the manager UI
> and waited about 30 minutes. Then I checked memSize in
> /var/run/ovirt-hosted-engine-ha/vm.conf (which, if I understand correctly,
> is synchronized with the ovf every minute) and saw that memSize had not
> changed. The Memory Size property (in the manager UI) also remains at the
> initial value, so I think the ovf doesn't change. I returned
> OvfUpdateIntervalInMinutes=60 and will wait till tomorrow; maybe the
> setting will be magically applied.
>

Any change to the engine VM configuration should trigger an immediate
refresh of the OVF_STORE volumes.
You can check engine.log for that.

A possible workaround is to edit a different value at the same time, such
as the VM description.
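A quick way to confirm the refresh is to filter engine.log for OVF activity. This is a sketch: the exact message text varies by engine version, so a sample line stands in for the real log here.

```shell
# Sample engine.log line of the kind emitted when OVF_STORE is refreshed
# (the class name and wording are assumptions; check your own log).
logline='2019-05-30 11:10:02,123+03 INFO  [...OvfDataUpdater] Attempting to update VM OVFs in Data Center Default'
echo "$logline" | grep -qi 'ovf' && echo "OVF update activity found"

# On a live engine you would instead run:
#   grep -i ovf /var/log/ovirt-engine/engine.log | tail
#   engine-config -g OvfUpdateIntervalInMinutes
```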


>
> --
> Best regards
> Alexey
>
> Hi Alexey,
> How much time did you wait before rebooting?
> I have noticed that, despite the default OVF update interval of 1 hour, it
> takes 5-6 hours for the engine to update the OVF.
>
> Best Regards,
> Strahil Nikolov
> On May 30, 2019 10:30, "Valkov, Alexey"  wrote:
>
> I am trying to increase the memory of the HE VM (oVirt 4.2.8). If I do it
> from the manager UI, I see that hot plug works - new memory devices appear
> and a corresponding memory increase appears inside the engine guest. But
> the 'Memory Size' property of the hosted engine (in the manager UI) does
> not reflect the new amount of memory. Also, after a reboot of the engine
> VM, the memory size changes back to its initial value. Is it possible to
> change the memory size of the HE VM (as far as I know the settings are
> stored in the ovf on the HE domain), and how can I make the change
> persistent?
>
> --
> Best regards
> Alexey
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/IKID3B2TH3VR273KZNQB4QC66WYC4PCQ/
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/P54M2FNHLGXZD5ZS7JX3HQZ4GRAF3VMN/
>


-- 

Simone Tiraboschi

He / Him / His

Principal Software Engineer

Red Hat <https://www.redhat.com/>

stira...@redhat.com
@redhatjobs <https://twitter.com/redhatjobs>   redhatjobs
<https://www.facebook.com/redhatjobs> @redhatjobs
<https://instagram.com/redhatjobs>
<https://red.ht/sig>
<https://redhat.com/summit>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/R3DRHEWH2AGWKG4HR4OPXAW5OTFFEGF6/


[ovirt-users] Re: oVirt 4.3.4 RC1 to RC2 - Dashboard error / VM/Host/Gluster Volumes OK

2019-05-28 Thread Simone Tiraboschi
On Tue, May 28, 2019 at 9:51 AM Maton, Brett 
wrote:

> I've just upgraded to 4.3.4 RC2 and have the same issue, logs attached.
>
>
Hi,
"2019-05-28
08:30:18|6bJmP3|7pLCGZ|9KkPVX|OVIRT_ENGINE_DWH|SampleTimeKeepingJob|Default|5|tWarn|tWarn_1|Can
not sample data, oVirt Engine is not updating the statistics. Please check
your oVirt Engine status.|9704"


looks definitely bad, although I don't see anything related in engine.log.
Can you please also attach server.log?




> Regards,
> Brett
>
> On Mon, 27 May 2019 at 07:47, Sandro Bonazzola 
> wrote:
>
>>
>>
>> Il giorno dom 26 mag 2019 alle ore 12:46 Strahil Nikolov <
>> hunter86...@yahoo.com> ha scritto:
>>
>>> Hello All,
>>>
>>> Just upgraded my engine from 4.3.4 RC1 to RC2 and my Dashboard is giving
>>> an error (see attached screenshot) despite everything seem to end well:
>>> Error!Could not fetch dashboard data. Please ensure that data warehouse
>>> is properly installed and configured.
>>>
>>> I have checked and the VMs and Hosts + Gluster Volumes arep roperly
>>> detected (yet all my VMs are powered off since before RC2 upgrade).
>>>
>>> Any clues that might help you solve that before I roll back (I have a
>>> gluster snapshot on 4.3.3-7) ?
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>
>>
>> Looks like the DWH service is not feeding data to the dashboard; can you
>> please share your engine and dwh logs?
>> Adding Shirly and Sharon.
>>
>>
>> --
>>
>> Sandro Bonazzola
>>
>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>
>> Red Hat EMEA <https://www.redhat.com/>
>>
>> sbona...@redhat.com
>> <https://red.ht/sig>
>> <https://redhat.com/summit>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/MMFEZWVGBBGWPCLJVKZTN5QTMG63HIBB/
>>
> ___________
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/7VA36JOS74V3CBV6HWRP2GZBMRSVWD4A/
>


-- 

Simone Tiraboschi

He / Him / His

Principal Software Engineer

Red Hat <https://www.redhat.com/>

stira...@redhat.com
@redhatjobs <https://twitter.com/redhatjobs>   redhatjobs
<https://www.facebook.com/redhatjobs> @redhatjobs
<https://instagram.com/redhatjobs>
<https://red.ht/sig>
<https://redhat.com/summit>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DW6BVJ5SLKLSI75NIZSWOYLWEUGEOFG5/


[ovirt-users] Re: [Ovirt 4.3] Deploy engine with OVS bridge

2019-05-27 Thread Simone Tiraboschi
On Mon, May 27, 2019 at 10:29 AM  wrote:

> Hi to all!
> I'm trying to deploy a new hosted-engine (via CLI, hosted-engine
> --deploy), in a fresh installation of ovirt 4.3, with OVS bridge but there
> are no option to make the OVS choice instead of the linux bridge.
>

Hi,
the engine VM has to talk to the hosts over the management network, which
is always based on a Linux bridge.
In addition, you can create one or more additional logical networks for
your VMs over OVN.


> There are some documentation of how to do ovn installation?
>

All the OVN components are installed automatically; you just have to create
a new logical network, choosing "Create on external network provider" and
selecting ovirt-provider-ovn there.
You can configure the subnet in that panel if needed.
When you create a new VM or edit an existing one, you will be able to
choose the OVN-based logical network.


>
> Thank you for the support
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ST7G5OCRDKEAJYJRSM7S53NAAX5WRIZD/
>


-- 

Simone Tiraboschi

He / Him / His

Principal Software Engineer

Red Hat <https://www.redhat.com/>

stira...@redhat.com
@redhatjobs <https://twitter.com/redhatjobs>   redhatjobs
<https://www.facebook.com/redhatjobs> @redhatjobs
<https://instagram.com/redhatjobs>
<https://red.ht/sig>
<https://redhat.com/summit>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3R6ANSHLQUFVRSD2G7DVDUYQE7ZDBI2Q/


[ovirt-users] Re: Can't run nested virtualization

2019-05-24 Thread Simone Tiraboschi
On Fri, May 24, 2019 at 3:48 PM  wrote:

> El 2019-05-24 14:42, Simone Tiraboschi escribió:
> > On Fri, May 24, 2019 at 3:39 PM  wrote:
> >
> >> El 2019-05-24 14:30, Simone Tiraboschi escribió:
> >>> On Fri, May 24, 2019 at 2:56 PM  wrote:
> >>>
> >>>> El 2019-05-24 13:39, Simone Tiraboschi escribió:
> >>>>> On Fri, May 24, 2019 at 2:32 PM  wrote:
> >>>>>
> >>>>>> El 2019-05-24 13:22, Simone Tiraboschi escribió:
> >>>>>>> On Fri, May 24, 2019 at 1:51 PM  wrote:
> >>>>>>>
> >>>>>>>> El 2019-05-24 12:41, nico...@devels.es escribió:
> >>>>>>>>> El 2019-05-14 08:19, Yedidyah Bar David escribió:
> >>>>>>>>>> On Tue, May 14, 2019 at 10:02 AM 
> >> wrote:
> >>>>>>>>>>>
> >>>>>>>>>>> Please, any ideas about this?
> >>>>>>>>>>>
> >>>>>>>>>>> Thanks.
> >>>>>>>>>>>
> >>>>>>>>>>> El 2019-05-10 09:18, nico...@devels.es escribió:
> >>>>>>>>>>> > Hi,
> >>>>>>>>>>> >
> >>>>>>>>>>> > We're running oVirt version 4.3.3, and trying to
> >>>> configure
> >>>>>>>> one of the
> >>>>>>>>>>> > hosts to support Nested Virtualization, but when
> >>>> installing
> >>>>>>>> the nested
> >>>>>>>>>>> > host it claims it doesn't support hardware
> >>>> virtualization.
> >>>>>>>>>>> >
> >>>>>>>>>>> > On the physical host, we've enabled nested
> >>>> virtualization:
> >>>>>>>>>>> >
> >>>>>>>>>>> > # cat /sys/module/kvm_intel/parameters/nested
> >>>>>>>>>>> > Y
> >>>>>>>>>>> >
> >>>>>>>>>>> > Content of /etc/modprobe.d/kvm.conf:
> >>>>>>>>>>> >
> >>>>>>>>>>> > options kvm_intel nested=1
> >>>>>>>>>>> > options kvm_intel enable_shadow_vmcs=1
> >>>>>>>>>>> > options kvm_intel enable_apicv=1
> >>>>>>>>>>> > options kvm_intel ept=1
> >>>>>>>>>>> >
> >>>>>>>>>>> > I created a VM to run on that host, which will be the
> >>>>>> nested
> >>>>>>>> host. I
> >>>>>>>>>>> > try to deploy it but the engine will show it failed
> >>>>>> because:
> >>>>>>>>>>> >
> >>>>>>>>>>> > 2019-05-10 09:11:32,006+01 ERROR
> >>>>>>>>>>> >
> >>>>>>>>
> >>>>>>>
> >>>>>>
> >>>>>
> >>>>
> >>>
> >>
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> >>>>>>>>>>> > (VdsDeploy) [6381e662] EVENT_ID:
> >>>>>>>> VDS_INSTALL_IN_PROGRESS_ERROR(511),
> >>>>>>>>>>> > An error has occurred during installation of Host
> >> host1:
> >>>>>>>> Failed to
> >>>>>>>>>>> > execute stage 'Setup validation': Hardware does not
> >>>> support
> >>>>>>>>>>> > virtualization.
> >>>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> Hi Yedidyah, sorry for the delayed answer.
> >>>>>>>>>
> >>>>>>>>>> You might find some more details in the host-deploy log,
> >>>>>>>>>> which you should be able to find in
> >>>>>>>> /var/log/ovirt-engine/host-deploy
> >>>>>>>>>> (on the engine machine, it's copied there after deploy
> >>>>>>>> finishes).
> >>>>>>>>>>
> >>>>>>>>>
>

[ovirt-users] Re: Can't run nested virtualization

2019-05-24 Thread Simone Tiraboschi
On Fri, May 24, 2019 at 3:39 PM  wrote:

> El 2019-05-24 14:30, Simone Tiraboschi escribió:
> > On Fri, May 24, 2019 at 2:56 PM  wrote:
> >
> >> El 2019-05-24 13:39, Simone Tiraboschi escribió:
> >>> On Fri, May 24, 2019 at 2:32 PM  wrote:
> >>>
> >>>> El 2019-05-24 13:22, Simone Tiraboschi escribió:
> >>>>> On Fri, May 24, 2019 at 1:51 PM  wrote:
> >>>>>
> >>>>>> El 2019-05-24 12:41, nico...@devels.es escribió:
> >>>>>>> El 2019-05-14 08:19, Yedidyah Bar David escribió:
> >>>>>>>> On Tue, May 14, 2019 at 10:02 AM  wrote:
> >>>>>>>>>
> >>>>>>>>> Please, any ideas about this?
> >>>>>>>>>
> >>>>>>>>> Thanks.
> >>>>>>>>>
> >>>>>>>>> El 2019-05-10 09:18, nico...@devels.es escribió:
> >>>>>>>>> > Hi,
> >>>>>>>>> >
> >>>>>>>>> > We're running oVirt version 4.3.3, and trying to
> >> configure
> >>>>>> one of the
> >>>>>>>>> > hosts to support Nested Virtualization, but when
> >> installing
> >>>>>> the nested
> >>>>>>>>> > host it claims it doesn't support hardware
> >> virtualization.
> >>>>>>>>> >
> >>>>>>>>> > On the physical host, we've enabled nested
> >> virtualization:
> >>>>>>>>> >
> >>>>>>>>> > # cat /sys/module/kvm_intel/parameters/nested
> >>>>>>>>> > Y
> >>>>>>>>> >
> >>>>>>>>> > Content of /etc/modprobe.d/kvm.conf:
> >>>>>>>>> >
> >>>>>>>>> > options kvm_intel nested=1
> >>>>>>>>> > options kvm_intel enable_shadow_vmcs=1
> >>>>>>>>> > options kvm_intel enable_apicv=1
> >>>>>>>>> > options kvm_intel ept=1
> >>>>>>>>> >
> >>>>>>>>> > I created a VM to run on that host, which will be the
> >>>> nested
> >>>>>> host. I
> >>>>>>>>> > try to deploy it but the engine will show it failed
> >>>> because:
> >>>>>>>>> >
> >>>>>>>>> > 2019-05-10 09:11:32,006+01 ERROR
> >>>>>>>>> >
> >>>>>>
> >>>>>
> >>>>
> >>>
> >>
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> >>>>>>>>> > (VdsDeploy) [6381e662] EVENT_ID:
> >>>>>> VDS_INSTALL_IN_PROGRESS_ERROR(511),
> >>>>>>>>> > An error has occurred during installation of Host host1:
> >>>>>> Failed to
> >>>>>>>>> > execute stage 'Setup validation': Hardware does not
> >> support
> >>>>>>>>> > virtualization.
> >>>>>>>>
> >>>>>>>
> >>>>>>> Hi Yedidyah, sorry for the delayed answer.
> >>>>>>>
> >>>>>>>> You might find some more details in the host-deploy log,
> >>>>>>>> which you should be able to find in
> >>>>>> /var/log/ovirt-engine/host-deploy
> >>>>>>>> (on the engine machine, it's copied there after deploy
> >>>>>> finishes).
> >>>>>>>>
> >>>>>>>
> >>>>>>> I had a look at it, but nothing relevant shows up besides
> >> this
> >>>>>> line:
> >>>>>>>
> >>>>>>> 2019-05-10 09:11:32,628+0100 DEBUG otopi.context
> >>>>>>> context._executeMethod:145 method exception
> >>>>>>> Traceback (most recent call last):
> >>>>>>>File "/tmp/ovirt-qPjYkVy6Ys/pythonlib/otopi/context.py",
> >>>> line
> >>>>>> 132,
> >>>>>>> in _executeMethod
> >>>>>>>  method['method']()
> >>>>>>>File
> >>>>>>>
&

[ovirt-users] Re: Can't run nested virtualization

2019-05-24 Thread Simone Tiraboschi
On Fri, May 24, 2019 at 2:56 PM  wrote:

> El 2019-05-24 13:39, Simone Tiraboschi escribió:
> > On Fri, May 24, 2019 at 2:32 PM  wrote:
> >
> >> El 2019-05-24 13:22, Simone Tiraboschi escribió:
> >>> On Fri, May 24, 2019 at 1:51 PM  wrote:
> >>>
> >>>> El 2019-05-24 12:41, nico...@devels.es escribió:
> >>>>> El 2019-05-14 08:19, Yedidyah Bar David escribió:
> >>>>>> On Tue, May 14, 2019 at 10:02 AM  wrote:
> >>>>>>>
> >>>>>>> Please, any ideas about this?
> >>>>>>>
> >>>>>>> Thanks.
> >>>>>>>
> >>>>>>> El 2019-05-10 09:18, nico...@devels.es escribió:
> >>>>>>> > Hi,
> >>>>>>> >
> >>>>>>> > We're running oVirt version 4.3.3, and trying to configure
> >>>> one of the
> >>>>>>> > hosts to support Nested Virtualization, but when installing
> >>>> the nested
> >>>>>>> > host it claims it doesn't support hardware virtualization.
> >>>>>>> >
> >>>>>>> > On the physical host, we've enabled nested virtualization:
> >>>>>>> >
> >>>>>>> > # cat /sys/module/kvm_intel/parameters/nested
> >>>>>>> > Y
> >>>>>>> >
> >>>>>>> > Content of /etc/modprobe.d/kvm.conf:
> >>>>>>> >
> >>>>>>> > options kvm_intel nested=1
> >>>>>>> > options kvm_intel enable_shadow_vmcs=1
> >>>>>>> > options kvm_intel enable_apicv=1
> >>>>>>> > options kvm_intel ept=1
> >>>>>>> >
> >>>>>>> > I created a VM to run on that host, which will be the
> >> nested
> >>>> host. I
> >>>>>>> > try to deploy it but the engine will show it failed
> >> because:
> >>>>>>> >
> >>>>>>> > 2019-05-10 09:11:32,006+01 ERROR
> >>>>>>> >
> >>>>
> >>>
> >>
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> >>>>>>> > (VdsDeploy) [6381e662] EVENT_ID:
> >>>> VDS_INSTALL_IN_PROGRESS_ERROR(511),
> >>>>>>> > An error has occurred during installation of Host host1:
> >>>> Failed to
> >>>>>>> > execute stage 'Setup validation': Hardware does not support
> >>>>>>> > virtualization.
> >>>>>>
> >>>>>
> >>>>> Hi Yedidyah, sorry for the delayed answer.
> >>>>>
> >>>>>> You might find some more details in the host-deploy log,
> >>>>>> which you should be able to find in
> >>>> /var/log/ovirt-engine/host-deploy
> >>>>>> (on the engine machine, it's copied there after deploy
> >>>> finishes).
> >>>>>>
> >>>>>
> >>>>> I had a look at it, but nothing relevant shows up besides this
> >>>> line:
> >>>>>
> >>>>> 2019-05-10 09:11:32,628+0100 DEBUG otopi.context
> >>>>> context._executeMethod:145 method exception
> >>>>> Traceback (most recent call last):
> >>>>>File "/tmp/ovirt-qPjYkVy6Ys/pythonlib/otopi/context.py",
> >> line
> >>>> 132,
> >>>>> in _executeMethod
> >>>>>  method['method']()
> >>>>>File
> >>>>>
> >>>>
> >>>
> >>
> > "/tmp/ovirt-qPjYkVy6Ys/otopi-plugins/ovirt-host-deploy/vdsm/hardware.py",
> >>>>> line 71, in _validate_virtualization
> >>>>>  _('Hardware does not support virtualization')
> >>>>> RuntimeError: Hardware does not support virtualization
> >>>>>
> >>>>>> It's been some time since I configured this myself, so I do
> >> not
> >>>>>> remember
> >>>>>> the details anymore. Did you check some guides/blog posts/etc.
> >>>> about
> >>>>>> this?
> >>>>>>
> >>>>>
> >>>>> I didn't. I just enabled nested virtualization in the host a

[ovirt-users] Re: Can't run nested virtualization

2019-05-24 Thread Simone Tiraboschi
On Fri, May 24, 2019 at 2:32 PM  wrote:

> El 2019-05-24 13:22, Simone Tiraboschi escribió:
> > On Fri, May 24, 2019 at 1:51 PM  wrote:
> >
> >> El 2019-05-24 12:41, nico...@devels.es escribió:
> >>> El 2019-05-14 08:19, Yedidyah Bar David escribió:
> >>>> On Tue, May 14, 2019 at 10:02 AM  wrote:
> >>>>>
> >>>>> Please, any ideas about this?
> >>>>>
> >>>>> Thanks.
> >>>>>
> >>>>> El 2019-05-10 09:18, nico...@devels.es escribió:
> >>>>> > Hi,
> >>>>> >
> >>>>> > We're running oVirt version 4.3.3, and trying to configure
> >> one of the
> >>>>> > hosts to support Nested Virtualization, but when installing
> >> the nested
> >>>>> > host it claims it doesn't support hardware virtualization.
> >>>>> >
> >>>>> > On the physical host, we've enabled nested virtualization:
> >>>>> >
> >>>>> > # cat /sys/module/kvm_intel/parameters/nested
> >>>>> > Y
> >>>>> >
> >>>>> > Content of /etc/modprobe.d/kvm.conf:
> >>>>> >
> >>>>> > options kvm_intel nested=1
> >>>>> > options kvm_intel enable_shadow_vmcs=1
> >>>>> > options kvm_intel enable_apicv=1
> >>>>> > options kvm_intel ept=1
> >>>>> >
> >>>>> > I created a VM to run on that host, which will be the nested
> >> host. I
> >>>>> > try to deploy it but the engine will show it failed because:
> >>>>> >
> >>>>> > 2019-05-10 09:11:32,006+01 ERROR
> >>>>> >
> >>
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> >>>>> > (VdsDeploy) [6381e662] EVENT_ID:
> >> VDS_INSTALL_IN_PROGRESS_ERROR(511),
> >>>>> > An error has occurred during installation of Host host1:
> >> Failed to
> >>>>> > execute stage 'Setup validation': Hardware does not support
> >>>>> > virtualization.
> >>>>
> >>>
> >>> Hi Yedidyah, sorry for the delayed answer.
> >>>
> >>>> You might find some more details in the host-deploy log,
> >>>> which you should be able to find in
> >> /var/log/ovirt-engine/host-deploy
> >>>> (on the engine machine, it's copied there after deploy
> >> finishes).
> >>>>
> >>>
> >>> I had a look at it, but nothing relevant shows up besides this
> >> line:
> >>>
> >>> 2019-05-10 09:11:32,628+0100 DEBUG otopi.context
> >>> context._executeMethod:145 method exception
> >>> Traceback (most recent call last):
> >>>File "/tmp/ovirt-qPjYkVy6Ys/pythonlib/otopi/context.py", line
> >> 132,
> >>> in _executeMethod
> >>>  method['method']()
> >>>File
> >>>
> >>
> > "/tmp/ovirt-qPjYkVy6Ys/otopi-plugins/ovirt-host-deploy/vdsm/hardware.py",
> >>> line 71, in _validate_virtualization
> >>>  _('Hardware does not support virtualization')
> >>> RuntimeError: Hardware does not support virtualization
> >>>
> >>>> It's been some time since I configured this myself, so I do not
> >>>> remember
> >>>> the details anymore. Did you check some guides/blog posts/etc.
> >> about
> >>>> this?
> >>>>
> >>>
> >>> I didn't. I just enabled nested virtualization in the host and
> >> tried to
> >>> deploy.
> >>>
> >>>> What type of CPU did you configure in the VM (and cluster)?
> >>>>
> >>>
> >>> In the Cluster I have the Intel Broadwell Family, and as the VM
> >> CPU I
> >>> have the default cluster CPU which is the one I just referenced.
> >> Not
> >>> sure if anything else should be done.
> >>>
> >>>> To see what checks the code does, you can read [1], although the
> >> log
> >>>> should be enough IMO.
> >>>>
> >>>> [1]
> >> /usr/lib/python2.7/site-packages/ovirt_host_deploy/hardware.py
> >>
> >> I just saw this in the log:
> >>
> >> 2019-05-24 12:44:56,000+0100 DEBUG otopi.ovir

[ovirt-users] Re: Can't run nested virtualization

2019-05-24 Thread Simone Tiraboschi
> >>> ___
> >>> Users mailing list -- users@ovirt.org
> >>> To unsubscribe send an email to users-le...@ovirt.org
> >>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> >>> oVirt Code of Conduct:
> >>> https://www.ovirt.org/community/about/community-guidelines/
> >>> List Archives:
> >>>
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/7LALMJF4SQQXLFZUXV2I53YRGX7J7FR6/
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/4QMYFBJUMSO7XF2XID4UWY27RMKDV6C4/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/6O7QPVYUHVE7BHHSOVRATSYWJA732RPF/
>


-- 

Simone Tiraboschi

He / Him / His

Principal Software Engineer

Red Hat <https://www.redhat.com/>

stira...@redhat.com
@redhatjobs <https://twitter.com/redhatjobs>   redhatjobs
<https://www.facebook.com/redhatjobs> @redhatjobs
<https://instagram.com/redhatjobs>
<https://red.ht/sig>
<https://redhat.com/summit>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TWXYNMXROEECKOO3DM6ZGRHP4PSZTLSS/
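The check that fails in hardware.py in the thread above essentially looks for the vmx (Intel VT-x) or svm (AMD-V) flag in the guest's CPU. A small sketch of that test, run here against a sample flags line (on the nested host you would feed it /proc/cpuinfo instead):

```shell
# Sample flags line WITHOUT vmx/svm, as a guest whose cluster CPU type does
# not pass virtualization through would report it.
flags="fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca"
if echo "$flags" | grep -qwE 'vmx|svm'; then
  echo "hardware virtualization exposed to the guest"
else
  echo "no vmx/svm flag: the VM's CPU does not pass virtualization through"
fi

# On the nested host itself you would run:
#   grep -wE 'vmx|svm' /proc/cpuinfo
```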


[ovirt-users] Re: Migrating self-HostedEngine from NFS to iSCSI

2019-05-21 Thread Simone Tiraboschi
On Tue, May 21, 2019 at 3:21 PM Miha Verlic  wrote:

> Hello,
>
> I have a few questions regarding migration of HostedEngine. Currently I
> have a cluster of 3 oVirt 4.3.3 nodes, all three of them are capable of
> running HE and I can freely migrate HostedEngine and regular VMs between
> them. However I deployed HostedEngine storage on rather edgy NFS server
> and I would like to migrate it to iSCSI based storage with multipathing.
> Quite a few VMs are already running on cluster and are using iSCSI data
> storage.
>
> Documentation is rather chaotic and fragmented, but from what I gathered
> the path of migration is something like:
>
> - place one host (#1), the "failover" host, into maintenance mode prior
> to backup
> - export configuration with engine-backup
> - set global maintenance mode on all hosts
>

Hi,
fine up to here; then:
- copy the backup file to the host you are going to use for the restore
- run something like: hosted-engine --deploy
--restore-from-file=/root/engine-backup.tar.gz
- when the tool asks about the HE storage domain, provide the details to
create a new empty one
- once done, connect to the engine, set host 2 into maintenance mode and
reinstall it from the engine, choosing to redeploy hosted-engine
- do the same for host 3
- at the end, the previous hosted-engine storage domain will still be
visible (but renamed) in the engine; migrate any other VM disks created
there out of it, and once it is empty you can delete it


> - install ovirt engine on that host (#1) (already installed, since this
> is HE capable host)
> - restore engine configuration using engine-backup
> - run engine-setup with new parameters regarding storage
> - after engine-setup, log into admin portal and remove old host (#1)
> - redeploy hosts #2 and #3
>
> Last two steps are a bit confusing as I'm not sure how removing old
> failover host on which new HE is running would work. Also not
> understanding the part where hosts 2 and 3 are described as
> unrecoverable (but with running VMs, which I'd have to live migrate to
> other hosts - how, if they're not operational?).
>
> Few other things:
>
> - Should I first remove & re-add host #1 without HE already deployed on
> host?
>
> - Should I set global maintenance mode on all hosts before migration?
> I'm guessing this is required if I want to prevent HE being started on
> random host during transition...
>
> - Which host should be selected as SPM during the transition phase?
>
> - How can I configure iSCSI multipathing? Self-hosted engine
> documentation mentions Multipath Helper tool, however I cannot find any
> info about it. Is this tool freely available or only a part of RHEL
> subscription?
>
> - Can I configure existing iSCSI Domain which already hosts some VMs as
> HE storage? Or do I have to assign extra LUN/target exclusively for HE?
>
> Cheers
> --
> Miha


-- 

Simone Tiraboschi

He / Him / His

Principal Software Engineer

Red Hat <https://www.redhat.com/>

stira...@redhat.com
@redhatjobs <https://twitter.com/redhatjobs>   redhatjobs
<https://www.facebook.com/redhatjobs> @redhatjobs
<https://instagram.com/redhatjobs>
<https://red.ht/sig>
<https://redhat.com/summit>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BRIUJ2LM3LRNNL6QDDLZXTKEA5XIO5BG/


[ovirt-users] Re: HostedEngine Deployment fails activating NFS Storage Domain hosted_storage via GUI Step 4

2019-05-17 Thread Simone Tiraboschi
 lockspace add failure', 'No such device'))]]'
> 2019-05-17 09:03:59,134-04 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand]
> (default task-1) [2091c1ca] HostName = host-93.home.local
> 2019-05-17 09:03:59,134-04 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand]
> (default task-1) [2091c1ca] Command 'CreateStoragePoolVDSCommand(HostName =
> host-93.home.local,
> CreateStoragePoolVDSCommandParameters:{hostId='7f7408f3-5558-4f9f-81f8-fa5c3f10c3f9',
> storagePoolId='adf59b7a-78a1-11e9-82af-00163e729513',
> storagePoolName='Default',
> masterDomainId='e85f74bd-5e43-4a8c-8158-eb4696e041bc',
> domainsIdList='[e85f74bd-5e43-4a8c-8158-eb4696e041bc]
> ', masterVersion='2'})' execution failed: VDSGenericException:
> VDSErrorException: Failed to CreateStoragePoolVDS, error = Cannot acquire
> host id: (u'e85f74bd-5e43-4a8c-8158-eb4696e041bc', SanlockException(19,
> 'Sanlock lockspace add failure', 'No such device')), code = 661
> 2019-05-17 09:03:59,134-04 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand]
> (default task-1) [2091c1ca] FINISH, CreateStoragePoolVDSCommand, return: ,
> log id: 58aabe3d
> 2019-05-17 09:03:59,135-04 ERROR
> [org.ovirt.engine.core.bll.storage.pool.AddStoragePoolWithStoragesCommand]
> (default task-1) [2091c1ca] Command
> 'org.ovirt.engine.core.bll.storage.pool.AddStoragePoolWithStoragesCommand'
> failed: EngineException:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to CreateStoragePoolVDS,
> error = Cannot acquire host id: (u'e85f74bd-5e43-4a8c-8158-eb4696e041bc',
> SanlockException(19, 'San
> lock lockspace add failure', 'No such device')), code = 661 (Failed with
> error AcquireHostIdFailure and code 661)
> 2019-05-17 09:03:59,137-04 INFO
> [org.ovirt.engine.core.bll.CommandCompensator] (default task-1) [2091c1ca]
> Command [id=fc08e1d1-9383-4353-b65b-b50121596fba]: Compensating
> DELETED_OR_UPDATED_ENTITY of
> org.ovirt.engine.core.common.businessentities.StoragePool; snapshot:
> id=adf59b7a-78a1-11e9-82af-00163e729513.
> 2019-05-17 09:03:59,139-04 INFO
> [org.ovirt.engine.core.bll.CommandCompensator] (default task-1) [2091c1ca]
> Command [id=fc08e1d1-9383-4353-b65b-b50121596fba]: Compensating
> NEW_ENTITY_ID of
> org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; snapshot:
> StoragePoolIsoMapId:{storagePoolId='adf59b7a-78a1-11e9-82af-00163e729513',
> storageId='e85f74bd-5e43-4a8c-8158-eb4696e041bc'}.
> 2019-05-17 09:03:59,142-04 INFO
> [org.ovirt.engine.core.bll.CommandCompensator] (default task-1) [2091c1ca]
> Command [id=fc08e1d1-9383-4353-b65b-b50121596fba]: Compensating
> DELETED_OR_UPDATED_ENTITY of
> org.ovirt.engine.core.common.businessentities.StorageDomainStatic;
> snapshot: id=e85f74bd-5e43-4a8c-8158-eb4696e041bc.
> 2019-05-17 09:03:59,173-04 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-1) [2091c1ca] EVENT_ID:
> USER_ATTACH_STORAGE_DOMAINS_TO_POOL_FAILED(1,003), Failed to attach Storage
> Domains to Data Center Default. (User: admin@internal-authz)
> 2019-05-17 09:03:59,179-04 INFO
> [org.ovirt.engine.core.bll.storage.pool.AddStoragePoolWithStoragesCommand]
> (default task-1) [2091c1ca] Lock freed to object
> 'EngineLock:{exclusiveLocks='[e85f74bd-5e43-4a8c-8158-eb4696e041bc=STORAGE]',
> sharedLocks=''}'
> 2019-05-17 09:03:59,192-04 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-1) [2091c1ca] EVENT_ID:
> USER_ATTACH_STORAGE_DOMAIN_TO_POOL_FAILED(963), Failed to attach Storage
> Domain hosted_storage to Data Center Default. (User: admin@internal-authz)
> 2019-05-17 09:03:59,198-04 WARN
> [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (default task-1)
> [2091c1ca] Trying to release exclusive lock which does not exist, lock key:
> 'e85f74bd-5e43-4a8c-8158-eb4696e041bcSTORAGE'
> 2019-05-17 09:03:59,198-04 INFO
> [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand]
> (default task-1) [2091c1ca] Lock freed to object
> 'EngineLock:{exclusiveLocks='[e85f74bd-5e43-4a8c-8158-eb4696e041bc=STORAGE]',
> sharedLocks=''}'
> 2019-05-17 09:03:59,199-04 ERROR
> [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default
> task-1) [] Operation Failed: []

[ovirt-users] Re: HostedEngine Deployment fails activating NFS Storage Domain hosted_storage via GUI Step 4

2019-05-17 Thread Simone Tiraboschi
On Fri, May 17, 2019 at 4:10 PM  wrote:

> I am having a similar problem - upgrade from 4.2.7 to 4.3.3 ... My Data
> Center would not activate, and I was getting all sorts of errors on the UI.
> I ended up shutting down the existing engine VM using --shutdown-vm ...
> trying to restart it, the console would report that the volume could not be
> found and would not start. ugh.
>
> so, ssh into another host of the cluster, same thing. grr.  --deploy goes
> through most of the settings up until what I am assuming is probably step 4
> on the UI ... errors at activating storage domain. I created a separate nfs
> share so that I can hopefully import my machines that are still limping
> along, since I haven't rebooted the fileserver
>
>
> 2019-05-17 09:03:59,698-0400 DEBUG var changed: host "localhost" var
> "otopi_storage_domain_details" type "" value: "{
> "changed": false,
> "exception": "Traceback (most recent call last):\n  File
> \"/tmp/ansible_ovirt_storage_domain_payload_4b8QQJ/__main__.py\", line 664,
> in main\nstorage_domains_module.post_create_check(sd_id)\n  File
> \"/tmp/ansible_ovirt_storage_domain_payload_4b8QQJ/__main__.py\", line 526,
> in post_create_check\nid=storage_domain.id,\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py\", line 3053, in
> add\nreturn self._internal_add(storage_domain, headers, query, wait)\n
> File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 232,
> in _internal_add\nreturn future.wait() if wait else future\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 55, in
> wait\nreturn self._code(response)\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 229, in
> callback\nself._check_fault(response)\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 132, in
> _check_fault\nself._raise_error(response
>  , body)\n  File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 118, in
> _raise_error\nraise error\nError: Fault reason is \"Operation Failed\".
> Fault detail is \"[]\". HTTP response code is 400.\n",
> "failed": true,
> "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[]\".
> HTTP response code is 400."
> }"
>
>
> 400 - Bad request?
>

I'd suggest checking engine.log on the bootstrap engine VM; unfortunately
engine error responses are not always that explicit.
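Since the REST fault body is empty, filtering engine.log for ERROR lines is usually the quickest way to find the real cause behind a bare HTTP 400. A small sketch with a made-up log excerpt (on a real deployment, read /var/log/ovirt-engine/engine.log on the engine VM instead):

```python
# Hypothetical two-line engine.log excerpt; real logs are far longer.
log = """\
2019-05-17 09:03:59,134-04 INFO  CreateStoragePoolVDSCommand started
2019-05-17 09:03:59,134-04 ERROR Cannot acquire host id: SanlockException(19, 'Sanlock lockspace add failure', 'No such device')
"""

# Keep only ERROR lines: these usually carry the actual cause behind a
# bare "Operation Failed" / HTTP 400 returned through the REST API.
errors = [line for line in log.splitlines() if " ERROR " in line]
for line in errors:
    print(line)
```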




-- 

Simone Tiraboschi

He / Him / His

Principal Software Engineer

Red Hat <https://www.redhat.com/>

stira...@redhat.com
@redhatjobs <https://twitter.com/redhatjobs>   redhatjobs
<https://www.facebook.com/redhatjobs> @redhatjobs
<https://instagram.com/redhatjobs>
<https://red.ht/sig>
<https://redhat.com/summit>


[ovirt-users] Re: Unable to get HE up after update

2019-05-15 Thread Simone Tiraboschi
On Mon, Oct 10, 2016 at 11:27 AM, Susinthiran Sithamparanathan <
chesu...@gmail.com> wrote:

> Hi,
> all the logs are now at https://my.owndrive.com/index.
> php/s/3Dcyho9bqo7oZs8
>
> I did a quick debug in the VM and I think we are getting closer to the
> root cause:
> https://paste.fedoraproject.org/447579/14760912/
>
> It seems the SSL/TLS certs are all missing. Now I wonder which RPM package
> contains these so that I can try to reinstall it.
>
> Appreciate your help so far!
>
>
OK, the issue on the host is just here:
MainThread::DEBUG::2016-10-10
11:18:34,169::brokerlink::282::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(_communicate)
Full response: success {"reason": "failed liveliness check", "health":
"bad", "vm": "up", "detail": "up"}
MainThread::DEBUG::2016-10-10
11:18:34,169::brokerlink::255::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(_checked_communicate)
Successful response from socket
MainThread::DEBUG::2016-10-10
11:18:34,170::brokerlink::151::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(get_monitor_status)
Success, status {"reason": "failed liveliness check", "health": "bad",
"vm": "up", "detail": "up"}

the engine VM comes up but the engine itself does not, so after a certain
amount of time the agent tries again with a reboot.
We should definitely add a more explicit log entry there!

Now the point is just why your engine is not starting.
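The status payload quoted above is plain JSON. A minimal sketch (the helper name is mine, not part of ovirt-hosted-engine-ha) of how such a payload distinguishes "VM up" from "engine up":

```python
import json

# Broker monitor response as logged above: the hypervisor reports the VM
# as running, but the engine's HTTP liveliness probe keeps failing.
payload = '{"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "up"}'

def engine_alive(raw):
    # The engine counts as alive only when the liveliness check passes,
    # i.e. health is "good"; "vm": "up" alone is not enough.
    status = json.loads(raw)
    return status["vm"] == "up" and status["health"] == "good"

print(engine_alive(payload))  # False: the VM booted, the engine is not answering
```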



>
>
>
> On Mon, Oct 10, 2016 at 10:17 AM, Simone Tiraboschi 
> wrote:
>
>>
>>
>> On Mon, Oct 10, 2016 at 10:13 AM, Yedidyah Bar David 
>> wrote:
>>
>>> On Mon, Oct 10, 2016 at 10:56 AM, Simone Tiraboschi 
>>> wrote:
>>> >
>>> >
>>> > On Sun, Oct 9, 2016 at 5:13 PM, Susinthiran Sithamparanathan
>>> >  wrote:
>>> >>
>>> >> Sure, here it is: https://my.owndrive.com/index.php/s/MFoFyKJVLjzezey
>>> >>
>>> >
>>> > The agent is periodically restarting the engine VM but from the logs I
>>> don't
>>> > see why.
>>>
>>> Also it keeps doing:
>>>
>>>
>> Yes, this is fine: by design ovirt-ha-agent periodically (about 30-40
>> seconds) reconnects the hosted-engine storage domain.
>>
>>
>>> MainThread::INFO::2016-10-09
>>> 17:06:01,025::hosted_engine::612::ovirt_hosted_engine_ha.age
>>> nt.hosted_engine.HostedEngine::(_initialize_vdsm)
>>> Initializing VDSM
>>> MainThread::INFO::2016-10-09
>>> 17:06:05,118::hosted_engine::639::ovirt_hosted_engine_ha.age
>>> nt.hosted_engine.HostedEngine::(_initialize_storage_images)
>>> Connecting the storage
>>> MainThread::INFO::2016-10-09
>>> 17:06:05,131::storage_server::218::ovirt_hosted_engine_ha.li
>>> b.storage_server.StorageServer::(connect_storage_server)
>>> Connecting storage server
>>> MainThread::INFO::2016-10-09
>>> 17:06:13,459::storage_server::225::ovirt_hosted_engine_ha.li
>>> b.storage_server.StorageServer::(connect_storage_server)
>>> Connecting storage server
>>> MainThread::INFO::2016-10-09
>>> 17:06:13,496::storage_server::232::ovirt_hosted_engine_ha.li
>>> b.storage_server.StorageServer::(connect_storage_server)
>>> Refreshing the storage domain
>>> MainThread::INFO::2016-10-09
>>> 17:06:13,737::hosted_engine::666::ovirt_hosted_engine_ha.age
>>> nt.hosted_engine.HostedEngine::(_initialize_storage_images)
>>> Preparing images
>>> MainThread::INFO::2016-10-09
>>> 17:06:13,737::image::126::ovirt_hosted_engine_ha.lib.image.I
>>> mage::(prepare_images)
>>> Preparing images
>>>
>>> Does this make sense, Simone?
>>>
>>> Please check/share also /var/log/vdsm/* . Thanks.
>>>
>>> > Can you please set the agent in debug mode and share again its logs?
>>> >
>>> > You have to edit /etc/ovirt-hosted-engine-ha/agent-log.conf changing
>>> from
>>> >
>>> > [logger_root]
>>> > level=INFO
>>> >
>>> > to
>>> > [logger_root]
>>> > level=DEBUG
>>> >
>>> > and then restart ovirt-ha-agent.
>>> >
>>> >
>>> >>
>>> >> On Sun, Oct 9, 2016 at 3:19 PM, Doron Fediuck 
>>> wrote:
>>> >>>
>>> >>> Can you please provide the HA agent logs?
>>> >>>
>>

[ovirt-users] Re: Unable to get HE up after update

2019-05-15 Thread Simone Tiraboschi
On Mon, Oct 10, 2016 at 11:40 AM, Simone Tiraboschi 
wrote:

>
>
> On Mon, Oct 10, 2016 at 11:27 AM, Susinthiran Sithamparanathan <
> chesu...@gmail.com> wrote:
>
>> Hi,
>> all the logs are now at https://my.owndrive.com/index.
>> php/s/3Dcyho9bqo7oZs8
>>
>> I did a quick debug in the VM and I think we are getting closer to the
>> root cause:
>> https://paste.fedoraproject.org/447579/14760912/
>>
>> It seems the SSL/TLS certs are all missing. Now I wonder which RPM
>> package contains these so that I can try to reinstall it.
>>
>> Appreciate your help so far!
>>
>>
> OK, the issue on the host is just here:
> MainThread::DEBUG::2016-10-10 11:18:34,169::brokerlink::282:
> :ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(_communicate) Full
> response: success {"reason": "failed liveliness check", "health": "bad",
> "vm": "up", "detail": "up"}
> MainThread::DEBUG::2016-10-10 11:18:34,169::brokerlink::255:
> :ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(_checked_communicate)
> Successful response from socket
> MainThread::DEBUG::2016-10-10 11:18:34,170::brokerlink::151:
> :ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(get_monitor_status)
> Success, status {"reason": "failed liveliness check", "health": "bad",
> "vm": "up", "detail": "up"}
>
> the engine VM comes up but the engine itself does not, so after a certain
> amount of time the agent tries again with a reboot.
> We should definitely add a more explicit log entry there!
>
> Now the point is just why your engine is not starting.
>

Can you please upload your engine-setup logs from the engine VM?



>
>
>
>>
>>
>>
>> On Mon, Oct 10, 2016 at 10:17 AM, Simone Tiraboschi 
>> wrote:
>>
>>>
>>>
>>> On Mon, Oct 10, 2016 at 10:13 AM, Yedidyah Bar David 
>>> wrote:
>>>
>>>> On Mon, Oct 10, 2016 at 10:56 AM, Simone Tiraboschi <
>>>> stira...@redhat.com> wrote:
>>>> >
>>>> >
>>>> > On Sun, Oct 9, 2016 at 5:13 PM, Susinthiran Sithamparanathan
>>>> >  wrote:
>>>> >>
>>>> >> Sure, here it is: https://my.owndrive.com/index.
>>>> php/s/MFoFyKJVLjzezey
>>>> >>
>>>> >
>>>> > The agent is periodically restarting the engine VM but from the logs
>>>> I don't
>>>> > see why.
>>>>
>>>> Also it keeps doing:
>>>>
>>>>
>>> Yes, this is fine: by design ovirt-ha-agent periodically (about 30-40
>>> seconds) reconnects the hosted-engine storage domain.
>>>
>>>
>>>> MainThread::INFO::2016-10-09
>>>> 17:06:01,025::hosted_engine::612::ovirt_hosted_engine_ha.age
>>>> nt.hosted_engine.HostedEngine::(_initialize_vdsm)
>>>> Initializing VDSM
>>>> MainThread::INFO::2016-10-09
>>>> 17:06:05,118::hosted_engine::639::ovirt_hosted_engine_ha.age
>>>> nt.hosted_engine.HostedEngine::(_initialize_storage_images)
>>>> Connecting the storage
>>>> MainThread::INFO::2016-10-09
>>>> 17:06:05,131::storage_server::218::ovirt_hosted_engine_ha.li
>>>> b.storage_server.StorageServer::(connect_storage_server)
>>>> Connecting storage server
>>>> MainThread::INFO::2016-10-09
>>>> 17:06:13,459::storage_server::225::ovirt_hosted_engine_ha.li
>>>> b.storage_server.StorageServer::(connect_storage_server)
>>>> Connecting storage server
>>>> MainThread::INFO::2016-10-09
>>>> 17:06:13,496::storage_server::232::ovirt_hosted_engine_ha.li
>>>> b.storage_server.StorageServer::(connect_storage_server)
>>>> Refreshing the storage domain
>>>> MainThread::INFO::2016-10-09
>>>> 17:06:13,737::hosted_engine::666::ovirt_hosted_engine_ha.age
>>>> nt.hosted_engine.HostedEngine::(_initialize_storage_images)
>>>> Preparing images
>>>> MainThread::INFO::2016-10-09
>>>> 17:06:13,737::image::126::ovirt_hosted_engine_ha.lib.image.I
>>>> mage::(prepare_images)
>>>> Preparing images
>>>>
>>>> Does this make sense, Simone?
>>>>
>>>> Please check/share also /var/log/vdsm/* . Thanks.
>>>>
>>>> > Can you please set the agent in debug mode and share again its logs?
>>>> >
>>>> > You have to edi

[ovirt-users] Re: change broker.conf configuration on shared storage

2019-05-15 Thread Simone Tiraboschi
On Fri, Oct 7, 2016 at 10:52 AM,  wrote:

> Simon, that works, thanks!  Wish list: edit the configuration from the
> web UI ;-)
>
>
I think we already have an open RFE for that, not sure about its priority.


> --
> Emanuel
>
>
>
> Von:Simone Tiraboschi 
> An:emanuel.santosvar...@mahle.com,
> Kopie:users 
> Datum:06.10.2016 12:17
> Betreff:Re: [ovirt-users] change broker.conf configuration on
> shared storage
> --
>
>
>
>
>
> On Wed, Oct 5, 2016 at 4:29 PM, <*emanuel.santosvar...@mahle.com*
> > wrote:
> hi all,
>
> hmm, broker.conf is now on the shared storage and not replicated on each
> host local file system. how to make changes to the conf, e.g.  the notify
> key?
>
> You can use something like what is documented here:
> *https://bugzilla.redhat.com/show_bug.cgi?id=1366879#c23*
> <https://bugzilla.redhat.com/show_bug.cgi?id=1366879#c23>
>
> Instead of fixing fhanswers.conf as in that script, you have to fix
> broker.conf
>
>
>
> thanks, emanuel
>
>
>
>
>



[ovirt-users] Re: ovirt-ha-agent cpu usage

2019-05-15 Thread Simone Tiraboschi
On Fri, Oct 7, 2016 at 4:02 PM, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:

>
> On 7 Oct 2016, at 15:28, Simone Tiraboschi  wrote:
>
>
>
> On Fri, Oct 7, 2016 at 3:25 PM, Michal Skrivanek <
> michal.skriva...@redhat.com> wrote:
>
>>
>> On 7 Oct 2016, at 14:59, Nir Soffer  wrote:
>>
>> On Fri, Oct 7, 2016 at 3:52 PM, Michal Skrivanek <
>> michal.skriva...@redhat.com> wrote:
>>
>>>
>>> On 7 Oct 2016, at 14:42, Nir Soffer  wrote:
>>>
>>> On Wed, Oct 5, 2016 at 1:33 PM, Simone Tiraboschi 
>>> wrote:
>>>
>>>>
>>>>
>>>> On Wed, Oct 5, 2016 at 10:34 AM, Nir Soffer  wrote:
>>>>
>>>>> On Wed, Oct 5, 2016 at 10:24 AM, Simone Tiraboschi <
>>>>> stira...@redhat.com> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Wed, Oct 5, 2016 at 9:17 AM, gregor 
>>>>>> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> did you found a solution or cause for this high CPU usage?
>>>>>>> I have installed the self hosted engine on another server and there
>>>>>>> is
>>>>>>> no VM running but ovirt-ha-agent uses heavily the CPU.
>>>>>>>
>>>>>>
>>>>>> Yes, it's due to the fact that ovirt-ha-agent periodically reconnects
>>>>>> over json rpc and this is CPU intensive since the client has to parse the
>>>>>> yaml API specification each time it connects.
>>>>>>
>>>>>
>>> wasn’t it supposed to be fixed to reuse the connection? Like all the
>>> other clients (vdsm migration code:-)
>>>
>>
>> This is orthogonal issue.
>>
>>
>> Yes it is. And that’s the issue;-)
>> Both are wrong, but by “fixing” the schema validation only you lose the
>> motivation to fix the meaningless wasteful reconnect
>>
>
> Yes, we are going to fix that too ( https://bugzilla.redhat.com/
> show_bug.cgi?id=1349829 )
>
>
> that’s great! Also al the other vdsClient uses?:-)
>

https://gerrit.ovirt.org/#/c/62729/


> What is that periodic one call anyway? Is there only one? Maybe we don’t
> need it so much.
>

Currently ovirt-ha-agent is periodically reconnecting the hosted-engine
storage domain and checking its status. This is already on jsonrpc.
In 4.1 all the monitoring will be moved to jsonrpc.


>
> but it would require also https://bugzilla.redhat.com/
> show_bug.cgi?id=1376843 to be fixed.
>
>
> This is less good. Well, worst case you can reconnect yourself, all you
> need is a notification when the existing connection breaks
>
>
>
>>
>>
>>
>>> Does schema validation matter then if there would be only one connection
>>> at the start up?
>>>
>>
>> Loading once does not help command line tools like vdsClient,
>> hosted-engine and
>> vdsm-tool.
>>
>>
>> none of the other tools is using json-rpc.
>>
>
> hosted-engine-setup is, and sooner or later we'll have to migrate also the
> remaining tools since xmlrpc has been deprecated with 4.0
>
>
> ok. though setup is a one-time action so it’s not an issue there
>
>
>
>>
>>
>> Nir
>>
>>
>>>
>>>
>>>>> Simone, reusing the connection is good idea anyway, but what you
>>>>> describe is
>>>>> a bug in the client library. The library does *not* need to load and
>>>>> parse the
>>>>> schema at all for sending requests to vdsm.
>>>>>
>>>>> The schema is only needed if you want to verify request parameters,
>>>>> or provide online help, these are not needed in a client library.
>>>>>
>>>>> Please file an infra bug about it.
>>>>>
>>>>
>>>> Done, https://bugzilla.redhat.com/show_bug.cgi?id=1381899
>>>>
>>>
>>> Here is a patch that should eliminate most of the problem:
>>> https://gerrit.ovirt.org/65230
>>>
>>> Would be nice if it can be tested on the system showing this problem.
>>>
>>> Cheers,
>>> Nir
>>>
>>>
>>>

[ovirt-users] Re: ovirt-ha-agent cpu usage

2019-05-15 Thread Simone Tiraboschi
On Fri, Oct 7, 2016 at 3:22 PM, Gianluca Cecchi 
wrote:

> On Fri, Oct 7, 2016 at 2:59 PM, Nir Soffer  wrote:
>
>>
>>> wasn’t it supposed to be fixed to reuse the connection? Like all the
>>> other clients (vdsm migration code:-)
>>>
>>
>> This is orthogonal issue.
>>
>>
>>> Does schema validation matter then if there would be only one connection
>>> at the start up?
>>>
>>
>> Loading once does not help command line tools like vdsClient,
>> hosted-engine and
>> vdsm-tool.
>>
>> Nir
>>
>>
>>>
>>>
> Simone, reusing the connection is good idea anyway, but what you
> describe is
> a bug in the client library. The library does *not* need to load and
> parse the
> schema at all for sending requests to vdsm.
>
> The schema is only needed if you want to verify request parameters,
> or provide online help, these are not needed in a client library.
>
> Please file an infra bug about it.
>

 Done, https://bugzilla.redhat.com/show_bug.cgi?id=1381899

>>>
>>> Here is a patch that should eliminate most of the problem:
>>> https://gerrit.ovirt.org/65230
>>>
>>> Would be nice if it can be tested on the system showing this problem.
>>>
>>> Cheers,
>>> Nir
>>> ___
>>>
>>>
>
> this is a video of 1 minute with the same system as the first post, but in
> 4.0.3 now and the same 3 VMs powered on without any particular load.
> It seems very similar to the previous 3.6.6 in cpu used by ovirt-ha-agent.
>
> https://drive.google.com/file/d/0BwoPbcrMv8mvSjFDUERzV1owTG8/
> view?usp=sharing
>
> Enjoy Nir ;-)
>
> If I can apply the patch also to 4.0.3 I'm going to see if there is then a
> different behavior.
> Let me know,
>
>
I'm trying it right now.
Any other tests will be really appreciated.

The patch is pretty simple, so you can apply it on the fly.
Shut down ovirt-ha-broker and ovirt-ha-agent, then directly edit
/usr/lib/python2.7/site-packages/api/vdsmapi.py
around line 97 changing from
loaded_schema = yaml.load(f)
to
loaded_schema = yaml.load(f, Loader=yaml.CLoader)
Please pay attention to keep exactly the same indentation.

Then you can simply restart the HA agent and check.
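Applied to a standalone PyYAML call, the change looks like this (a sketch, assuming PyYAML is installed as on any oVirt host; the patch above uses yaml.CLoader, while this sketch uses the safe variant and falls back to the pure-Python loader when the libyaml C bindings are absent):

```python
import yaml  # PyYAML; assumed installed

schema_text = "name: vdsm-api\nversion: 1\n"

# yaml.CLoader / yaml.CSafeLoader are backed by libyaml and parse large
# documents (such as the vdsm API schema) far faster than the pure-Python
# loader; fall back gracefully if PyYAML was built without the C extension.
Loader = getattr(yaml, "CSafeLoader", yaml.SafeLoader)
loaded_schema = yaml.load(schema_text, Loader=Loader)
print(loaded_schema)  # {'name': 'vdsm-api', 'version': 1}
```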

Gianluca
>
>
>



[ovirt-users] Re: change broker.conf configuration on shared storage

2019-05-15 Thread Simone Tiraboschi
On Wed, Oct 5, 2016 at 4:29 PM,  wrote:

> hi all,
>
> hmm, broker.conf is now on the shared storage and not replicated on each
> host local file system. how to make changes to the conf, e.g.  the notify
> key?
>

You can use something like what is documented here:
https://bugzilla.redhat.com/show_bug.cgi?id=1366879#c23

Instead of fixing fhanswers.conf as in that script, you have to fix
broker.conf
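The shared configuration lives in a tar archive on the hosted-engine storage domain; the mechanics of the fix can be sketched locally like this (a sketch only: the [notify] values are illustrative, and on a real setup the archive must first be copied off, and written back to, the configuration volume as the script in that bug comment does):

```python
import configparser
import io
import tarfile

# Hypothetical local copy of the hosted-engine configuration archive.
ARCHIVE = "he_conf.tar"

def build_sample_archive(path):
    # Create a stand-in archive containing a minimal broker.conf.
    data = b"[notify]\nsmtp-server = localhost\ndestination-emails = root@localhost\n"
    with tarfile.open(path, "w") as tar:
        info = tarfile.TarInfo("broker.conf")
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

def set_notify_key(path, key, value):
    # Extract broker.conf, change one key in the [notify] section, repack.
    with tarfile.open(path, "r") as tar:
        members = {m.name: tar.extractfile(m).read() for m in tar if m.isfile()}
    cfg = configparser.ConfigParser()
    cfg.read_string(members["broker.conf"].decode())
    cfg.set("notify", key, value)
    out = io.StringIO()
    cfg.write(out)
    members["broker.conf"] = out.getvalue().encode()
    with tarfile.open(path, "w") as tar:
        for name, data in members.items():
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))

build_sample_archive(ARCHIVE)
set_notify_key(ARCHIVE, "smtp-server", "mail.example.com")
```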



>
> thanks, emanuel
>
>
>
>



[ovirt-users] Re: unable to upgrade from ovirt 3.6 to 4.0.4

2019-05-15 Thread Simone Tiraboschi
Hi Alon,
you only need the backup/restore procedure if you have to change the
OS of the server where you installed your engine.
If it was already on el7 you don't need that.

In any case, the upgrade to 4.0 requires that all your clusters and all
your data centers are at 3.6 compatibility level before the upgrade.
From the error you got, it seems that you still have something at 3.5.
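A toy pre-upgrade check capturing that rule (cluster names and levels below are made up; the real check is performed by engine-setup against the engine database):

```python
# Every cluster and data center must already be at the 3.6 compatibility
# level before the upgrade to 4.0 is allowed to proceed.
REQUIRED = (3, 6)

def parse_level(level):
    # Compatibility levels are "major.minor" strings, e.g. "3.6".
    major, minor = level.split(".")
    return int(major), int(minor)

def blockers(entities):
    # Return the names of entities still below the required level.
    return [name for name, level in entities.items()
            if parse_level(level) < REQUIRED]

clusters = {"Default": "3.6", "legacy": "3.5"}
print(blockers(clusters))  # the 3.5 entity blocks the upgrade
```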




On Thu, Oct 6, 2016 at 11:31 AM, Alon Dotan  wrote:

> Hey all,
> Im trying to upgrade my ovirt installation,
> CentOS 7.2 fully updated
>
> engine version: oVirt Engine Version: 3.6.7.5-1.el7.centos
>
> hosts version:
> OS Version:
> RHEL - 7 - 2.1511.el7.centos.2.10
> Kernel Version:
> 3.10.0 - 327.36.1.el7.x86_64
> KVM Version:
> 2.3.0 - 31.el7_2.10.1
> LIBVIRT Version:
> libvirt-1.2.17-13.el7_2.5
> VDSM Version:
> vdsm-4.17.32-1.el7
> SPICE Version:
> 0.12.4 - 15.el7_2.2
> GlusterFS Version:
> [N/A]
> CEPH Version:
>
>
> got the following error after deploying the backup tar (following this
> guide http://www.ovirt.org/documentation/migration-engine-3.6-to-4.0/)
>
> Failed to execute stage 'Setup validation': Trying to upgrade from
> unsupported versions: 3.5
>
> attaching the engine-setup answer and log
>
> Thanks,
>
>
>
>



[ovirt-users] Re: Correct recovery procedure of the oVirt Hosted Engine 4.0

2019-05-14 Thread Simone Tiraboschi
On Wed, Oct 5, 2016 at 10:30 AM,  wrote:

> Well.
> Then, in the case of conditions:
>
> 1) the vm is not available anymore due to storage corruption
> 2) an empty shared storage is available
> 3) engine backup exists
> 4) all VMs still running on the hosts in the cluster
>
>
> The recovery plan will be like this (as I understand it):
>
>
> 1) On all the hosts (if they are still available):
>
> # service ovirt-ha-broker stop
> # service ovirt-ha-agent stop
> # chkconfig --del ovirt-ha-broker
> # chkconfig --del ovirt-ha-agent
>
>
> 2) On first host (if the original host is not available anymore, provision
> a new host from scratch and proceed on this new host):
>
>   2.1) # hosted-engine --deploy
>
>  ◾use same fqdn you had previously in the HE VM.
>  ◾point to the new shared storage
>  ◾provide the same admin password you used in previous setup
>  ◾install the OS on the vm
>

I'd suggest using the engine appliance for this as well.
You can simply answer No when it asks about automatically running engine-setup.


>  ◾confirm it has been installed
>
>  on Hosted Engine VM:
>
>   a) Install the ovirt-engine rpms on the vm but don't run engine-setup:
>   # yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release40.
> rpm
>   # yum install epel-release
>   # yum install ovirt-engine
>   b) Restore the backup:
>   # engine-backup --mode=restore --file=file_name --log=log_file_name
> --provision-db --provision-dwh-db --restore-permissions
>

In order to let the engine auto-import the new hosted-engine storage
domain, you have to remove the old one.
The same goes for the engine VM. Unfortunately you cannot do that from the
engine, since they are protected to avoid unintentional damage.
The easiest way is to remove them from the DB before running engine-setup.
I'm working on a helper utility to make it easier:
https://gerrit.ovirt.org/#/c/64966/
I think I'll integrate it with engine-backup so it can be done with an
additional CLI flag.
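To make the idea concrete, the kind of cleanup the helper automates can be sketched as a few SQL statements generated before engine-setup runs. This is only a sketch: the table names, column names, and the `hosted_storage`/`HostedEngine` names below are assumptions for illustration and may not match your engine version — the hecleaner utility linked above is the supported path.

```shell
#!/bin/sh
# Sketch only: generate the kind of cleanup SQL the helper utility automates.
# All table and column names here are ASSUMPTIONS for illustration; check
# them against your engine DB schema (or use the hecleaner utility) instead
# of running this blindly.
OLD_SD_NAME='hosted_storage'   # hypothetical old HE storage domain name
OLD_VM_NAME='HostedEngine'     # hypothetical old engine VM name

SQL=$(cat <<EOF
-- remove the old hosted-engine storage domain (assumed schema):
DELETE FROM storage_domain_dynamic
 WHERE id IN (SELECT id FROM storage_domain_static
               WHERE storage_name = '${OLD_SD_NAME}');
DELETE FROM storage_domain_static
 WHERE storage_name = '${OLD_SD_NAME}';
-- remove the old engine VM (assumed schema):
DELETE FROM vm_static WHERE vm_name = '${OLD_VM_NAME}';
EOF
)
printf '%s\n' "$SQL"
# To apply for real (NOT done here): pipe the output into psql on the
# engine database.
```

The script only prints the statements, so they can be reviewed before anything touches the database.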


>   c) Run "engine-setup"
>
>2.2) Open Administration Portal and remove the all old hosts used for
> Hosted Engine
>

Right, we can also integrate this step in the HE cleaning helper.


>
>2.3) Confirm that the engine has been installed (Return to the host and
> continue the hosted-engine deployment script by selecting option 1) and
> then finish the deploy.
>
>2.4) In Administration Portal activate new host
>
>
> 3) On all additional hosts run "hosted-engine --deploy".
>

I strongly suggest deploying them from the engine and not from the CLI.
CLI deployment support for additional HE hosts is deprecated and will be
removed in 4.1.


>
>
> Right?
>






___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TENQW55C7VVJCR4ZVIBEJ7YXHSHV2X7T/


[ovirt-users] Re: Correct recovery procedure of the oVirt Hosted Engine 4.0

2019-05-14 Thread Simone Tiraboschi
On Thu, Oct 6, 2016 at 7:32 AM,  wrote:

> Hi Simone.
> When can we expect a new version of the engine-backup with built-in
> cleaning helper?
>

That bug is targeted to 4.1


>
> 05.10.2016, 13:52, "Simone Tiraboschi" :
> > On Wed, Oct 5, 2016 at 12:40 PM,  wrote:
> >> Ouch. It is beyond my understanding.
> >>
> >> Thus, it appears that described in the RHV4 guide (
> https://access.redhat.com/documentation/en/red-hat-
> virtualization/4.0/single/self-hosted-engine-guide/#
> sect-Restoring_SHE_bkup) recovery procedure in fact incomplete?
> > Yes, you are right, although this is a kind of special case: we are
> moving/restoring to a different storage domain, whereas you are not asked
> to remove the old storage if you are restoring in place.
>






___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WUIE2ND3TPLV5Y6FTYCW4SMPVZJM3OQ6/


[ovirt-users] Re: ovirt-ha-agent cpu usage

2019-05-14 Thread Simone Tiraboschi
On Fri, Oct 7, 2016 at 3:25 PM, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:

>
> On 7 Oct 2016, at 14:59, Nir Soffer  wrote:
>
> On Fri, Oct 7, 2016 at 3:52 PM, Michal Skrivanek <
> michal.skriva...@redhat.com> wrote:
>
>>
>> On 7 Oct 2016, at 14:42, Nir Soffer  wrote:
>>
>> On Wed, Oct 5, 2016 at 1:33 PM, Simone Tiraboschi 
>> wrote:
>>
>>>
>>>
>>> On Wed, Oct 5, 2016 at 10:34 AM, Nir Soffer  wrote:
>>>
>>>> On Wed, Oct 5, 2016 at 10:24 AM, Simone Tiraboschi >>> > wrote:
>>>>
>>>>>
>>>>>
>>>>> On Wed, Oct 5, 2016 at 9:17 AM, gregor  wrote:
>>>>>
>>>>>> Hi,
>>>>>>
> >>>>>> did you find a solution or cause for this high CPU usage?
> >>>>>> I have installed the self-hosted engine on another server and there is
> >>>>>> no VM running, but ovirt-ha-agent heavily uses the CPU.
>>>>>>
>>>>>
> >>>>> Yes, it's due to the fact that ovirt-ha-agent periodically reconnects
> >>>>> over JSON-RPC, and this is CPU-intensive since the client has to parse
> >>>>> the YAML API specification each time it connects.
>>>>>
>>>>
>> wasn’t it supposed to be fixed to reuse the connection? Like all the other
>> clients (vdsm migration code :-)
>>
>
> This is orthogonal issue.
>
>
> Yes it is. And that’s the issue ;-)
> Both are wrong, but by “fixing” only the schema validation you lose the
> motivation to fix the meaninglessly wasteful reconnect.
>

Yes, we are going to fix that too (
https://bugzilla.redhat.com/show_bug.cgi?id=1349829 ), but it would also
require https://bugzilla.redhat.com/show_bug.cgi?id=1376843 to be fixed.
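The "parse once, then reuse" idea behind those two bugs can be illustrated with a tiny generic cache: pay the expensive cost (a stand-in here for parsing the YAML API spec) only when the source file changes, and reuse the cached result on every later call. This is a generic sketch, not vdsm code; the file names and the grep-based "parse" are made up.

```shell
#!/bin/sh
# Generic sketch of the "parse once, then reuse" fix: redo the expensive
# step (here a stand-in for parsing the YAML API spec) only when the
# schema file changes. Not vdsm code; file names are made up.
SCHEMA=/tmp/schema.yml
CACHE=/tmp/schema.cache
printf 'method: ping\nmethod: getStats\n' > "$SCHEMA"
rm -f "$CACHE"                      # start from a cold cache

parse_schema() {
    if [ ! -f "$CACHE" ] || [ "$SCHEMA" -nt "$CACHE" ]; then
        grep -c '^method:' "$SCHEMA" > "$CACHE"   # the "expensive" parse
        echo "parsed"
    else
        echo "cache hit"
    fi
}

parse_schema   # first call: pays the parsing cost
parse_schema   # later calls: reuse the cached result
```

The same reasoning applies to the connection itself: keeping one open client instead of reconnecting per poll removes the repeated setup cost entirely.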


>
>
>
>> Does schema validation matter then if there would be only one connection
>> at the start up?
>>
>
> Loading once does not help command line tools like vdsClient,
> hosted-engine and
> vdsm-tool.
>
>
> none of the other tools is using json-rpc.
>

hosted-engine-setup is, and sooner or later we'll also have to migrate the
remaining tools, since xmlrpc has been deprecated since 4.0.


>
>
> Nir
>
>
>>
>>
>>>> Simone, reusing the connection is a good idea anyway, but what you
>>>> describe is a bug in the client library. The library does *not* need to
>>>> load and parse the schema at all for sending requests to vdsm.
>>>>
>>>> The schema is only needed if you want to verify request parameters,
>>>> or provide online help, these are not needed in a client library.
>>>>
>>>> Please file an infra bug about it.
>>>>
>>>
>>> Done, https://bugzilla.redhat.com/show_bug.cgi?id=1381899
>>>
>>
>> Here is a patch that should eliminate most most of the problem:
>> https://gerrit.ovirt.org/65230
>>
>> Would be nice if it can be tested on the system showing this problem.
>>
>> Cheers,
>> Nir
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>






___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/C45E4PS6WM6UPOJHATKPVSAS57EAGH35/


[ovirt-users] Re: Ovirt setup

2019-05-14 Thread Simone Tiraboschi
On Wed, Oct 5, 2016 at 9:51 PM, Bryan Sockel  wrote:

> Hi,
>
> I am getting an error attempting to install ovirt on a pair of bonded
> nics.  The error that we are getting is Cannot acquire nic/bridge address.
>

Are you trying a hosted-engine deployment?
Can you please attach your logs?


>
> We typically run all our servers with an active/backup setup.
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>






___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NWJGQRHGCEGFP2Q3DVNSRN5JNN2KURR2/


[ovirt-users] Re: change broker.conf configuration on shared storage

2019-05-14 Thread Simone Tiraboschi
On Fri, Oct 7, 2016 at 11:12 AM, Gianluca Cecchi 
wrote:

>
> On Fri, Oct 7, 2016 at 11:04 AM, Simone Tiraboschi 
> wrote:
>
>>
>>
>> On Fri, Oct 7, 2016 at 10:52 AM,  wrote:
>>
>>> Simone, that works, thanks!  Wish list: edit the configuration from the
>>> web UI ;-)
>>>
>>>
>> I think we already have an open RFE for that, not sure about its priority.
>>
>>
>>> --
>>
>>
>
> I think there is this one, that was related to SMTP server settings change:
> https://bugzilla.redhat.com/show_bug.cgi?id=1301681
>

Yes, exactly. Thanks Gianluca.
I also added the small replace script there as a reference.
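As a rough idea of what such a replace script does: in a real hosted-engine setup broker.conf lives inside the HE configuration archive on the shared storage, so it has to be extracted and written back with "dd | tar"; the sketch below skips that part and just edits a local sample, and the section/key names are illustrative assumptions rather than a guaranteed match for your broker.conf.

```shell
#!/bin/sh
# Sketch: update one key in broker.conf. In a real hosted-engine setup the
# file lives inside the HE configuration archive on the shared storage and
# has to be extracted and written back with "dd | tar"; here we just edit a
# local sample, and the section/key names are illustrative.
CONF=/tmp/broker.conf
cat > "$CONF" <<'EOF'
[email]
smtp-server=localhost
smtp-port=25
EOF

NEW_SMTP=mail.example.com            # hypothetical new SMTP server
sed "s/^smtp-server=.*/smtp-server=${NEW_SMTP}/" "$CONF" > "$CONF.new" \
    && mv "$CONF.new" "$CONF"
grep '^smtp-server=' "$CONF"
```

Writing to a temp file and moving it into place keeps the edit atomic, which matters more once the file is being pushed back into the shared-storage image.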


>
>
> Gianluca
>
>






___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3D6Z6FQQXNBYMKCS526PUH3CWJAXHQVT/


[ovirt-users] Re: ovirt-ha-agent cpu usage

2019-05-14 Thread Simone Tiraboschi
On Wed, Oct 5, 2016 at 10:34 AM, Nir Soffer  wrote:

> On Wed, Oct 5, 2016 at 10:24 AM, Simone Tiraboschi 
> wrote:
>
>>
>>
>> On Wed, Oct 5, 2016 at 9:17 AM, gregor  wrote:
>>
>>> Hi,
>>>
>>> did you find a solution or cause for this high CPU usage?
>>> I have installed the self-hosted engine on another server and there is
>>> no VM running, but ovirt-ha-agent heavily uses the CPU.
>>>
>>
>> Yes, it's due to the fact that ovirt-ha-agent periodically reconnects
>> over JSON-RPC, and this is CPU-intensive since the client has to parse the
>> YAML API specification each time it connects.
>>
>
> Simone, reusing the connection is a good idea anyway, but what you describe
> is a bug in the client library. The library does *not* need to load and
> parse the schema at all for sending requests to vdsm.
>
> The schema is only needed if you want to verify request parameters,
> or provide online help, these are not needed in a client library.
>
> Please file an infra bug about it.
>

Done, https://bugzilla.redhat.com/show_bug.cgi?id=1381899
Thanks.


> Nir
>
>
>> The issue is tracked here:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1349829 - ovirt-ha-agent
>> should reuse json-rpc connections
>> but it depends on:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1376843 - [RFE] Implement a
>> keep-alive with reconnect if needed logic for the python jsonrpc client
>>
>>
>>
>>>
>>> cheers
>>> gregor
>>>
>>> On 08/08/16 15:09, Gianluca Cecchi wrote:
>>> > On Mon, Aug 8, 2016 at 1:03 PM, Roy Golan >> > <mailto:rgo...@redhat.com>> wrote:
>>> >
>>> > Does the spikes correlates with info messages on extracting the
>>> ovf?
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > yes, it seems so and it happens every 14-15 seconds
>>> >
>>> > These are the lines I see scrolling in agent.log when I notice cpu
>>> > spikes in ovirt-ha-agent...
>>> >
>>> > MainThread::INFO::2016-08-08 15:03:07,815::storage_server::212::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server) Connecting storage server
>>> > MainThread::INFO::2016-08-08 15:03:08,144::storage_server::220::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server) Refreshing the storage domain
>>> > MainThread::INFO::2016-08-08 15:03:08,705::hosted_engine::685::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images) Preparing images
>>> > MainThread::INFO::2016-08-08 15:03:08,705::image::126::ovirt_hosted_engine_ha.lib.image.Image::(prepare_images) Preparing images
>>> > MainThread::INFO::2016-08-08 15:03:09,653::hosted_engine::688::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images) Reloading vm.conf from the shared storage domain
>>> > MainThread::INFO::2016-08-08 15:03:09,653::config::205::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file) Trying to get a fresher copy of vm configuration from the OVF_STORE
>>> > MainThread::INFO::2016-08-08 15:03:09,843::ovf_store::100::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan) Found OVF_STORE: imgUUID:223d26c2-1668-493c-a322-8054923d135f, volUUID:108a362c-f5a9-440e-8817-1ed8a129afe8
>>> > MainThread::INFO::2016-08-08 15:03:10,309::ovf_store::100::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan) Found OVF_STORE: imgUUID:12ca2fc6-01f7-41ab-ab22-e75c822ac9b6, volUUID:1a18851e-6858-401c-be6e-af14415034b5
>>> > MainThread::INFO::2016-08-08 15:03:10,652::ovf_store::109::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) Extracting Engine VM OVF from the OVF_STORE
>>> > MainThread::INFO::2016-08-08 15:03:10,974::ovf_store::116::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /rhev/data-center/mnt/ovirt01.lutwyn.org:_SHE__DOMAIN/31a9e9fd-8dcb-4475-aac4-09f897ee1b45/images/12ca2fc6

[ovirt-users] Re: Unable to find OVF_STORE after recovery / upgrade

2019-05-14 Thread Simone Tiraboschi
On Mon, Oct 3, 2016 at 6:47 PM, Sam Cappello  wrote:

> Hi,
> so I was running a 3.4 hosted-engine two-node setup on CentOS 6, had some
> disk issues, so I tried to upgrade to CentOS 7 and follow the path 3.4 > 3.5
> > 3.6 > 4.0.  I screwed up big time somewhere between 3.6 and 4.0, so I
> wiped the drives, installed a fresh 4.0.3, then created the database and
> restored the 3.6 engine backup before running engine-setup as per the
> docs.  Things seemed to work, but I have the following issues /
> symptoms:
> - ovirt-ha-agent running 100% CPU on both nodes
> - messages in the UI that the Hosted Engine storage Domain isn't active
> and Failed to import the Hosted Engine Storage Domain
> - hosted engine is not visible in the UI
> and the following repeating in the agent.log:
>
> MainThread::INFO::2016-10-03 12:38:27,718::hosted_engine::461::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 3400)
> MainThread::INFO::2016-10-03 12:38:27,720::hosted_engine::466::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host vmhost1.oracool.net (id: 1, score: 3400)
> MainThread::INFO::2016-10-03 12:38:37,979::states::421::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine vm running on localhost
> MainThread::INFO::2016-10-03 12:38:37,985::hosted_engine::612::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) Initializing VDSM
> MainThread::INFO::2016-10-03 12:38:45,645::hosted_engine::639::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images) Connecting the storage
> MainThread::INFO::2016-10-03 12:38:45,647::storage_server::219::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server) Connecting storage server
> MainThread::INFO::2016-10-03 12:39:00,543::storage_server::226::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server) Connecting storage server
> MainThread::INFO::2016-10-03 12:39:00,562::storage_server::233::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server) Refreshing the storage domain
> MainThread::INFO::2016-10-03 12:39:01,235::hosted_engine::666::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images) Preparing images
> MainThread::INFO::2016-10-03 12:39:01,236::image::126::ovirt_hosted_engine_ha.lib.image.Image::(prepare_images) Preparing images
> MainThread::INFO::2016-10-03 12:39:09,295::hosted_engine::669::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images) Reloading vm.conf from the shared storage domain
> MainThread::INFO::2016-10-03 12:39:09,296::config::206::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file) Trying to get a fresher copy of vm configuration from the OVF_STORE
> MainThread::WARNING::2016-10-03 12:39:16,928::ovf_store::107::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan) Unable to find OVF_STORE
>

The engine will automatically create it once the hosted-engine storage
domain and the engine VM have been correctly imported.


> MainThread::ERROR::2016-10-03 12:39:16,934::config::235::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file) Unable to get vm.conf from OVF_STORE, falling back to initial vm.conf
>
> I have searched a bit and not really found a solution, and have come to
> the conclusion that I have made a mess of things, and am wondering if the
> best solution is to export the VMs and reinstall everything, then import
> them back?
> I am using remote NFS storage.
> If I try to add the hosted-engine storage domain, it says it is already
> registered.
>

The best option here is to manually remove it from the DB and let the
engine import it again.
I'm working on a helper utility here, but it's still not fully tested:
https://gerrit.ovirt.org/#/c/64966/
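A quick way to tell whether an agent is still stuck in this state is to count the recent OVF_STORE warnings in agent.log. The sketch below runs against a generated sample log; on a real host you would point LOG at /var/log/ovirt-hosted-engine-ha/agent.log (an assumed default location).

```shell
#!/bin/sh
# Count "Unable to find OVF_STORE" warnings in an agent log. Runs against a
# generated sample here; on a real host point LOG at
# /var/log/ovirt-hosted-engine-ha/agent.log (assumed default location).
LOG=/tmp/agent.log
cat > "$LOG" <<'EOF'
MainThread::INFO::config Trying to get a fresher copy of vm configuration from the OVF_STORE
MainThread::WARNING::ovf_store Unable to find OVF_STORE
MainThread::WARNING::ovf_store Unable to find OVF_STORE
EOF

count=$(grep -c 'Unable to find OVF_STORE' "$LOG")
echo "OVF_STORE warnings: $count"
if [ "$count" -gt 0 ]; then
    echo "HE storage domain probably not imported yet"
fi
```

Once the storage domain has been removed from the DB and re-imported by the engine, these warnings should stop appearing.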


> I have also upgraded and am now running oVirt Engine Version:
> 4.0.4.4-1.el7.centos
> Hosts were installed using ovirt-node, currently at
> 3.10.0-327.28.3.el7.x86_64.
> If a fresh install is best, any advice / pointer to a doc that explains the
> best way to do this?
> I have not moved my most important server over to this cluster yet, so I
> can take some downtime to reinstall.
> Thanks!
> Sam
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>





[ovirt-users] Re: Correct recovery procedure of the oVirt Hosted Engine 4.0

2019-05-14 Thread Simone Tiraboschi
On Wed, Oct 5, 2016 at 12:40 PM,  wrote:

> Ouch. It is beyond my understanding.
>
> Thus, it appears that described in the RHV4 guide (
> https://access.redhat.com/documentation/en/red-hat-
> virtualization/4.0/single/self-hosted-engine-guide/#
> sect-Restoring_SHE_bkup) recovery procedure in fact incomplete?
>

Yes, you are right, although this is a kind of special case: we are
moving/restoring to a different storage domain, whereas you are not asked to
remove the old storage if you are restoring in place.






___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EJJGA6Y63IPVG42MSYXQY26GPZMLYRXW/


[ovirt-users] Re: 4.0 - 2nd node fails on deploy

2019-05-14 Thread Simone Tiraboschi
On Mon, Oct 3, 2016 at 12:45 AM, Jason Jeffrey  wrote:

> Hi,
>
>
>
> I am trying to build a x3 HC cluster, with a self hosted engine using
> gluster.
>
>
>
> I have successfully built the 1st node; however, when I attempt to run
> hosted-engine –deploy on node 2, I get the following error
>
>
>
> [WARNING] A configuration file must be supplied to deploy Hosted Engine on
> an additional host.
>
> [ ERROR ] 'version' is not stored in the HE configuration image
>
> [ ERROR ] Unable to get the answer file from the shared storage
>
> [ ERROR ] Failed to execute stage 'Environment customization': Unable to
> get the answer file from the shared storage
>
> [ INFO  ] Stage: Clean up
>
> [ INFO  ] Generating answer file '/var/lib/ovirt-hosted-engine-
> setup/answers/answers-20161002232505.conf'
>
> [ INFO  ] Stage: Pre-termination
>
> [ INFO  ] Stage: Termination
>
> [ ERROR ] Hosted Engine deployment failed
>
>
>
> Looking at the failure in the log file..
>

Can you please attach hosted-engine-setup logs from the first host?
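The check that fails below can be reproduced by hand: the HE configuration volume is a raw volume containing a tar archive, which the setup lists with dd piped into tar and then inspects for a "version" entry. The sketch builds a local stand-in archive rather than touching real storage; the file contents are placeholders.

```shell
#!/bin/sh
# Reproduce the setup's configuration-image check on a local stand-in.
# On a real host the volume would be read with something like:
#   sudo -u vdsm dd if=<HE conf volume path> bs=4k | tar -tvf -
# and must contain a "version" entry. Paths below are scratch files.
mkdir -p /tmp/heconf
cd /tmp/heconf || exit 1
echo '4.0' > version
echo '[environment:default]' > answers.conf   # placeholder content
tar -cf /tmp/heconf.img version answers.conf

# List the "image" the same way the setup does and look for 'version':
if dd if=/tmp/heconf.img bs=4k 2>/dev/null | tar -tf - | grep -qx 'version'; then
    echo "configuration image looks valid"
else
    echo "'version' is not stored in the HE configuration image"
fi
```

In the failing log below, the `tar -tvf -` listing comes back empty, which is why validateConfImage reports that 'version' is missing.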


>
>
> 2016-10-02 23:25:05 WARNING otopi.plugins.gr_he_common.core.remote_answerfile remote_answerfile._customization:151 A configuration file must be supplied to deploy Hosted Engine on an additional host.
>
> 2016-10-02 23:25:05 DEBUG otopi.plugins.gr_he_common.core.remote_answerfile remote_answerfile._fetch_answer_file:61 _fetch_answer_file
>
> 2016-10-02 23:25:05 DEBUG otopi.plugins.gr_he_common.core.remote_answerfile remote_answerfile._fetch_answer_file:69 fetching from: /rhev/data-center/mnt/glusterSD/dcastor02:engine/0a021563-91b5-4f49-9c6b-fff45e85a025/images/f055216c-02f9-4cd1-a22c-d6b56a0a8e9b/78cb2527-a2e2-489a-9fad-465a72221b37
>
> 2016-10-02 23:25:05 DEBUG otopi.plugins.gr_he_common.core.remote_answerfile heconflib._dd_pipe_tar:69 executing: 'sudo -u vdsm dd if=/rhev/data-center/mnt/glusterSD/dcastor02:engine/0a021563-91b5-4f49-9c6b-fff45e85a025/images/f055216c-02f9-4cd1-a22c-d6b56a0a8e9b/78cb2527-a2e2-489a-9fad-465a72221b37 bs=4k'
>
> 2016-10-02 23:25:05 DEBUG otopi.plugins.gr_he_common.core.remote_answerfile heconflib._dd_pipe_tar:70 executing: 'tar -tvf -'
>
> 2016-10-02 23:25:05 DEBUG otopi.plugins.gr_he_common.core.remote_answerfile heconflib._dd_pipe_tar:88 stdout:
>
> 2016-10-02 23:25:05 DEBUG otopi.plugins.gr_he_common.core.remote_answerfile heconflib._dd_pipe_tar:89 stderr:
>
> 2016-10-02 23:25:05 ERROR otopi.plugins.gr_he_common.core.remote_answerfile heconflib.validateConfImage:111 'version' is not stored in the HE configuration image
>
> 2016-10-02 23:25:05 ERROR otopi.plugins.gr_he_common.core.remote_answerfile remote_answerfile._fetch_answer_file:73 Unable to get the answer file from the shared storage
>
>
>
> Looking at the detected gluster path - /rhev/data-center/mnt/glusterSD/dcastor02:engine/0a021563-91b5-4f49-9c6b-fff45e85a025/images/f055216c-02f9-4cd1-a22c-d6b56a0a8e9b/
>
> [root@dcasrv02 ~]# ls -al /rhev/data-center/mnt/glusterSD/dcastor02:engine/0a021563-91b5-4f49-9c6b-fff45e85a025/images/f055216c-02f9-4cd1-a22c-d6b56a0a8e9b/
>
> total 1049609
>
> drwxr-xr-x. 2 vdsm kvm   4096 Oct  2 04:46 .
>
> drwxr-xr-x. 6 vdsm kvm   4096 Oct  2 04:46 ..
>
> -rw-rw. 1 vdsm kvm 1073741824 Oct  2 04:46 78cb2527-a2e2-489a-9fad-465a72221b37
>
> -rw-rw. 1 vdsm kvm    1048576 Oct  2 04:46 78cb2527-a2e2-489a-9fad-465a72221b37.lease
>
> -rw-r--r--. 1 vdsm kvm        294 Oct  2 04:46 78cb2527-a2e2-489a-9fad-465a72221b37.meta
>
>
>
>
> 78cb2527-a2e2-489a-9fad-465a72221b37 is a 1 GB file; is this the engine
> VM?
>
>
>
> Copying the answers file from the primary (/etc/ovirt-hosted-engine/answers.conf)
> to node 2 and rerunning produces the same error :(
>
> (hosted-engine --deploy  --config-append=/root/answers.conf )
>
>
>
> Also tried on node 3, same issues
>
>
>
> Happy to provide logs and other debugs
>
>
>
> Thanks
>
>
>
> Jason
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>






___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/

[ovirt-users] Re: 4.0 - 2nd node fails on deploy

2019-05-14 Thread Simone Tiraboschi
gid: 36
>
>
>
> Volume Name: iso
>
> Type: Replicate
>
> Volume ID: b2d3d7e2-9919-400b-8368-a0443d48e82a
>
> Status: Started
>
> Number of Bricks: 1 x (2 + 1) = 3
>
> Transport-type: tcp
>
> Bricks:
>
> Brick1: dcastor01:/xpool/iso/brick
>
> Brick2: dcastor02:/xpool/iso/brick
>
> Brick3: dcastor03:/xpool/iso/brick (arbiter)
>
> Options Reconfigured:
>
> performance.readdir-ahead: on
>
> storage.owner-uid: 36
>
> storage.owner-gid: 36
>
>
>
>
>
> [root@dcasrv01 fd44dbf9-473a-496a-9996-c8abe3278390]# gluster volume
> status
>
> Status of volume: data
>
> Gluster process TCP Port  RDMA Port  Online
> Pid
>
> 
> --
>
> Brick dcastor01:/xpool/data/brick   49153 0  Y
> 3076
>
> Brick dcastor03:/xpool/data/brick   49153 0  Y
> 3019
>
> Brick dcastor02:/xpool/data/bricky  49153 0  Y
> 3857
>
> NFS Server on localhost 2049  0  Y
> 3097
>
> Self-heal Daemon on localhost   N/A   N/AY
> 3088
>
> NFS Server on dcastor03 2049  0  Y
> 3039
>
> Self-heal Daemon on dcastor03   N/A   N/AY
> 3114
>
> NFS Server on dcasrv02  2049  0  Y
> 3871
>
> Self-heal Daemon on dcasrv02N/A   N/AY
> 3864
>
>
>
> Task Status of Volume data
>
> 
> --
>
> There are no active volume tasks
>
>
>
> Status of volume: engine
>
> Gluster process TCP Port  RDMA Port  Online
> Pid
>
> 
> --
>
> Brick dcastor01:/xpool/engine/brick 49152 0  Y
> 3131
>
> Brick dcastor02:/xpool/engine/brick 49152 0  Y
> 3852
>
> Brick dcastor03:/xpool/engine/brick 49152 0  Y
> 2992
>
> NFS Server on localhost 2049  0  Y
> 3097
>
> Self-heal Daemon on localhost   N/A   N/AY
> 3088
>
> NFS Server on dcastor03 2049  0  Y
> 3039
>
> Self-heal Daemon on dcastor03   N/A   N/AY
> 3114
>
> NFS Server on dcasrv02  2049  0  Y
> 3871
>
> Self-heal Daemon on dcasrv02N/A   N/AY
> 3864
>
>
>
> Task Status of Volume engine
>
> 
> --
>
> There are no active volume tasks
>
>
>
> Status of volume: export
>
> Gluster process TCP Port  RDMA Port  Online
> Pid
>
> 
> --
>
> Brick dcastor02:/xpool/export/brick 49155 0  Y
> 3872
>
> Brick dcastor03:/xpool/export/brick 49155 0  Y
> 3147
>
> Brick dcastor01:/xpool/export/brick 49155 0  Y
> 3150
>
> NFS Server on localhost 2049  0  Y
> 3097
>
> Self-heal Daemon on localhost   N/A   N/AY
> 3088
>
> NFS Server on dcastor03 2049  0  Y
> 3039
>
> Self-heal Daemon on dcastor03   N/A   N/AY
> 3114
>
> NFS Server on dcasrv02  2049  0  Y
> 3871
>
> Self-heal Daemon on dcasrv02N/A   N/AY
> 3864
>
>
>
> Task Status of Volume export
>
> 
> --
>
> There are no active volume tasks
>
>
>
> Status of volume: iso
>
> Gluster process TCP Port  RDMA Port  Online
> Pid
>
> 
> --
>
> Brick dcastor01:/xpool/iso/brick    49154 0  Y
> 3152
>
> Brick dcastor02:/xpool/iso/brick49154 0  Y
> 3881
>
> Brick dcastor03:/xpool/iso/brick49154 0  Y
> 3146
>
> NFS Server on localhost 2049  0  Y
> 3097
>
> Self-heal Daemon on localhost   N/A   N/AY
> 3088
>
> NFS Server on dcastor03 2049  0  Y
> 3039
>
> Self-heal Daemon on dcastor03   N/A   N/AY
> 3114
>
> NFS Se

[ovirt-users] Re: Correct recovery procedure of the oVirt Hosted Engine 4.0

2019-05-14 Thread Simone Tiraboschi
On Wed, Oct 5, 2016 at 11:56 AM,  wrote:

> Weird. The RHV4 guide does not contain the information that we need to clean
> the database of the old storage domain before running the command
> engine-setup.
> What specific actions do we need?
>

You can check this as a reference but take care because it's still not
fully tested:
https://gerrit.ovirt.org/#/c/64966/3/packaging/setup/dbutils/hecleaner_sp.sql


> Eventually I want to get a full recovery plan at the moment for oVirt 4.0.
>
> 05.10.2016, 12:07, "Simone Tiraboschi" :
>
>
>
> On Wed, Oct 5, 2016 at 10:30 AM,  wrote:
>
> Well.
> Then, in the case of conditions:
>
> 1) the vm is not available anymore due to storage corruption
> 2) an empty shared storage is available
> 3) engine backup exists
> 4) all VMs still running on the hosts in the cluster
>
>
> The recovery plan will be like this (as I understand it):
>
>
> 1) On all the hosts (if they are still available):
>
> # service ovirt-ha-broker stop
> # service ovirt-ha-agent stop
> # chkconfig --del ovirt-ha-broker
> # chkconfig --del ovirt-ha-agent
>
>
> 2) On first host (if the original host is not available anymore, provision
> a new host from scratch and proceed on this new host):
>
>   2.1) # hosted-engine --deploy
>
>  ◾use the same FQDN you previously had for the HE VM
>  ◾point to the new shared storage
>  ◾provide the same admin password you used in the previous setup
>  ◾install the OS on the VM
>
>
I'd suggest using the engine appliance for this as well.
You can just answer No when it asks about automatically running engine-setup.
>
>
>  ◾confirm it has been installed
>
>  on Hosted Engine VM:
>
>   a) Install the ovirt-engine rpms on the vm but don't run engine-setup:
>   # yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release40.
> rpm
>   # yum install epel-release
>   # yum install ovirt-engine
>   b) Restore the backup:
>   # engine-backup --mode=restore --file=file_name --log=log_file_name
> --provision-db --provision-dwh-db --restore-permissions
>
>
> In order to let the engine auto-import the new hosted-engine storage
> domain, you have to remove the old one.
> The same goes for the engine VM. Unfortunately you cannot do that from the
> engine, since both are protected to avoid unintentional damage.
> The easiest way is to remove them from the DB before running engine-setup.
> I'm working on a helper utility to make this easier:
> https://gerrit.ovirt.org/#/c/64966/
> I think I'll integrate it with engine-backup so it can be done with an
> additional CLI flag.
>
>
>   c) Run "engine-setup"
>
>2.2) Open the Administration Portal and remove all the old hosts used
> for Hosted Engine
>
>
> Right, we can also integrate this step in the HE cleaning helper.
>
>
>
>2.3) Confirm that the engine has been installed (Return to the host and
> continue the hosted-engine deployment script by selecting option 1) and
> then finish the deploy.
>
>2.4) In Administration Portal activate new host
>
>
> 3) On all additional hosts run "hosted-engine --deploy".
>
>
I strongly suggest deploying them from the engine and not from the CLI.
CLI deploy support for additional HE hosts is deprecated and it will be
removed in 4.1.
>
>
> Right?
>
>

--
IMPORTANT!
This message has been scanned for viruses and phishing links.
However, it is your responsibility to evaluate the links and attachments you 
choose to click.
If you are uncertain, we always try to help.
Greetings helpd...@actnet.se





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3UKIBD5QW7FBXGGQGEUMSGWV2QTYY77Z/


[ovirt-users] Re: Slow first opening web portal when the ovirt-engine.service is restarted

2019-05-14 Thread Simone Tiraboschi
On Mon, Oct 3, 2016 at 12:17 PM,  wrote:

> Wow. I have installed and enabled the service on Hosted Engine VM:
>
> # yum -y install haveged
> # service haveged start
> # systemctl enable haveged.service
> # service haveged status
>
> Redirecting to /bin/systemctl status  haveged.service
> ● haveged.service - Entropy Daemon based on the HAVEGE algorithm
>Loaded: loaded (/usr/lib/systemd/system/haveged.service; enabled;
> vendor preset: disabled)
>Active: active (running) since Mon 2016-10-03 12:56:24 MSK; 2min 12s ago
>  Docs: man:haveged(8)
>http://www.issihosts.com/haveged/
>  Main PID: 5304 (haveged)
>CGroup: /system.slice/haveged.service
>└─5304 /usr/sbin/haveged -w 1024 -v 1 --Foreground
>
> Oct 03 12:56:24 KOM-AD01-OVIRT1 systemd[1]: Started Entropy Daemon based
> on the HAVEGE algorithm.
> Oct 03 12:56:24 KOM-AD01-OVIRT1 systemd[1]: Starting Entropy Daemon based
> on the HAVEGE algorithm...
> Oct 03 12:56:24 KOM-AD01-OVIRT1 haveged[5304]: haveged: ver: 1.9.1; arch:
> x86; vend: GenuineIntel; build: (gcc 4.8.2 ITV); collect: 128K
> Oct 03 12:56:24 KOM-AD01-OVIRT1 haveged[5304]: haveged: cpu: (L4 VC);
> data: 32K (L2 L4 V); inst: 32K (L2 L4 V); idx: 21/40; sz: 32709/60538
> Oct 03 12:56:24 KOM-AD01-OVIRT1 haveged[5304]: haveged: tot tests(BA8):
> A:1/1 B:1/1 continuous tests(B):  last entropy estimate 8.00013
> Oct 03 12:56:24 KOM-AD01-OVIRT1 haveged[5304]: haveged: fills: 0,
> generated: 0
>
>
> And now, after restarting the ovirt-engine.service, the first opening of
> the web portal pages is instantaneous!
> It works.
> Thank you.
>

Yes, the issue was the lack of entropy.
haveged on a VM works but the quality of its entropy is still debated.
A better solution is to use the paravirtualized VirtIO RNG device.

We already have it on first boot; we are working to ensure it is still
there after the engine has imported the engine VM:
https://gerrit.ovirt.org/#/c/62334/
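For reference, whether a guest is actually getting entropy from a VirtIO RNG device can be checked from inside the VM; a minimal sketch (the sysfs path is the usual default and may vary by kernel):

```shell
# Current kernel entropy estimate; persistently low values (a few hundred,
# on kernels older than 5.6) explain slow starts of crypto-heavy services.
cat /proc/sys/kernel/random/entropy_avail

# With a VirtIO RNG device attached, the guest exposes a hardware RNG node
# that the kernel (or rngd) can use to feed the entropy pool.
if [ -e /dev/hwrng ]; then
    cat /sys/class/misc/hw_random/rng_current 2>/dev/null  # typically "virtio_rng.0"
else
    echo "no hardware RNG exposed to this guest"
fi
```

If the second check reports no hardware RNG, the VM is relying on interrupt timing alone, which is exactly where haveged helps.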


>
> But now the question arises: in what cases can ovirt-warmup.service
> (https://github.com/geertj/ravstack/blob/master/share/ovirt-warmup.service)
> be helpful?
>
> 03.10.2016, 12:32, "Sandro Bonazzola" :
> > This looks like not enough entropy on the host / guest running
> ovirt-engine.
> > If you're on Hosted Engine I suggest installing haveged (in the EPEL repo)
> and running it to ensure enough entropy is available for the VM.
> > Adding Simone.
>



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GKJ3F43CAIRFSGZSJK3CDXXY3LGIF46B/


[ovirt-users] Re: 4.0 - 2nd node fails on deploy

2019-05-14 Thread Simone Tiraboschi
On Tue, Oct 4, 2016 at 5:22 PM, Jason Jeffrey  wrote:

> Hi,
>
>
>
> DCASTORXX is a hosts entry for the dedicated direct 10GB links (each a
> private /28) between the three servers (i.e. 1 => 2&3, 2 => 1&3, etc.),
> planned to be used solely for storage.
>
>
>
> I,e
>
>
>
> 10.100.50.81    dcasrv01
>
> 10.100.101.1    dcastor01
>
> 10.100.50.82    dcasrv02
>
> 10.100.101.2    dcastor02
>
> 10.100.50.83    dcasrv03
>
> 10.100.103.3    dcastor03
>
>
>
> These were setup with the gluster commands
>
>
>
> · gluster volume create iso replica 3 arbiter 1
> dcastor01:/xpool/iso/brick   dcastor02:/xpool/iso/brick
> dcastor03:/xpool/iso/brick
>
> · gluster volume create export replica 3 arbiter 1
> dcastor02:/xpool/export/brick  dcastor03:/xpool/export/brick
> dcastor01:/xpool/export/brick
>
> · gluster volume create engine replica 3 arbiter 1
> dcastor01:/xpool/engine/brick dcastor02:/xpool/engine/brick
> dcastor03:/xpool/engine/brick
>
> · gluster volume create data replica 3 arbiter 1
> dcastor01:/xpool/data/brick  dcastor03:/xpool/data/brick
> dcastor02:/xpool/data/bricky
>
>
>
>
>
> So yes, DCASRV01 is the (primary) server and its local bricks are accessed
> through the DCASTOR01 interface.
>
>
>
> Is the issue here not the incorrect soft link?
>

No, this should be fine.

The issue is that your gluster volume periodically loses its server quorum
and becomes unavailable.
According to your logs, it happened more than once.

Can you please also attach the gluster logs for that volume?
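The quorum transitions should also show up in glusterd's own log under /var/log/glusterfs/; a sketch of what to look for, demonstrated on a fabricated two-line sample so the commands are self-contained:

```shell
# glusterd and per-brick logs live under /var/log/glusterfs/ by default:
#   /var/log/glusterfs/glusterd.log             - management daemon, quorum events
#   /var/log/glusterfs/bricks/<brick-path>.log  - one file per brick
# Count quorum losses (run against the real glusterd.log on the host);
# a sample file stands in for it here.
sample=$(mktemp)
cat > "$sample" <<'EOF'
[2016-10-04 17:24:39.522620] C [MSGID: 106002] 0-management: Server quorum lost for volume data. Stopping local bricks.
[2016-10-04 18:02:12.384611] C [MSGID: 106003] 0-management: Server quorum regained for volume data. Starting local bricks.
EOF
grep -c 'Server quorum lost' "$sample"   # prints 1 for this sample
rm -f "$sample"
```

Repeated lost/regained pairs in glusterd.log line up with the storage-domain errors VDSM reports.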


>
>
> lrwxrwxrwx. 1 vdsm kvm  132 Oct  3 17:27 hosted-engine.metadata ->
> /var/run/vdsm/storage/bbb70623-194a-46d2-a164-76a4876ecaaf/fd44dbf9-473a-
> 496a-9996-c8abe3278390/cee9440c-4eb8-453b-bc04-c47e6f9cbc93
>
> [root@dcasrv01 /]# ls -al /var/run/vdsm/storage/bbb70623-194a-46d2-a164-
> 76a4876ecaaf/
>
> ls: cannot access /var/run/vdsm/storage/bbb70623-194a-46d2-a164-76a4876ecaaf/:
> No such file or directory
>
> But the data does exist
>
> [root@dcasrv01 fd44dbf9-473a-496a-9996-c8abe3278390]# ls -al
>
> drwxr-xr-x. 2 vdsm kvm4096 Oct  3 17:17 .
>
> drwxr-xr-x. 6 vdsm kvm4096 Oct  3 17:17 ..
>
> -rw-rw. 2 vdsm kvm 1028096 Oct  3 20:48 cee9440c-4eb8-453b-bc04-
> c47e6f9cbc93
>
> -rw-rw. 2 vdsm kvm 1048576 Oct  3 17:17 cee9440c-4eb8-453b-bc04-
> c47e6f9cbc93.lease
>
> -rw-r--r--. 2 vdsm kvm 283 Oct  3 17:17 
> cee9440c-4eb8-453b-bc04-c47e6f9cbc93.meta
>
>
>
>
> Thanks
>
>
>
> Jason
>
>
>
>
>
>
>
> *From:* Simone Tiraboschi [mailto:stira...@redhat.com]
> *Sent:* 04 October 2016 14:40
>
> *To:* Jason Jeffrey 
> *Cc:* users 
> *Subject:* Re: [ovirt-users] 4.0 - 2nd node fails on deploy
>
>
>
>
>
>
>
> On Tue, Oct 4, 2016 at 10:51 AM, Simone Tiraboschi 
> wrote:
>
>
>
>
>
> On Mon, Oct 3, 2016 at 11:56 PM, Jason Jeffrey  wrote:
>
> Hi,
>
>
>
> Another problem has appeared: after rebooting the primary host, the VM
> will not start.
>
>
>
> It appears the symlink between the gluster mount reference and vdsm is broken.
>
>
>
> The first host was correctly deployed but it seems that you are facing
> some issue connecting to the storage.
>
> Can you please attach vdsm logs and /var/log/messages from the first host?
>
>
>
> Thanks Jason,
>
> I suspect that your issue is related to this:
>
> Oct  4 18:24:39 dcasrv01 etc-glusterfs-glusterd.vol[2252]: [2016-10-04
> 17:24:39.522620] C [MSGID: 106002] [glusterd-server-quorum.c:351:
> glusterd_do_volume_quorum_action] 0-management: Server quorum lost for
> volume data. Stopping local bricks.
>
> Oct  4 18:24:39 dcasrv01 etc-glusterfs-glusterd.vol[2252]: [2016-10-04
> 17:24:39.523272] C [MSGID: 106002] [glusterd-server-quorum.c:351:
> glusterd_do_volume_quorum_action] 0-management: Server quorum lost for
> volume engine. Stopping local bricks.
>
>
>
> and for some time your gluster volume has been working.
>
>
>
> But then:
>
> Oct  4 19:02:09 dcasrv01 systemd: Started /usr/bin/mount -t glusterfs -o
> backup-volfile-servers=dcastor02:dcastor03 dcastor01:engine
> /rhev/data-center/mnt/glusterSD/dcastor01:engine.
>
> Oct  4 19:02:09 dcasrv01 systemd: Starting /usr/bin/mount -t glusterfs -o
> backup-volfile-servers=dcastor02:dcastor03 dcastor01:engine
> /rhev/data-center/mnt/glusterSD/dcastor01:engine.
>
> Oct  4 19:02:11 dcasrv01 ovirt-ha-agent: /usr/lib/python2.7/site-
> packages/yajsonrpc/stomp.py:352: DeprecationWarning: Dispatcher.pending
> is deprecated. Use Dispatcher.socket.pending instead.
>
> Oct  4 19:02:11 dcasrv01 ovirt-ha-agent: pending = getat

[ovirt-users] Re: ovirt-ha-agent cpu usage

2019-05-14 Thread Simone Tiraboschi
On Wed, Oct 5, 2016 at 9:17 AM, gregor  wrote:

> Hi,
>
> did you find a solution or cause for this high CPU usage?
> I have installed the self-hosted engine on another server, and even with
> no VM running, ovirt-ha-agent uses the CPU heavily.
>

Yes, it's due to the fact that ovirt-ha-agent periodically reconnects over
JSON-RPC, and this is CPU intensive since the client has to parse the YAML
API specification each time it connects.
The issue is tracked here:
https://bugzilla.redhat.com/show_bug.cgi?id=1349829 - ovirt-ha-agent should
reuse json-rpc connections
but it depends on:
https://bugzilla.redhat.com/show_bug.cgi?id=1376843 - [RFE] Implement a
keep-alive with reconnect if needed logic for the python jsonrpc client


>
> cheers
> gregor
>
> On 08/08/16 15:09, Gianluca Cecchi wrote:
> > On Mon, Aug 8, 2016 at 1:03 PM, Roy Golan  > > wrote:
> >
> > Do the spikes correlate with the info messages on extracting the OVF?
> >
> >
> >
> >
> >
> >
> > yes, it seems so and it happens every 14-15 seconds
> >
> > These are the lines I see scrolling in agent.log when I notice cpu
> > spikes in ovirt-ha-agent...
> >
> > MainThread::INFO::2016-08-08
> > 15:03:07,815::storage_server::212::ovirt_hosted_engine_ha.
> lib.storage_server.StorageServer::(connect_storage_server)
> > Connecting storage server
> > MainThread::INFO::2016-08-08
> > 15:03:08,144::storage_server::220::ovirt_hosted_engine_ha.
> lib.storage_server.StorageServer::(connect_storage_server)
> > Refreshing the storage domain
> > MainThread::INFO::2016-08-08
> > 15:03:08,705::hosted_engine::685::ovirt_hosted_engine_ha.
> agent.hosted_engine.HostedEngine::(_initialize_storage_images)
> > Preparing images
> > MainThread::INFO::2016-08-08
> > 15:03:08,705::image::126::ovirt_hosted_engine_ha.lib.
> image.Image::(prepare_images)
> > Preparing images
> > MainThread::INFO::2016-08-08
> > 15:03:09,653::hosted_engine::688::ovirt_hosted_engine_ha.
> agent.hosted_engine.HostedEngine::(_initialize_storage_images)
> > Reloading vm.conf from the shared storage domain
> > MainThread::INFO::2016-08-08
> > 15:03:09,653::config::205::ovirt_hosted_engine_ha.agent.
> hosted_engine.HostedEngine.config::(refresh_local_conf_file)
> > Trying to get a fresher copy of vm configuration from the OVF_STORE
> > MainThread::INFO::2016-08-08
> > 15:03:09,843::ovf_store::100::ovirt_hosted_engine_ha.lib.
> ovf.ovf_store.OVFStore::(scan)
> > Found OVF_STORE: imgUUID:223d26c2-1668-493c-a322-8054923d135f,
> > volUUID:108a362c-f5a9-440e-8817-1ed8a129afe8
> > MainThread::INFO::2016-08-08
> > 15:03:10,309::ovf_store::100::ovirt_hosted_engine_ha.lib.
> ovf.ovf_store.OVFStore::(scan)
> > Found OVF_STORE: imgUUID:12ca2fc6-01f7-41ab-ab22-e75c822ac9b6,
> > volUUID:1a18851e-6858-401c-be6e-af14415034b5
> > MainThread::INFO::2016-08-08
> > 15:03:10,652::ovf_store::109::ovirt_hosted_engine_ha.lib.
> ovf.ovf_store.OVFStore::(getEngineVMOVF)
> > Extracting Engine VM OVF from the OVF_STORE
> > MainThread::INFO::2016-08-08
> > 15:03:10,974::ovf_store::116::ovirt_hosted_engine_ha.lib.
> ovf.ovf_store.OVFStore::(getEngineVMOVF)
> > OVF_STORE volume path:
> > /rhev/data-center/mnt/ovirt01.lutwyn.org:_SHE__DOMAIN/
> 31a9e9fd-8dcb-4475-aac4-09f897ee1b45/images/12ca2fc6-
> 01f7-41ab-ab22-e75c822ac9b6/1a18851e-6858-401c-be6e-af14415034b5
> > MainThread::INFO::2016-08-08
> > 15:03:11,494::config::225::ovirt_hosted_engine_ha.agent.
> hosted_engine.HostedEngine.config::(refresh_local_conf_file)
> > Found an OVF for HE VM, trying to convert
> > MainThread::INFO::2016-08-08
> > 15:03:11,497::config::230::ovirt_hosted_engine_ha.agent.
> hosted_engine.HostedEngine.config::(refresh_local_conf_file)
> > Got vm.conf from OVF_STORE
> > MainThread::INFO::2016-08-08
> > 15:03:11,675::hosted_engine::462::ovirt_hosted_engine_ha.
> agent.hosted_engine.HostedEngine::(start_monitoring)
> > Current state EngineUp (score: 3400)
> >
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: 

[ovirt-users] Re: Correct recovery procedure of the oVirt Hosted Engine 4.0

2019-05-14 Thread Simone Tiraboschi
On Wed, Oct 5, 2016 at 9:19 AM,  wrote:

> Hello oVirt guru`s!
>
>
> My Hosted Engine VM located on a dedicated LUN FC Storage.
>
> I do daily data backups (on NFS share) with the command:
>
> /usr/bin/engine-backup --mode=backup --scope=all --file=$BcpFileName.xz
> --log=$BcpFileName.log --archive-compressor=xz --files-compressor=None
>
> However, I don't know what the correct recovery procedure would be,
> because different manuals outline different steps.
>
> For example, there is information that I have to configure postgresql
> (with password from file 
> files/etc/ovirt-engine/engine.conf.d/10-setup-database.conf)
> before restoring (engine-backup --mode=restore):
> https://www.ovirt.org/documentation/admin-guide/hosted-engine-backup-and-
> restore/


The recent releases of engine-backup can do that for you.
Adding Didi here.
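For reference, a restore invocation that lets recent engine-backup provision the databases itself might look like this (file names are placeholders; the command is only printed here, since it must actually run on the freshly installed engine VM):

```shell
# Restore with automatic DB provisioning (recent engine-backup releases).
# backup.tar.xz and restore.log are placeholders for your actual files.
restore_cmd="engine-backup --mode=restore --file=backup.tar.xz \
--log=restore.log --provision-db --provision-dwh-db --restore-permissions"

# Printed instead of executed: it belongs on the engine VM, after
# installing the ovirt-engine rpms and before running engine-setup.
echo "$restore_cmd"
```

With the --provision-* flags, the manual postgresql configuration step from the older guide is no longer needed.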


>
>
> And at the same time, in another document, there are no such steps:
> https://access.redhat.com/documentation/en/red-hat-
> virtualization/4.0/single/self-hosted-engine-guide/#
> sect-Restoring_SHE_bkup
>
> What should be the correct procedure for the recovery of Hosted Engine 4.0
> ?
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XYLODIINR5YCLKBWRYPPXMPNKSV7D5YO/


[ovirt-users] Re: 4.0 - 2nd node fails on deploy

2019-05-14 Thread Simone Tiraboschi
On Tue, Oct 4, 2016 at 10:51 AM, Simone Tiraboschi 
wrote:

>
>
> On Mon, Oct 3, 2016 at 11:56 PM, Jason Jeffrey  wrote:
>
>> Hi,
>>
>>
>>
>> Another problem has appeared: after rebooting the primary host, the VM
>> will not start.
>>
>>
>>
>> It appears the symlink between the gluster mount reference and vdsm is broken.
>>
>
> The first host was correctly deployed but it seems that you are facing
> some issue connecting to the storage.
> Can you please attach vdsm logs and /var/log/messages from the first host?
>

Thanks Jason,
I suspect that your issue is related to this:
Oct  4 18:24:39 dcasrv01 etc-glusterfs-glusterd.vol[2252]: [2016-10-04
17:24:39.522620] C [MSGID: 106002]
[glusterd-server-quorum.c:351:glusterd_do_volume_quorum_action]
0-management: Server quorum lost for volume data. Stopping local bricks.
Oct  4 18:24:39 dcasrv01 etc-glusterfs-glusterd.vol[2252]: [2016-10-04
17:24:39.523272] C [MSGID: 106002]
[glusterd-server-quorum.c:351:glusterd_do_volume_quorum_action]
0-management: Server quorum lost for volume engine. Stopping local bricks.

and for some time your gluster volume has been working.

But then:
Oct  4 19:02:09 dcasrv01 systemd: Started /usr/bin/mount -t glusterfs -o
backup-volfile-servers=dcastor02:dcastor03 dcastor01:engine
/rhev/data-center/mnt/glusterSD/dcastor01:engine.
Oct  4 19:02:09 dcasrv01 systemd: Starting /usr/bin/mount -t glusterfs -o
backup-volfile-servers=dcastor02:dcastor03 dcastor01:engine
/rhev/data-center/mnt/glusterSD/dcastor01:engine.
Oct  4 19:02:11 dcasrv01 ovirt-ha-agent:
/usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352:
DeprecationWarning: Dispatcher.pending is deprecated. Use
Dispatcher.socket.pending instead.
Oct  4 19:02:11 dcasrv01 ovirt-ha-agent: pending = getattr(dispatcher,
'pending', lambda: 0)
Oct  4 19:02:11 dcasrv01 ovirt-ha-agent:
/usr/lib/python2.7/site-packages/yajsonrpc/stomp.py:352:
DeprecationWarning: Dispatcher.pending is deprecated. Use
Dispatcher.socket.pending instead.
Oct  4 19:02:11 dcasrv01 ovirt-ha-agent: pending = getattr(dispatcher,
'pending', lambda: 0)
Oct  4 19:02:11 dcasrv01 journal: vdsm vds.dispatcher ERROR SSL error
during reading data: unexpected eof
Oct  4 19:02:11 dcasrv01 journal: ovirt-ha-agent
ovirt_hosted_engine_ha.agent.agent.Agent ERROR Error: 'Connection to
storage server failed' - trying to restart agent
Oct  4 19:02:11 dcasrv01 ovirt-ha-agent:
ERROR:ovirt_hosted_engine_ha.agent.agent.Agent:Error: 'Connection to
storage server failed' - trying to restart agent
Oct  4 19:02:12 dcasrv01 etc-glusterfs-glusterd.vol[2252]: [2016-10-04
18:02:12.384611] C [MSGID: 106003]
[glusterd-server-quorum.c:346:glusterd_do_volume_quorum_action]
0-management: Server quorum regained for volume data. Starting local bricks.
Oct  4 19:02:12 dcasrv01 etc-glusterfs-glusterd.vol[2252]: [2016-10-04
18:02:12.388981] C [MSGID: 106003]
[glusterd-server-quorum.c:346:glusterd_do_volume_quorum_action]
0-management: Server quorum regained for volume engine. Starting local
bricks.

And at that point VDSM started complaining that the hosted-engine-storage
domain doesn't exist anymore:
Oct  4 19:02:30 dcasrv01 journal: ovirt-ha-agent
ovirt_hosted_engine_ha.lib.image.Image ERROR Error fetching volumes list:
Storage domain does not exist: (u'bbb70623-194a-46d2-a164-76a4876ecaaf',)
Oct  4 19:02:30 dcasrv01 ovirt-ha-agent:
ERROR:ovirt_hosted_engine_ha.lib.image.Image:Error fetching volumes list:
Storage domain does not exist: (u'bbb70623-194a-46d2-a164-76a4876ecaaf',)

I see from the logs that the ovirt-ha-agent is trying to mount the
hosted-engine storage domain as:
/usr/bin/mount -t glusterfs -o backup-volfile-servers=dcastor02:dcastor03
dcastor01:engine /rhev/data-center/mnt/glusterSD/dcastor01:engine.

Pointing to dcastor01, dcastor02 and dcastor03, while your server is
dcasrv01.
But at the same time it seems that dcasrv01 also has local bricks for the
same engine volume.

So, is dcasrv01 just an alias for dcastor01? If not, you probably have an
issue with the configuration of your gluster volume.
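A quick way to tell whether two names are merely aliases of the same machine is to compare what they resolve to; a minimal sketch using getent (the dcasrv01/dcastor01 names come from this thread and will only resolve in that environment):

```shell
# same_host succeeds when both names resolve to the same set of addresses,
# i.e. they are aliases for one machine.
same_host() {
    a=$(getent ahosts "$1" | awk '{print $1}' | sort -u)
    b=$(getent ahosts "$2" | awk '{print $1}' | sort -u)
    [ -n "$a" ] && [ "$a" = "$b" ]
}

# Hostnames from the thread; these resolve only on those hosts.
if same_host dcasrv01 dcastor01; then
    echo "same machine (aliases)"
else
    echo "distinct addresses (or not resolvable here)"
fi
```

getent mirrors what gluster and VDSM themselves will see through /etc/hosts and DNS, which is what matters for the mount command above.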



>
>>
>> From broker.log
>>
>>
>>
>> Thread-169::ERROR::2016-10-04 22:44:16,189::storage_broker::138::
>> ovirt_hosted_engine_ha.broker.storage_broker.StorageBro
>> ker::(get_raw_stats_for_service_type) Failed to read metadata from
>> /rhev/data-center/mnt/glusterSD/dcastor01:engine/bbb70623-
>> 194a-46d2-a164-76a4876ecaaf/ha_agent/hosted-engine.metadata
>>
>>
>>
>> [root@dcasrv01 ovirt-hosted-engine-ha]# ls -al
>> /rhev/data-center/mnt/glusterSD/dcastor01\:engine/bbb70623-
>> 194a-46d2-a164-76a4876ecaaf/ha_agent/
>>
>> total 9
>>
>> drwxrwx---. 2 vdsm kvm 4096 Oct  3 17:27 .
>>
>> drwxr-xr-x. 5 vdsm kvm 4096 Oct  3 17:17 ..
>>
>> lrwxrwxrwx. 1 vdsm kvm  132 Oct  3 17:27 hosted-engine.lockspace ->
>> /var/run/vdsm/storage/

[ovirt-users] Re: botched 3.6 -> 4.0/1/2 upgrade, how to recover

2019-05-14 Thread Simone Tiraboschi
On Tue, May 14, 2019 at 2:33 PM  wrote:

> Hi,
>
> thanks to both - downgrading did not work, there is too much that needs to
> be removed and the old repos are deprecated and only available via EUS. So,
> I'll set up the three-node cluster first and try to import the old domain there.
>
> > Yes, for this specific case the best option is to use latest RHV-H from
> 4.2
> time.
>
> Are 4.3 and RHEL hosts also a good option, or is there something specific
> in 4.2 that makes 3.6 domains better to import/attach?
>

In 4.3 we completely removed support for the 3.6 and 4.0 datacenter/cluster
levels, as per:
https://bugzilla.redhat.com/1655115
<https://bugzilla.redhat.com/show_bug.cgi?id=1655115>

There is no way to do this with 4.3 (that's why we also removed
--upgrade-appliance from hosted-engine in 4.3).
Your only option now is to use 4.2-only repos with oVirt, or RHV-H from 4.2
if on RHV.


> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/WS7SMHFKALCL74WOMNK6FBYH2ZPUMTZZ/
>


-- 

Simone Tiraboschi

He / Him / His

Principal Software Engineer

Red Hat <https://www.redhat.com/>

stira...@redhat.com
@redhatjobs <https://twitter.com/redhatjobs>   redhatjobs
<https://www.facebook.com/redhatjobs> @redhatjobs
<https://instagram.com/redhatjobs>
<https://red.ht/sig>
<https://redhat.com/summit>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KUQTOU52NPT4E4JITKHPO7MS34Y4J2QT/


[ovirt-users] Re: botched 3.6 -> 4.0/1/2 upgrade, how to recover

2019-05-14 Thread Simone Tiraboschi
WARN Not ready yet, ignoring
> event '|virt|VM_status|4f28af23-dd7e-413e-a331-1875f4dd18b3'
> args={'4f28af23-dd7e-413e-a331-1875f4dd18b3': {'status': 'Down',
> 'displayInfo': [{'tlsPort': '-1', 'ipAddress': '0', 'type': 'vnc', 'port':
> '-1'}], 'hash': '-8231387692555228201', 'exitMessage': 'VM terminated with
> error', 'cpuUser': '0.00', 'monitorResponse': '0', 'vmId':
> '4f28af23-dd7e-413e-a331-1875f4dd18b3', 'exitReason': 1, 'cpuUsage':
> '0.00', 'elapsedTime': '8420', 'cpuSys': '0.00', 'timeOffset': '0',
> 'clientIp': '', 'exitCode': 1}}
> > May 14 10:57:22 hetzner-XX vdsm[8252]: WARN MOM not available.
> > May 14 10:57:22 hetzner-XX vdsm[8252]: WARN MOM not available, KSM
> stats will be missing.
> > May 14 10:58:47 hetzner-XX vdsm[8252]: WARN File:
> /var/lib/libvirt/qemu/channels/4f28af23-dd7e-413e-a331-1875f4dd18b3.com.redhat.rhevm.vdsm
> already removed
> > May 14 10:58:47 hetzner-XX vdsm[8252]: WARN File:
> /var/lib/libvirt/qemu/channels/4f28af23-dd7e-413e-a331-1875f4dd18b3.org.qemu.guest_agent.0
> already removed
> > May 14 11:00:52 hetzner-XX vdsm[8252]: ERROR ssl handshake:
> SSLError, address: :::192.168.111.10
> > May 14 11:01:54 hetzner-XX vdsm[8252]: ERROR ssl handshake:
> SSLError, address: :::192.168.111.10
> > May 14 11:02:06 hetzner-XX vdsm[8252]: ERROR ssl handshake:
> SSLError, address: :::192.168.111.10
> > May 14 11:05:18 hetzner-XX vdsm[8252]: ERROR ssl handshake:
> SSLError, address: :::192.168.111.10
> > May 14 11:05:29 hetzner-XX vdsm[8252]: ERROR ssl handshake:
> SSLError, address: :::192.168.111.10
> > May 14 11:08:41 hetzner-XX vdsm[8252]: ERROR ssl handshake:
> SSLError, address: :::192.168.111.10
> > May 14 11:08:53 hetzner-XX vdsm[8252]: ERROR ssl handshake:
> SSLError, address: :::192.168.111.10
> > May 14 11:12:04 hetzner-XX vdsm[8252]: ERROR ssl handshake:
> SSLError, address: :::192.168.111.10
> > May 14 11:12:16 hetzner-XX vdsm[8252]: ERROR ssl handshake:
> SSLError, address: :::192.168.111.10
> > May 14 11:15:28 hetzner-XX vdsm[8252]: ERROR ssl handshake:
> SSLError, address: :::192.168.111.10
> > May 14 11:15:40 hetzner-XX vdsm[8252]: ERROR ssl handshake:
> SSLError, address: :::192.168.111.10
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/DOD7T7DL55TR5LTDCAHA64464WQBV5QX/
>
>
>
> --
> Didi
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VVT3XFYQPSBQUEK76NEQOMSZTARQ4KJR/
>


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HQNQEB5AVBZARNAXZ5TQMKZIAZPP2OGD/


[ovirt-users] Re: Unable to deploy Hyperconverged Engine Node - v4.3.3

2019-05-13 Thread Simone Tiraboschi
ry-engine-Thread-1) [12746235] SSH error running
> command r...@sub.sub.domain.tld:'umask 0077;
> MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp -d -t ovirt-XX)"; trap
> "chmod -R u+rwX \"${MYTMP}\" > /dev/null 2>&1; rm -fr \"${MYTMP}\" >
> /dev/null 2>&1" 0; tar --warning=no-timestamp -C "${MYTMP}" -x &&
> "${MYTMP}"/ovirt-host-deploy DIALOG/dialect=str:machine
> DIALOG/customization=bool:True': RuntimeException: Unexpected error during
> execution: bash: /tmp/ovirt-pTVEEzlb8b/ovirt-host-deploy: Permission denied
> ./engine-logs-2019-05-13T12:26:20Z/ovirt-engine/engine.log:2019-05-13
> 12:34:40,406Z ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engine-Thread-1) [12746235] EVENT_ID:
> VDS_INSTALL_IN_PROGRESS_ERROR(511), An error has occurred during
> installation of Host sub.sub.domain.tld: Unexpected error during execution:
> bash: /tmp/ovirt-pTVEEzlb8b/ovirt-host-deploy: Permission denied
>
> Could that be the cause and how can I fix it? What else do you guys need?
>

Can you please share the host-deploy logs? They are in the host-deploy
subdirectory, next to where you got engine.log.
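One common cause of "Permission denied" when executing a freshly extracted script under /tmp is that filesystem being mounted noexec; that is only an assumption here, but it is cheap to rule out while gathering the logs:

```shell
# Mount options of the filesystem holding /tmp (falling back to / when
# /tmp is not a separate mount); "noexec" in this output would explain
# "Permission denied" on the extracted ovirt-host-deploy script.
opts=$(awk '$2 == "/tmp" { print $4 }' /proc/mounts)
[ -n "$opts" ] || opts=$(awk '$2 == "/" { print $4 }' /proc/mounts)
echo "$opts"
```

SELinux denials are the other usual suspect for this symptom; on the host, `grep denied /var/log/audit/audit.log` would show them if auditd is running.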


>
> Thanks in advance, Martin
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/FVV73LOZQ4U3EEDFULL6Q7OOHHNQRJQV/
>


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KCRLE2UUSJUR5FOE42MSMWJVO5Z5YP6C/


[ovirt-users] Re: OvfUpdateIntervalInMinutes restored to original value too early?

2019-05-10 Thread Simone Tiraboschi
, u'stdout_lines': [],
> u'stderr': u'20+0 records in\n20+0 records out\n10240 bytes (10 kB) copied,
> 0.000164261 s, 62.3 MB/s\ntar: 1e74609b-51e1-45c8-9106-2596ee59ba3a.ovf:
> Not found in archive\ntar: Exiting with failure status due to previous
> errors', u'_ansible_no_log': False}
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/HJ272VCIHH5NIVDHJWP4HRQLV7BYQJHI/
>


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives:


[ovirt-users] Re: Engine restore errors out on "Wait for OVF_STORE disk content"

2019-05-08 Thread Simone Tiraboschi
t\n10240 bytes (10 kB) copied,
> 0.000140541 s, 72.9 MB/s\ntar: ebb09b0e-2d03-40f0-8fa4-c40b18612a54.ovf:
> Not found in archive\ntar: Exiting with failure status due to previous
> errors', u'_ansible_no_log': False}
> [ ERROR ] Failed to execute stage 'Closing up': Failed executing
> ansible-playbook
>
> I tried twice. Same result. Should I retry?
>

We had a bug about that in the past:
https://bugzilla.redhat.com/1644748
<https://bugzilla.redhat.com/show_bug.cgi?id=1644748>
but it's reported as CLOSED CURRENTRELEASE.

Can I ask which versions of ovirt-hosted-engine-setup,
ovirt-ansible-hosted-engine-setup,
and ovirt-engine-appliance you are using?

I see that you have sent more than one email in the last few days, and in
general all the issues you are reporting are due to timeouts/race conditions.
Can you please provide some more info about your storage configuration?
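
For reference, the failing task in the quoted log is extracting a <vm_id>.ovf
entry from the OVF_STORE tar archive (read back with dd) before the engine has
finished writing it. The check itself can be sketched like this (a hypothetical
helper, not the actual role code):

```python
import io
import tarfile

def ovf_in_store(ovf_store_bytes: bytes, vm_id: str) -> bool:
    """Return True if <vm_id>.ovf is present in an OVF_STORE tar archive."""
    with tarfile.open(fileobj=io.BytesIO(ovf_store_bytes)) as tar:
        return f"{vm_id}.ovf" in tar.getnames()
```

If the OVF_STORE content is still being refreshed (the suspected race above),
the entry is simply not there yet, which matches the "Not found in archive"
error from tar.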


>
> Is it safe to use the local hosted engine for starting stopping vms? I'm
> kind of headless for some days :-)
>
> Best regards.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UNBCYRKXM24UW3GRKH3WUDOCFWR6CWM6/
>


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SWMAHQV5Z374FYBU4CMPDKO4VURBITXL/


[ovirt-users] Re: silent install failing in 4.3.3.5-1 due to question on fqdn of host

2019-05-07 Thread Simone Tiraboschi
On Tue, May 7, 2019 at 12:29 AM Brian Kircher 
wrote:

> Thanks Simone,
>
>
>
> I’m using the ansible code that is packaged with the rpm packages as this
> is a fully offline development deployment without access to our ansible
> server or the internet in general.  This also has the added benefit of
> using the exact ansible roles/plays that were packaged with the release at
> a single point in time.
>

Yes. Please notice that the ovirt-hosted-engine-setup rpm now depends on
ovirt-ansible-hosted-engine-setup, which installs the role.
All the deployment logic lives in ovirt-ansible-hosted-engine-setup, which
is packaged as an rpm, while ovirt-hosted-engine-setup contains the
interactive CLI front-end.
If you need a fully unattended setup with no interactive validation, I'd
suggest simply consuming the ansible role.
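
A minimal playbook consuming the role could look like this. This is only a
sketch: the inventory name and values are hypothetical, and the he_* variable
names (which follow the convention visible in hosted-engine deployment logs)
should be checked against the role's README before use.

```yaml
# Hypothetical unattended hosted-engine deployment playbook (a sketch).
- hosts: ovirt_host            # the bare host that will run the engine VM
  become: true
  vars:
    he_fqdn: engine.example.com            # engine VM FQDN (must resolve)
    he_admin_password: "{{ vault_admin_password }}"
  roles:
    - ovirt.hosted_engine_setup
```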


> That did the trick though.  Thanks for the assist.  It would be nice to be
> able to just accept the default and continue though.  A bit surprised these
> new options don’t show up in the generated answer file either.
>
>
>
> Brian
>
>
>
> *From:* Simone Tiraboschi 
> *Sent:* Thursday, May 2, 2019 2:09 AM
> *To:* Brian Kircher 
> *Cc:* users@ovirt.org
> *Subject:* Re: [ovirt-users] silent install failing in 4.3.3.5-1 due to
> question on fqdn of host
>
>
>
>
>
>
>
> On Wed, May 1, 2019 at 10:24 PM Brian Kircher 
> wrote:
>
> Anyone happen to know if I can get the following into an answer file for a
> silent deployment? It’s asking to confirm or change from the fqdn of the
> host doing the HE install. Default is fine, but I have to connect in to the
> screen to answer this before it will continue. Default hostname changed to
> generic in below log.  Forward and reverse are both working via dns.
>
>
>
> *QUESTION/1/OVESETUP_NETWORK_FQDN_first_HE*
>
>
>
>
>
> *019-04-30 18:56:38,802-0500 * *DEBUG otopi.context
> context.dumpEnvironment:745 ENVIRONMENT DUMP - END*
>
> *2019-04-30 18:56:38,804-0500 * *DEBUG otopi.context
> context._executeMethod:127 Stage validation METHOD *
> *otopi.plugins.gr_he_common.network*
> *.bridge.Plugin._validate_hostname_first_host*
>
> *2019-04-30 18:56:38,806-0500 * *DEBUG *
> *otopi.plugins.gr_he_common.network*
>  *.bridge
> dialog.queryEnvKey:90 queryEnvKey called for key
> OVEHOSTED_NETWORK/host_name*
>
> *2019-04-30 18:56:38,806-0500 * *DEBUG otopi.plugins.otopi.dialog.human
> human.queryString:159 query OVESETUP_NETWORK_FQDN_first_HE*
>
>
>
> Hi,
> you can use OVEHOSTED_NETWORK/host_name
>
> Please notice that now if you need a fully unassisted deployment you can
> directly trigger ovirt.hosted_engine_setup ansible role.
> You can find its documentation here:
> https://github.com/oVirt/ovirt-ansible-hosted-engine-setup/blob/master/README.md
>
>
>
>
>
> *2019-04-30 18:56:38,807-0500 * *DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:204 DIALOG:SEND Please provide the hostname of this host
> on the management network [* *host01.domain.com*
> *]:*
>
>
>
> Version 4.3.3.5-1.el7
>
>
>
> Thanks,
>
>
>
> Brian
>
>
> --
>
>
>
> This email and any files transmitted with it are confidential and are
> intended solely for the use of the individual or entity to whom they are
> addressed. If you are not the original recipient or the person responsible
> for delivering the email to the intended recipient, be advised that you
> have received this email in error, and that any use, dissemination,
> forwarding, printing, or copying of this email is strictly prohibited. If
> you received this email in error, please immediately notify the sender and
> delete the original.
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/

[ovirt-users] Re: Please Help, Ovirt Node Hosted Engine Deployment Problems 4.3.2

2019-05-06 Thread Simone Tiraboschi
On Fri, May 3, 2019 at 8:14 PM Todd Barton 
wrote:

> Simone,
>
> It appears 192.168.122.13 stops routing correctly during the final stage
> of deployment.  After a failure of final stage, I can restart the
> hosted-engine VM from the cockpit and I can ping 192.168.122.13 from the
> Host again.  If I retry the final stage of deployment again, 192.168.122.13
> stop routing correctly from Host during that process.  Below are two ping
> commands...the 1st one is after deploy failure (screen shot previous email)
> and the second one is after force-restarting the hosted-engine VM.
>
> [root@ovirt-dr-standalone ~]# ping 192.168.122.13
> PING 192.168.122.13 (192.168.122.13) 56(84) bytes of data.
> From 192.168.122.1 icmp_seq=21 Destination Host Unreachable
> From 192.168.122.1 icmp_seq=22 Destination Host Unreachable
> From 192.168.122.1 icmp_seq=23 Destination Host Unreachable
> From 192.168.122.1 icmp_seq=24 Destination Host Unreachable
> From 192.168.122.1 icmp_seq=25 Destination Host Unreachable
> From 192.168.122.1 icmp_seq=26 Destination Host Unreachable
> From 192.168.122.1 icmp_seq=27 Destination Host Unreachable
> ^C
> --- 192.168.122.13 ping statistics ---
> 40 packets transmitted, 0 received, +7 errors, 100% packet loss, time
> 39041ms pipe 4
>
> [root@ovirt-dr-standalone ~]# ping 192.168.122.13
> PING 192.168.122.13 (192.168.122.13) 56(84) bytes of data.
> 64 bytes from 192.168.122.13: icmp_seq=2 ttl=64 time=0.560 ms
> 64 bytes from 192.168.122.13: icmp_seq=3 ttl=64 time=0.592 ms
> 64 bytes from 192.168.122.13: icmp_seq=4 ttl=64 time=0.345 ms
> 64 bytes from 192.168.122.13: icmp_seq=5 ttl=64 time=0.265 ms
> 64 bytes from 192.168.122.13: icmp_seq=6 ttl=64 time=0.374 ms
> 64 bytes from 192.168.122.13: icmp_seq=7 ttl=64 time=0.390 ms
> 64 bytes from 192.168.122.13: icmp_seq=8 ttl=64 time=0.635 ms
> 64 bytes from 192.168.122.13: icmp_seq=9 ttl=64 time=0.466 ms
> 64 bytes from 192.168.122.13: icmp_seq=10 ttl=64 time=0.376 ms
> 64 bytes from 192.168.122.13: icmp_seq=11 ttl=64 time=0.435 ms
> 64 bytes from 192.168.122.13: icmp_seq=12 ttl=64 time=0.567 ms
> 64 bytes from 192.168.122.13: icmp_seq=13 ttl=64 time=0.442 ms
> 64 bytes from 192.168.122.13: icmp_seq=14 ttl=64 time=0.402 ms
> ^C
> --- 192.168.122.13 ping statistics ---
> 14 packets transmitted, 13 received, 7% packet loss, time 13000ms rtt
> min/avg/max/mdev = 0.265/0.449/0.635/0.108 ms
>
>
> This appears to roughly be the same issue as preparing the vm...the
> network setup goes "off the reservation" when the deployment is making
> changes.  Maybe this is something caused by the virtualization setup, but
> I've read about others doing ovirt  hosts in VMs (like
> https://www.ovirt.org/blog/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage.html
> ).
>

I tried reproducing this with oVirt node nested over KVM and everything
worked as expected for me.
Honestly I'm not hyper-v expert but I'd suggest to try changing something
(not sure exactly what) about the network definition on hyper-v side.



>
> Any suggestions?  I'm getting to the point where I may need to throw in
> the towel on this setup, but it would be greatly advantageous to have a VM
> lab so I can test changes/upgrades.  I would love to find a way to make
> this work.
>
> *Todd Barton*
>
>
>
>  On Fri, 03 May 2019 12:04:47 -0400 *Simone Tiraboschi
> >* wrote 
>
>
>
> On Fri, May 3, 2019 at 5:27 PM Todd Barton <
> tcbar...@ipvoicedatasystems.com> wrote:
>
>
> Simone/Dominik,
>
> Double reply below and more info with latest attempt.
>
> ---
>
> Simone...answers to your questions, using CAPS to make my responses easier
> to see/read.
>
> "If I correctly understood you environment you have:
> - A pfsense software firewall/router on 10.1.1.1
> - Your host on 10.1.1.61
> - You are accessing cockpit from a browser running on a machine on
> 10.1.1.101 on the same subnet"
>
> YES
>
> "And the issue is that once the engine created the management bridge, your
> client machine on 10.1.1.101 wasn't able anymore to reach your host on
> 10.1.1.61. Am I right?"
>
> YES
>
> "In this case the default gateway or other routes should't be an issue
> since your client is inside the same subnet."
>
> CORRECT, IMO
>
> "Do you think we are loosing some piece of your network configuration
> creating the management bridge such as a custom MTU or a VLAN id or
> something like that?"
>
> NO CUSTOM SETUP HERE...RUNNING A PLAIN/BASIC NETWORK
>
> "Do you think pfsense can start blocking/dropping the traffic for any
> reason?"
>
> NO, ONLY USING PFSENSE TO PROVIDE DHCP AND DNS IN TEST/LAB ENVIRONMENT.

[ovirt-users] Re: Problem with restoring engine

2019-05-06 Thread Simone Tiraboschi
On Sun, May 5, 2019 at 9:13 PM Andreas Elvers <
andreas.elvers+ovirtfo...@solutions.work> wrote:

> Hello today I tried to migrate the hosted engine from our Default
> Datacenter (NFS) to our Ceph Datacenter. The deployment worked with the
> automatic  "hosted-engine --deploy --restore-from-file=backup/file_name"
> command. Perfect.
>
> Only thing is: I messed up with the cluster name. The name should be
> Luise01 but I entered Luise1. Duh...
>
> Now I want to bring the engine back to the Default Datacenter. Easy thing.
> Just repeat the same steps again.
>
> 1. Enable global ha maintenenace
> 2. Stop and disable the engine
> 3. create the engine backup
> 4 ... continue with all the steps from chapter 13.1.8 RHEV Docs 4.3 Beta.
>
> Everything looked great. The ansible playbook was running, then asking for
> the storage domain. I entered the NFS path. It got registered, but then the
> ansible playbook  errors out with
>
> [ INFO  ] TASK [ovirt.hosted_engine_setup : Add VM]
> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is
> "[Cannot attach Virtual Disk. The target Data Center does not contain the
> Virtual Disk.]". HTTP response code is 409.
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault
> reason is \"Operation Failed\". Fault detail is \"[Cannot attach Virtual
> Disk. The target Data Center does not contain the Virtual Disk.]\". HTTP
> response code is 409."}
> [ ERROR ] Failed to execute stage 'Closing up': Failed executing
> ansible-playbook
> [ INFO  ] Stage: Clean up
> [ INFO  ] Cleaning temporary resources
>
> I see that there is a bug report on
> https://bugzilla.redhat.com/show_bug.cgi?id=1649424
>
> Any idea how to get around this error ?
>

It seems only rarely reproducible: maybe a race condition somewhere.
I know it can look a bit clumsy, but I suggest simply giving it another try.


>
> Additionally I now have a HostedEngineLocal (shut off) on that node... How
> do I remove it?
> engine-cleanup ?
>

You can even simply destroy that VM with virsh and run the deployment again.


>
> Have to get some sleep.
>
> best regards.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZFCLFWRN6XR6KMHMC63O7J37D5GNPVKZ/
>


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZPUFWKP3MU2V3UNO2CLVLBDYTVPRFJJ2/


[ovirt-users] Re: Please Help, Ovirt Node Hosted Engine Deployment Problems 4.3.2

2019-05-03 Thread Simone Tiraboschi
13.
Other machines are not required to communicate with the engine during the
deployment, so we are neither routing 192.168.122.1/24 nor masquerading it
for NAT traversal.
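
The reachability rule can be illustrated with a tiny check (a sketch;
192.168.122.0/24 is libvirt's default NAT network, and the addresses are the
ones appearing in this thread):

```python
import ipaddress

# libvirt's default NAT network, where the temporary HostedEngineLocal VM lives
nat_net = ipaddress.ip_network("192.168.122.0/24")

temp_engine_ip = ipaddress.ip_address("192.168.122.13")  # temporary engine VM
client_ip = ipaddress.ip_address("10.1.1.101")           # browser machine on the LAN

# During deployment the NAT network is neither routed nor masqueraded, so only
# the host itself can reach addresses inside it.
print(temp_engine_ip in nat_net)  # True  -> reachable only from the host
print(client_ip in nat_net)       # False -> a LAN address, outside the NAT network
```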


>
> I've attached logs and ip info commands from Host as well as screen shots
> from cockpit including storage/final deployment error and hosted-engine
> basic networking info.
>
> Thanks,
>
> Todd B.
>
>
>
>
>
>
>
>
>
>  On Thu, 02 May 2019 04:51:34 -0400 *Dominik Holler
> >* wrote 
>
> On Thu, 2 May 2019 09:57:08 +0200
> Simone Tiraboschi  wrote:
>
> > On Thu, May 2, 2019 at 5:22 AM Todd Barton <
> tcbar...@ipvoicedatasystems.com>
> > wrote:
> >
> > > Didi,
> > >
> > > I was able to carve out some time to attempt the original basic setup
> > > again this evening. The result was similar to my original post. During
> HE
> > > deployment, in the process of waiting for the host to come up (cockpit
> > > message), the networking is disrupted while building the bridged
> network
> > > and the host becomes unreachable.
> > >
> > > In this state, I can't ping the host from external machine and the
> > > ping/nslookup is non-functional from within the host. Nslookup returns
> > > "connection time out; no servers could be reached". The networking
> appears
> > > to be completely down although various command make it appear
> operational.
> > >
> > > Upon rebooting the Host (the host locked up on reboot attempt and
> needed
> > > to be reset), the message appears "libvirt-guests is configured not to
> > > start any guests on boot". After the reboot, the cockpit becomes
> > > responsive again and loging-in displays the "This system is already
> > > registered ovirt-dr-he-standalone.ipvoicedatasystems.lan!" with a
> > > "Redeploy" button. Looking at the networking setup in cockpit, it
> appears
> > > the "ovritmgmt" network is setup, but the hosted engine did not
> complete
> > > deployment and startup. The /etc/host file still contains the
> temporary IP
> > > address used in deployment and a HostedEngineLocal is listed under
> virtual
> > > machines, but it is not running.
> > >
> > > Please advise with any help/input on why this is happening. *Your help
> > > is much appreciated.*
> > >
> > >
> > > Here are the settings and diagnostic info/logs.
> > >
> > > This is a single-host hyper-converged setup for lab testing.
> > >
> > > - Host behind pfsense firewall with gateway IP address 10.1.1.1/24.
> The
> > > Host machine and the machine accessing the cockpit from IP address
> > > 10.1.1.101 are the only devices on the subnet (other than the router).
> It
> > > really can't get any simpler.
> > >
> > > - Host setup with single nic eth0
> > > - Hostname is setup as fully FQDN on Host
> > > - Static IP setup on Host with gateway and DNS server set to 10.1.1.1
> > > - FQDNs confirmed resolvable on subnet via dns server at 10.1.1.1 in
> > > pfsense
> > > Host = ovirt-dr-standalone.ipvoicedatasystems.lan , IP = 10.1.1.61
> > > Hosted Engine VM = ovirt-dr-he-standalone.ipvoicedatasystems.lan ,
> > > IP = 10.1.1.60
> > >
> > > - Gluster portion of cockpit setup installed as expected without
> problems
> > >
> > >
> > Everything defined here looks OK to me.
> >
> >
> > > - Hosted-Engine cockpit deployment executed with settings in attached
> > > screen shots.
> > > - Hosted engine setup and vdsm logs are attached in zip before the
> reboot.
> > > - Other network info captured in text files included in zip.
> > > - Screen shot of post reboot network setup in cockpit.
> > >
> > >
> > According to VDSM logs
> >
> > setupNetworks got executed here:
> >
> > 2019-05-01 20:22:14,656-0400 INFO (jsonrpc/0) [api.network] START
> > setupNetworks(networks={u'ovirtmgmt': {u'ipv6autoconf': True, u'nic':
> > u'eth0', u'ipaddr': u'10.1.1.61', u'switch': u'legacy', u'mtu': 1500,
> > u'netmask': u'255.255.255.0', u'dhcpv6': False, u'STP': u'no',
> u'bridged':
> > u'true', u'gateway': u'10.1.1.1', u'defaultRoute': True}}, bondings={},
> > options={u'connectivityCheck': u'true', u'connectivityTimeout': 120,
> > u'commitOnSuccess': False}) from=:::192.168.122.13,47544,
> > flow_id=2e7d10f2 (api:48)
> >
> > and it successfully completed at:
> > 2019-05-01 20:22:22,904-0400 INF

[ovirt-users] Re: Please Help, Ovirt Node Hosted Engine Deployment Problems 4.3.2

2019-05-02 Thread Simone Tiraboschi
On Thu, May 2, 2019 at 9:57 AM Simone Tiraboschi 
wrote:

>
>
> On Thu, May 2, 2019 at 5:22 AM Todd Barton <
> tcbar...@ipvoicedatasystems.com> wrote:
>
>> Didi,
>>
>> I was able to carve out some time to attempt the original basic setup
>> again this evening.  The result was similar to my original post.  During HE
>> deployment, in the process of waiting for the host to come up (cockpit
>> message), the networking is disrupted while building the bridged network
>> and the host becomes unreachable.
>>
>> In this state, I can't ping the host from external machine and the
>> ping/nslookup is non-functional from within the host.  Nslookup returns
>> "connection time out; no servers could be reached".  The networking appears
>> to be completely down although various command make it appear operational.
>>
>> Upon rebooting the Host (the host locked up on reboot attempt and needed
>> to be reset), the message appears "libvirt-guests is configured not to
>> start any guests on boot".  After the reboot, the cockpit becomes
>> responsive again and loging-in displays the "This system is already
>> registered ovirt-dr-he-standalone.ipvoicedatasystems.lan!" with a
>> "Redeploy" button.  Looking at the networking setup in cockpit, it appears
>> the "ovritmgmt" network is setup, but the hosted engine did not complete
>> deployment and startup.  The /etc/host file still contains the temporary IP
>> address used in deployment and a HostedEngineLocal is listed under virtual
>> machines, but it is not running.
>>
>> Please advise with any help/input on why this is happening.  *Your help
>> is much appreciated.*
>>
>>
>> Here are the settings and diagnostic info/logs.
>>
>> This is a single-host hyper-converged setup for lab testing.
>>
>> - Host behind pfsense firewall with gateway IP address 10.1.1.1/24.  The
>> Host machine and the machine accessing the cockpit from IP address
>> 10.1.1.101 are the only devices on the subnet (other than the router).  It
>> really can't get any simpler.
>>
>> - Host setup with single nic eth0
>> - Hostname is setup as fully FQDN on Host
>> - Static IP setup on Host with gateway and DNS server set to 10.1.1.1
>> - FQDNs confirmed resolvable on subnet via dns server at 10.1.1.1 in
>> pfsense
>>   Host = ovirt-dr-standalone.ipvoicedatasystems.lan , IP = 10.1.1.61
>>   Hosted Engine VM = ovirt-dr-he-standalone.ipvoicedatasystems.lan ,
>> IP = 10.1.1.60
>>
>> - Gluster portion of cockpit setup installed as expected without problems
>>
>>
> Everything defined here looks OK to me.
>
>
>> - Hosted-Engine cockpit deployment executed with settings in attached
>> screen shots.
>> - Hosted engine setup and vdsm logs are attached in zip before the reboot.
>> - Other network info captured in text files included in zip.
>> - Screen shot of post reboot network setup in cockpit.
>>
>>
> According to VDSM logs
>
> setupNetworks got executed here:
>
> 2019-05-01 20:22:14,656-0400 INFO  (jsonrpc/0) [api.network] START
> setupNetworks(networks={u'ovirtmgmt': {u'ipv6autoconf': True, u'nic':
> u'eth0', u'ipaddr': u'10.1.1.61', u'switch': u'legacy', u'mtu': 1500,
> u'netmask': u'255.255.255.0', u'dhcpv6': False, u'STP': u'no', u'bridged':
> u'true', u'gateway': u'10.1.1.1', u'defaultRoute': True}}, bondings={},
> options={u'connectivityCheck': u'true', u'connectivityTimeout': 120,
> u'commitOnSuccess': False}) from=:::192.168.122.13,47544,
> flow_id=2e7d10f2 (api:48)
>
> and it successfully completed at:
> 2019-05-01 20:22:22,904-0400 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC
> call Host.confirmConnectivity succeeded in 0.00 seconds (__init__:312)
> 2019-05-01 20:22:22,916-0400 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC
> call Host.confirmConnectivity succeeded in 0.00 seconds (__init__:312)
> 2019-05-01 20:22:22,917-0400 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC
> call Host.confirmConnectivity succeeded in 0.00 seconds (__init__:312)
> 2019-05-01 20:22:23,469-0400 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC
> call Host.confirmConnectivity succeeded in 0.00 seconds (__init__:312)
> 2019-05-01 20:22:23,583-0400 INFO  (jsonrpc/0) [api.network] FINISH
> setupNetworks return={'status': {'message': 'Done', 'code': 0}}
> from=:::192.168.122.13,47544, flow_id=2e7d10f2 (api:54)
> 2019-05-01 20:22:23,583-0400 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
> call Host.setupNetworks succeeded in 8.93 seconds (__init__:312)
> 2019-05-01 20:22:24,033-0400 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC
&

[ovirt-users] Re: Please Help, Ovirt Node Hosted Engine Deployment Problems 4.3.2

2019-05-02 Thread Simone Tiraboschi
y I tried the single nic setup, but the outcome seemed to
> be the same scenario.
> >
> > Honestly I've run through this setup so many times in the last week its
> all a blur. I started messing multiple nics in latest attempts to see if
> this was something specific I should do in a cockpit setup as one of the
> articles I read suggested multiple interfaces to separate traffic.
> >
> > My "production" 4.0 environment (currently a failed upgrade with a down
> host that I can't seem to get back online) is 3 host gluster on 4 bonded
> 1Gbps links. With the exception of the upgrade issue/failure, it has been
> rock-solid with good performance and I've only restarted hosts on upgrades
> in 4+ years. There are a few networking changes i would like to make in a
> rebuild, but I wanted to test various options before implementing. Getting
> a single nic environment was the initial goal to get started.
> >
> > I'm doing this testing in a virtualized setup with pfsense as the
> firewall/router and I can setup hosts/nics however I want. I will start
> over again with more straightforward setup and get more data on failure.
> Considering I can setup the environment how i want, what would be your
> recommended config for a single nic(or single bond) setup using cockpit?
> Static IPs with host file resolution, DHCP with mac specific IPs, etc.
>
> Much of such decisions is a matter of personal preferences,
> acquaintance with the relevant technologies and tooling you have
> around them, local needs/policies/mandates, existing infrastructure,
> etc.
>
> If you search the net, e.g. for "ovirt best practices" or "RHV best
> practices", you can find various articles etc. that can provide some
> good guidelines/ideas.
>
> I suggest to read around a bit, then spend some good time on planning,
> then carefully and systematically implement your design, verifying
> each step right after doing it. When you run into problems, tell us
> :-). Ideally, IMO, you should not give up on your design due to such
> problems and try workarounds, inferior (in your eyes) solutions, etc.,
> unless you manage to find existing open bugs that describe your
> problem and you decide you can't want until they are solved. Instead,
> try to fix problems, perhaps with the list members' help.
>
> I realize spending a week on what is in your perception a simple,
> straightforward task, does not leave you in the best mood for such a
> methodical next attempt. Perhaps first take a break and do something
> else :-), then start from a clean and fresh hardware/software
> environment and mind.
>
> Good luck and best regards,
>
> >
> > Thank you,
> >
> > Todd Barton
> >
> >
> >
> >
> >  On Tue, 30 Apr 2019 05:20:04 -0400 Simone Tiraboschi <
> stira...@redhat.com> wrote 
> >
> >
> >
> > On Tue, Apr 30, 2019 at 9:50 AM Yedidyah Bar David 
> wrote:
> >
> > On Tue, Apr 30, 2019 at 5:09 AM Todd Barton
> >  wrote:
> > >
> > > I've having to rebuild an environment that started back in the early
> 3.x days. A lot has changed and I'm attempting to use the Ovirt Node based
> setup to build a new environment, but I can't get through the hosted engine
> deployment process via the cockpit (I've done command line as well). I've
> tried static DHCP address and static IPs as well as confirmed I have
> resolvable host-names. This is a test environment so I can work through any
> issues in deployment.
> > >
> > > When the cockpit is displaying the waiting for host to come up task,
> the cockpit gets disconnected. It appears to a happen when the bridge
> network is setup. At that point, the deployment is messed up and I can't
> return to the cockpit. I've tried this with one or two nic/interfaces and
> tried every permutation of static and dynamic ip addresses. I've spent a
> week trying different setups and I've got to be doing something stupid.
> > >
> > > Attached is a screen capture of the resulting IP info after my latest
> try failing. I used two nics, one for the gluster and bridge network and
> the other for the ovirt cockpit access. I can't access cockpit on either ip
> address after the failure.
> > >
> > > I've attempted this setup as both a single host hyper-converged setup
> and a three host hyper-converged environment...same issue in both.
> > >
> > > Can someone please help me or give me some thoughts on what is wrong?
> >
> > There are two parts here: 1. Fix it so that you can continue (and so
> > that if it happens to you on production, you know what to do) 2. Fix
> > the code so that it does not happen 

[ovirt-users] Re: silent install failing in 4.3.3.5-1 due to question on fqdn of host

2019-05-02 Thread Simone Tiraboschi
On Wed, May 1, 2019 at 10:24 PM Brian Kircher 
wrote:

> Anyone happen to know if I can get the following into an answer file for a
> silent deployment? It’s asking to confirm or change from the fqdn of the
> host doing the HE install. Default is fine, but I have to connect in to the
> screen to answer this before it will continue. Default hostname changed to
> generic in below log.  Forward and reverse are both working via dns.
>
>
>
> *QUESTION/1/OVESETUP_NETWORK_FQDN_first_HE*
>
>
>
>
>
> *019-04-30 18:56:38,802-0500 * *DEBUG otopi.context
> context.dumpEnvironment:745 ENVIRONMENT DUMP - END*
>
> *2019-04-30 18:56:38,804-0500 * *DEBUG otopi.context
> context._executeMethod:127 Stage validation METHOD *
> *otopi.plugins.gr_he_common.network*
> 
> *.bridge.Plugin._validate_hostname_first_host*
>
> *2019-04-30 18:56:38,806-0500 * *DEBUG *
> *otopi.plugins.gr_he_common.network*
>  *.bridge
> dialog.queryEnvKey:90 queryEnvKey called for key
> OVEHOSTED_NETWORK/host_name*
>
> *2019-04-30 18:56:38,806-0500 * *DEBUG otopi.plugins.otopi.dialog.human
> human.queryString:159 query OVESETUP_NETWORK_FQDN_first_HE*
>

Hi,
you can use OVEHOSTED_NETWORK/host_name.

Please notice that if you need a fully unassisted deployment, you can now
directly trigger the ovirt.hosted_engine_setup ansible role.
You can find its documentation here:
https://github.com/oVirt/ovirt-ansible-hosted-engine-setup/blob/master/README.md
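
For the interactive setup's answer-file route, the environment key mentioned
above can be pinned in an otopi answer file. A sketch (the hostname value is
just an example, and `str:` is otopi's type prefix for string values):

```ini
; Hypothetical fragment of a hosted-engine-setup answer file.
[environment:default]
OVEHOSTED_NETWORK/host_name=str:host01.domain.com
```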


>
> *2019-04-30 18:56:38,807-0500 * *DEBUG otopi.plugins.otopi.dialog.human
> dialog.__logString:204 DIALOG:SEND Please provide the hostname of this host
> on the management network [* *host01.domain.com*
>  *]:*
>
>
>
> Version 4.3.3.5-1.el7
>
>
>
> Thanks,
>
>
>
> Brian
>
>
> --
>
>
> This email and any files transmitted with it are confidential and are
> intended solely for the use of the individual or entity to whom they are
> addressed. If you are not the original recipient or the person responsible
> for delivering the email to the intended recipient, be advised that you
> have received this email in error, and that any use, dissemination,
> forwarding, printing, or copying of this email is strictly prohibited. If
> you received this email in error, please immediately notify the sender and
> delete the original.
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/Y3JPBUXPXSXHIZTZFRBO6JV2FHAWUBRA/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CXGB2WSTFTDL22L3W4CZ7FF3NMTLC4TO/


[ovirt-users] Re: HostedEngine Deployment fails activating NFS Storage Domain hosted_storage via GUI Step 4

2019-04-30 Thread Simone Tiraboschi
On Tue, Apr 30, 2019 at 2:55 PM Ralf Schenk  wrote:

> Hello,
>
> that is definitely not my problem. Did a complete new deployment (after
> rebooting host)
>
> Before deploying on my storage:
> root@storage-rx:/srv/nfs/ovirt/hosted_storage# ls -al
> total 17
> drwxrwxr-x 2 vdsm vdsm 2 Apr 30 13:53 .
> drwxr-xr-x 8 root root 8 Apr  2 18:02 ..
>
> While deploying in late stage:
> root@storage-rx:/srv/nfs/ovirt/hosted_storage# ls -al
> total 18
> drwxrwxr-x 3 vdsm vdsm 4 Apr 30 14:51 .
> drwxr-xr-x 8 root root 8 Apr  2 18:02 ..
> drwxr-xr-x 4 vdsm vdsm 4 Apr 30 14:51 d26e4a31-8d73-449d-bebc-f2ce7a979e5d
> -rwxr-xr-x 1 vdsm vdsm 0 Apr 30 14:51 __DIRECT_IO_TEST__
>
> Immediately the error occurs in GUI:
> [ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is
> "[]". HTTP response code is 400.
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault
> reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code
> is 400."}
>

Can you please share engine.log and vdsm.log?


>
>
> Am 30.04.2019 um 13:48 schrieb Simone Tiraboschi:
>
>
>
> On Tue, Apr 30, 2019 at 1:35 PM Ralf Schenk  wrote:
>
>> Hello,
>>
>> I'm deploying HostedEngine to an NFS Storage. HostedEngineLocal is set up
>> and running already. But Step 4 (Moving to hosted_storage Domain on NFS)
>> fails. The host is Node-NG 4.3.3.1 based.
>>
>> The intended NFS Domain gets mounted on the host, but activation (I think
>> via the Engine API) fails:
>>
>> [ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
>> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is
>> "[]". HTTP response code is 400.
>> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg":
>> "Fault reason is \"Operation Failed\". Fault detail is \"[]\". HTTP
>> response code is 400."}
>>
>> mount in host shows:
>>
>> storage.rxmgmt.databay.de:/ovirt/hosted_storage on
>> /rhev/data-center/mnt/storage.rxmgmt.databay.de:_ovirt_hosted__storage
>> type nfs4
>> (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=172.16.252.231,local_lock=none,addr=172.16.252.3)
>>
>> I also ssh'd into the locally running engine via 192.168.122.XX and the VM
>> can mount the storage domain, too:
>>
>> [root@engine01 ~]# mount storage.rxmgmt.databay.de:/ovirt/hosted_storage
>> /mnt/ -o vers=4.1
>> [root@engine01 ~]# mount | grep nfs
>> sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
>> storage.rxmgmt.databay.de:/ovirt/hosted_storage on /mnt type nfs4
>> (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.122.71,local_lock=none,addr=172.16.252.3)
>> [root@engine01 ~]# ls -al /mnt/
>> total 18
>> drwxrwxr-x.  3 vdsm kvm    4 Apr 30 12:59 .
>> dr-xr-xr-x. 17 root root 224 Apr 16 14:31 ..
>> drwxr-xr-x.  4 vdsm kvm    4 Apr 30 12:40
>> 4dc42146-b3fb-47ec-bf06-8d9bf7cdf893
>> -rwxr-xr-x.  1 vdsm kvm    0 Apr 30 12:55 __DIRECT_IO_TEST__
>>
>> Anything I can do ?
>>
>
> 99% likely that folder was dirty (it already contained something) when you
> started the deployment.
> I can only suggest cleaning that folder and starting from scratch.
>
>
>> Log-Extract of ovirt-hosted-engine-setup-ansible-create_storage_domain
>> included.
>>
>>
>>
>> --
>>
>>
>> *Ralf Schenk*
>> fon +49 (0) 24 05 / 40 83 70
>> fax +49 (0) 24 05 / 40 83 759
>> mail *r...@databay.de* 
>>
>> *Databay AG*
>> Jens-Otto-Krag-Straße 11
>> D-52146 Würselen
>> *www.databay.de* <http://www.databay.de>
>>
>> Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
>> Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
>> Philipp Hermanns
>> Aufsichtsratsvorsitzender: Wilhelm Dohmen
>> --
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/37WY4NUSJYMA7PMZWYSU5KCMFKVBNTHS/
>>

[ovirt-users] Re: HostedEngine Deployment fails activating NFS Storage Domain hosted_storage via GUI Step 4

2019-04-30 Thread Simone Tiraboschi
On Tue, Apr 30, 2019 at 1:35 PM Ralf Schenk  wrote:

> Hello,
>
> I'm deploying HostedEngine to an NFS Storage. HostedEngineLocal is set up
> and running already. But Step 4 (Moving to hosted_storage Domain on NFS)
> fails. The host is Node-NG 4.3.3.1 based.
>
> The intended NFS Domain gets mounted on the host, but activation (I think
> via the Engine API) fails:
>
> [ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is
> "[]". HTTP response code is 400.
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault
> reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code
> is 400."}
>
> mount in host shows:
>
> storage.rxmgmt.databay.de:/ovirt/hosted_storage on
> /rhev/data-center/mnt/storage.rxmgmt.databay.de:_ovirt_hosted__storage
> type nfs4
> (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=172.16.252.231,local_lock=none,addr=172.16.252.3)
>
> I also ssh'd into the locally running engine via 192.168.122.XX and the VM
> can mount the storage domain, too:
>
> [root@engine01 ~]# mount storage.rxmgmt.databay.de:/ovirt/hosted_storage
> /mnt/ -o vers=4.1
> [root@engine01 ~]# mount | grep nfs
> sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
> storage.rxmgmt.databay.de:/ovirt/hosted_storage on /mnt type nfs4
> (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.122.71,local_lock=none,addr=172.16.252.3)
> [root@engine01 ~]# ls -al /mnt/
> total 18
> drwxrwxr-x.  3 vdsm kvm    4 Apr 30 12:59 .
> dr-xr-xr-x. 17 root root 224 Apr 16 14:31 ..
> drwxr-xr-x.  4 vdsm kvm    4 Apr 30 12:40
> 4dc42146-b3fb-47ec-bf06-8d9bf7cdf893
> -rwxr-xr-x.  1 vdsm kvm    0 Apr 30 12:55 __DIRECT_IO_TEST__
>
> Anything I can do ?
>

99% likely that folder was dirty (it already contained something) when you
started the deployment.
I can only suggest cleaning that folder and starting from scratch.
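A pre-flight check along these lines can save a redeploy. This is only a sketch: the export path and the vdsm:kvm ownership of 36:36 are assumptions to adjust for your own storage.

```shell
#!/bin/sh
# Hypothetical pre-flight check: the hosted_storage export must be
# completely empty before the deployment first touches it.
export_dir=/srv/nfs/ovirt/hosted_storage   # adjust to your export

is_clean_export() {
    # true only if the directory exists and has no entries at all
    [ -d "$1" ] && [ -z "$(ls -A "$1")" ]
}

if is_clean_export "$export_dir"; then
    echo "export is empty; safe to deploy"
else
    echo "export is missing or dirty; wipe it and fix ownership first, e.g.:"
    echo "  rm -rf -- \"$export_dir\""
    echo "  mkdir -p \"$export_dir\" && chown 36:36 \"$export_dir\" && chmod 0755 \"$export_dir\""
fi
```

Run it on the storage server before each deployment attempt; anything left behind by a previous failed run (even a single UUID directory) trips the check.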


> Log-Extract of ovirt-hosted-engine-setup-ansible-create_storage_domain
> included.
>
>
>
> --
>
>
> *Ralf Schenk*
> fon +49 (0) 24 05 / 40 83 70
> fax +49 (0) 24 05 / 40 83 759
> mail *r...@databay.de* 
>
> *Databay AG*
> Jens-Otto-Krag-Straße 11
> D-52146 Würselen
> *www.databay.de* 
>
> Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
> Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
> Philipp Hermanns
> Aufsichtsratsvorsitzender: Wilhelm Dohmen
> --
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/37WY4NUSJYMA7PMZWYSU5KCMFKVBNTHS/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MWMY3W3WBRAWSYT5UGFH2JQ4EEE64THT/


[ovirt-users] Re: Please Help, Ovirt Node Hosted Engine Deployment Problems 4.3.2

2019-04-30 Thread Simone Tiraboschi
On Tue, Apr 30, 2019 at 9:50 AM Yedidyah Bar David  wrote:

> On Tue, Apr 30, 2019 at 5:09 AM Todd Barton
>  wrote:
> >
> > I'm having to rebuild an environment that started back in the early 3.x
> days.  A lot has changed and I'm attempting to use the Ovirt Node based
> setup to build a new environment, but I can't get through the hosted engine
> deployment process via the cockpit (I've done command line as well).  I've
> tried static DHCP addresses and static IPs, and confirmed I have
> resolvable host-names.  This is a test environment so I can work through
> any issues in deployment.
> >
> > When the cockpit is displaying the waiting for host to come up task, the
> cockpit gets disconnected.  It appears to happen when the bridge network
> is setup.  At that point, the deployment is messed up and I can't return to
> the cockpit.  I've tried this with one or two nic/interfaces and tried
> every permutation of static and dynamic ip addresses.  I've spent a week
> trying different setups and I've got to be doing something stupid.
> >
> > Attached is a screen capture of the resulting IP info after my latest
> try failing.  I used two nics, one for the gluster and bridge network and
> the other for the ovirt cockpit access.  I can't access cockpit on either
> ip address after the failure.
> >
> > I've attempted this setup as both a single host hyper-converged setup
> and a three host hyper-converged environment...same issue in both.
> >
> > Can someone please help me or give me some thoughts on what is wrong?
>
> There are two parts here:
> 1. Fix it so that you can continue (and so that if it happens to you in
> production, you know what to do).
> 2. Fix the code so that it does not happen again.
> They are not necessarily identical (or even very similar).
>
> At the point in time of taking the screen capture:
>
> 1. Did the ovirtmgmt bridge get the IP address of the intended nic? Which
> one?
>
> 2. Did you check routing? Default gateway, or perhaps you had/have
> specific other routes?
>
> 3. What nics are in the bridge? Can you check/share output of 'brctl show'?
>
> 4. Probably not related, just noting: You have there (currently on
> eth0 and on ovirtmgmt, perhaps you tried other combinations):
> 10.1.2.61/16 and 10.1.1.61/16 . It seems like you wanted two different
> subnets, but are actually using a single one. Perhaps you intended to
> use 10.1.2.61/24 and 10.1.1.61/24.
>

Good catch: the issue comes exactly from here!
Please see:
https://bugzilla.redhat.com/1694626

The issue happens when the user has two interfaces configured on the same
IP subnet, the default gateway is configured to be reached from one of the
two interfaces and the user chooses to create the management bridge on the
other one.
When the engine, while adding the host, creates the management bridge, it
also tries to configure the default gateway on the bridge; for some reason
this disrupts the external connectivity on the host, and the user loses it.

If you intend to use one interface for gluster and the other for the
management network, I'd strongly suggest using two distinct subnets, with
the default gateway on the subnet you are going to use for the management
network.
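Before choosing the bridge interface, it's worth checking by hand which device currently holds the default route. A minimal sketch; the interface name `em1` is a placeholder to replace with the NIC you intend to use for the management network:

```shell
#!/bin/sh
# Sketch: warn if the default gateway is not on the NIC you plan to
# bridge for ovirtmgmt. "em1" is a placeholder device name.
mgmt_if=em1

# extract the device name following "dev" in a default-route line
dev_of_route() {
    echo "$1" | awk '{ for (i = 1; i < NF; i++) if ($i == "dev") print $(i + 1) }'
}

route_line=$(ip route show default 2>/dev/null | head -n1)
gw_if=$(dev_of_route "$route_line")

if [ "$gw_if" = "$mgmt_if" ]; then
    echo "OK: default gateway is on $mgmt_if"
else
    echo "WARNING: default gateway is on '${gw_if:-none}', not on $mgmt_if"
fi
```

If the warning fires, either move the default gateway to the management subnet or pick the gateway's interface for the bridge.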

If you want to use two interfaces for reliability reasons, I'd strongly
suggest creating a bond of the two instead.

Please also notice that deploying a three-host hyper-converged environment
over a single 1 Gbps interface will be really penalizing in terms of
storage performance.
Each write has to land on the host itself and on the two remote ones, so
you get 1000 Mbps / 2 (external replicas) / 8 (bits per byte) = a max of
62.5 MB/s sustained throughput shared between all the VMs, and this
ignores all the overheads.
In practice it will be much less, resulting in a barely usable environment.
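The back-of-the-envelope math can be written down explicitly. A sketch using the assumptions from this thread (1 Gbps link, replica-3 volume, so two remote copies share the one NIC):

```shell
#!/bin/sh
# Rough sustained-write ceiling for a replica-3 Gluster volume where both
# remote copies cross one NIC: link_mbps / remote_copies / 8 bits-per-byte.
link_mbps=1000     # 1 Gbps link
remote_copies=2    # replica 3 = local write + 2 copies over the wire

awk -v mbps="$link_mbps" -v rep="$remote_copies" \
    'BEGIN { printf "max sustained write: %.1f MB/s\n", mbps / rep / 8 }'
# prints: max sustained write: 62.5 MB/s
```

Plugging in a 10 Gbps link instead gives a 625 MB/s ceiling, which is why the interface speed dominates here.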

I'd strongly suggest moving to a 10 Gbps environment if possible, or
bonding a few 1 Gbps NICs for gluster.


> 5. Can you ping from/to these two addresses from/to some other machine
> on the network? Your laptop? The storage?
>
> 6. If possible, please check/share relevant logs, including (from the
> host) /var/log/vdsm/* and /var/log/ovirt-hosted-engine-setup/*.
>
> Thanks and best regards,
> --
> Didi
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/JIYWEUXPA25BK3K23MPBISRGZN76AWV3/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 

[ovirt-users] Re: Rebrandind Problems

2019-04-19 Thread Simone Tiraboschi
On Thu, Apr 18, 2019 at 7:31 PM  wrote:

> Hello, I made a rebranding of my oVirt 4.3.2, but something went wrong and
> I incorrectly saved over the originals without realizing it. Please, I
> need the original "ovirt.brand" and "ovirt" directories as installed with
> oVirt 4.3.2. Where can I get them so I can restore them?
>

You can check what you need with something like:
rpm -qf /usr/share/ovirt-engine/brands/ovirt.brand
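Building on that, one way to get pristine copies back is to reinstall whichever package owns the files. A sketch; the exact package name is whatever rpm reports on your system, not something assumed here:

```shell
#!/bin/sh
# Sketch: find the owning package, reinstall it to restore the original
# branding files, then verify nothing still differs from the package.
owner_pkg() {
    rpm -qf --qf '%{NAME}\n' "$1" 2>/dev/null || true
}

pkg=$(owner_pkg /usr/share/ovirt-engine/brands/ovirt.brand | head -n1)
if [ -n "$pkg" ]; then
    yum reinstall -y "$pkg"    # lays the packaged files down again
    rpm -V "$pkg" || true      # lists anything that still differs
else
    echo "path not owned by any package (or rpm not available)"
fi
```

`yum reinstall` only restores files that belong to the package; anything you added by hand under those directories has to be removed separately.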


>
> Greetings
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/2NDIUV56J5SVQ7WY57V3IJJAOOANAPVR/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JWRBJFNSDZAERQ5OKNBOTNVQEGTWSOL3/


[ovirt-users] Re: Cant install oVirt hosted engine 4.3

2019-04-16 Thread Simone Tiraboschi
On Tue, Apr 16, 2019 at 5:21 PM  wrote:

> No, i only executed this commands:
> yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm -y
> yum install ovirt-hosted-engine-setup ovirt-engine-appliance -y
> after this i start installation using cockpit.
>
> Also i try second time in text mode:
> hosted-engine --deploy
> but have same results.
>

OK, understood; it comes from here:
2019-04-15 12:21:29,642-0500 DEBUG
otopi.plugins.gr_he_common.network.bridge bridge._customization:149
{u'otopi_host_net': {u'changed': False, u'ansible_facts':
{u'otopi_host_net': [u'team0']}, u'_ansible_no_log': False}}
2019-04-15 12:21:29,644-0500 DEBUG otopi.plugins.otopi.dialog.human
human.queryString:159 query ovehosted_bridge_if
2019-04-15 12:21:29,644-0500 DEBUG otopi.plugins.otopi.dialog.human
dialog.__logString:204 DIALOG:SEND Please indicate a nic to
set ovirtmgmt bridge on: (team0) [team0]:
2019-04-15 12:21:32,289-0500 DEBUG otopi.context
context.dumpEnvironment:731 ENVIRONMENT DUMP - BEGIN
2019-04-15 12:21:32,289-0500 DEBUG otopi.context
context.dumpEnvironment:741 ENV OVEHOSTED_NETWORK/bridgeIf=str:'team0'
2019-04-15 12:21:32,289-0500 DEBUG otopi.context
context.dumpEnvironment:741 ENV QUESTION/1/ovehosted_bridge_if=str:'team0'

oVirt doesn't support teamed interfaces, only bonded ones, and I guess
that your 'team0' is a team and not a bond.

Unfortunately, due to an ansible issue, we are not able to automatically
identify team network devices:
https://github.com/ansible/ansible/issues/43129

I'd suggest reconfiguring team0 as bond0 (please take care: the name
itself is also relevant) and trying again.
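Until that ansible issue is resolved, you can tell a team from a bond by hand. A sketch; the `team0` device name is taken from the log above, and the check assumes the standard bonding procfs entry and the `teamdctl` CLI:

```shell
#!/bin/sh
# Sketch: distinguish a bond from a team device before deploying.
dev=team0

if [ -f "/proc/net/bonding/$dev" ]; then
    # the bonding driver exposes each bond under /proc/net/bonding/
    echo "$dev is a bond (supported by oVirt)"
elif command -v teamdctl >/dev/null 2>&1 && teamdctl "$dev" state >/dev/null 2>&1; then
    # teamdctl only answers for devices managed by teamd
    echo "$dev is a team device (not supported; recreate it as a bond)"
else
    echo "$dev not found, or neither a team nor a bond"
fi
```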


> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/6YCSY5L6HQEP63NTOMUGMVEMGZQZQU7H/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BGXSWOKDWHTVBM66T5DJXJF3EABNAVNG/


[ovirt-users] Re: Cant install oVirt hosted engine 4.3

2019-04-16 Thread Simone Tiraboschi
Did you manually create a bridge named ovirtmgmt?

On Tue, Apr 16, 2019 at 5:11 PM  wrote:

> I found error here:
> https://imgur.com/a/Q6JjRZb
> https://imgur.com/a/JCoZOlM
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/QV7MA2WW2NKZMQ7X726LGMZUP3PQZO33/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IYAAB6XZ4S6WQSKEPEB2KVTDRRXZDWRT/

