Re: [ovirt-devel] [rhev-devel] [IMPORTANT] getting rid of our implementation of UUID generation in 4.2.1

2018-01-16 Thread Dafna Ron
Hi,

We are failing to run the latest ds-OST job due to a missing package that I
think is related to this change:

http://pastebin.test.redhat.com/547234

Full log:

https://rhv-devops-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/ds-ovirt-system-tests_rhv-suite-master/254/consoleFull

Thanks,
Dafna


On Tue, Jan 16, 2018 at 2:50 PM, Eyal Edri  wrote:

> It looks like it failed all post-merge engine jobs:
>
> http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/6408/
>
> On Tue, Jan 16, 2018 at 3:52 PM, Eli Mesika  wrote:
>
>> Hi Guys
>>
>> Please note that this patch has been merged.
>> You should follow all the steps described in the original email.
>>
>> If you have any questions or problems, please contact me.
>>
>> Eli
>>
>> On Tue, Dec 12, 2017 at 12:11 PM, Eli Mesika  wrote:
>>
>>> Hi
>>>
>>> We have decided to drop our UUID generation function from the DB in 4.2.1 and
>>> use the one provided by PostgreSQL (via an extension).
>>> Please note that once this patch [1] is merged, you will have to perform the
>>> following steps in your environment so that it continues working.
>>>
>>> 1) Make sure that the postgresql-contrib package is installed on your machine
>>> (yum/dnf install postgresql-contrib -y)
>>> 2) Run the following commands from the psql prompt while logged in as a DB
>>> admin (postgres) user:
>>>  a) DROP FUNCTION IF EXISTS uuid_generate_v1();
>>>  b) CREATE EXTENSION "uuid-ossp";
>>> 3) Validate from the psql prompt with:
>>>
>>> # select * from pg_available_extensions where name = 'uuid-ossp' and
>>> installed_version IS NOT NULL;
>>>
>>> You should get the following result:
>>>
>>> -[ RECORD 1 ]-----+-------------------------------------------------
>>> name              | uuid-ossp
>>> default_version   | 1.0
>>> installed_version | 1.0
>>> comment           | generate universally unique identifiers (UUIDs)
>>>
>>> Please contact me for any questions or problems you encounter after this
>>> patch is merged to master.
>>>
>>> [1] https://gerrit.ovirt.org/#/c/84832/
>>>
>>>
>>> Thanks
>>>
>>> Eli Mesika
>>>
>>
>>
>
>
> --
>
> Eyal Edri
>
>
> MANAGER
>
> RHV DevOps
>
> EMEA VIRTUALIZATION R&D
>
>
> Red Hat EMEA 
>  TRIED. TESTED. TRUSTED. 
> phone: +972-9-7692018 <+972%209-769-2018>
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [rhev-devel] [IMPORTANT] getting rid of our implementation of UUID generation in 4.2.1

2018-01-16 Thread Eyal Edri
It looks like it failed all post-merge engine jobs:

http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/6408/

On Tue, Jan 16, 2018 at 3:52 PM, Eli Mesika  wrote:

> Hi Guys
>
> Please note that this patch has been merged.
> You should follow all the steps described in the original email.
>
> If you have any questions or problems, please contact me.
>
> Eli
>
> On Tue, Dec 12, 2017 at 12:11 PM, Eli Mesika  wrote:
>
>> Hi
>>
>> We have decided to drop our UUID generation function from the DB in 4.2.1 and
>> use the one provided by PostgreSQL (via an extension).
>> Please note that once this patch [1] is merged, you will have to perform the
>> following steps in your environment so that it continues working.
>>
>> 1) Make sure that the postgresql-contrib package is installed on your machine
>> (yum/dnf install postgresql-contrib -y)
>> 2) Run the following commands from the psql prompt while logged in as a DB
>> admin (postgres) user:
>>  a) DROP FUNCTION IF EXISTS uuid_generate_v1();
>>  b) CREATE EXTENSION "uuid-ossp";
>> 3) Validate from the psql prompt with:
>>
>> # select * from pg_available_extensions where name = 'uuid-ossp' and
>> installed_version IS NOT NULL;
>>
>> You should get the following result:
>>
>> -[ RECORD 1 ]-----+-------------------------------------------------
>> name              | uuid-ossp
>> default_version   | 1.0
>> installed_version | 1.0
>> comment           | generate universally unique identifiers (UUIDs)
>>
>> Please contact me for any questions or problems you encounter after this
>> patch is merged to master.
>>
>> [1] https://gerrit.ovirt.org/#/c/84832/
>>
>>
>> Thanks
>>
>> Eli Mesika
>>
>
>


-- 

Eyal Edri


MANAGER

RHV DevOps

EMEA VIRTUALIZATION R&D


Red Hat EMEA 
 TRIED. TESTED. TRUSTED. 
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [IMPORTANT] getting rid of our implementation of UUID generation in 4.2.1

2018-01-16 Thread Eli Mesika
Hi Guys

Please note that this patch has been merged.
You should follow all the steps described in the original email.

If you have any questions or problems, please contact me.

Eli

On Tue, Dec 12, 2017 at 12:11 PM, Eli Mesika  wrote:

> Hi
>
> We have decided to drop our UUID generation function from the DB in 4.2.1 and
> use the one provided by PostgreSQL (via an extension).
> Please note that once this patch [1] is merged, you will have to perform the
> following steps in your environment so that it continues working.
>
> 1) Make sure that the postgresql-contrib package is installed on your machine
> (yum/dnf install postgresql-contrib -y)
> 2) Run the following commands from the psql prompt while logged in as a DB
> admin (postgres) user:
>  a) DROP FUNCTION IF EXISTS uuid_generate_v1();
>  b) CREATE EXTENSION "uuid-ossp";
> 3) Validate from the psql prompt with:
>
> # select * from pg_available_extensions where name = 'uuid-ossp' and
> installed_version IS NOT NULL;
>
> You should get the following result:
>
> -[ RECORD 1 ]-----+-------------------------------------------------
> name              | uuid-ossp
> default_version   | 1.0
> installed_version | 1.0
> comment           | generate universally unique identifiers (UUIDs)
>
> Please contact me for any questions or problems you encounter after this
> patch is merged to master.
>
> [1] https://gerrit.ovirt.org/#/c/84832/
>
>
> Thanks
>
> Eli Mesika
>
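For convenience, the steps quoted above can be scripted roughly as follows. This is only a sketch, not an official oVirt tool: it assumes passwordless local access as the postgres superuser, that the engine database is named "engine", and that it runs as root so yum and sudo work.

#!/usr/bin/env python
# Rough automation of steps 1-3 from the email above (assumptions: the engine
# database is named 'engine' and psql is reachable locally as the postgres user).
import subprocess

ENGINE_DB = 'engine'  # assumption -- adjust to your environment

def psql(sql):
    # run one SQL statement against the engine DB as the postgres admin user
    return subprocess.check_output(
        ['sudo', '-u', 'postgres', 'psql', '-d', ENGINE_DB, '-tAc', sql])

# 1) make sure postgresql-contrib (which ships uuid-ossp) is installed
subprocess.check_call(['yum', 'install', '-y', 'postgresql-contrib'])

# 2) drop the old function and enable the extension
psql('DROP FUNCTION IF EXISTS uuid_generate_v1();')
psql('CREATE EXTENSION "uuid-ossp";')

# 3) validate that the extension is now installed
out = psql("select installed_version from pg_available_extensions "
           "where name = 'uuid-ossp' and installed_version IS NOT NULL;")
print('uuid-ossp installed_version: %s' % out.strip())
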
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Subject: [ OST Failure Report ] [ oVirt Master ] [ Jan 15th 2018 ] [ 006_migrations.migrate_vm ]

2018-01-16 Thread Edward Haas
On Mon, Jan 15, 2018 at 5:13 PM, Dafna Ron  wrote:

> Hi,
>
> We had a failure in test 006_migrations.migrate_vm.
>
> The migration failed with reason "VMExists".
>
> This seems to be an issue caused by connectivity problems between the engine
> and the hosts.
> I remember this issue happening a few weeks ago - is there a
> solution/bug for it?
>
>
>
> Link and headline of suspected patches:
> https://gerrit.ovirt.org/#/c/86114/4 - net tests: Fix vlan creation name length in nettestlib
>

This touched tests, not production code, so I do not think it is relevant.


>
> Link to Job:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4842/
>
> Link to all logs:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4842/artifact/
>
> (Relevant) error snippet from the log:
>
> vdsm dst:
> 2018-01-15 06:47:03,355-0500 ERROR (jsonrpc/0) [api] FINISH create error=Virtual machine already exists (api:124)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 117, in method
>     ret = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 180, in create
>     raise exception.VMExists()
> VMExists: Virtual machine already exists
>
> vdsm src:
> 2018-01-15 06:47:03,359-0500 ERROR (migsrc/d17a2482) [virt.vm] (vmId='d17a2482-4904-4cbc-8d13-3a3b7840782d') migration destination error: Virtual machine already exists (migration:290)
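
For context, the destination-side behaviour behind this error is roughly the following. It is a simplified paraphrase of what the traceback shows (API.py create() raising exception.VMExists), not the actual vdsm source; the container structure and names are assumptions.

class VMExists(Exception):
    # stand-in for vdsm's exception.VMExists
    def __str__(self):
        return 'Virtual machine already exists'

def create_vm(vm_container, vm_params):
    """Register an incoming VM on the destination host (simplified)."""
    vm_id = vm_params['vmId']
    if vm_id in vm_container:
        # a VM with this id is still registered, e.g. leftovers from an
        # earlier create/migration attempt that was not cleaned up
        raise VMExists()
    vm_container[vm_id] = vm_params
    return vm_params

# the destination already knows this vmId, so the migration create() fails
vms = {'d17a2482-4904-4cbc-8d13-3a3b7840782d': {'status': 'Up'}}
try:
    create_vm(vms, {'vmId': 'd17a2482-4904-4cbc-8d13-3a3b7840782d'})
except VMExists as err:
    print('destination refused the VM: %s' % err)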
>
> Engine:
> 2018-01-15 06:45:30,169-05 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] (EE-ManagedThreadFactory-engineScheduled-Thread-34) [] Failure to refresh host 'lago-basic-suite-master-host-0' runtime info: java.net.ConnectException: Connection refused
> 2018-01-15 06:45:30,169-05 DEBUG [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] (EE-ManagedThreadFactory-engineScheduled-Thread-34) [] Exception: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: java.net.ConnectException: Connection refused
>     at org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.createNetworkException(VdsBrokerCommand.java:159) [vdsbroker.jar:]
>     at org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:122) [vdsbroker.jar:]
>     at org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:73) [vdsbroker.jar:]
>     at org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33) [dal.jar:]
>     at org.ovirt.engine.core.vdsbroker.vdsbroker.DefaultVdsCommandExecutor.execute(DefaultVdsCommandExecutor.java:14) [vdsbroker.jar:]
>     at org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:387) [vdsbroker.jar:]
>     at org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand$$super(Unknown Source) [vdsbroker.jar:]
>     at sun.reflect.GeneratedMethodAccessor234.invoke(Unknown Source) [:1.8.0_151]
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_151]
>     at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_151]
>     at org.jboss.weld.interceptor.proxy.TerminalAroundInvokeInvocationContext.proceedInternal(TerminalAroundInvokeInvocationContext.java:49) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
>     at org.jboss.weld.interceptor.proxy.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:77) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
>     at org.ovirt.engine.core.common.di.interceptor.LoggingInterceptor.apply(LoggingInterceptor.java:12) [common.jar:]
>     at sun.reflect.GeneratedMethodAccessor69.invoke(Unknown Source) [:1.8.0_151]
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_151]
>     at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_151]
>     at org.jboss.weld.interceptor.reader.SimpleInterceptorInvocation$SimpleMethodInvocation.invoke(SimpleInterceptorInvocation.java:73) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
>     at org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.executeAroundInvoke(InterceptorMethodHandler.java:84) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
>     at org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.executeInterception(InterceptorMethodHandler.java:72) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
>     at
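
A quick probe for the "Connection refused" above (a sketch, not part of OST): the engine talks to vdsm on TCP port 54321, so a refused connection usually means vdsmd is down or unreachable on that host. The host name below is the one from the log.

import socket

HOST = 'lago-basic-suite-master-host-0'  # host named in the engine log above
PORT = 54321                             # default vdsm port

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(5)
try:
    sock.connect((HOST, PORT))
    print('vdsm port %d is reachable on %s' % (PORT, HOST))
except socket.error as err:
    print('cannot reach vdsm on %s:%d -> %s' % (HOST, PORT, err))
finally:
    sock.close()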

Re: [ovirt-devel] Subject: [ OST Failure Report ] [ oVirt Master ] [ 15 Jan 2018 ] [ 004_basic_sanity.vm_run ]

2018-01-16 Thread Nir Soffer
This is a real error, but it is already fixed in master. Please update vdsm.
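
For anyone still hitting this in OST, the failure mode is the one in the traceback below: Drive.__getitem__ raises KeyError for attributes that were never set, so indexing 'propagateErrors' on a device that lacks it aborts the VM start. The following is a minimal, self-contained illustration with a defensive lookup; it is not the actual fix in master, and the Drive class here is a simplified stand-in.

class Drive(object):
    # simplified stand-in for vdsm's storage Drive device
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

    def __getitem__(self, key):
        # mirrors the behaviour at storage.py:292 -- missing attrs raise KeyError
        try:
            return getattr(self, key)
        except AttributeError:
            raise KeyError(key)

def propagate_errors(drive, default='off'):
    # defensive lookup: fall back to a default instead of failing the VM start
    try:
        return drive['propagateErrors']
    except KeyError:
        return default

cdrom = Drive(device='cdrom', path='')   # no propagateErrors attribute set
assert propagate_errors(cdrom) == 'off'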

On Tue, Jan 16, 2018, 12:57, Dafna Ron wrote:

> We had a second failure on the same root cause patch.
> Dan, can you please have a look?
>
>
>
> On Tue, Jan 16, 2018 at 10:50 AM, Dafna Ron  wrote:
>
>> Hi,
>>
>> We had a failure in OST on the upgrade suite.
>>
>>
>> Link and headline of suspected patches:
>> Reported as the cause: https://gerrit.ovirt.org/#/c/86115/4 - vm: support error_policy='report' for CDROMs
>> Reported as the root cause: https://gerrit.ovirt.org/#/c/86114/4 - net tests: Fix vlan creation name length in nettestlib
>>
>> Link to Job:
>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4854/
>>
>> Link to all logs:
>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4854/artifact/
>>
>> (Relevant) error snippet from the log:
>>
>> vdsm:
>> type=file, path= threshold=unset at 0x7f57d8027d40>>
>> watermarkLimit:536870912 (vm:2134)
>> 2018-01-15 13:12:06,126-0500 ERROR (vm/75adb67b) [virt.vm]
>> (vmId='75adb67b-4390-45f7-a677-8d4515cf4a2b') The vm start process failed
>> (vm:917)
>> Traceback (most recent call last):
>>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 846, in
>> _startUnderlyingVm
>> self._run()
>>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2735, in
>> _run
>> domxml = hooks.before_vm_start(self._buildDomainXML(),
>>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2191, in
>> _buildDomainXML
>> return self._make_domain_xml()
>>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2236, in
>> _make_domain_xml
>> devices_xml = self._process_devices()
>>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2116, in
>> _process_devices
>> dev_xml = dev.getXML()
>>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vmdevices/storage.py",
>> line 595, in getXML
>> diskelem.appendChild(_getDriverXML(self))
>>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vmdevices/storage.py",
>> line 957, in _getDriverXML
>> if (drive['propagateErrors'] == 'on' or
>>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vmdevices/storage.py",
>> line 292, in __getitem__
>> raise KeyError(key)
>> KeyError: 'propagateErrors'
>> 2018-01-15 13:12:06,129-0500 INFO  (vm/75adb67b) [virt.vm]
>> (vmId='75adb67b-4390-45f7-a677-8d4515cf4a2b') Changed state to Down:
>> 'propagateErrors' (code=1) (vm:1636)
>>
>> engine:
>>
>> 2018-01-15 13:12:07,203-05 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (ForkJoinPool-1-worker-15) [] Rerun VM '75adb67b-4390-45f7-a677-8d4515cf4a2b'. Called from VDS 'lago-upgrade-from-release-suite-master-host0'
>> 2018-01-15 13:12:07,205-05 INFO  [org.ovirt.engine.core.vdsbroker.VdsManager] (ForkJoinPool-1-worker-15) [] VMs initialization finished for Host: 'lago-upgrade-from-release-suite-master-host0:c98b0719-383b-4656-a28d-03005e2fb862'
>> 2018-01-15 13:12:07,256-05 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-19) [] EVENT_ID: USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM vm0 on Host lago-upgrade-from-release-suite-master-host0.
>> 2018-01-15 13:12:07,274-05 INFO  [org.ovirt.engine.core.bll.RunVmOnceCommand] (EE-ManagedThreadFactory-engine-Thread-19) [] Lock Acquired to object 'EngineLock:{exclusiveLocks='[75adb67b-4390-45f7-a677-8d4515cf4a2b=VM]', sharedLocks=''}'
>> 2018-01-15 13:12:07,294-05 INFO  [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (EE-ManagedThreadFactory-engine-Thread-19) [] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{vmId='75adb67b-4390-45f7-a677-8d4515cf4a2b'}), log id: 41c281dc
>> 2018-01-15 13:12:07,294-05 INFO  [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (EE-ManagedThreadFactory-engine-Thread-19) [] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 41c281dc
>> 2018-01-15 13:12:07,300-05 WARN  [org.ovirt.engine.core.bll.RunVmOnceCommand] (EE-ManagedThreadFactory-engine-Thread-19) [] Validation of action 'RunVmOnce' failed for user admin@internal-authz. Reasons: VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,SCHEDULING_NO_HOSTS
>>
>>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Trouble in switch type to OVS process

2018-01-16 Thread Petr Horacek
2018-01-16 11:05 GMT+01:00 Dmitry Semenov :

> 16.01.2018, 11:39, "Dan Kenigsberg" :
> > On Tue, Jan 16, 2018 at 10:22 AM, Sandro Bonazzola 
> wrote:
> >> Adding some relevant people
> >>
> >> 2018-01-15 23:13 GMT+01:00 Dmitry Semenov:
> >>> Hi everyone!
> >>>
> >>> I have installed oVirt 4.2 on three nodes with shared storage (FC)
> and the Linux bridge setting:
> >>> - node01
> >>>
> >>> - node02 (self-hosted Engine host)
> >>>
> >>> - node03 (self-hosted Engine host)
> >>>
> >>> I wanted to set the cluster network switch type to OVS (for OVN)
> >>>
> >>> (followed this instruction:
> https://ovirt.org/develop/release-management/features/network/provider-physical-network/)
> >
> > Please note that OVN overlays work just fine with Linux Bridge
> switchtype. You need to move to OVS only if you want OVN on non-overlays.
> >
> Dan,
> if I understand you right, I don't need to switch to OVS to create
> internal networks with DHCP (for VMs) and route them to external
> networks. Is that true?
>
Yes. You can use the default Linux bridge networks. Create an OVN overlay
network with a subnet for the VMs. Then create a "router" VM that is connected
to both the overlay and a physical network, and configure routing there.
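
To illustrate what "configure routing there" could look like inside such a router VM, here is a rough sketch. The subnet and NIC names are assumptions, the VM is expected to run this as root, and it uses plain NAT to the physical network:

#!/usr/bin/env python
# Sketch: turn the "router" VM into a gateway for the OVN overlay subnet.
# OVERLAY_SUBNET and PHYSICAL_NIC are assumptions -- adjust to your setup.
import subprocess

OVERLAY_SUBNET = '10.10.10.0/24'   # the subnet defined on the OVN network
PHYSICAL_NIC = 'eth1'              # NIC attached to the physical network

def run(cmd):
    print('+ ' + ' '.join(cmd))
    subprocess.check_call(cmd)

# let the VM forward packets between its overlay and physical NICs
run(['sysctl', '-w', 'net.ipv4.ip_forward=1'])

# masquerade overlay traffic leaving through the physical network
run(['iptables', '-t', 'nat', '-A', 'POSTROUTING',
     '-s', OVERLAY_SUBNET, '-o', PHYSICAL_NIC, '-j', 'MASQUERADE'])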

>
> >>> After this step “Set OVS networking on all vdsm hosts. For each host,
> enable Maintenance mode, Sync All Networks” - my node01 disappeared. I
> don’t know what to do next.
> >
> > By "disappeared", do you mean it became non-responsive? Would you share
> with us your supervdsm.log from the disappearing host?
> >
> After launching the network synchronization, all settings of the ovirtmgmt
> interface disappeared on the node, and after that the node became unavailable.
>
> supervdsm.log: https://pastebin.com/eW0UpB6j

Thanks, will take a look.

>
>
> >>> For the remaining 2 nodes (node02 and node03) I didn’t check this step.
> >>>
> >>> Each node has a bond (of 2 NICs), and each bond is configured with
> 3 VLANs for: ovirtmgmt, migration, display.
> >>>
> >>> How can I return node01 to the cluster with OVS and switch the other
> nodes to OVS?
> >>>
> >>> Best regards,
> >>>
> >>> ___
> >>> Devel mailing list
> >>> Devel@ovirt.org
> >>> http://lists.ovirt.org/mailman/listinfo/devel
> >>
> >> --
> >>
> >> SANDRO BONAZZOLA
> >>
> >> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
> >>
> >> Red Hat EMEA
> >>
> >> TRIED. TESTED. TRUSTED.
>
>
> --
> Best regards
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Subject: [ OST Failure Report ] [ oVirt Master ] [ 15 Jan 2018 ] [ 004_basic_sanity.vm_run ]

2018-01-16 Thread Dafna Ron
We had a second failure on the same root cause patch.
Dan, can you please have a look?



On Tue, Jan 16, 2018 at 10:50 AM, Dafna Ron  wrote:

> Hi,
>
> We had a failure in OST on the upgrade suite.
>
>
> Link and headline of suspected patches:
> Reported as the cause: https://gerrit.ovirt.org/#/c/86115/4 - vm: support error_policy='report' for CDROMs
> Reported as the root cause: https://gerrit.ovirt.org/#/c/86114/4 - net tests: Fix vlan creation name length in nettestlib
>
> Link to Job:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4854/
>
> Link to all logs:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4854/artifact/
>
> (Relevant) error snippet from the log:
>
> vdsm:
> type=file, path= threshold=unset at 0x7f57d8027d40>>
> watermarkLimit:536870912 (vm:2134)
> 2018-01-15 13:12:06,126-0500 ERROR (vm/75adb67b) [virt.vm]
> (vmId='75adb67b-4390-45f7-a677-8d4515cf4a2b') The vm start process failed
> (vm:917)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 846, in
> _startUnderlyingVm
> self._run()
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2735, in
> _run
> domxml = hooks.before_vm_start(self._buildDomainXML(),
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2191, in
> _buildDomainXML
> return self._make_domain_xml()
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2236, in
> _make_domain_xml
> devices_xml = self._process_devices()
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2116, in
> _process_devices
> dev_xml = dev.getXML()
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vmdevices/storage.py",
> line 595, in getXML
> diskelem.appendChild(_getDriverXML(self))
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vmdevices/storage.py",
> line 957, in _getDriverXML
> if (drive['propagateErrors'] == 'on' or
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vmdevices/storage.py",
> line 292, in __getitem__
> raise KeyError(key)
> KeyError: 'propagateErrors'
> 2018-01-15 13:12:06,129-0500 INFO  (vm/75adb67b) [virt.vm]
> (vmId='75adb67b-4390-45f7-a677-8d4515cf4a2b') Changed state to Down:
> 'propagateErrors' (code=1) (vm:1636)
>
> engine:
>
> 2018-01-15 13:12:07,203-05 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (ForkJoinPool-1-worker-15) [] Rerun VM '75adb67b-4390-45f7-a677-8d4515cf4a2b'. Called from VDS 'lago-upgrade-from-release-suite-master-host0'
> 2018-01-15 13:12:07,205-05 INFO  [org.ovirt.engine.core.vdsbroker.VdsManager] (ForkJoinPool-1-worker-15) [] VMs initialization finished for Host: 'lago-upgrade-from-release-suite-master-host0:c98b0719-383b-4656-a28d-03005e2fb862'
> 2018-01-15 13:12:07,256-05 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-19) [] EVENT_ID: USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM vm0 on Host lago-upgrade-from-release-suite-master-host0.
> 2018-01-15 13:12:07,274-05 INFO  [org.ovirt.engine.core.bll.RunVmOnceCommand] (EE-ManagedThreadFactory-engine-Thread-19) [] Lock Acquired to object 'EngineLock:{exclusiveLocks='[75adb67b-4390-45f7-a677-8d4515cf4a2b=VM]', sharedLocks=''}'
> 2018-01-15 13:12:07,294-05 INFO  [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (EE-ManagedThreadFactory-engine-Thread-19) [] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{vmId='75adb67b-4390-45f7-a677-8d4515cf4a2b'}), log id: 41c281dc
> 2018-01-15 13:12:07,294-05 INFO  [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (EE-ManagedThreadFactory-engine-Thread-19) [] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 41c281dc
> 2018-01-15 13:12:07,300-05 WARN  [org.ovirt.engine.core.bll.RunVmOnceCommand] (EE-ManagedThreadFactory-engine-Thread-19) [] Validation of action 'RunVmOnce' failed for user admin@internal-authz. Reasons: VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,SCHEDULING_NO_HOSTS
>
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] Subject: [ OST Failure Report ] [ oVirt Master ] [ 15 Jan 2018 ] [ 004_basic_sanity.vm_run ]

2018-01-16 Thread Dafna Ron
Hi,

We had a failure in OST on the upgrade suite.


Link and headline of suspected patches:
Reported as the cause: https://gerrit.ovirt.org/#/c/86115/4 - vm: support error_policy='report' for CDROMs
Reported as the root cause: https://gerrit.ovirt.org/#/c/86114/4 - net tests: Fix vlan creation name length in nettestlib

Link to Job:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4854/

Link to all logs:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4854/artifact/

(Relevant) error snippet from the log:

vdsm:
type=file, path= threshold=unset at 0x7f57d8027d40>>
watermarkLimit:536870912 (vm:2134)
2018-01-15 13:12:06,126-0500 ERROR (vm/75adb67b) [virt.vm]
(vmId='75adb67b-4390-45f7-a677-8d4515cf4a2b') The vm start process failed
(vm:917)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 846, in
_startUnderlyingVm
self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2735, in
_run
domxml = hooks.before_vm_start(self._buildDomainXML(),
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2191, in
_buildDomainXML
return self._make_domain_xml()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2236, in
_make_domain_xml
devices_xml = self._process_devices()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2116, in
_process_devices
dev_xml = dev.getXML()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vmdevices/storage.py",
line 595, in getXML
diskelem.appendChild(_getDriverXML(self))
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vmdevices/storage.py",
line 957, in _getDriverXML
if (drive['propagateErrors'] == 'on' or
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vmdevices/storage.py",
line 292, in __getitem__
raise KeyError(key)
KeyError: 'propagateErrors'
2018-01-15 13:12:06,129-0500 INFO  (vm/75adb67b) [virt.vm]
(vmId='75adb67b-4390-45f7-a677-8d4515cf4a2b') Changed state to Down:
'propagateErrors' (code=1) (vm:1636)

engine:

2018-01-15 13:12:07,203-05 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (ForkJoinPool-1-worker-15) [] Rerun VM '75adb67b-4390-45f7-a677-8d4515cf4a2b'. Called from VDS 'lago-upgrade-from-release-suite-master-host0'
2018-01-15 13:12:07,205-05 INFO  [org.ovirt.engine.core.vdsbroker.VdsManager] (ForkJoinPool-1-worker-15) [] VMs initialization finished for Host: 'lago-upgrade-from-release-suite-master-host0:c98b0719-383b-4656-a28d-03005e2fb862'
2018-01-15 13:12:07,256-05 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-19) [] EVENT_ID: USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM vm0 on Host lago-upgrade-from-release-suite-master-host0.
2018-01-15 13:12:07,274-05 INFO  [org.ovirt.engine.core.bll.RunVmOnceCommand] (EE-ManagedThreadFactory-engine-Thread-19) [] Lock Acquired to object 'EngineLock:{exclusiveLocks='[75adb67b-4390-45f7-a677-8d4515cf4a2b=VM]', sharedLocks=''}'
2018-01-15 13:12:07,294-05 INFO  [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (EE-ManagedThreadFactory-engine-Thread-19) [] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{vmId='75adb67b-4390-45f7-a677-8d4515cf4a2b'}), log id: 41c281dc
2018-01-15 13:12:07,294-05 INFO  [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (EE-ManagedThreadFactory-engine-Thread-19) [] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 41c281dc
2018-01-15 13:12:07,300-05 WARN  [org.ovirt.engine.core.bll.RunVmOnceCommand] (EE-ManagedThreadFactory-engine-Thread-19) [] Validation of action 'RunVmOnce' failed for user admin@internal-authz. Reasons: VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,SCHEDULING_NO_HOSTS
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Trouble in switch type to OVS process

2018-01-16 Thread Dmitry Semenov
16.01.2018, 11:39, "Dan Kenigsberg" :
> On Tue, Jan 16, 2018 at 10:22 AM, Sandro Bonazzola  
> wrote:
>> Adding some relevant people
>>
>> 2018-01-15 23:13 GMT+01:00 Dmitry Semenov:
>>> Hi everyone!
>>>
>>> I have installed oVirt 4.2 on three nodes with shared storage (FC) and
>>> the Linux bridge setting:
>>> - node01
>>>
>>> - node02 (self-hosted Engine host)
>>>
>>> - node03 (self-hosted Engine host)
>>>
>>> I wanted to set the cluster network switch type to OVS (for OVN)
>>>
>>> (followed this instruction 
>>> https://ovirt.org/develop/release-management/features/network/provider-physical-network/)
>
> Please note that OVN overlays work just fine with Linux Bridge switchtype. 
> You need to move to OVS only if you want OVN on non-overlays.
>
Dan,
if I understand you right, I don't need to switch to OVS to create internal
networks with DHCP (for VMs) and route them to external networks. Is that true?

>>> After this step “Set OVS networking on all vdsm hosts. For each host, 
>>> enable Maintenance mode, Sync All Networks” - my node01 disappeared. I 
>>> don’t know what to do next.
>
> By "disappeared", do you mean it became non-responsive? Would you share with
> us your supervdsm.log from the disappearing host?
>
After launching the network synchronization, all settings of the ovirtmgmt interface
disappeared on the node, and after that the node became unavailable.

supervdsm.log: https://pastebin.com/eW0UpB6j

>>> For the remaining 2 nodes (node02 and node03) I didn’t check this step.
>>>
>>> Each node has a bond (of 2 NICs), and each bond is configured with 3
>>> VLANs for: ovirtmgmt, migration, display.
>>>
>>> How can I return node01 to the cluster with OVS and switch the other nodes to OVS?
>>>
>>> Best regards,
>>>
>>> ___
>>> Devel mailing list
>>> Devel@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>> --
>>
>> SANDRO BONAZZOLA
>>
>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>>
>> Red Hat EMEA
>>
>> TRIED. TESTED. TRUSTED.


-- 
Best regards
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Trouble in switch type to OVS process

2018-01-16 Thread Dan Kenigsberg
On Tue, Jan 16, 2018 at 10:22 AM, Sandro Bonazzola 
wrote:

> Adding some relevant people
>
> 2018-01-15 23:13 GMT+01:00 Dmitry Semenov :
>
>> Hi everyone!
>>
>>
>> I have installed oVirt 4.2 on three nodes with shared storage (FC)
>> and the Linux bridge setting:
>> - node01
>>
>> - node02 (self-hosted Engine host)
>>
>> - node03 (self-hosted Engine host)
>>
>>
>> I wanted to set the cluster network switch type to OVS (for OVN)
>>
>> (followed this instruction:
>> https://ovirt.org/develop/release-management/features/network/provider-physical-network/)
>>
>
Please note that OVN overlays work just fine with Linux Bridge switchtype.
You need to move to OVS only if you want OVN on non-overlays.


>>
>> After this step “Set OVS networking on all vdsm hosts. For each host,
>> enable Maintenance mode, Sync All Networks” - my node01 disappeared. I
>> don’t know what to do next.
>>
>
By "disappeared", do you mean it became non-responsive? Would you share with
us your supervdsm.log from the disappearing host?

>> For the remaining 2 nodes (node02 and node03) I didn’t check this step.
>>
>>
>> Each node has a bond (of 2 NICs), and each bond is configured with 3
>> VLANs for: ovirtmgmt, migration, display.
>>
>>
>> How can I return node01 to the cluster with OVS and switch the other nodes to OVS?
>>
>> Best regards,
>>
>>
>>
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>
>
>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>
> Red Hat EMEA 
> 
> TRIED. TESTED. TRUSTED. 
>
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel