[ovirt-devel] qemu-kvm-ev-2.9.0-16.el7_4.5.1 now available for testing

2017-09-11 Thread Sandro Bonazzola
Hi, qemu-kvm-ev-2.9.0-16.el7_4.5.1 is now available for testing.
Please note that the CR repo is still needed to satisfy dependencies until
CentOS 7.4.1708 is released.
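
(If CR is not enabled yet: on a stock CentOS 7 system with yum-utils
installed, enabling it is usually just

  # yum-config-manager --enable cr

assuming the default CentOS-CR.repo file is present.)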

Here's the changelog:

* Thu Aug 24 2017 Sandro Bonazzola - ev-2.9.0-16.el7_4.5.1
- Removing RH branding from package name

* Wed Aug 23 2017 Miroslav Rezanina - rhev-2.9.0-16.el7_4.5
- kvm-trace-use-static-event-ID-mapping-in-simpletrace.stp.patch [bz#1482515]
- kvm-simpletrace-fix-flight-recorder-no-header-option.patch [bz#1482515]
- kvm-exec-abstract-address_space_do_translate.patch [bz#1482856]
- kvm-redhat-requires-for-the-ipxe-seabios-that-supports-I.patch [bz#1482851]
- Resolves: bz#1482515 ([Tracing] capturing trace data failed [rhel-7.4.z])
- Resolves: bz#1482851 (Requires for the seabios version that support vIOMMU of virtio [rhel-7.4.z])
- Resolves: bz#1482856 (Unable to start vhost if iommu_platform=on but intel_iommu=on not specified in guest [rhel-7.4.z])

* Tue Aug 15 2017 Miroslav Rezanina - rhev-2.9.0-16.el7_4.4
- kvm-nbd-strict-nbd_wr_syncv.patch [bz#1467509]
- kvm-nbd-read_sync-and-friends-return-0-on-success.patch [bz#1467509]
- kvm-nbd-make-nbd_drop-public.patch [bz#1467509]
- kvm-nbd-server-get-rid-of-nbd_negotiate_read-and-friends.patch [bz#1467509]
- Resolves: bz#1467509 (CVE-2017-7539 Qemu quit abnormally when connecting to built-in NBD server [rhel-7.4.z])

* Thu Jul 20 2017 Miroslav Rezanina - rhev-2.9.0-16.el7.3
- kvm-block-Skip-implicit-nodes-in-query-block-blockstats.patch [bz#1473145]
- Resolves: bz#1473145 (Wrong allocation value after virDomainBlockCopy() (alloc=capacity))

* Tue Jul 18 2017 Miroslav Rezanina - rhev-2.9.0-16.el7.2
- kvm-virtio-net-enable-configurable-tx-queue-size.patch [bz#1471666]
- kvm-virtio-net-fix-tx-queue-size-for-vhost-user.patch [bz#1471666]
- Resolves: bz#1471666 (virtio-net: enable configurable tx queue size)

* Mon Jul 17 2017 Miroslav Rezanina - rhev-2.9.0-16.el7_4.1
- kvm-virtio-scsi-finalize-IOMMU-support.patch [bz#1471076]
- kvm-qemu-nbd-Ignore-SIGPIPE.patch [bz#1468108]
- Resolves: bz#1468108 (CVE-2017-10664 qemu-kvm-rhev: Qemu: qemu-nbd: server breaks with SIGPIPE upon client abort [rhel-7.4.z])
- Resolves: bz#1471076 (unbreak virtio-scsi for vIOMMU)

* Tue Jul 04 2017 Miroslav Rezanina - 2.9.0-16.el7
- kvm-AArch64-Add-pci-testdev.patch [bz#1465048]
- Resolves: bz#1465048 (AArch64: Add pci-testdev)

* Tue Jun 27 2017 Miroslav Rezanina - 2.9.0-15.el7
- kvm-hw-ppc-spapr-Adjust-firmware-name-for-PCI-bridges.patch [bz#1459170]
- Resolves: bz#1459170 (SLOF: Can't boot from virtio-scsi disk behind pci-bridge: E3405: No such device)

* Fri Jun 23 2017 Miroslav Rezanina - 2.9.0-14.el7
- kvm-sockets-ensure-we-can-bind-to-both-ipv4-ipv6-separat.patch [bz#1446003]
- Resolves: bz#1446003 (vnc cannot find a free port to use)

* Tue Jun 20 2017 Miroslav Rezanina - 2.9.0-13.el7
- kvm-linux-headers-update.patch [bz#1462061]
- kvm-all-Pass-an-error-object-to-kvm_device_access.patch [bz#1462061]
- kvm-hw-intc-arm_gicv3_its-Implement-state-save-restore.patch [bz#1462061]
- kvm-hw-intc-arm_gicv3_kvm-Implement-pending-table-save.patch [bz#1462061]
- kvm-hw-intc-arm_gicv3_its-Allow-save-restore.patch [bz#1462061]
- Resolves: bz#1462061 (Backport QEMU ITS migration series)

* Tue Jun 20 2017 Miroslav Rezanina - 2.9.0-12.el7
- kvm-pseries-Correct-panic-behaviour-for-pseries-machine-.patch [bz#1458705]
- kvm-virtio-scsi-Reject-scsi-cd-if-data-plane-enabled-RHE.patch [bz#1378816]
- kvm-block-rbd-enable-filename-option-and-parsing.patch [bz#1457088]
- kvm-block-iscsi-enable-filename-option-and-parsing.patch [bz#1457088]
- kvm-nbd-fix-NBD-over-TLS-bz1461827.patch [bz#1461827]
- kvm-monitor-add-handle_hmp_command-trace-event.patch [bz#1457740]
- kvm-monitor-resurrect-handle_qmp_command-trace-event.patch [bz#1457740]
- kvm-hw-pcie-fix-the-generic-pcie-root-port-to-support-mi.patch [bz#1455150]
- Resolves: bz#1378816 (Core dump when use "data-plane" and execute change cd)
- Resolves: bz#1455150 (Unable to detach virtio disk from pcie-root-port after migration)
- Resolves: bz#1457088 (rbd/iscsi: json: pseudo-protocol format is incompatible with 7.3)
- Resolves: bz#1457740 ([Tracing] compling qemu-kvm failed through systemtap)
- Resolves: bz#1458705 (pvdump: QMP reports "GUEST_PANICKED" event but HMP still shows VM running after guest crashed)
- Resolves: bz#1461827 (QEMU hangs in aio wait when trying to access NBD volume over TLS)

* Fri Jun 16 2017 Miroslav Rezanina - 2.9.0-11.el7
- kvm-Enable-USB_CONFIG-for-aarch64.patch [bz#1460010]
- Resolves: bz#1460010 (USB HID (keyboard and tablet) missing [aarch64])

* Tue Jun 13 2017 Miroslav Rezanina - rhev-2.9.0-10.el7 -

[ovirt-devel] ovirt nfs mount caused sanlock failed to access data storage

2017-09-11 Thread pengyixiang
Hello everyone,
sanlock failed because it cannot read the NFS storage's data. I tried
chmod 777 /rhev/data-center/mnt/192.168.11.55\:_home_dataStorage/1845be22-1ac4-4e42-bbcb-7ba9ccd6e569/dom_md/*
(adding permissions for others), and then it worked.


The vdsm log showing the sanlock failure:
425120 Traceback (most recent call last):
425121   File "/usr/lib/python2.7/dist-packages/vdsm/storage/task.py", line 878, in _run
425122 return fn(*args, **kargs)
425123   File "/usr/lib/python2.7/dist-packages/vdsm/logUtils.py", line 52, in wrapper
425124 res = f(*args, **kwargs)
425125   File "/usr/share/vdsm/storage/hsm.py", line 619, in getSpmStatus
425126 status = self._getSpmStatusInfo(pool)
425127   File "/usr/share/vdsm/storage/hsm.py", line 613, in _getSpmStatusInfo
425128 (pool.spmRole,) + pool.getSpmStatus()))
425129   File "/usr/share/vdsm/storage/sp.py", line 141, in getSpmStatus
425130 return self._backend.getSpmStatus()
425131   File "/usr/share/vdsm/storage/spbackends.py", line 433, in getSpmStatus
425132 lVer, spmId = self.masterDomain.inquireClusterLock()
425133   File "/usr/share/vdsm/storage/sd.py", line 817, in inquireClusterLock
425134 return self._manifest.inquireDomainLock()
425135   File "/usr/share/vdsm/storage/sd.py", line 522, in inquireDomainLock
425136 return self._domainLock.inquire(self.getDomainLease())
425137   File "/usr/lib/python2.7/dist-packages/vdsm/storage/clusterlock.py", line 372, in inquire
425138 resource = sanlock.read_resource(lease.path, lease.offset)
425139 SanlockException: (13, 'Sanlock resource read failure', 'Permission denied')


I tested it; on the node, I added user "linx" to group "kvm":

$ cat /etc/group | grep "kvm"
kvm:x:112:qemu,vdsm,linx,sanlock


Then I created a file in $HOME:
$ ls -l
total 16
-rw-rw---- 1 vdsm kvm 6 Sep 11 20:06 1.txt
drwxr-xr-x 9 linx linx 4096 Sep  1 15:58 linx-virtualization
drw-rw---- 3 linx linx 4096 Sep 11 20:13 test2
drw-rw---- 2 linx linx 4096 Sep 11 20:19 test3


Then, as user "linx", we can view the file:
$ cat 1.txt
pencc


The leases file is vdsm:kvm too:
$ ls -l 
/rhev/data-center/mnt/192.168.11.55\:_home_dataStorage/1845be22-1ac4-4e42-bbcb-7ba9ccd6e569/dom_md/leases
-rw-rw---- 1 vdsm kvm 2097152 Sep 11 19:21 
/rhev/data-center/mnt/192.168.11.55:_home_dataStorage/1845be22-1ac4-4e42-bbcb-7ba9ccd6e569/dom_md/leases


but as user "linx" we cannot read the leases file:
$ cat 
/rhev/data-center/mnt/192.168.11.55\:_home_dataStorage/1845be22-1ac4-4e42-bbcb-7ba9ccd6e569/dom_md/leases
cat: 
'/rhev/data-center/mnt/192.168.11.55:_home_dataStorage/1845be22-1ac4-4e42-bbcb-7ba9ccd6e569/dom_md/leases':
 Permission denied



Why is this? The NFS server configuration follows:
# cat /etc/exports

/home/dataStorage 192.168.11.*(rw,sync)
/home/dataStorage2 192.168.11.*(rw,sync,no_root_squash,no_subtree_check)
/home/isoStorage 192.168.11.*(rw,sync,no_root_squash,no_subtree_check)



Are my nfs-server exports missing some options? Any ideas?
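
(For reference: oVirt's NFS troubleshooting docs often suggest squashing
all client users to vdsm/kvm — uid/gid 36 on the hypervisors — instead of
relying on "other" permissions. A sketch of such an export, to be adapted
and verified against your setup:

/home/dataStorage 192.168.11.*(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)

followed by "exportfs -ra" on the server to re-export.)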

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 2017-09-07 ] [import_template_from_glance]

2017-09-11 Thread Roy Golan
On Mon, 11 Sep 2017 at 23:41 Dan Kenigsberg  wrote:

> On Mon, Sep 11, 2017 at 10:47 PM, Roy Golan  wrote:
> >
> >
> > On Mon, 11 Sep 2017 at 22:34 Allon Mureinik  wrote:
> >>
> >> That was more or less my guess from the brief look I took at the
> >> stacktrace.
> >>
> >> I don't want to +2 a patch in the network area, but I'd suggest merging
> >> this patch (which seems right regardless of the recent failure), and
> see if
> >> it solves the OST issue.
> >>
> > OST will run with the network patch reverted, so I need to bring it into
> the
> > Q again (revert the reverted :))
>
> Even I can do that. I've rebased your patches on top of current master,
> and reintroduced Mucha's patch (https://gerrit.ovirt.org/#/c/81639/).
> If OST passes, I'll ask you to backport your injection fixes - we need
> to solve the MAC collision bug in ovirt-4.1, too.
>
> http://jenkins.ovirt.org/job/ovirt-system-tests_manual/1129/ is green

>
> >
> > On my side, with the MacPool changes it works, imported vms from domains,
> > started vm, basic editing.
> >
> > To add to that, CoCoAsyncTaskHelper is another @Singleton EJB, but if I'm
> > not mistaken it should be available _before_ the Backend in order to
> > keep compensation running, in case we have a complete action that needs
> > to run during compensation.
> > Maybe this is a reason for MacPool to NOT depend on Backend, so
> > compensation would complete?
> > Needs a second look.
> >
> >
> >> On Mon, Sep 11, 2017 at 9:47 PM, Roy Golan  wrote:
> >>>
> >>>
> >>>
> >>> On Mon, 11 Sep 2017 at 21:30 Roy Golan  wrote:
> 
>  I think I solved it with https://gerrit.ovirt.org/#/c/81636/  so the
>  revert can be reverted. +Martin Mucha want to test it? It is working
> on my
>  end.
> 
> >>> sorry, it's this https://gerrit.ovirt.org/c/81637/2
> 
> 
>  On Mon, 11 Sep 2017 at 18:41 Dan Kenigsberg 
> wrote:
> >
> > On Mon, Sep 11, 2017 at 4:46 PM, Dusan Fodor 
> wrote:
> > >
> > > Hello all,
> > > this same error still causes OST to fail.
> > >
> >
> > We are aware of it, but do not understand it at all.
> >
> > Anyway, a revert is proposed and https://gerrit.ovirt.org/#/c/81618/
> > would be merged once approved by CI.
> >
> > >
> > > Jobs:
> > > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2466
> > >
> > > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2471
> > >
> > > Logs:
> > >
> > >
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2466/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log
> > >
> > >
> > >
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2471/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log
> > >
> > >
> > > On Mon, Sep 11, 2017 at 1:22 PM, Eyal Edri 
> wrote:
> > >>
> > >>
> > >>
> > >> On Mon, Sep 11, 2017 at 1:21 PM, Dan Kenigsberg <
> dan...@redhat.com>
> > >> wrote:
> > >>>
> > >>> On Mon, Sep 11, 2017 at 8:59 AM, Dan Kenigsberg <
> dan...@redhat.com>
> > >>> wrote:
> > >>> > On Thu, Sep 7, 2017 at 6:12 PM, Evgheni Dereveanchin
> > >>> >  wrote:
> > >>> >> Test failed: [ import_template_from_glance ]
> > >>> >>
> > >>> >> Link to suspected patches:
> > >>> >> https://gerrit.ovirt.org/#/c/80450/
> > >>> >
> > >>> > Martin, this is "core: when initializing MacPool also register
> in
> > >>> > it
> > >>> > nics in snapshots"
> > >>> > and the error seems somewhat related to it
> > >>>
> > >>> Gil, can you tell us which is the last Engine patch that passed
> > >>> OST? I
> > >>> tried to build https://gerrit.ovirt.org/#/c/76309/ but I'm
> guessing
> > >>> that "ci please build" does not work on merged patches.
> > >>
> > >>
> > >> Not sure I follow, the latest working engine should be deployed to
> > >> tested repo,
> > >> So if you'll run the manual job w/o any custom repos it should
> work
> > >> ( if it doesn't, then it means something slipped in the tested
> repo and we
> > >> should investigate ).
> > >>
> > >> But anyhow, your 'ci please build' also worked, and produced RPMs
> on
> > >> the job, so you can still try it:
> > >>
> > >> Patch Set 26:
> > >> Build Successful
> > >>
> > >>
> http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-on-demand-fc25-x86_64/377/
> > >> : SUCCESS
> > >>
> > >>
> 

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 2017-09-07 ] [import_template_from_glance]

2017-09-11 Thread Martin Perina
On Mon, Sep 11, 2017 at 9:47 PM, Roy Golan  wrote:

>
>
> On Mon, 11 Sep 2017 at 22:34 Allon Mureinik  wrote:
>
>> That was more or less my guess from the brief look I took at the
>> stacktrace.
>>
>> I don't want to +2 a patch in the network area, but I'd suggest merging
>> this patch (which seems right regardless of the recent failure), and see if
>> it solves the OST issue.
>>
>> OST will run with the network patch reverted, so I need to bring it into
> the Q again (revert the reverted :))
>
> On my side, with the MacPool changes it works, imported vms from domains,
> started vm, basic editing.
>
> To add to that, CoCoAsyncTaskHelper is another @Singleton EJB, but if I'm
> not mistaken it should be available _before_ the Backend in order to
> keep compensation running, in case we have a complete action that needs to
> run during compensation.
> Maybe this is a reason for MacPool to NOT depend on Backend, so
> compensation would complete?
> Needs a second look.
>
>>
Ravi/Moti, could you please comment?


> On Mon, Sep 11, 2017 at 9:47 PM, Roy Golan  wrote:
>>
>>>
>>>
>>> On Mon, 11 Sep 2017 at 21:30 Roy Golan  wrote:
>>>
 I think I solved it with https://gerrit.ovirt.org/#/c/81636/  so the
 revert can be reverted. +Martin Mucha  want to test
 it? It is working on my end.

 sorry, it's this https://gerrit.ovirt.org/c/81637/2
>>>

 On Mon, 11 Sep 2017 at 18:41 Dan Kenigsberg  wrote:

> On Mon, Sep 11, 2017 at 4:46 PM, Dusan Fodor 
> wrote:
> >
> > Hello all,
> > this same error still causes OST to fail.
> >
>
> We are aware of it, but do not understand it at all.
>
> Anyway, a revert is proposed and https://gerrit.ovirt.org/#/c/81618/
> would be merged once approved by CI.
>
> >
> > Jobs:
> > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2466
> >
> > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2471
> >
> > Logs:
> > http://jenkins.ovirt.org/job/ovirt-master_change-queue-
> tester/2466/artifact/exported-artifacts/basic-suit-master-
> el7/test_logs/basic-suite-master/post-002_bootstrap.py/
> lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log
> >
> > http://jenkins.ovirt.org/job/ovirt-master_change-queue-
> tester/2471/artifact/exported-artifacts/basic-suit-master-
> el7/test_logs/basic-suite-master/post-002_bootstrap.py/
> lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log
> >
> >
> > On Mon, Sep 11, 2017 at 1:22 PM, Eyal Edri  wrote:
> >>
> >>
> >>
> >> On Mon, Sep 11, 2017 at 1:21 PM, Dan Kenigsberg 
> wrote:
> >>>
> >>> On Mon, Sep 11, 2017 at 8:59 AM, Dan Kenigsberg 
> wrote:
> >>> > On Thu, Sep 7, 2017 at 6:12 PM, Evgheni Dereveanchin
> >>> >  wrote:
> >>> >> Test failed: [ import_template_from_glance ]
> >>> >>
> >>> >> Link to suspected patches:
> >>> >> https://gerrit.ovirt.org/#/c/80450/
> >>> >
> >>> > Martin, this is "core: when initializing MacPool also register
> in it
> >>> > nics in snapshots"
> >>> > and the error seems somewhat related to it
> >>>
> >>> Gil, can you tell us which is the last Engine patch that passed
> OST? I
> >>> tried to build https://gerrit.ovirt.org/#/c/76309/ but I'm
> guessing
> >>> that "ci please build" does not work on merged patches.
> >>
> >>
> >> Not sure I follow, the latest working engine should be deployed to
> tested repo,
> >> So if you'll run the manual job w/o any custom repos it should work
> ( if it doesn't, then it means something slipped in the tested repo and we
> should investigate ).
> >>
> >> But anyhow, your 'ci please build' also worked, and produced RPMs
> on the job, so you can still try it:
> >>
> >> Patch Set 26:
> >> Build Successful
> >> http://jenkins.ovirt.org/job/ovirt-engine_master_build-
> artifacts-on-demand-fc25-x86_64/377/ : SUCCESS
> >> http://jenkins.ovirt.org/job/ovirt-engine_master_build-
> artifacts-on-demand-el7-x86_64/428/ : SUCCESS
> >>
> >>
> >>>
> >>>
> >>> >
> >>> >>
> >>> >> Link to Job:
> >>> >> http://jenkins.ovirt.org/job/ovirt-master_change-queue-
> tester/2397/
> >>> >>
> >>> >> Link to logs:
> >>> >> http://jenkins.ovirt.org/job/ovirt-master_change-queue-
> tester/2397/artifact/exported-artifacts/basic-suit-master-
> el7/test_logs/basic-suite-master/post-002_bootstrap.py/
> lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log
> >>> >>
> >>> >> Error snippet from log:
> >>> >>
> >>> >> 
> >>> >>
> >>> >> 2017-09-07 08:21:48,657-04 

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 2017-09-07 ] [import_template_from_glance]

2017-09-11 Thread Roy Golan
On Mon, 11 Sep 2017 at 22:34 Allon Mureinik  wrote:

> That was more or less my guess from the brief look I took at the
> stacktrace.
>
> I don't want to +2 a patch in the network area, but I'd suggest merging
> this patch (which seems right regardless of the recent failure), and see if
> it solves the OST issue.
>
> OST will run with the network patch reverted, so I need to bring it into
the Q again (revert the reverted :))

On my side, with the MacPool changes it works, imported vms from domains,
started vm, basic editing.

To add to that, CoCoAsyncTaskHelper is another @Singleton EJB, but if I'm
not mistaken it should be available _before_ the Backend in order to
keep compensation running, in case we have a complete action that needs to
run during compensation.
Maybe this is a reason for MacPool to NOT depend on Backend, so
compensation would complete?
Needs a second look.


On Mon, Sep 11, 2017 at 9:47 PM, Roy Golan  wrote:
>
>>
>>
>> On Mon, 11 Sep 2017 at 21:30 Roy Golan  wrote:
>>
>>> I think I solved it with https://gerrit.ovirt.org/#/c/81636/  so the
>>> revert can be reverted. +Martin Mucha  want to test
>>> it? It is working on my end.
>>>
>>> sorry, it's this https://gerrit.ovirt.org/c/81637/2
>>
>>>
>>> On Mon, 11 Sep 2017 at 18:41 Dan Kenigsberg  wrote:
>>>
 On Mon, Sep 11, 2017 at 4:46 PM, Dusan Fodor  wrote:
 >
 > Hello all,
 > this same error still causes OST to fail.
 >

 We are aware of it, but do not understand it at all.

 Anyway, a revert is proposed and https://gerrit.ovirt.org/#/c/81618/
 would be merged once approved by CI.

 >
 > Jobs:
 > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2466
 >
 > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2471
 >
 > Logs:
 >
 http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2466/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log
 >
 >
 http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2471/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log
 >
 >
 > On Mon, Sep 11, 2017 at 1:22 PM, Eyal Edri  wrote:
 >>
 >>
 >>
 >> On Mon, Sep 11, 2017 at 1:21 PM, Dan Kenigsberg 
 wrote:
 >>>
 >>> On Mon, Sep 11, 2017 at 8:59 AM, Dan Kenigsberg 
 wrote:
 >>> > On Thu, Sep 7, 2017 at 6:12 PM, Evgheni Dereveanchin
 >>> >  wrote:
 >>> >> Test failed: [ import_template_from_glance ]
 >>> >>
 >>> >> Link to suspected patches:
 >>> >> https://gerrit.ovirt.org/#/c/80450/
 >>> >
 >>> > Martin, this is "core: when initializing MacPool also register in
 it
 >>> > nics in snapshots"
 >>> > and the error seems somewhat related to it
 >>>
 >>> Gil, can you tell us which is the last Engine patch that passed
 OST? I
 >>> tried to build https://gerrit.ovirt.org/#/c/76309/ but I'm guessing
 >>> that "ci please build" does not work on merged patches.
 >>
 >>
 >> Not sure I follow, the latest working engine should be deployed to
 tested repo,
 >> So if you'll run the manual job w/o any custom repos it should work
 ( if it doesn't, then it means something slipped in the tested repo and we
 should investigate ).
 >>
 >> But anyhow, your 'ci please build' also worked, and produced RPMs on
 the job, so you can still try it:
 >>
 >> Patch Set 26:
 >> Build Successful
 >>
 http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-on-demand-fc25-x86_64/377/
 : SUCCESS
 >>
 http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-on-demand-el7-x86_64/428/
 : SUCCESS
 >>
 >>
 >>>
 >>>
 >>> >
 >>> >>
 >>> >> Link to Job:
 >>> >>
 http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2397/
 >>> >>
 >>> >> Link to logs:
 >>> >>
 http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2397/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log
 >>> >>
 >>> >> Error snippet from log:
 >>> >>
 >>> >> 
 >>> >>
 >>> >> 2017-09-07 08:21:48,657-04 INFO
 >>> >> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
 >>> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb]
 Lock Acquired
 >>> >> to object
 >>> >>
 'EngineLock:{exclusiveLocks='[2c2b56b5-cac6-469d-b0e0-9744309ca8c1=TEMPLATE,
 >>> >> 

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 2017-09-07 ] [import_template_from_glance]

2017-09-11 Thread Allon Mureinik
That was more or less my guess from the brief look I took at the
stacktrace.

I don't want to +2 a patch in the network area, but I'd suggest merging
this patch (which seems right regardless of the recent failure), and see if
it solves the OST issue.

On Mon, Sep 11, 2017 at 9:47 PM, Roy Golan  wrote:

>
>
> On Mon, 11 Sep 2017 at 21:30 Roy Golan  wrote:
>
>> I think I solved it with https://gerrit.ovirt.org/#/c/81636/  so the
>> revert can be reverted. +Martin Mucha  want to test
>> it? It is working on my end.
>>
>> sorry, it's this https://gerrit.ovirt.org/c/81637/2
>
>>
>> On Mon, 11 Sep 2017 at 18:41 Dan Kenigsberg  wrote:
>>
>>> On Mon, Sep 11, 2017 at 4:46 PM, Dusan Fodor  wrote:
>>> >
>>> > Hello all,
>>> > this same error still causes OST to fail.
>>> >
>>>
>>> We are aware of it, but do not understand it at all.
>>>
>>> Anyway, a revert is proposed and https://gerrit.ovirt.org/#/c/81618/
>>> would be merged once approved by CI.
>>>
>>> >
>>> > Jobs:
>>> > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2466
>>> >
>>> > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2471
>>> >
>>> > Logs:
>>> > http://jenkins.ovirt.org/job/ovirt-master_change-queue-
>>> tester/2466/artifact/exported-artifacts/basic-suit-master-
>>> el7/test_logs/basic-suite-master/post-002_bootstrap.py/
>>> lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log
>>> >
>>> > http://jenkins.ovirt.org/job/ovirt-master_change-queue-
>>> tester/2471/artifact/exported-artifacts/basic-suit-master-
>>> el7/test_logs/basic-suite-master/post-002_bootstrap.py/
>>> lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log
>>> >
>>> >
>>> > On Mon, Sep 11, 2017 at 1:22 PM, Eyal Edri  wrote:
>>> >>
>>> >>
>>> >>
>>> >> On Mon, Sep 11, 2017 at 1:21 PM, Dan Kenigsberg 
>>> wrote:
>>> >>>
>>> >>> On Mon, Sep 11, 2017 at 8:59 AM, Dan Kenigsberg 
>>> wrote:
>>> >>> > On Thu, Sep 7, 2017 at 6:12 PM, Evgheni Dereveanchin
>>> >>> >  wrote:
>>> >>> >> Test failed: [ import_template_from_glance ]
>>> >>> >>
>>> >>> >> Link to suspected patches:
>>> >>> >> https://gerrit.ovirt.org/#/c/80450/
>>> >>> >
>>> >>> > Martin, this is "core: when initializing MacPool also register in
>>> it
>>> >>> > nics in snapshots"
>>> >>> > and the error seems somewhat related to it
>>> >>>
>>> >>> Gil, can you tell us which is the last Engine patch that passed OST?
>>> I
>>> >>> tried to build https://gerrit.ovirt.org/#/c/76309/ but I'm guessing
>>> >>> that "ci please build" does not work on merged patches.
>>> >>
>>> >>
>>> >> Not sure I follow, the latest working engine should be deployed to
>>> tested repo,
>>> >> So if you'll run the manual job w/o any custom repos it should work (
>>> if it doesn't, then it means something slipped in the tested repo and we
>>> should investigate ).
>>> >>
>>> >> But anyhow, your 'ci please build' also worked, and produced RPMs on
>>> the job, so you can still try it:
>>> >>
>>> >> Patch Set 26:
>>> >> Build Successful
>>> >> http://jenkins.ovirt.org/job/ovirt-engine_master_build-
>>> artifacts-on-demand-fc25-x86_64/377/ : SUCCESS
>>> >> http://jenkins.ovirt.org/job/ovirt-engine_master_build-
>>> artifacts-on-demand-el7-x86_64/428/ : SUCCESS
>>> >>
>>> >>
>>> >>>
>>> >>>
>>> >>> >
>>> >>> >>
>>> >>> >> Link to Job:
>>> >>> >> http://jenkins.ovirt.org/job/ovirt-master_change-queue-
>>> tester/2397/
>>> >>> >>
>>> >>> >> Link to logs:
>>> >>> >> http://jenkins.ovirt.org/job/ovirt-master_change-queue-
>>> tester/2397/artifact/exported-artifacts/basic-suit-master-
>>> el7/test_logs/basic-suite-master/post-002_bootstrap.py/
>>> lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log
>>> >>> >>
>>> >>> >> Error snippet from log:
>>> >>> >>
>>> >>> >> 
>>> >>> >>
>>> >>> >> 2017-09-07 08:21:48,657-04 INFO
>>> >>> >> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
>>> >>> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb]
>>> Lock Acquired
>>> >>> >> to object
>>> >>> >> 'EngineLock:{exclusiveLocks='[2c2b56b5-cac6-469d-b0e0-
>>> 9744309ca8c1=TEMPLATE,
>>> >>> >> CirrOS_0.3.5_for_x86_64_glance_template=TEMPLATE_NAME]',
>>> sharedLocks='[]'}'
>>> >>> >> 2017-09-07 08:21:48,675-04 INFO
>>> >>> >> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
>>> >>> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb]
>>> Running
>>> >>> >> command: AddVmTemplateCommand internal: true. Entities affected
>>> :  ID:
>>> >>> >> d279c4e9-09e7-4dd9-9eff-10b31ee2adfc Type: StoragePoolAction
>>> group
>>> >>> >> CREATE_TEMPLATE with role type USER
>>> >>> >> 2017-09-07 08:21:48,695-04 INFO
>>> >>> >> [org.ovirt.engine.core.bll.storage.disk.
>>> CreateAllTemplateDisksCommand]
>>> >>> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb]
>>> Running
>>> >>> >> command: CreateAllTemplateDisksCommand internal: 

[ovirt-devel] Tip for a screen-recorder

2017-09-11 Thread Jakub Niedermertl
Hi,

recently I stumbled upon Peek [1], a nice one-button screen recorder. It
* records a selected rectangle of the screen,
* can produce gif / webm / mp4,
* is packaged for most Linux distributions,
* and just works.
I find it great for bug reporting.

[1]: https://github.com/phw/peek
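
If your distribution doesn't ship it, Peek is also distributed via
Flatpak; assuming Flatpak is set up (and taking the app id below as an
assumption worth double-checking against the project page), installing
would look like:

  $ flatpak install flathub com.uploadedlobster.peek
  $ flatpak run com.uploadedlobster.peek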

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 2017-09-07 ] [import_template_from_glance]

2017-09-11 Thread Roy Golan
On Mon, 11 Sep 2017 at 21:30 Roy Golan  wrote:

> I think I solved it with https://gerrit.ovirt.org/#/c/81636/  so the
> revert can be reverted. +Martin Mucha  want to test
> it? It is working on my end.
>
> sorry, it's this https://gerrit.ovirt.org/c/81637/2

>
> On Mon, 11 Sep 2017 at 18:41 Dan Kenigsberg  wrote:
>
>> On Mon, Sep 11, 2017 at 4:46 PM, Dusan Fodor  wrote:
>> >
>> > Hello all,
>> > this same error still causes OST to fail.
>> >
>>
>> We are aware of it, but do not understand it at all.
>>
>> Anyway, a revert is proposed and https://gerrit.ovirt.org/#/c/81618/
>> would be merged once approved by CI.
>>
>> >
>> > Jobs:
>> > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2466
>> >
>> > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2471
>> >
>> > Logs:
>> >
>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2466/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log
>> >
>> >
>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2471/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log
>> >
>> >
>> > On Mon, Sep 11, 2017 at 1:22 PM, Eyal Edri  wrote:
>> >>
>> >>
>> >>
>> >> On Mon, Sep 11, 2017 at 1:21 PM, Dan Kenigsberg 
>> wrote:
>> >>>
>> >>> On Mon, Sep 11, 2017 at 8:59 AM, Dan Kenigsberg 
>> wrote:
>> >>> > On Thu, Sep 7, 2017 at 6:12 PM, Evgheni Dereveanchin
>> >>> >  wrote:
>> >>> >> Test failed: [ import_template_from_glance ]
>> >>> >>
>> >>> >> Link to suspected patches:
>> >>> >> https://gerrit.ovirt.org/#/c/80450/
>> >>> >
>> >>> > Martin, this is "core: when initializing MacPool also register in it
>> >>> > nics in snapshots"
>> >>> > and the error seems somewhat related to it
>> >>>
>> >>> Gil, can you tell us which is the last Engine patch that passed OST? I
>> >>> tried to build https://gerrit.ovirt.org/#/c/76309/ but I'm guessing
>> >>> that "ci please build" does not work on merged patches.
>> >>
>> >>
>> >> Not sure I follow, the latest working engine should be deployed to
>> tested repo,
>> >> So if you'll run the manual job w/o any custom repos it should work (
>> if it doesn't, then it means something slipped in the tested repo and we
>> should investigate ).
>> >>
>> >> But anyhow, your 'ci please build' also worked, and produced RPMs on
>> the job, so you can still try it:
>> >>
>> >> Patch Set 26:
>> >> Build Successful
>> >>
>> http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-on-demand-fc25-x86_64/377/
>> : SUCCESS
>> >>
>> http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-on-demand-el7-x86_64/428/
>> : SUCCESS
>> >>
>> >>
>> >>>
>> >>>
>> >>> >
>> >>> >>
>> >>> >> Link to Job:
>> >>> >>
>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2397/
>> >>> >>
>> >>> >> Link to logs:
>> >>> >>
>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2397/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log
>> >>> >>
>> >>> >> Error snippet from log:
>> >>> >>
>> >>> >> 
>> >>> >>
>> >>> >> 2017-09-07 08:21:48,657-04 INFO
>> >>> >> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
>> >>> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb]
>> Lock Acquired
>> >>> >> to object
>> >>> >>
>> 'EngineLock:{exclusiveLocks='[2c2b56b5-cac6-469d-b0e0-9744309ca8c1=TEMPLATE,
>> >>> >> CirrOS_0.3.5_for_x86_64_glance_template=TEMPLATE_NAME]',
>> sharedLocks='[]'}'
>> >>> >> 2017-09-07 08:21:48,675-04 INFO
>> >>> >> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
>> >>> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb]
>> Running
>> >>> >> command: AddVmTemplateCommand internal: true. Entities affected :
>> ID:
>> >>> >> d279c4e9-09e7-4dd9-9eff-10b31ee2adfc Type: StoragePoolAction group
>> >>> >> CREATE_TEMPLATE with role type USER
>> >>> >> 2017-09-07 08:21:48,695-04 INFO
>> >>> >>
>> [org.ovirt.engine.core.bll.storage.disk.CreateAllTemplateDisksCommand]
>> >>> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb]
>> Running
>> >>> >> command: CreateAllTemplateDisksCommand internal: true.
>> >>> >> 2017-09-07 08:21:48,722-04 INFO
>> >>> >> [org.ovirt.engine.core.utils.transaction.TransactionSupport]
>> >>> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb]
>> transaction
>> >>> >> rolled back
>> >>> >> 2017-09-07 08:21:48,722-04 ERROR
>> >>> >> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
>> >>> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb]
>> Command
>> >>> >> 'org.ovirt.engine.core.bll.AddVmTemplateCommand' failed: null
>> 

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 2017-09-07 ] [import_template_from_glance]

2017-09-11 Thread Roy Golan
I think I solved it with https://gerrit.ovirt.org/#/c/81636/  so the revert
can be reverted. +Martin Mucha  want to test it? It is
working on my end.

On Mon, 11 Sep 2017 at 18:41 Dan Kenigsberg  wrote:

> On Mon, Sep 11, 2017 at 4:46 PM, Dusan Fodor  wrote:
> >
> > Hello all,
> > this same error still causes OST to fail.
> >
>
> We are aware of it, but do not understand it at all.
>
> Anyway, a revert is proposed and https://gerrit.ovirt.org/#/c/81618/
> would be merged once approved by CI.
>
> >
> > Jobs:
> > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2466
> >
> > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2471
> >
> > Logs:
> >
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2466/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log
> >
> >
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2471/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log
> >
> >
> > On Mon, Sep 11, 2017 at 1:22 PM, Eyal Edri  wrote:
> >>
> >>
> >>
> >> On Mon, Sep 11, 2017 at 1:21 PM, Dan Kenigsberg 
> wrote:
> >>>
> >>> On Mon, Sep 11, 2017 at 8:59 AM, Dan Kenigsberg 
> wrote:
> >>> > On Thu, Sep 7, 2017 at 6:12 PM, Evgheni Dereveanchin
> >>> >  wrote:
> >>> >> Test failed: [ import_template_from_glance ]
> >>> >>
> >>> >> Link to suspected patches:
> >>> >> https://gerrit.ovirt.org/#/c/80450/
> >>> >
> >>> > Martin, this is "core: when initializing MacPool also register in it
> >>> > nics in snapshots"
> >>> > and the error seems somewhat related to it
> >>>
> >>> Gil, can you tell us which is the last Engine patch that passed OST? I
> >>> tried to build https://gerrit.ovirt.org/#/c/76309/ but I'm guessing
> >>> that "ci please build" does not work on merged patches.
> >>
> >>
> >> Not sure I follow, the latest working engine should be deployed to
> tested repo,
> >> So if you'll run the manual job w/o any custom repos it should work (
> if it doesn't, then it means something slipped in the tested repo and we
> should investigate ).
> >>
> >> But anyhow, your 'ci please build' also worked, and produced RPMs on
> the job, so you can still try it:
> >>
> >> Patch Set 26:
> >> Build Successful
> >>
> http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-on-demand-fc25-x86_64/377/
> : SUCCESS
> >>
> http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-on-demand-el7-x86_64/428/
> : SUCCESS
> >>
> >>
> >>>
> >>>
> >>> >
> >>> >>
> >>> >> Link to Job:
> >>> >> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2397/
> >>> >>
> >>> >> Link to logs:
> >>> >>
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2397/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log
> >>> >>
> >>> >> Error snippet from log:
> >>> >>
> >>> >> 
> >>> >>
> >>> >> 2017-09-07 08:21:48,657-04 INFO
> >>> >> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
> >>> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Lock
> Acquired
> >>> >> to object
> >>> >>
> 'EngineLock:{exclusiveLocks='[2c2b56b5-cac6-469d-b0e0-9744309ca8c1=TEMPLATE,
> >>> >> CirrOS_0.3.5_for_x86_64_glance_template=TEMPLATE_NAME]',
> sharedLocks='[]'}'
> >>> >> 2017-09-07 08:21:48,675-04 INFO
> >>> >> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
> >>> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb]
> Running
> >>> >> command: AddVmTemplateCommand internal: true. Entities affected :
> ID:
> >>> >> d279c4e9-09e7-4dd9-9eff-10b31ee2adfc Type: StoragePoolAction group
> >>> >> CREATE_TEMPLATE with role type USER
> >>> >> 2017-09-07 08:21:48,695-04 INFO
> >>> >>
> [org.ovirt.engine.core.bll.storage.disk.CreateAllTemplateDisksCommand]
> >>> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb]
> Running
> >>> >> command: CreateAllTemplateDisksCommand internal: true.
> >>> >> 2017-09-07 08:21:48,722-04 INFO
> >>> >> [org.ovirt.engine.core.utils.transaction.TransactionSupport]
> >>> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb]
> transaction
> >>> >> rolled back
> >>> >> 2017-09-07 08:21:48,722-04 ERROR
> >>> >> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
> >>> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb]
> Command
> >>> >> 'org.ovirt.engine.core.bll.AddVmTemplateCommand' failed: null
> >>> >> 2017-09-07 08:21:48,722-04 ERROR
> >>> >> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
> >>> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb]
> Exception:
> >>> >> java.lang.NullPointerException
> >>> >> at
> >>> >>
> 

Re: [ovirt-devel] can't create a VM on engine master

2017-09-11 Thread Roy Golan
https://gerrit.ovirt.org/#/c/81636/ should solve it. I made
MacPoolPerCluster @DependsOn("Backend"), which should make the config
available at init time.
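
A minimal sketch of the idea, assuming the standard javax.ejb annotations
(the class body is illustrative, not the actual engine code):

import javax.annotation.PostConstruct;
import javax.ejb.DependsOn;
import javax.ejb.Singleton;
import javax.ejb.Startup;

// Force container init order between singleton EJBs: "Backend" is the
// bean name of the singleton that loads the engine configuration.
@Singleton
@Startup
@DependsOn("Backend") // the container initializes Backend before this bean
public class MacPoolPerCluster {

    @PostConstruct
    void init() {
        // Safe to read the config here: Backend's @PostConstruct has
        // already completed, so an uninitialized-config NPE can't
        // happen at this point.
    }
}

With @DependsOn the container also destroys the beans in reverse order,
so Backend outlives its dependents.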

On Mon, 11 Sep 2017 at 21:08 Roy Golan  wrote:

> Maybe the MacPoolPerCluster EJB bean is started before the Backend bean,
> probably a side effect of the WildFly 11 move
>
> I'll try to make a dependency and see if it helps
>
> On Mon, 11 Sep 2017 at 18:40 Benny Zlotnik  wrote:
>
>> This error can be observed in server.log:
>> 2017-09-11 18:37:03,268+03 ERROR
>> [org.jboss.as.controller.management-operation] (Controller Boot Thread)
>> WFLYCTL0013: Operation ("deploy") failed - address: ([("deployment" =>
>> "engine.ear")]) - failure description: {"WFLYCTL0080: Failed services" =>
>> {"jboss.deployment.subunit.\"engine.ear\".\"bll.jar\".component.MacPoolPerCluster.START"
>> => "java.lang.IllegalStateException: WFLYEE0042: Failed to construct
>> component instance
>> Caused by: java.lang.IllegalStateException: WFLYEE0042: Failed to
>> construct component instance
>> Caused by: javax.ejb.EJBException:
>> org.jboss.weld.exceptions.WeldException: WELD-49: Unable to invoke
>> public void
>> org.ovirt.engine.core.bll.CpuFlagsManagerHandler.initDictionaries() on
>> org.ovirt.engine.core.bll.CpuFlagsManagerHandler@7f971888
>> Caused by: org.jboss.weld.exceptions.WeldException: WELD-49:
>> Unable to invoke public void
>> org.ovirt.engine.core.bll.CpuFlagsManagerHandler.initDictionaries() on
>> org.ovirt.engine.core.bll.CpuFlagsManagerHandler@7f971888
>> Caused by: java.lang.reflect.InvocationTargetException
>> Caused by: java.lang.NullPointerException"}}
>>
>>
>> On Mon, Sep 11, 2017 at 6:11 PM, Benny Zlotnik 
>> wrote:
>>
>>> Yes, reverting this patch[1] helped (there's a discussion about OST
>>> failing)
>>>
>>> [1] - https://gerrit.ovirt.org/#/c/80450/
>>>
>>> On Mon, Sep 11, 2017 at 6:05 PM, Greg Sheremeta 
>>> wrote:
>>>
 Hi,

 With a fresh engine (new path and new db) and a fresh f24 4.1 host, I'm
 unable to add a VM either via the UI or REST API. Both hang, and I see no
 helpful logs. Engine log just says it's running the command, and then
 nothing. I don't see anything in vdsm.log that suggests the command came
 through, so I think it's a problem in engine. Logs attached.

 Anyone else seeing this?

 Greg

 --

 GREG SHEREMETA

 SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX

 Red Hat

 

 gsher...@redhat.com   IRC: gshereme
 


>>>
>>>
>
>

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 2017-09-07 ] [import_template_from_glance]

2017-09-11 Thread Dan Kenigsberg
On Mon, Sep 11, 2017 at 4:46 PM, Dusan Fodor  wrote:
>
> Hello all,
> this same error still causes OST to fail.
>

We are aware of it, but do not understand it at all.

Anyway, a revert is proposed and https://gerrit.ovirt.org/#/c/81618/
would be merged once approved by CI.

>
> Jobs:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2466
>
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2471
>
> Logs:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2466/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log
>
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2471/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log
>
>
> On Mon, Sep 11, 2017 at 1:22 PM, Eyal Edri  wrote:
>>
>>
>>
>> On Mon, Sep 11, 2017 at 1:21 PM, Dan Kenigsberg  wrote:
>>>
>>> On Mon, Sep 11, 2017 at 8:59 AM, Dan Kenigsberg  wrote:
>>> > On Thu, Sep 7, 2017 at 6:12 PM, Evgheni Dereveanchin
>>> >  wrote:
>>> >> Test failed: [ import_template_from_glance ]
>>> >>
>>> >> Link to suspected patches:
>>> >> https://gerrit.ovirt.org/#/c/80450/
>>> >
>>> > Martin, this is "core: when initializing MacPool also register in it
>>> > nics in snapshots"
>>> > and the error seems somewhat related to it
>>>
>>> Gil, can you tell us which is the last Engine patch that passed OST? I
>>> tried to build https://gerrit.ovirt.org/#/c/76309/ but I'm guessing
>>> that "ci please build" does not work on merged patches.
>>
>>
>> Not sure I follow, the latest working engine should be deployed to tested 
>> repo,
>> So if you'll run the manual job w/o any custom repos it should work ( if it 
>> doesn't, then it means something slipped in the tested repo and we should 
>> investigate ).
>>
>> But anyhow, your 'ci please build' also worked, and produced RPMs on the 
>> job, so you can still try it:
>>
>> Patch Set 26:
>> Build Successful
>> http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-on-demand-fc25-x86_64/377/
>>  : SUCCESS
>> http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-on-demand-el7-x86_64/428/
>>  : SUCCESS
>>
>>
>>>
>>>
>>> >
>>> >>
>>> >> Link to Job:
>>> >> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2397/
>>> >>
>>> >> Link to logs:
>>> >> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2397/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log
>>> >>
>>> >> Error snippet from log:
>>> >>
>>> >> 
>>> >>
>>> >> 2017-09-07 08:21:48,657-04 INFO
>>> >> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
>>> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Lock 
>>> >> Acquired
>>> >> to object
>>> >> 'EngineLock:{exclusiveLocks='[2c2b56b5-cac6-469d-b0e0-9744309ca8c1=TEMPLATE,
>>> >> CirrOS_0.3.5_for_x86_64_glance_template=TEMPLATE_NAME]', 
>>> >> sharedLocks='[]'}'
>>> >> 2017-09-07 08:21:48,675-04 INFO
>>> >> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
>>> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Running
>>> >> command: AddVmTemplateCommand internal: true. Entities affected :  ID:
>>> >> d279c4e9-09e7-4dd9-9eff-10b31ee2adfc Type: StoragePoolAction group
>>> >> CREATE_TEMPLATE with role type USER
>>> >> 2017-09-07 08:21:48,695-04 INFO
>>> >> [org.ovirt.engine.core.bll.storage.disk.CreateAllTemplateDisksCommand]
>>> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Running
>>> >> command: CreateAllTemplateDisksCommand internal: true.
>>> >> 2017-09-07 08:21:48,722-04 INFO
>>> >> [org.ovirt.engine.core.utils.transaction.TransactionSupport]
>>> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] 
>>> >> transaction
>>> >> rolled back
>>> >> 2017-09-07 08:21:48,722-04 ERROR
>>> >> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
>>> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Command
>>> >> 'org.ovirt.engine.core.bll.AddVmTemplateCommand' failed: null
>>> >> 2017-09-07 08:21:48,722-04 ERROR
>>> >> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
>>> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Exception:
>>> >> java.lang.NullPointerException
>>> >> at
>>> >> org.ovirt.engine.core.bll.utils.VgamemVideoSettings.getVramMultiplier(VgamemVideoSettings.java:63)
>>> >> [bll.jar:]
>>> >> at
>>> >> org.ovirt.engine.core.bll.utils.VgamemVideoSettings.getQxlVideoDeviceSettings(VgamemVideoSettings.java:40)
>>> >> [bll.jar:]
>>> >> at
>>> >> org.ovirt.engine.core.bll.utils.VideoDeviceSettings.getVideoDeviceSettings(VideoDeviceSettings.java:34)
>>> >> [bll.jar:]
>>> >> at
>>> >> 

Re: [ovirt-devel] can't create a VM on engine master

2017-09-11 Thread Benny Zlotnik
This error can be observed in server.log:
2017-09-11 18:37:03,268+03 ERROR
[org.jboss.as.controller.management-operation] (Controller Boot Thread)
WFLYCTL0013: Operation ("deploy") failed - address: ([("deployment" =>
"engine.ear")]) - failure description: {"WFLYCTL0080: Failed services" =>
{"jboss.deployment.subunit.\"engine.ear\".\"bll.jar\".component.MacPoolPerCluster.START"
=> "java.lang.IllegalStateException: WFLYEE0042: Failed to construct
component instance
Caused by: java.lang.IllegalStateException: WFLYEE0042: Failed to
construct component instance
Caused by: javax.ejb.EJBException:
org.jboss.weld.exceptions.WeldException: WELD-49: Unable to invoke
public void
org.ovirt.engine.core.bll.CpuFlagsManagerHandler.initDictionaries() on
org.ovirt.engine.core.bll.CpuFlagsManagerHandler@7f971888
Caused by: org.jboss.weld.exceptions.WeldException: WELD-49: Unable
to invoke public void
org.ovirt.engine.core.bll.CpuFlagsManagerHandler.initDictionaries() on
org.ovirt.engine.core.bll.CpuFlagsManagerHandler@7f971888
Caused by: java.lang.reflect.InvocationTargetException
Caused by: java.lang.NullPointerException"}}


On Mon, Sep 11, 2017 at 6:11 PM, Benny Zlotnik  wrote:

> Yes, reverting this patch[1] helped (there's a discussion about OST
> failing)
>
> [1] - https://gerrit.ovirt.org/#/c/80450/
>
> On Mon, Sep 11, 2017 at 6:05 PM, Greg Sheremeta 
> wrote:
>
>> Hi,
>>
>> With a fresh engine (new path and new db) and a fresh f24 4.1 host, I'm
>> unable to add a VM either via the UI or REST API. Both hang, and I see no
>> helpful logs. Engine log just says it's running the command, and then
>> nothing. I don't see anything in vdsm.log that suggests the command came
>> through, so I think it's a problem in engine. Logs attached.
>>
>> Anyone else seeing this?
>>
>> Greg
>>
>> --
>>
>> GREG SHEREMETA
>>
>> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>>
>> Red Hat
>>
>> 
>>
>> gsher...@redhat.com   IRC: gshereme
>> 
>>
>>
>
>

Re: [ovirt-devel] Announcing Yuval Turgeman as oVirt Node co-maintainer

2017-09-11 Thread Douglas Landgraf
On Mon, Sep 11, 2017 at 3:32 AM, Sandro Bonazzola 
wrote:

> Hi,
> Yuval Turgeman took a major role in the oVirt Node project and solved many
> non trivial issues[1] demonstrating a very good knowledge of the codebase
> and of the product.
> Yuval has already co-maintainer permissions on oVirt Node.
> I'd like to thank Yuval for his contribution and I hope he'll keep up the
> good work!
>
> [1] https://bugzilla.redhat.com/buglist.cgi?quicksearch=product%3Aovirt-node%20assignee%3Ayturge%20status%3Amodified%2Con_qa%2Cverified%2Cclosed
>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>
> Red Hat EMEA 
> 
> TRIED. TESTED. TRUSTED. 
>
>


+1, Good work Yuval!


-- 
Cheers
Douglas

Re: [ovirt-devel] can't create a VM on engine master

2017-09-11 Thread Benny Zlotnik
Yes, reverting this patch[1] helped (there's a discussion about OST
failing)

[1] - https://gerrit.ovirt.org/#/c/80450/

On Mon, Sep 11, 2017 at 6:05 PM, Greg Sheremeta  wrote:

> Hi,
>
> With a fresh engine (new path and new db) and a fresh f24 4.1 host, I'm
> unable to add a VM either via the UI or REST API. Both hang, and I see no
> helpful logs. Engine log just says it's running the command, and then
> nothing. I don't see anything in vdsm.log that suggests the command came
> through, so I think it's a problem in engine. Logs attached.
>
> Anyone else seeing this?
>
> Greg
>
> --
>
> GREG SHEREMETA
>
> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>
> Red Hat
>
> 
>
> gsher...@redhat.com   IRC: gshereme
> 
>
>

[ovirt-devel] can't create a VM on engine master

2017-09-11 Thread Greg Sheremeta
Hi,

With a fresh engine (new path and new db) and a fresh f24 4.1 host, I'm
unable to add a VM either via the UI or REST API. Both hang, and I see no
helpful logs. Engine log just says it's running the command, and then
nothing. I don't see anything in vdsm.log that suggests the command came
through, so I think it's a problem in engine. Logs attached.

Anyone else seeing this?

Greg

-- 

GREG SHEREMETA

SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX

Red Hat



gsher...@redhat.com   IRC: gshereme

2017-09-11 10:01:15,566-04 INFO  [org.ovirt.engine.core.bll.AddVmCommand] (default task-45) [cc52fb64-146f-4332-b633-e0d7498810a1] Lock Acquired to object 'EngineLock:{exclusiveLocks='[vm1=VM_NAME]', sharedLocks='[----=TEMPLATE]'}'
2017-09-11 10:01:15,667-04 INFO  [org.ovirt.engine.core.bll.network.macpool.MacPoolUsingRanges] (default task-45) [] Initializing MacPoolUsingRanges:{id='58ca604b-017d-0374-0220-014e'}
2017-09-11 10:01:15,668-04 INFO  [org.ovirt.engine.core.bll.network.macpool.MacPoolUsingRanges] (default task-45) [] Finished initializing MacPoolUsingRanges:{id='58ca604b-017d-0374-0220-014e'}. Available MACs in pool: 1024
2017-09-11 10:01:15,668-04 INFO  [org.ovirt.engine.core.bll.network.macpool.MacPoolPerCluster] (default task-45) [] Successfully initialized
2017-09-11 10:01:15,733-04 INFO  [org.ovirt.engine.core.bll.AddVmCommand] (default task-45) [] Running command: AddVmCommand internal: false. Entities affected :  ID: 972ae666-fa02-4261-8598-84952633d857 Type: ClusterAction group CREATE_VM with role type USER,  ID: ---- Type: VmTemplateAction group CREATE_VM with role type USER
2017-09-11 10:00:00,689-0400 INFO  (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539)
2017-09-11 10:00:00,817-0400 INFO  (jsonrpc/7) [vdsm.api] START getTaskStatus(taskID=u'f4693807-8ce9-4936-973e-e1149e523341', spUUID=None, options=None) from=:::192.168.1.21,40048, task_id=af5aaa82-d6cd-4c32-bb66-12fcd66fc768 (api:46)
2017-09-11 10:00:00,817-0400 INFO  (jsonrpc/7) [vdsm.api] FINISH getTaskStatus return={'taskStatus': {'code': 0, 'message': 'running job 1 of 1', 'taskState': 'running', 'taskResult': '', 'taskID': 'f4693807-8ce9-4936-973e-e1149e523341'}} from=:::192.168.1.21,40048, task_id=af5aaa82-d6cd-4c32-bb66-12fcd66fc768 (api:52)
2017-09-11 10:00:00,817-0400 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Task.getStatus succeeded in 0.01 seconds (__init__:539)
2017-09-11 10:00:02,331-0400 INFO  (tasks/0) [storage.SANLock] Host id for domain 4cf0bbe1-19aa-4617-8e96-3d624fc871a1 successfully acquired (id=1, async=False) (clusterlock:311)
2017-09-11 10:00:02,332-0400 INFO  (tasks/0) [storage.SANLock] Acquiring Lease(name='SDM', path=u'/rhev/data-center/mnt/192.168.1.10:_mnt_nas2_ovirt_ovirt-export-data-1/4cf0bbe1-19aa-4617-8e96-3d624fc871a1/dom_md/leases', offset=1048576) for host id 1 (clusterlock:381)
2017-09-11 10:00:02,463-0400 INFO  (tasks/0) [storage.SANLock] Successfully acquired Lease(name='SDM', path=u'/rhev/data-center/mnt/192.168.1.10:_mnt_nas2_ovirt_ovirt-export-data-1/4cf0bbe1-19aa-4617-8e96-3d624fc871a1/dom_md/leases', offset=1048576) for host id 1 (clusterlock:419)
2017-09-11 10:00:02,465-0400 INFO  (tasks/0) [IOProcessClient] Closing client ioprocess-1 (__init__:598)
2017-09-11 10:00:02,466-0400 INFO  (tasks/0) [IOProcessClient] Closing client ioprocess-0 (__init__:598)
2017-09-11 10:00:02,467-0400 INFO  (monitor/4cf0bbe) [storage.Monitor] Host id for domain 4cf0bbe1-19aa-4617-8e96-3d624fc871a1 successfully acquired (id: 1) (monitor:449)
2017-09-11 10:00:02,538-0400 INFO  (tasks/0) [storage.ThreadPool.WorkerThread] FINISH task f4693807-8ce9-4936-973e-e1149e523341 (threadPool:210)
2017-09-11 10:00:02,825-0400 INFO  (jsonrpc/6) [vdsm.api] START getTaskStatus(taskID=u'f4693807-8ce9-4936-973e-e1149e523341', spUUID=None, options=None) from=:::192.168.1.21,40048, task_id=acb21a48-f29f-4183-88b9-6a860c547710 (api:46)
2017-09-11 10:00:02,826-0400 INFO  (jsonrpc/6) [vdsm.api] FINISH getTaskStatus return={'taskStatus': {'code': 0, 'message': '1 jobs completed successfully', 'taskState': 'finished', 'taskResult': 'success', 'taskID': 'f4693807-8ce9-4936-973e-e1149e523341'}} from=:::192.168.1.21,40048, task_id=acb21a48-f29f-4183-88b9-6a860c547710 (api:52)
2017-09-11 10:00:02,826-0400 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Task.getStatus succeeded in 0.00 seconds (__init__:539)
2017-09-11 10:00:03,834-0400 INFO  (jsonrpc/1) [vdsm.api] START getSpmStatus(spUUID=u'3fb4ebd6-e75c-4f33-8360-60aa3ee06b1c', options=None) from=:::192.168.1.21,40048, task_id=8fcd2204-19c0-46e1-98b3-7e2157fdaf4a (api:46)
2017-09-11 10:00:03,838-0400 INFO  (jsonrpc/1) [vdsm.api] FINISH getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 4L}} from=:::192.168.1.21,40048, 

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 2017-09-07 ] [import_template_from_glance]

2017-09-11 Thread Dusan Fodor
Hello all,
this same error still causes OST to fail.


Jobs:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2466

http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2471

Logs:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2466/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log

http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2471/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log


On Mon, Sep 11, 2017 at 1:22 PM, Eyal Edri  wrote:

>
>
> On Mon, Sep 11, 2017 at 1:21 PM, Dan Kenigsberg  wrote:
>
>> On Mon, Sep 11, 2017 at 8:59 AM, Dan Kenigsberg 
>> wrote:
>> > On Thu, Sep 7, 2017 at 6:12 PM, Evgheni Dereveanchin
>> >  wrote:
>> >> Test failed: [ import_template_from_glance ]
>> >>
>> >> Link to suspected patches:
>> >> https://gerrit.ovirt.org/#/c/80450/
>> >
>> > Martin, this is "core: when initializing MacPool also register in it
>> > nics in snapshots"
>> > and the error seems somewhat related to it
>>
>> Gil, can you tell us which is the last Engine patch that passed OST? I
>> tried to build https://gerrit.ovirt.org/#/c/76309/ but I'm guessing
>> that "ci please build" does not work on merged patches.
>>
>
> Not sure I follow, the latest working engine should be deployed to tested
> repo,
> So if you'll run the manual job w/o any custom repos it should work ( if
> it doesn't, then it means something slipped in the tested repo and we
> should investigate ).
>
> But anyhow, your 'ci please build' also worked, and produced RPMs on the
> job, so you can still try it:
>
> Patch Set 26:
> Build Successful
> http://jenkins.ovirt.org/job/ovirt-engine_master_build-
> artifacts-on-demand-fc25-x86_64/377/ : SUCCESS
> http://jenkins.ovirt.org/job/ovirt-engine_master_build-
> artifacts-on-demand-el7-x86_64/428/ : SUCCESS
>
>
>
>>
>> >
>> >>
>> >> Link to Job:
>> >> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2397/
>> >>
>> >> Link to logs:
>> >> http://jenkins.ovirt.org/job/ovirt-master_change-queue-teste
>> r/2397/artifact/exported-artifacts/basic-suit-master-el7/
>> test_logs/basic-suite-master/post-002_bootstrap.py/lago-
>> basic-suite-master-engine/_var_log/ovirt-engine/engine.log
>> >>
>> >> Error snippet from log:
>> >>
>> >> 
>> >>
>> >> 2017-09-07 08:21:48,657-04 INFO
>> >> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
>> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Lock
>> Acquired
>> >> to object
>> >> 'EngineLock:{exclusiveLocks='[2c2b56b5-cac6-469d-b0e0-974430
>> 9ca8c1=TEMPLATE,
>> >> CirrOS_0.3.5_for_x86_64_glance_template=TEMPLATE_NAME]',
>> sharedLocks='[]'}'
>> >> 2017-09-07 08:21:48,675-04 INFO
>> >> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
>> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Running
>> >> command: AddVmTemplateCommand internal: true. Entities affected :  ID:
>> >> d279c4e9-09e7-4dd9-9eff-10b31ee2adfc Type: StoragePoolAction group
>> >> CREATE_TEMPLATE with role type USER
>> >> 2017-09-07 08:21:48,695-04 INFO
>> >> [org.ovirt.engine.core.bll.storage.disk.CreateAllTemplateDisksCommand]
>> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Running
>> >> command: CreateAllTemplateDisksCommand internal: true.
>> >> 2017-09-07 08:21:48,722-04 INFO
>> >> [org.ovirt.engine.core.utils.transaction.TransactionSupport]
>> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb]
>> transaction
>> >> rolled back
>> >> 2017-09-07 08:21:48,722-04 ERROR
>> >> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
>> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Command
>> >> 'org.ovirt.engine.core.bll.AddVmTemplateCommand' failed: null
>> >> 2017-09-07 08:21:48,722-04 ERROR
>> >> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
>> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb]
>> Exception:
>> >> java.lang.NullPointerException
>> >> at
>> >> org.ovirt.engine.core.bll.utils.VgamemVideoSettings.getVramM
>> ultiplier(VgamemVideoSettings.java:63)
>> >> [bll.jar:]
>> >> at
>> >> org.ovirt.engine.core.bll.utils.VgamemVideoSettings.getQxlVi
>> deoDeviceSettings(VgamemVideoSettings.java:40)
>> >> [bll.jar:]
>> >> at
>> >> org.ovirt.engine.core.bll.utils.VideoDeviceSettings.getVideo
>> DeviceSettings(VideoDeviceSettings.java:34)
>> >> [bll.jar:]
>> >> at
>> >> org.ovirt.engine.core.bll.utils.VideoDeviceSettings.getVideo
>> DeviceSpecParams(VideoDeviceSettings.java:49)
>> >> [bll.jar:]
>> >> at
>> >> org.ovirt.engine.core.bll.utils.VmDeviceUtils.getVideoDevice
>> SpecParams(VmDeviceUtils.java:586)
>> >> [bll.jar:]
>> >> at
>> >> org.ovirt.engine.core.bll.utils.VmDeviceUtils.copyVmDevices(
>> VmDeviceUtils.java:1435)
>> >> 

Re: [ovirt-devel] [review][vdsm] please review https://gerrit.ovirt.org/#/q/topic:drivemonitor_event+status:open

2017-09-11 Thread Francesco Romani

On 09/11/2017 01:16 PM, Eyal Edri wrote:
>
>
> On Mon, Sep 11, 2017 at 2:02 PM, Francesco Romani  wrote:
>
> Hi everyone,
>
>
> https://gerrit.ovirt.org/#/q/topic:drivemonitor_event+status:open is
> ready for review. It is the first part of the series needed to consume
> the BLOCK_THRESHOLD event available with libvirt >= 3.2.0 and
> QEMU >= 2.3.0.
>
> Once completed, this patchset will allow Vdsm to avoid polling, thus
> greatly improving the system performance, and eventually close
> https://bugzilla.redhat.com/show_bug.cgi?id=1181665
>
>
> Please note that:
>
> 1. CI fails because the workers are not yet updated to CentOS 7.4 (not
> yet released AFAIK!), which will provide libvirt >= 3.2.0.
>
>
> You probably know this already, but just to be sure: please wait for the
> official CentOS 7.4 to be out, and for us to verify that OST works well
> with it, before merging; otherwise any patch merged afterwards will fail
> and CI won't work.
>
> AFAIK, it should be out this week.
>  
>

Sure thing. Will not merge before OST and CI both pass. But it is
totally reviewable while we wait! :)

Bests,

-- 
Francesco Romani
Senior SW Eng., Virtualization R&D
Red Hat
IRC: fromani github: @fromanirh


Re: [ovirt-devel] Announcing Yuval Turgeman as oVirt Node co-maintainer

2017-09-11 Thread Simone Tiraboschi
+1, well deserved!

On Mon, Sep 11, 2017 at 9:32 AM, Sandro Bonazzola 
wrote:

> Hi,
> Yuval Turgeman took a major role in the oVirt Node project and solved many
> non-trivial issues[1], demonstrating a very good knowledge of the codebase
> and of the product.
> Yuval already has co-maintainer permissions on oVirt Node.
> I'd like to thank Yuval for his contribution and I hope he'll keep up the
> good work!
>
> [1] https://bugzilla.redhat.com/buglist.cgi?quicksearch=
> product%3Aovirt-node%20assignee%3Ayturge%20status%
> 3Amodified%2Con_qa%2Cverified%2Cclosed
>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>
> Red Hat EMEA 
> 
> TRIED. TESTED. TRUSTED. 
>

Re: [ovirt-devel] Announcing Yuval Turgeman as oVirt Node co-maintainer

2017-09-11 Thread Eyal Edri
+1, keep up the good work!

On Mon, Sep 11, 2017 at 10:32 AM, Sandro Bonazzola 
wrote:

> Hi,
> Yuval Turgeman took a major role in the oVirt Node project and solved many
> non-trivial issues[1], demonstrating a very good knowledge of the codebase
> and of the product.
> Yuval already has co-maintainer permissions on oVirt Node.
> I'd like to thank Yuval for his contribution and I hope he'll keep up the
> good work!
>
> [1] https://bugzilla.redhat.com/buglist.cgi?quicksearch=
> product%3Aovirt-node%20assignee%3Ayturge%20status%
> 3Amodified%2Con_qa%2Cverified%2Cclosed
>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>
> Red Hat EMEA 
> 
> TRIED. TESTED. TRUSTED. 
>



-- 

Eyal edri


ASSOCIATE MANAGER

RHV DevOps

EMEA VIRTUALIZATION R&D


Red Hat EMEA 
 TRIED. TESTED. TRUSTED. 
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 2017-09-07 ] [import_template_from_glance]

2017-09-11 Thread Eyal Edri
On Mon, Sep 11, 2017 at 1:21 PM, Dan Kenigsberg  wrote:

> On Mon, Sep 11, 2017 at 8:59 AM, Dan Kenigsberg  wrote:
> > On Thu, Sep 7, 2017 at 6:12 PM, Evgheni Dereveanchin
> >  wrote:
> >> Test failed: [ import_template_from_glance ]
> >>
> >> Link to suspected patches:
> >> https://gerrit.ovirt.org/#/c/80450/
> >
> > Martin, this is "core: when initializing MacPool also register in it
> > nics in snapshots"
> > and the error seems somewhat related to it.
>
> Gil, can you tell us which is the last Engine patch that passed OST? I
> tried to build https://gerrit.ovirt.org/#/c/76309/ but I'm guessing
> that "ci please build" does not work on merged patches.
>

Not sure I follow; the latest working engine should be deployed to the
tested repo, so if you run the manual job without any custom repos it should
work (if it doesn't, it means something slipped into the tested repo and we
should investigate).

But anyhow, your 'ci please build' also worked, and produced RPMs on the
job, so you can still try it:

Patch Set 26:
Build Successful
http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-on-demand-fc25-x86_64/377/
: SUCCESS
http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-on-demand-el7-x86_64/428/
: SUCCESS



>
> >
> >>
> >> Link to Job:
> >> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2397/
> >>
> >> Link to logs:
> >> http://jenkins.ovirt.org/job/ovirt-master_change-queue-
> tester/2397/artifact/exported-artifacts/basic-suit-master-
> el7/test_logs/basic-suite-master/post-002_bootstrap.py/
> lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log
> >>
> >> Error snippet from log:
> >>
> >> 
> >>
> >> 2017-09-07 08:21:48,657-04 INFO
> >> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Lock
> Acquired
> >> to object
> >> 'EngineLock:{exclusiveLocks='[2c2b56b5-cac6-469d-b0e0-
> 9744309ca8c1=TEMPLATE,
> >> CirrOS_0.3.5_for_x86_64_glance_template=TEMPLATE_NAME]',
> sharedLocks='[]'}'
> >> 2017-09-07 08:21:48,675-04 INFO
> >> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Running
> >> command: AddVmTemplateCommand internal: true. Entities affected :  ID:
> >> d279c4e9-09e7-4dd9-9eff-10b31ee2adfc Type: StoragePoolAction group
> >> CREATE_TEMPLATE with role type USER
> >> 2017-09-07 08:21:48,695-04 INFO
> >> [org.ovirt.engine.core.bll.storage.disk.CreateAllTemplateDisksCommand]
> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Running
> >> command: CreateAllTemplateDisksCommand internal: true.
> >> 2017-09-07 08:21:48,722-04 INFO
> >> [org.ovirt.engine.core.utils.transaction.TransactionSupport]
> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb]
> transaction
> >> rolled back
> >> 2017-09-07 08:21:48,722-04 ERROR
> >> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Command
> >> 'org.ovirt.engine.core.bll.AddVmTemplateCommand' failed: null
> >> 2017-09-07 08:21:48,722-04 ERROR
> >> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb]
> Exception:
> >> java.lang.NullPointerException
> >> at
> >> org.ovirt.engine.core.bll.utils.VgamemVideoSettings.getVramMultiplier(
> VgamemVideoSettings.java:63)
> >> [bll.jar:]
> >> at
> >> org.ovirt.engine.core.bll.utils.VgamemVideoSettings.
> getQxlVideoDeviceSettings(VgamemVideoSettings.java:40)
> >> [bll.jar:]
> >> at
> >> org.ovirt.engine.core.bll.utils.VideoDeviceSettings.
> getVideoDeviceSettings(VideoDeviceSettings.java:34)
> >> [bll.jar:]
> >> at
> >> org.ovirt.engine.core.bll.utils.VideoDeviceSettings.
> getVideoDeviceSpecParams(VideoDeviceSettings.java:49)
> >> [bll.jar:]
> >> at
> >> org.ovirt.engine.core.bll.utils.VmDeviceUtils.getVideoDeviceSpecParams(
> VmDeviceUtils.java:586)
> >> [bll.jar:]
> >> at
> >> org.ovirt.engine.core.bll.utils.VmDeviceUtils.
> copyVmDevices(VmDeviceUtils.java:1435)
> >> [bll.jar:]
> >> at
> >> org.ovirt.engine.core.bll.utils.VmDeviceUtils.
> copyVmDevices(VmDeviceUtils.java:1559)
> >> [bll.jar:]
> >>
> >> ...
> >>
> >> 2017-09-07 08:21:48,739-04 INFO
> >> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Command
> >> [id=3d633182-982a-46f2-b4f1-27c5d04397a7]: Compensating NEW_ENTITY_ID
> of
> >> org.ovirt.engine.core.common.businessentities.VmTemplate; snapshot:
> >> 2c2b56b5-cac6-469d-b0e0-9744309ca8c1.
> >> 2017-09-07 08:21:48,813-04 ERROR
> >> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> >> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb]
> EVENT_ID:
> >> USER_ADD_VM_TEMPLATE_FAILURE(36), Failed creating Template
> >> CirrOS_0.3.5_for_x86_64_glance_template.
> >> 2017-09-07 08:21:48,834-04 INFO
> >> 

Re: [ovirt-devel] [review][vdsm] please review https://gerrit.ovirt.org/#/q/topic:drivemonitor_event+status:open

2017-09-11 Thread Eyal Edri
On Mon, Sep 11, 2017 at 2:02 PM, Francesco Romani 
wrote:

> Hi everyone,
>
>
> https://gerrit.ovirt.org/#/q/topic:drivemonitor_event+status:open is
> ready for review. It is the first part of the series needed to consume
> the BLOCK_THRESHOLD event available with libvirt >= 3.2.0 and
> QEMU >= 2.3.0.
>
> Once completed, this patchset will allow Vdsm to avoid polling, thus
> greatly improving the system performance, and eventually close
> https://bugzilla.redhat.com/show_bug.cgi?id=1181665
>
>
> Please note that:
>
> 1. CI fails because the workers are not yet updated to CentOS 7.4 (not
> yet released AFAIK!), which will provide libvirt >= 3.2.0.
>

You probably know this already, but just to be sure: please wait for the
official CentOS 7.4 to be out, and for us to verify that OST works well with
it, before merging; otherwise any patch merged afterwards will fail and CI
won't work.

AFAIK, it should be out this week.


>
> 2. A few more simple patches will be needed to enable/disable monitoring
> in specific flows where we cannot use events (e.g. LSM).
>
> 3. I did initial verification successfully, installing Fedora 25 on a
> thin-provisioned disk without issue.
>
>
> --
> Francesco Romani
> Senior SW Eng., Virtualization R&D
> Red Hat
> IRC: fromani github: @fromanirh
>


-- 

Eyal edri


ASSOCIATE MANAGER

RHV DevOps

EMEA VIRTUALIZATION R&D


Red Hat EMEA 
 TRIED. TESTED. TRUSTED. 
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)

[ovirt-devel] [review][vdsm] please review https://gerrit.ovirt.org/#/q/topic:drivemonitor_event+status:open

2017-09-11 Thread Francesco Romani
Hi everyone,


https://gerrit.ovirt.org/#/q/topic:drivemonitor_event+status:open is
ready for review. It is the first part of the series needed to consume
the BLOCK_THRESHOLD event available with libvirt >= 3.2.0 and
QEMU >= 2.3.0.

Once completed, this patchset will allow Vdsm to avoid polling, thus
greatly improving the system performance, and eventually close
https://bugzilla.redhat.com/show_bug.cgi?id=1181665


Please note that:

1. CI fails because the workers are not yet updated to CentOS 7.4 (not
yet released AFAIK!), which will provide libvirt >= 3.2.0.

2. A few more simple patches will be needed to enable/disable monitoring
in specific flows where we cannot use events (e.g. LSM).

3. I did initial verification successfully, installing Fedora 25 on a
thin-provisioned disk without issue.
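
For reviewers who have not played with the event yet, here is a rough,
self-contained sketch of the mechanism this series builds on: registering
for BLOCK_THRESHOLD through the libvirt-python bindings instead of polling
block stats. The domain name, device and threshold below are made up for
illustration; this is not the Vdsm code itself.

# Minimal sketch, assuming libvirt >= 3.2.0 and the libvirt-python bindings.
import libvirt

def on_block_threshold(conn, dom, dev, path, threshold, excess, opaque):
    # Vdsm would react here, e.g. by extending the thin-provisioned volume.
    print('%s: device %s crossed threshold %d (excess %d)'
          % (dom.name(), dev, threshold, excess))

libvirt.virEventRegisterDefaultImpl()     # set up the event loop first
conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('example-vm')     # hypothetical domain name
conn.domainEventRegisterAny(
    dom, libvirt.VIR_DOMAIN_EVENT_ID_BLOCK_THRESHOLD,
    on_block_threshold, None)
dom.setBlockThreshold('vda', 1 * 1024**3) # fire once 'vda' allocates 1 GiB

while True:
    libvirt.virEventRunDefaultImpl()      # dispatch pending libvirt events

Note the threshold is one-shot: once the event fires it has to be re-armed
with another setBlockThreshold() call, which is the part polling used to do
implicitly.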


-- 
Francesco Romani
Senior SW Eng., Virtualization R&D
Red Hat
IRC: fromani github: @fromanirh



Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 2017-09-07 ] [import_template_from_glance]

2017-09-11 Thread Dan Kenigsberg
On Mon, Sep 11, 2017 at 8:59 AM, Dan Kenigsberg  wrote:
> On Thu, Sep 7, 2017 at 6:12 PM, Evgheni Dereveanchin
>  wrote:
>> Test failed: [ import_template_from_glance ]
>>
>> Link to suspected patches:
>> https://gerrit.ovirt.org/#/c/80450/
>
> Martin, this is "core: when initializing MacPool also register in it
> nics in snapshots"
> and the error seems somewhat related to it.

Gil, can you tell us which is the last Engine patch that passed OST? I
tried to build https://gerrit.ovirt.org/#/c/76309/ but I'm guessing
that "ci please build" does not work on merged patches.

>
>>
>> Link to Job:
>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2397/
>>
>> Link to logs:
>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2397/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log
>>
>> Error snippet from log:
>>
>> 
>>
>> 2017-09-07 08:21:48,657-04 INFO
>> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Lock Acquired
>> to object
>> 'EngineLock:{exclusiveLocks='[2c2b56b5-cac6-469d-b0e0-9744309ca8c1=TEMPLATE,
>> CirrOS_0.3.5_for_x86_64_glance_template=TEMPLATE_NAME]', sharedLocks='[]'}'
>> 2017-09-07 08:21:48,675-04 INFO
>> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Running
>> command: AddVmTemplateCommand internal: true. Entities affected :  ID:
>> d279c4e9-09e7-4dd9-9eff-10b31ee2adfc Type: StoragePoolAction group
>> CREATE_TEMPLATE with role type USER
>> 2017-09-07 08:21:48,695-04 INFO
>> [org.ovirt.engine.core.bll.storage.disk.CreateAllTemplateDisksCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Running
>> command: CreateAllTemplateDisksCommand internal: true.
>> 2017-09-07 08:21:48,722-04 INFO
>> [org.ovirt.engine.core.utils.transaction.TransactionSupport]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] transaction
>> rolled back
>> 2017-09-07 08:21:48,722-04 ERROR
>> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Command
>> 'org.ovirt.engine.core.bll.AddVmTemplateCommand' failed: null
>> 2017-09-07 08:21:48,722-04 ERROR
>> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Exception:
>> java.lang.NullPointerException
>> at
>> org.ovirt.engine.core.bll.utils.VgamemVideoSettings.getVramMultiplier(VgamemVideoSettings.java:63)
>> [bll.jar:]
>> at
>> org.ovirt.engine.core.bll.utils.VgamemVideoSettings.getQxlVideoDeviceSettings(VgamemVideoSettings.java:40)
>> [bll.jar:]
>> at
>> org.ovirt.engine.core.bll.utils.VideoDeviceSettings.getVideoDeviceSettings(VideoDeviceSettings.java:34)
>> [bll.jar:]
>> at
>> org.ovirt.engine.core.bll.utils.VideoDeviceSettings.getVideoDeviceSpecParams(VideoDeviceSettings.java:49)
>> [bll.jar:]
>> at
>> org.ovirt.engine.core.bll.utils.VmDeviceUtils.getVideoDeviceSpecParams(VmDeviceUtils.java:586)
>> [bll.jar:]
>> at
>> org.ovirt.engine.core.bll.utils.VmDeviceUtils.copyVmDevices(VmDeviceUtils.java:1435)
>> [bll.jar:]
>> at
>> org.ovirt.engine.core.bll.utils.VmDeviceUtils.copyVmDevices(VmDeviceUtils.java:1559)
>> [bll.jar:]
>>
>> ...
>>
>> 2017-09-07 08:21:48,739-04 INFO
>> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Command
>> [id=3d633182-982a-46f2-b4f1-27c5d04397a7]: Compensating NEW_ENTITY_ID of
>> org.ovirt.engine.core.common.businessentities.VmTemplate; snapshot:
>> 2c2b56b5-cac6-469d-b0e0-9744309ca8c1.
>> 2017-09-07 08:21:48,813-04 ERROR
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] EVENT_ID:
>> USER_ADD_VM_TEMPLATE_FAILURE(36), Failed creating Template
>> CirrOS_0.3.5_for_x86_64_glance_template.
>> 2017-09-07 08:21:48,834-04 INFO
>> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Lock freed to
>> object
>> 'EngineLock:{exclusiveLocks='[2c2b56b5-cac6-469d-b0e0-9744309ca8c1=TEMPLATE,
>> CirrOS_0.3.5_for_x86_64_glance_template=TEMPLATE_NAME]', sharedLocks='[]'}'
>> 2017-09-07 08:21:48,848-04 DEBUG
>> [org.ovirt.engine.core.dal.dbbroker.CustomSQLErrorCodeSQLExceptionTranslator]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Translating
>> SQLException with SQL state '23503', error code '0', message [ERROR: insert
>> or update on table "disk_vm_element" violates foreign key constraint
>> "fk_disk_vm_element_vm_static"
>>   Detail: Key (vm_id)=(2c2b56b5-cac6-469d-b0e0-9744309ca8c1) is not present
>> in table "vm_static".
>>   Where: SQL statement "INSERT INTO disk_vm_element (
>> disk_id,
>> vm_id,
>> is_boot,
>> 

Re: [ovirt-devel] [ovirt-users] Mailing-Lists upgrade

2017-09-11 Thread Duck
Quack,

In short, it did not work out well and I had to roll back because of a
nasty bug. A few mails were lost, mostly automated ones; sorry about this.


There is a bug in the LMTP communication between Postfix and Mailman 3,
which probably lies in the LMTP library (aiosmtpd) used by MM3. Upstream
dropped Python 3.4 support, so that was not very practical. I backported
patches that seemed relevant and experimented too, but I was not able to
fix the bug.
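
For anyone who wants to poke at the delivery path without a full Mailman
setup, a minimal LMTP listener built on aiosmtpd looks roughly like the
sketch below (the handler, host and port are hypothetical test values, not
the actual MM3 runner):

# Minimal aiosmtpd LMTP listener sketch, using aiosmtpd's documented
# Controller/handler pattern; handy for watching what Postfix hands over.
from aiosmtpd.controller import Controller
from aiosmtpd.lmtp import LMTP

class PrintingHandler:
    async def handle_DATA(self, server, session, envelope):
        # A broken LMTP dialogue would typically show up around here,
        # e.g. as a missing or malformed per-recipient reply.
        print('From:', envelope.mail_from)
        print('To:', envelope.rcpt_tos)
        return '250 Message accepted for delivery'

class LMTPController(Controller):
    def factory(self):
        return LMTP(self.handler)  # serve LMTP instead of the default SMTP

controller = LMTPController(PrintingHandler(), hostname='127.0.0.1', port=8024)
controller.start()
input('LMTP listener on 127.0.0.1:8024 - press Enter to stop\n')
controller.stop()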

I will need more time to dig into the problem, and in the meanwhile I
decided to roll back to the previous server so that people can work. We
can reattempt a sync+migration when this is ready.

Sorry about the disturbance.
\_o<




[ovirt-devel] Announcing Yuval Turgeman as oVirt Node co-maintainer

2017-09-11 Thread Sandro Bonazzola
Hi,
Yuval Turgeman took a major role in the oVirt Node project and solved many
non-trivial issues[1], demonstrating a very good knowledge of the codebase
and of the product.
Yuval already has co-maintainer permissions on oVirt Node.
I'd like to thank Yuval for his contribution and I hope he'll keep up the
good work!

[1]
https://bugzilla.redhat.com/buglist.cgi?quicksearch=product%3Aovirt-node%20assignee%3Ayturge%20status%3Amodified%2Con_qa%2Cverified%2Cclosed

-- 

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

Red Hat EMEA 

TRIED. TESTED. TRUSTED. 

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 2017-09-07 ] [import_template_from_glance]

2017-09-11 Thread Gil Shinar
Any news concerning this issue? It blocks our OST.

On Thu, Sep 7, 2017 at 6:12 PM, Evgheni Dereveanchin 
wrote:

> Test failed: [ import_template_from_glance ]
>
> Link to suspected patches:
> https://gerrit.ovirt.org/#/c/80450/
>
> Link to Job:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2397/
>
> Link to logs:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-
> tester/2397/artifact/exported-artifacts/basic-suit-master-
> el7/test_logs/basic-suite-master/post-002_bootstrap.py/
> lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log
>
> Error snippet from log:
>
> 
>
> 2017-09-07 08:21:48,657-04 INFO  
> [org.ovirt.engine.core.bll.AddVmTemplateCommand] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Lock Acquired 
> to object 
> 'EngineLock:{exclusiveLocks='[2c2b56b5-cac6-469d-b0e0-9744309ca8c1=TEMPLATE, 
> CirrOS_0.3.5_for_x86_64_glance_template=TEMPLATE_NAME]', sharedLocks='[]'}'
> 2017-09-07 08:21:48,675-04 INFO  
> [org.ovirt.engine.core.bll.AddVmTemplateCommand] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Running 
> command: AddVmTemplateCommand internal: true. Entities affected :  ID: 
> d279c4e9-09e7-4dd9-9eff-10b31ee2adfc Type: StoragePoolAction group 
> CREATE_TEMPLATE with role type USER
> 2017-09-07 08:21:48,695-04 INFO  
> [org.ovirt.engine.core.bll.storage.disk.CreateAllTemplateDisksCommand] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Running 
> command: CreateAllTemplateDisksCommand internal: true.
> 2017-09-07 08:21:48,722-04 INFO  
> [org.ovirt.engine.core.utils.transaction.TransactionSupport] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] transaction 
> rolled back
> 2017-09-07 08:21:48,722-04 ERROR 
> [org.ovirt.engine.core.bll.AddVmTemplateCommand] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Command 
> 'org.ovirt.engine.core.bll.AddVmTemplateCommand' failed: null
> 2017-09-07 08:21:48,722-04 ERROR 
> [org.ovirt.engine.core.bll.AddVmTemplateCommand] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Exception: 
> java.lang.NullPointerException
>   at 
> org.ovirt.engine.core.bll.utils.VgamemVideoSettings.getVramMultiplier(VgamemVideoSettings.java:63)
>  [bll.jar:]
>   at 
> org.ovirt.engine.core.bll.utils.VgamemVideoSettings.getQxlVideoDeviceSettings(VgamemVideoSettings.java:40)
>  [bll.jar:]
>   at 
> org.ovirt.engine.core.bll.utils.VideoDeviceSettings.getVideoDeviceSettings(VideoDeviceSettings.java:34)
>  [bll.jar:]
>   at 
> org.ovirt.engine.core.bll.utils.VideoDeviceSettings.getVideoDeviceSpecParams(VideoDeviceSettings.java:49)
>  [bll.jar:]
>   at 
> org.ovirt.engine.core.bll.utils.VmDeviceUtils.getVideoDeviceSpecParams(VmDeviceUtils.java:586)
>  [bll.jar:]
>   at 
> org.ovirt.engine.core.bll.utils.VmDeviceUtils.copyVmDevices(VmDeviceUtils.java:1435)
>  [bll.jar:]
>   at 
> org.ovirt.engine.core.bll.utils.VmDeviceUtils.copyVmDevices(VmDeviceUtils.java:1559)
>  [bll.jar:]
>
> ...
>
> 2017-09-07 08:21:48,739-04 INFO  
> [org.ovirt.engine.core.bll.AddVmTemplateCommand] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Command 
> [id=3d633182-982a-46f2-b4f1-27c5d04397a7]: Compensating NEW_ENTITY_ID of 
> org.ovirt.engine.core.common.businessentities.VmTemplate; snapshot: 
> 2c2b56b5-cac6-469d-b0e0-9744309ca8c1.
> 2017-09-07 08:21:48,813-04 ERROR 
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] EVENT_ID: 
> USER_ADD_VM_TEMPLATE_FAILURE(36), Failed creating Template 
> CirrOS_0.3.5_for_x86_64_glance_template.
> 2017-09-07 08:21:48,834-04 INFO  
> [org.ovirt.engine.core.bll.AddVmTemplateCommand] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Lock freed to 
> object 
> 'EngineLock:{exclusiveLocks='[2c2b56b5-cac6-469d-b0e0-9744309ca8c1=TEMPLATE, 
> CirrOS_0.3.5_for_x86_64_glance_template=TEMPLATE_NAME]', sharedLocks='[]'}'
> 2017-09-07 08:21:48,848-04 DEBUG 
> [org.ovirt.engine.core.dal.dbbroker.CustomSQLErrorCodeSQLExceptionTranslator] 
> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Translating 
> SQLException with SQL state '23503', error code '0', message [ERROR: insert 
> or update on table "disk_vm_element" violates foreign key constraint 
> "fk_disk_vm_element_vm_static"
>   Detail: Key (vm_id)=(2c2b56b5-cac6-469d-b0e0-9744309ca8c1) is not present 
> in table "vm_static".
>   Where: SQL statement "INSERT INTO disk_vm_element (
> disk_id,
> vm_id,
> is_boot,
> pass_discard,
> disk_interface,
> is_using_scsi_reservation)
> VALUES (
> v_disk_id,
> v_vm_id,
> v_is_boot,
> v_pass_discard,
> v_disk_interface,
> v_is_using_scsi_reservation)"
> PL/pgSQL function insertdiskvmelement(uuid,uuid,boolean,boolean,character 
> varying,boolean) line 3 at SQL 

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 2017-09-07 ] [import_template_from_glance]

2017-09-11 Thread Dan Kenigsberg
On Thu, Sep 7, 2017 at 6:12 PM, Evgheni Dereveanchin
 wrote:
> Test failed: [ import_template_from_glance ]
>
> Link to suspected patches:
> https://gerrit.ovirt.org/#/c/80450/

Martin, this is "core: when initializing MacPool also register in it
nics in snapshots"
and the error seems somewhat related to it.

>
> Link to Job:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2397/
>
> Link to logs:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2397/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log
>
> Error snippet from log:
>
> 
>
> 2017-09-07 08:21:48,657-04 INFO
> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Lock Acquired
> to object
> 'EngineLock:{exclusiveLocks='[2c2b56b5-cac6-469d-b0e0-9744309ca8c1=TEMPLATE,
> CirrOS_0.3.5_for_x86_64_glance_template=TEMPLATE_NAME]', sharedLocks='[]'}'
> 2017-09-07 08:21:48,675-04 INFO
> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Running
> command: AddVmTemplateCommand internal: true. Entities affected :  ID:
> d279c4e9-09e7-4dd9-9eff-10b31ee2adfc Type: StoragePoolAction group
> CREATE_TEMPLATE with role type USER
> 2017-09-07 08:21:48,695-04 INFO
> [org.ovirt.engine.core.bll.storage.disk.CreateAllTemplateDisksCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Running
> command: CreateAllTemplateDisksCommand internal: true.
> 2017-09-07 08:21:48,722-04 INFO
> [org.ovirt.engine.core.utils.transaction.TransactionSupport]
> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] transaction
> rolled back
> 2017-09-07 08:21:48,722-04 ERROR
> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Command
> 'org.ovirt.engine.core.bll.AddVmTemplateCommand' failed: null
> 2017-09-07 08:21:48,722-04 ERROR
> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Exception:
> java.lang.NullPointerException
> at
> org.ovirt.engine.core.bll.utils.VgamemVideoSettings.getVramMultiplier(VgamemVideoSettings.java:63)
> [bll.jar:]
> at
> org.ovirt.engine.core.bll.utils.VgamemVideoSettings.getQxlVideoDeviceSettings(VgamemVideoSettings.java:40)
> [bll.jar:]
> at
> org.ovirt.engine.core.bll.utils.VideoDeviceSettings.getVideoDeviceSettings(VideoDeviceSettings.java:34)
> [bll.jar:]
> at
> org.ovirt.engine.core.bll.utils.VideoDeviceSettings.getVideoDeviceSpecParams(VideoDeviceSettings.java:49)
> [bll.jar:]
> at
> org.ovirt.engine.core.bll.utils.VmDeviceUtils.getVideoDeviceSpecParams(VmDeviceUtils.java:586)
> [bll.jar:]
> at
> org.ovirt.engine.core.bll.utils.VmDeviceUtils.copyVmDevices(VmDeviceUtils.java:1435)
> [bll.jar:]
> at
> org.ovirt.engine.core.bll.utils.VmDeviceUtils.copyVmDevices(VmDeviceUtils.java:1559)
> [bll.jar:]
>
> ...
>
> 2017-09-07 08:21:48,739-04 INFO
> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Command
> [id=3d633182-982a-46f2-b4f1-27c5d04397a7]: Compensating NEW_ENTITY_ID of
> org.ovirt.engine.core.common.businessentities.VmTemplate; snapshot:
> 2c2b56b5-cac6-469d-b0e0-9744309ca8c1.
> 2017-09-07 08:21:48,813-04 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] EVENT_ID:
> USER_ADD_VM_TEMPLATE_FAILURE(36), Failed creating Template
> CirrOS_0.3.5_for_x86_64_glance_template.
> 2017-09-07 08:21:48,834-04 INFO
> [org.ovirt.engine.core.bll.AddVmTemplateCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Lock freed to
> object
> 'EngineLock:{exclusiveLocks='[2c2b56b5-cac6-469d-b0e0-9744309ca8c1=TEMPLATE,
> CirrOS_0.3.5_for_x86_64_glance_template=TEMPLATE_NAME]', sharedLocks='[]'}'
> 2017-09-07 08:21:48,848-04 DEBUG
> [org.ovirt.engine.core.dal.dbbroker.CustomSQLErrorCodeSQLExceptionTranslator]
> (EE-ManagedThreadFactory-engineScheduled-Thread-21) [74b41edb] Translating
> SQLException with SQL state '23503', error code '0', message [ERROR: insert
> or update on table "disk_vm_element" violates foreign key constraint
> "fk_disk_vm_element_vm_static"
>   Detail: Key (vm_id)=(2c2b56b5-cac6-469d-b0e0-9744309ca8c1) is not present
> in table "vm_static".
>   Where: SQL statement "INSERT INTO disk_vm_element (
> disk_id,
> vm_id,
> is_boot,
> pass_discard,
> disk_interface,
> is_using_scsi_reservation)
> VALUES (
> v_disk_id,
> v_vm_id,
> v_is_boot,
> v_pass_discard,
> v_disk_interface,
> v_is_using_scsi_reservation)"
> PL/pgSQL function insertdiskvmelement(uuid,uuid,boolean,boolean,character
> varying,boolean) line 3 at SQL statement]; SQL was [{call
>