Re: [ovirt-devel] System tests for 4.1 currently failing to run VMs!

2016-12-21 Thread Doron Fediuck
On Thu, Dec 22, 2016 at 9:48 AM, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:

>
> On 21 Dec 2016, at 20:52, Eyal Edri  wrote:
>
> Not as easy as it sounds: the current flow of OST is much more
> complicated than just building artifacts. I'm not saying we won't do it, but
> it won't be ready tomorrow, and we designed Lago exactly to be
> independent of any CI system so anyone can run it on their laptop.
>
>
> +1
> we just need to improve a little bit to make it possible for everyone.
> Almost there.
>
>  So maybe using Jenkins might be simpler but people shouldn't skip
> verification just because such a job doesn't exist yet.
>
> On Dec 21, 2016 9:02 PM, "Oved Ourfali"  wrote:
>
>> Why not run it via Jenkins for patches?
>> Like, if you add a comment saying "run: ost" it will run it?
>>
>
> please don’t add more. Even the Rerun-hooks thing is hard to remember.
> Even after years I always keep asking is it "Re-run hooks” or “Rerun-Hooks”
> or “rerun hooks”, does case matter…..bleh
> Either add the button or maybe add it as a link in the automated comments
> (that would actually work good enough)
>
> Thanks,
> michal
>
> Or should it do it automatically based on something else?
>>
>> On Dec 21, 2016 17:42, "Eyal Edri"  wrote:
>> >
>> >
>> >
>> > On Wed, Dec 21, 2016 at 5:36 PM, Michal Skrivanek <
>> michal.skriva...@redhat.com> wrote:
>> >>
>> >>
>> >>> On 21 Dec 2016, at 16:25, Yaniv Kaul  wrote:
>> >>>
>> >>>
>> >>>
>> >>> On Wed, Dec 21, 2016 at 5:19 PM, Michal Skrivanek <
>> michal.skriva...@redhat.com> wrote:
>> 
>> 
>> > On 21 Dec 2016, at 14:56, Michal Skrivanek <
>> michal.skriva...@redhat.com> wrote:
>> >
>> >
>> >> On 21 Dec 2016, at 12:19, Eyal Edri  wrote:
>> >>
>> >>
>> >>
>> >> On Wed, Dec 21, 2016 at 12:56 PM, Vinzenz Feenstra <
>> vfeen...@redhat.com> wrote:
>> >>>
>> >>>
>>  On Dec 21, 2016, at 11:17 AM, Barak Korren 
>> wrote:
>> 
>>  The test for running VMs had been failing since yesterday.
>> 
>>  The patch merged before the failures started was:
>>  https://gerrit.ovirt.org/#/c/68826/
>> >>>
>> >>>
>> >>>
>> >>>
>> 
>>  The error we`re seeing is a time-out (after two minutes) while
>> running
>>  this API call:
>> 
>>  api.vms.get(VM0_NAME).status.state == ‘up'
>> >>>
>> >>>
>> >>> This is a REST API call, the patch above is Frontend. So this is
>> unrelated.
>> >>>
>> >>> However on Host 0 I can see this:
>> >>>
>> >>> 2016-12-20 16:54:43,544 ERROR (vm/d299ab29) [virt.vm]
>> (vmId='d299ab29-284a-435c-a50f-183a6e54def2') The vm start process
>> failed (vm:615) Traceback (most recent call last): File
>> "/usr/share/vdsm/virt/vm.py", line 551, in _startUnderlyingVm self._run()
>> File "/usr/share/vdsm/virt/vm.py", line 1991, in _run
>> self._connection.createXML(domxml, flags), File
>> "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123,
>> in wrapper ret = f(*args, **kwargs) File 
>> "/usr/lib/python2.7/site-packages/vdsm/utils.py",
>> line 941, in wrapper return func(inst, *args, **kwargs) File
>> "/usr/lib64/python2.7/site-packages/libvirt.py", line 3782, in createXML
>> if ret is None:raise libvirtError('virDomainCreateXML() failed',
>> conn=self) libvirtError: internal error: process exited while connecting to
>> monitor: 2016-12-20T21:54:43.044971Z qemu-kvm: warning: CPU(s) not present
>> in any NUMA nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
>> 2016-12-20T21:54:43.045164Z qemu-kvm: warning: All CPU(s) up to maxcpus
>> should be described in NUMA config 2016-12-20T21:54:43.101886Z qemu-kvm:
>> -device usb-ccid,id=ccid0,bus=usb.0,port=1: Warning: speed mismatch
>> trying to attach usb device "QEMU USB CCID" (full speed) to bus "usb.0",
>> port "1" (high speed)
>> >
>> >
>> > it is likely related to the recent USB patches
>> > investigating
>> 
>> 
>>  hm, there are multiple problems (features/bugs depending on
>> preferred point of view:)
>>  but there is an easy “fix” taking care of this particular problem,
>> so we can start with that and figure out the proper approach later
>>  arik will push that and merge it soon, likely today
>> >>>
>> >>>
>> >>> Thanks - if there is a quicker way to resolve this by reverting, I
>> think it's a better option.
>> >>
>> >>
>> >> I really need to talk you out of this approach:-)
>> >> It does sound tempting and logical, but with our development model of
>> large patch series combined with late detection it really is quite risky.
>> Here it wouldn’t help much…and figuring out the right revert patch is more
>> complicated than fixing it.
>> >
>> >
>> > Can we start asking developers to run OST before they merge so it will be
>> early detection and not late detection?
>> > We have video sessions on how to 

Re: [ovirt-devel] System tests for 4.1 currently failing to run VMs!

2016-12-21 Thread Oved Ourfali
If we make it mandatory then it should happen automatically in Jenkins.
If we rely on engineers to run it then it means they won't always do that.



On Thu, Dec 22, 2016 at 9:48 AM, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:

>
> On 21 Dec 2016, at 20:52, Eyal Edri  wrote:
>
> Not as easy as it sounds: the current flow of OST is much more
> complicated than just building artifacts. I'm not saying we won't do it, but
> it won't be ready tomorrow, and we designed Lago exactly to be
> independent of any CI system so anyone can run it on their laptop.
>
>
> +1
> we just need to improve a little bit to make it possible for everyone.
> Almost there.
>
>  So maybe using Jenkins might be simpler but people shouldn't skip
> verification just because such a job doesn't exist yet.
>
> On Dec 21, 2016 9:02 PM, "Oved Ourfali"  wrote:
>
>> Why not run it via Jenkins for patches?
>> Like, if you add a comment saying "run: ost" it will run it?
>>
>
> please don’t add more. Even the Rerun-hooks thing is hard to remember.
> Even after years I always keep asking is it "Re-run hooks” or “Rerun-Hooks”
> or “rerun hooks”, does case matter…..bleh
> Either add the button or maybe add it as a link in the automated comments
> (that would actually work good enough)
>
> Thanks,
> michal
>
> Or should it do it automatically based on something else?
>>
>> On Dec 21, 2016 17:42, "Eyal Edri"  wrote:
>> >
>> >
>> >
>> > On Wed, Dec 21, 2016 at 5:36 PM, Michal Skrivanek <
>> michal.skriva...@redhat.com> wrote:
>> >>
>> >>
>> >>> On 21 Dec 2016, at 16:25, Yaniv Kaul  wrote:
>> >>>
>> >>>
>> >>>
>> >>> On Wed, Dec 21, 2016 at 5:19 PM, Michal Skrivanek <
>> michal.skriva...@redhat.com> wrote:
>> 
>> 
>> > On 21 Dec 2016, at 14:56, Michal Skrivanek <
>> michal.skriva...@redhat.com> wrote:
>> >
>> >
>> >> On 21 Dec 2016, at 12:19, Eyal Edri  wrote:
>> >>
>> >>
>> >>
>> >> On Wed, Dec 21, 2016 at 12:56 PM, Vinzenz Feenstra <
>> vfeen...@redhat.com> wrote:
>> >>>
>> >>>
>>  On Dec 21, 2016, at 11:17 AM, Barak Korren 
>> wrote:
>> 
>>  The test for running VMs had been failing since yesterday.
>> 
>>  The patch merged before the failures started was:
>>  https://gerrit.ovirt.org/#/c/68826/
>> >>>
>> >>>
>> >>>
>> >>>
>> 
>>  The error we`re seeing is a time-out (after two minutes) while
>> running
>>  this API call:
>> 
>>  api.vms.get(VM0_NAME).status.state == ‘up'
>> >>>
>> >>>
>> >>> This is a REST API call, the patch above is Frontend. So this is
>> unrelated.
>> >>>
>> >>> However on Host 0 I can see this:
>> >>>
>> >>> 2016-12-20 16:54:43,544 ERROR (vm/d299ab29) [virt.vm]
>> (vmId='d299ab29-284a-435c-a50f-183a6e54def2') The vm start process
>> failed (vm:615) Traceback (most recent call last): File
>> "/usr/share/vdsm/virt/vm.py", line 551, in _startUnderlyingVm self._run()
>> File "/usr/share/vdsm/virt/vm.py", line 1991, in _run
>> self._connection.createXML(domxml, flags), File
>> "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123,
>> in wrapper ret = f(*args, **kwargs) File 
>> "/usr/lib/python2.7/site-packages/vdsm/utils.py",
>> line 941, in wrapper return func(inst, *args, **kwargs) File
>> "/usr/lib64/python2.7/site-packages/libvirt.py", line 3782, in createXML
>> if ret is None:raise libvirtError('virDomainCreateXML() failed',
>> conn=self) libvirtError: internal error: process exited while connecting to
>> monitor: 2016-12-20T21:54:43.044971Z qemu-kvm: warning: CPU(s) not present
>> in any NUMA nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
>> 2016-12-20T21:54:43.045164Z qemu-kvm: warning: All CPU(s) up to maxcpus
>> should be described in NUMA config 2016-12-20T21:54:43.101886Z qemu-kvm:
>> -device usb-ccid,id=ccid0,bus=usb.0,port=1: Warning: speed mismatch
>> trying to attach usb device "QEMU USB CCID" (full speed) to bus "usb.0",
>> port "1" (high speed)
>> >
>> >
>> > it is likely related to the recent USB patches
>> > investigating
>> 
>> 
>>  hm, there are multiple problems (features/bugs depending on
>> preferred point of view:)
>>  but there is an easy “fix” taking care of this particular problem,
>> so we can start with that and figure out the proper approach later
>>  arik will push that and merge it soon, likely today
>> >>>
>> >>>
>> >>> Thanks - if there is a quicker way to resolve this by reverting, I
>> think it's a better option.
>> >>
>> >>
>> >> I really need to talk you out of this approach:-)
>> >> It does sound tempting and logical, but with our development model of
>> large patch series combined with late detection it really is quite risky.
>> Here it wouldn’t help much…and figuring out the right revert patch is more
>> complicated than fixing it.
>> >
>> >
>> > Can 

Re: [ovirt-devel] System tests for 4.1 currently failing to run VMs!

2016-12-21 Thread Michal Skrivanek

> On 21 Dec 2016, at 20:52, Eyal Edri  wrote:
> 
> Not as easy as it sounds: the current flow of OST is much more complicated
> than just building artifacts. I'm not saying we won't do it, but it won't be
> ready tomorrow, and we designed Lago exactly to be independent of any CI
> system so anyone can run it on their laptop.

+1
we just need to improve a little bit to make it possible for everyone. Almost 
there.

>  So maybe using Jenkins might be simpler but people shouldn't skip 
> verification just because such a job doesn't exist yet. 
> 
> On Dec 21, 2016 9:02 PM, "Oved Ourfali"  > wrote:
> Why not run it via Jenkins for patches? 
> Like, if you add a comment saying "run: ost" it will run it? 
> 

please don’t add more. Even the Rerun-hooks thing is hard to remember. Even 
after years I always keep asking is it "Re-run hooks” or “Rerun-Hooks” or 
“rerun hooks”, does case matter…..bleh
Either add the button or maybe add it as a link in the automated comments (that 
would actually work good enough)

Thanks,
michal
> Or should it do it automatically based on something else?
> 
> On Dec 21, 2016 17:42, "Eyal Edri"  > wrote:
> >
> >
> >
> > On Wed, Dec 21, 2016 at 5:36 PM, Michal Skrivanek 
> > > wrote:
> >>
> >>
> >>> On 21 Dec 2016, at 16:25, Yaniv Kaul  >>> > wrote:
> >>>
> >>>
> >>>
> >>> On Wed, Dec 21, 2016 at 5:19 PM, Michal Skrivanek 
> >>> > wrote:
> 
> 
> > On 21 Dec 2016, at 14:56, Michal Skrivanek  > > wrote:
> >
> >
> >> On 21 Dec 2016, at 12:19, Eyal Edri  >> > wrote:
> >>
> >>
> >>
> >> On Wed, Dec 21, 2016 at 12:56 PM, Vinzenz Feenstra 
> >> > wrote:
> >>>
> >>>
>  On Dec 21, 2016, at 11:17 AM, Barak Korren   > wrote:
> 
>  The test for running VMs had been failing since yesterday.
> 
>  The patch merged before the failures started was:
>  https://gerrit.ovirt.org/#/c/68826/ 
>  
> >>>
> >>>
> >>>
> >>>
> 
>  The error we`re seeing is a time-out (after two minutes) while 
>  running
>  this API call:
> 
>  api.vms.get(VM0_NAME).status.state == ‘up'
> >>>
> >>>
> >>> This is a REST API call, the patch above is Frontend. So this is 
> >>> unrelated.
> >>>
> >>> However on Host 0 I can see this:
> >>>
> >>> 2016-12-20 16:54:43,544 ERROR (vm/d299ab29) [virt.vm] 
> >>> (vmId='d299ab29-284a-435c-a50f-183a6e54def2') The vm start process 
> >>> failed (vm:615) Traceback (most recent call last): File 
> >>> "/usr/share/vdsm/virt/vm.py", line 551, in _startUnderlyingVm 
> >>> self._run() File "/usr/share/vdsm/virt/vm.py", line 1991, in _run 
> >>> self._connection.createXML(domxml, flags), File 
> >>> "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 
> >>> 123, in wrapper ret = f(*args, **kwargs) File 
> >>> "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 941, in 
> >>> wrapper return func(inst, *args, **kwargs) File 
> >>> "/usr/lib64/python2.7/site-packages/libvirt.py", line 3782, in 
> >>> createXML if ret is None:raise libvirtError('virDomainCreateXML() 
> >>> failed', conn=self) libvirtError: internal error: process exited 
> >>> while connecting to monitor: 2016-12-20T21:54:43.044971Z qemu-kvm: 
> >>> warning: CPU(s) not present in any NUMA nodes: 1 2 3 4 5 6 7 8 9 10 
> >>> 11 12 13 14 15 2016-12-20T21:54:43.045164Z qemu-kvm: warning: All 
> >>> CPU(s) up to maxcpus should be described in NUMA config 
> >>> 2016-12-20T21:54:43.101886Z qemu-kvm: -device 
> >>> usb-ccid,id=ccid0,bus=usb.0,port=1: Warning: speed mismatch trying to 
> >>> attach usb device "QEMU USB CCID" (full speed) to bus "usb.0", port 
> >>> "1" (high speed) 
> >
> >
> > it is likely related to the recent USB patches 
> > investigating
> 
> 
>  hm, there are multiple problems (features/bugs depending on preferred
>  point of view:)
>  but there is an easy “fix” taking care of this particular problem, so we 
>  can start with that and figure out the proper approach later
>  arik will push that and merge it soon, likely today
> >>>
> >>>
> >>> Thanks - if there is a quicker way to resolve this by reverting, I think 
> >>> it's a better option.
> >>
> >>
> >> I really need to talk you out of this approach:-)
> >> It does sound tempting and 

Re: [ovirt-devel] System tests for 4.1 currently failing to run VMs!

2016-12-21 Thread Eyal Edri
Not as easy as it sounds: the current flow of OST is much more
complicated than just building artifacts. I'm not saying we won't do it, but
it won't be ready tomorrow, and we designed Lago exactly to be
independent of any CI system so anyone can run it on their laptop.
 So maybe using Jenkins might be simpler but people shouldn't skip
verification just because such a job doesn't exist yet.

On Dec 21, 2016 9:02 PM, "Oved Ourfali"  wrote:

> Why not run it via Jenkins for patches?
> Like, if you add a comment saying "run: ost" it will run it?
> Or should it do it automatically based on something else?
>
> On Dec 21, 2016 17:42, "Eyal Edri"  wrote:
> >
> >
> >
> > On Wed, Dec 21, 2016 at 5:36 PM, Michal Skrivanek <
> michal.skriva...@redhat.com> wrote:
> >>
> >>
> >>> On 21 Dec 2016, at 16:25, Yaniv Kaul  wrote:
> >>>
> >>>
> >>>
> >>> On Wed, Dec 21, 2016 at 5:19 PM, Michal Skrivanek <
> michal.skriva...@redhat.com> wrote:
> 
> 
> > On 21 Dec 2016, at 14:56, Michal Skrivanek <
> michal.skriva...@redhat.com> wrote:
> >
> >
> >> On 21 Dec 2016, at 12:19, Eyal Edri  wrote:
> >>
> >>
> >>
> >> On Wed, Dec 21, 2016 at 12:56 PM, Vinzenz Feenstra <
> vfeen...@redhat.com> wrote:
> >>>
> >>>
>  On Dec 21, 2016, at 11:17 AM, Barak Korren 
> wrote:
> 
>  The test for running VMs had been failing since yesterday.
> 
>  The patch merged before the failures started was:
>  https://gerrit.ovirt.org/#/c/68826/
> >>>
> >>>
> >>>
> >>>
> 
>  The error we`re seeing is a time-out (after two minutes) while
> running
>  this API call:
> 
>  api.vms.get(VM0_NAME).status.state == ‘up'
> >>>
> >>>
> >>> This is a REST API call, the patch above is Frontend. So this is
> unrelated.
> >>>
> >>> However on Host 0 I can see this:
> >>>
> >>> 2016-12-20 16:54:43,544 ERROR (vm/d299ab29) [virt.vm]
> (vmId='d299ab29-284a-435c-a50f-183a6e54def2') The vm start process failed
> (vm:615) Traceback (most recent call last): File
> "/usr/share/vdsm/virt/vm.py", line 551, in _startUnderlyingVm self._run()
> File "/usr/share/vdsm/virt/vm.py", line 1991, in _run
> self._connection.createXML(domxml, flags), File "/usr/lib/python2.7/site-
> packages/vdsm/libvirtconnection.py", line 123, in wrapper ret = f(*args,
> **kwargs) File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line
> 941, in wrapper return func(inst, *args, **kwargs) File
> "/usr/lib64/python2.7/site-packages/libvirt.py", line 3782, in createXML
> if ret is None:raise libvirtError('virDomainCreateXML() failed',
> conn=self) libvirtError: internal error: process exited while connecting to
> monitor: 2016-12-20T21:54:43.044971Z qemu-kvm: warning: CPU(s) not present
> in any NUMA nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
> 2016-12-20T21:54:43.045164Z qemu-kvm: warning: All CPU(s) up to maxcpus
> should be described in NUMA config 2016-12-20T21:54:43.101886Z qemu-kvm:
> -device usb-ccid,id=ccid0,bus=usb.0,port=1: Warning: speed mismatch
> trying to attach usb device "QEMU USB CCID" (full speed) to bus "usb.0",
> port "1" (high speed)
> >
> >
> > it is likely related to the recent USB patches
> > investigating
> 
> 
>  hm, there are multiple problems (features/bugs depending on
> preferred point of view:)
>  but there is an easy “fix” taking care of this particular problem, so
> we can start with that and figure out the proper approach later
>  arik will push that and merge it soon, likely today
> >>>
> >>>
> >>> Thanks - if there is a quicker way to resolve this by reverting, I
> think it's a better option.
> >>
> >>
> >> I really need to talk you out of this approach:-)
> >> It does sound tempting and logical, but with our development model of
> large patch series combined with late detection it really is quite risky.
> Here it wouldn’t help much…and figuring out the right revert patch is more
> complicated than fixing it.
> >
> >
> > Can we start asking developers to run OST before they merge so it will be
> early detection and not late detection?
> > We have video sessions on how to use OST, so there shouldn't be any issues
> running it on a patch.
> >
> >>
> >> I believe the best is to identify it early and notify the maintainer
> who merged that patch ASAP, as that person is in the best position to assess
> if revert is safe or if there is a simple follow up patch he can push right
> away
> >>
> >> We can surely improve on reporting, so Barak, how/why did you point to
> that particular patch in your email? It should start failing on
> 16c2ec236184b3152f1df8e874b43115f78d0989 (CommitDate: Fri Dec 16 01:56:07
> 2016 -0500)
> >> Even though it may be that it was hidden because of
> c46f653a7846c3c2a76507b8dcf5bc0391ec5709 (CommitDate: Mon Dec 19 15:16:40
> 2016 -0500)
> >>
> >> (fix is 

Re: [ovirt-devel] System tests for 4.1 currently failing to run VMs!

2016-12-21 Thread Oved Ourfali
Why not run it via Jenkins for patches?
Like, if you add a comment saying "run: ost" it will run it?
Or should it do it automatically based on something else?

On Dec 21, 2016 17:42, "Eyal Edri"  wrote:
>
>
>
> On Wed, Dec 21, 2016 at 5:36 PM, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:
>>
>>
>>> On 21 Dec 2016, at 16:25, Yaniv Kaul  wrote:
>>>
>>>
>>>
>>> On Wed, Dec 21, 2016 at 5:19 PM, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:


> On 21 Dec 2016, at 14:56, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:
>
>
>> On 21 Dec 2016, at 12:19, Eyal Edri  wrote:
>>
>>
>>
>> On Wed, Dec 21, 2016 at 12:56 PM, Vinzenz Feenstra <
vfeen...@redhat.com> wrote:
>>>
>>>
 On Dec 21, 2016, at 11:17 AM, Barak Korren 
wrote:

 The test for running VMs had been failing since yesterday.

 The patch merged before the failures started was:
 https://gerrit.ovirt.org/#/c/68826/
>>>
>>>
>>>
>>>

 The error we`re seeing is a time-out (after two minutes) while
running
 this API call:

 api.vms.get(VM0_NAME).status.state == ‘up'
>>>
>>>
>>> This is a REST API call, the patch above is Frontend. So this is
unrelated.
>>>
>>> However on Host 0 I can see this:
>>>
>>> 2016-12-20 16:54:43,544 ERROR (vm/d299ab29) [virt.vm]
(vmId='d299ab29-284a-435c-a50f-183a6e54def2') The vm start process failed
(vm:615) Traceback (most recent call last): File
"/usr/share/vdsm/virt/vm.py", line 551, in _startUnderlyingVm self._run()
File "/usr/share/vdsm/virt/vm.py", line 1991, in _run
self._connection.createXML(domxml, flags), File
"/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, in
wrapper ret = f(*args, **kwargs) File
"/usr/lib/python2.7/site-packages/vdsm/utils.py", line 941, in wrapper
return func(inst, *args, **kwargs) File
"/usr/lib64/python2.7/site-packages/libvirt.py", line 3782, in createXML if
ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: internal error: process exited while connecting to monitor:
2016-12-20T21:54:43.044971Z qemu-kvm: warning: CPU(s) not present in any
NUMA nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 2016-12-20T21:54:43.045164Z
qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA
config 2016-12-20T21:54:43.101886Z qemu-kvm: -device
usb-ccid,id=ccid0,bus=usb.0,port=1: Warning: speed mismatch trying to
attach usb device "QEMU USB CCID" (full speed) to bus "usb.0", port "1"
(high speed)
>
>
> it is likely related to the recent USB patches
> investigating


 hm, there are multiple problems (features/bugs depending on preferred
point of view:)
 but there is an easy “fix” taking care of this particular problem, so
we can start with that and figure out the proper approach later
 arik will push that and merge it soon, likely today
>>>
>>>
>>> Thanks - if there is a quicker way to resolve this by reverting, I
think it's a better option.
>>
>>
>> I really need to talk you out of this approach:-)
>> It does sound tempting and logical, but with our development model of
large patch series combined with late detection it really is quite risky.
Here it wouldn’t help much…and figuring out the right revert patch is more
complicated than fixing it.
>
>
> Can we start asking developers to run OST before they merge so it will be
early detection and not late detection?
> We have video sessions on how to use OST, so there shouldn't be any issues
running it on a patch.
>
>>
>> I believe the best is to identify it early and notify the maintainer who
merged that patch ASAP, as that person is in the best position to assess if
revert is safe or if there is a simple follow up patch he can push right
away
>>
>> We can surely improve on reporting, so Barak, how/why did you point to
that particular patch in your email? It should start failing on
16c2ec236184b3152f1df8e874b43115f78d0989 (CommitDate: Fri Dec 16 01:56:07
2016 -0500)
>> Even though it may be that it was hidden because
of c46f653a7846c3c2a76507b8dcf5bc0391ec5709 (CommitDate: Mon Dec 19
15:16:40 2016 -0500)
>>
>> (fix is ready, waiting on CI now)
>>
>> Thanks,
>> michal
>>
>>> Y.
>>>


>>> 2016-12-20 16:54:43,550 INFO (vm/d299ab29) [virt.vm]
(vmId='d299ab29-284a-435c-a50f-183a6e54def2') Changed state to Down:
internal error: process exited while connecting to monitor:
2016-12-20T21:54:43.044971Z qemu-kvm: warning: CPU(s) not present in any
NUMA nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 2016-12-20T21:54:43.045164Z
qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA
config 2016-12-20T21:54:43.101886Z qemu-kvm: -device
usb-ccid,id=ccid0,bus=usb.0,port=1: Warning: speed mismatch trying to
attach usb device "QEMU USB CCID" (full speed) to bus "usb.0", port "1"
(high speed) (code=1) 
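
For context, the check that keeps timing out is simply a poll of the VM state
through the engine's Python SDK, as quoted above. A minimal sketch of such a
wait loop (a hypothetical helper; it assumes only the
api.vms.get(VM0_NAME).status.state interface shown in the quoted test):

    import time

    def wait_for_vm_up(api, vm_name, timeout=120, interval=3):
        # Poll the engine until the VM reports 'up' or the timeout expires.
        deadline = time.time() + timeout
        while time.time() < deadline:
            if api.vms.get(vm_name).status.state == 'up':
                return
            time.sleep(interval)
        raise AssertionError('%s did not reach state "up" within %s seconds'
                             % (vm_name, timeout))

In the failing runs the VM never reaches 'up' at all, because qemu exits while
libvirt is connecting to its monitor (the libvirtError quoted above), so a
longer timeout would not have helped.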

Re: [ovirt-devel] [RFE] treat local NFS storage as localfs

2016-12-21 Thread Michal Skrivanek
> On 21 Dec 2016, at 16:26, Martin Sivak  wrote:
>
> Hi,
>
>> Hope this gets in. This seems like less overhead than a complete
>> hyperconverged gluster setup.
>
> But NFS still is a single point of failure. Hyperconverged is supposed
> to address that.
>
>>> In order to improve performance, disk I/O bound VMs can be pinned to
>>> a host with local storage. However, there is still a performance
>>> overhead from the NFS layers. Treating local NFS storage as local storage
>>> improves performance for VMs pinned to that host.
>
> So VMs on one host will get better IO performance and the others will
> still use NFS as they do now.
>
> It is an interesting idea, I am just not sure if having poor-man's
> hyperconverged setup with all the drawbacks of NFS is worth it.
> Imagine for example what happens when that storage provider host needs
> to be fenced or put into maintenance. The whole cluster would go down
> (all VMs would lose storage connection, not just the VMs from the
> affected host).
>
> I will let someone from the storage team respond to this, but I do
> not think that trading performance (each host has its own local
> storage) and resilience (well, at least one failing host does not
> affect the others) for migrations is a good deal.

If disk performance is critical, then there is an option to use direct
access on the local host, using either PCI passthrough of a local storage
controller or SCSI passthrough of LUNs.

>
> --
> Martin Sivak
> SLA / oVirt
>
>> On Wed, Dec 21, 2016 at 2:18 PM, Sven Kieske  wrote:
>>> On 21/12/16 11:44, Pavel Gashev wrote:
>>> Hello,
>>>
>>> I'd like to introduce an RFE that allows using local storage in multi-server
>>> environments: https://bugzilla.redhat.com/show_bug.cgi?id=1406412
>>>
>>> Most servers have local storage. Some servers have very reliable
>>> storage with hardware RAID controllers and battery backup units.
>>>
>>> Example use cases:
>>> https://www.mail-archive.com/users@ovirt.org/msg36719.html
>>> https://www.mail-archive.com/users@ovirt.org/msg36772.html
>>>
>>> The best way to use local storage in multi-server "shared" datacenters
>>> is exporting it over NFS. Using NFS allows moving disks and VMs among
>>> servers.
>>>
>>> In order to improve performance, disk I/O bound VMs can be pinned to
>>> a host with local storage. However, there is still a performance
>>> overhead from the NFS layers. Treating local NFS storage as local storage
>>> improves performance for VMs pinned to that host.
>>>
>>> Currently, setting up NFS exports is out of scope for oVirt. However,
>>> this would be a way to get rid of the "Local/Shared" datacenter storage
>>> types, so that all storage is shared but local storage is
>>> used as local.
>>>
>>> Any questions/comments are welcome.
>>>
>>> Specifically, I'd like to request comments on potential data
>>> integrity issues during online VM or disk migration between NFS and
>>> localfs.
>>>
>>
>> Just let me say that I really like this as an end user.
>>
>> Hope this gets in. This seems like less overhead than a complete
>> hyperconverged gluster setup.
>>
>>
>> --
>> Mit freundlichen Grüßen / Regards
>>
>> Sven Kieske
>>
>> Systemadministrator
>> Mittwald CM Service GmbH & Co. KG
>> Königsberger Straße 6
>> 32339 Espelkamp
>> T: +495772 293100
>> F: +495772 29
>> https://www.mittwald.de
>> Geschäftsführer: Robert Meyer
>> St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
>> Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
>>
>>
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] Fwd: Fedora 23 End Of Life

2016-12-21 Thread Sandro Bonazzola
FYI.
-- Forwarded message --
From: "Mohan Boddu" 
Date: 21/Dec/2016 04:05
Subject: Fedora 23 End Of Life
To: , <
test-annou...@lists.fedoraproject.org>, <
devel-annou...@lists.fedoraproject.org>
Cc:

As of the 20th of December 2016, Fedora 23 has reached its end of life
for updates and support. No further updates, including security
updates, will be available for Fedora 23. A previous reminder was sent
on 28th of November 2016 [0]. Fedora 24 will continue to receive
updates until approximately one month after the release of Fedora 26.
The maintenance schedule of Fedora releases is documented on the
Fedora Project wiki [1]. The Fedora Project wiki also contains
instructions [2] on how to upgrade from a previous release of Fedora
to a version receiving updates.

Mohan Boddu.

[0]https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/thread/HLHKRTIB33EDZXP624GHF2OZLHWAGKSJ/#Q5O44X4BEBOYEKAEVLSXVI44DSNVHBYG
[1]https://fedoraproject.org/wiki/Fedora_Release_Life_Cycle#Maintenance_Schedule
[2]https://fedoraproject.org/wiki/Upgrading?rd=DistributionUpgrades
___
devel-announce mailing list -- devel-annou...@lists.fedoraproject.org
To unsubscribe send an email to devel-announce-le...@lists.fedoraproject.org
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] test mail

2016-12-21 Thread Evgheni Dereveanchin
Hi, this is a test message to check that archiving 
is back to normal as part of OVIRT-949 [1]

Please ignore.

Regards, 
Evgheni Dereveanchin 

[1] https://ovirt-jira.atlassian.net/browse/OVIRT-949
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] System tests for 4.1 currently failing to run VMs!

2016-12-21 Thread Michal Skrivanek

> On 21 Dec 2016, at 16:41, Eyal Edri  wrote:
> 
> 
> 
> On Wed, Dec 21, 2016 at 5:36 PM, Michal Skrivanek 
> > wrote:
> 
>> On 21 Dec 2016, at 16:25, Yaniv Kaul > > wrote:
>> 
>> 
>> 
>> On Wed, Dec 21, 2016 at 5:19 PM, Michal Skrivanek 
>> > wrote:
>> 
>>> On 21 Dec 2016, at 14:56, Michal Skrivanek >> > wrote:
>>> 
>>> 
 On 21 Dec 2016, at 12:19, Eyal Edri > wrote:
 
 
 
 On Wed, Dec 21, 2016 at 12:56 PM, Vinzenz Feenstra > wrote:
 
> On Dec 21, 2016, at 11:17 AM, Barak Korren  > wrote:
> 
> The test for running VMs had been failing since yesterday.
> 
> The patch merged before the failures started was:
> https://gerrit.ovirt.org/#/c/68826/ 
 
 
 
> 
> The error we`re seeing is a time-out (after two minutes) while running
> this API call:
> 
> api.vms.get(VM0_NAME).status.state == ‘up'
 
 This is a REST API call, the patch above is Frontend. So this is unrelated.
 
 However on Host 0 I can see this:
 
 2016-12-20 16:54:43,544 ERROR (vm/d299ab29) [virt.vm] 
 (vmId='d299ab29-284a-435c-a50f-183a6e54def2') The vm start process failed 
 (vm:615)
 Traceback (most recent call last):
   File "/usr/share/vdsm/virt/vm.py", line 551, in _startUnderlyingVm
 self._run()
   File "/usr/share/vdsm/virt/vm.py", line 1991, in _run
 self._connection.createXML(domxml, flags),
   File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 
 123, in wrapper
 ret = f(*args, **kwargs)
   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 941, in 
 wrapper
 return func(inst, *args, **kwargs)
   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3782, in 
 createXML
 if ret is None:raise libvirtError('virDomainCreateXML() failed', 
 conn=self)
 libvirtError: internal error: process exited while connecting to monitor: 
 2016-12-20T21:54:43.044971Z qemu-kvm: warning: CPU(s) not present in any 
 NUMA nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
 2016-12-20T21:54:43.045164Z qemu-kvm: warning: All CPU(s) up to maxcpus 
 should be described in NUMA config
 2016-12-20T21:54:43.101886Z qemu-kvm: -device 
 usb-ccid,id=ccid0,bus=usb.0,port=1: Warning: speed mismatch trying to 
 attach usb device "QEMU USB CCID" (full speed) to bus "usb.0", port "1" 
 (high speed)
>>> 
>>> it is likely related to the recent USB patches 
>>> investigating
>> 
>> hm, there are multiple problems (features/bugs depending on preferred point
>> of view:)
>> but there is an easy “fix” taking care of this particular problem, so we can 
>> start with that and figure out the proper approach later
>> arik will push that and merge it soon, likely today
>> 
>> Thanks - if there is a quicker way to resolve this by reverting, I think 
>> it's a better option.
> 
> I really need to talk you out of this approach:-)
> It does sound tempting and logical, but with our development model of large 
> patch series combined with late detection it really is quite risky. Here it 
> wouldn’t help much…and figuring out the right revert patch is more 
> complicated than fixing it.
> 
> Can we start asking developers to run OST before they merge so it will be early
> detection and not late detection? 

> We have video sessions on how to use OST, so there shouldn't be any issues
> running it on a patch.

why do you think I’m so annoying about those dependency issues:-)
It is not currently possible for most people, for different reasons.
Plus all the various recent breakages in the past 2 weeks made it even worse:/

>  
> I believe the best is to identify it early and notify the maintainer who 
> merged that patch ASAP, as that person is in the best position to assess if
> revert is safe or if there is a simple follow up patch he can push right away
> 
> We can surely improve on reporting, so Barak, how/why did you point to that 
> particular patch in your email? It should start failing on 
> 16c2ec236184b3152f1df8e874b43115f78d0989 (CommitDate: Fri Dec 16 01:56:07 
> 2016 -0500)
> Even though it may be that it was hidden because of 
> c46f653a7846c3c2a76507b8dcf5bc0391ec5709 (CommitDate: Mon Dec 19 15:16:40 
> 2016 -0500)
> 
> (fix is ready, waiting on CI now)
> 
> Thanks,
> michal
> 
>> Y.
>>  
>> 
 2016-12-20 16:54:43,550 INFO  (vm/d299ab29) [virt.vm] 
 (vmId='d299ab29-284a-435c-a50f-183a6e54def2') Changed state to Down: 
 internal error: process exited while connecting to monitor: 
 

Re: [ovirt-devel] System tests for 4.1 currently failing to run VMs!

2016-12-21 Thread Eyal Edri
On Wed, Dec 21, 2016 at 5:36 PM, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:

>
> On 21 Dec 2016, at 16:25, Yaniv Kaul  wrote:
>
>
>
> On Wed, Dec 21, 2016 at 5:19 PM, Michal Skrivanek <
> michal.skriva...@redhat.com> wrote:
>
>>
>> On 21 Dec 2016, at 14:56, Michal Skrivanek 
>> wrote:
>>
>>
>> On 21 Dec 2016, at 12:19, Eyal Edri  wrote:
>>
>>
>>
>> On Wed, Dec 21, 2016 at 12:56 PM, Vinzenz Feenstra 
>> wrote:
>>
>>>
>>> On Dec 21, 2016, at 11:17 AM, Barak Korren  wrote:
>>>
>>> The test for running VMs had been failing since yesterday.
>>>
>>> The patch merged before the failures started was:
>>> https://gerrit.ovirt.org/#/c/68826/
>>>
>>>
>>>
>>>
>>>
>>> The error we`re seeing is a time-out (after two minutes) while running
>>> this API call:
>>>
>>> api.vms.get(VM0_NAME).status.state == ‘up'
>>>
>>>
>>> This is a REST API call, the patch above is Frontend. So this is
>>> unrelated.
>>>
>>> However on Host 0 I can see this:
>>>
>>> 2016-12-20 16:54:43,544 ERROR (vm/d299ab29) [virt.vm] 
>>> (vmId='d299ab29-284a-435c-a50f-183a6e54def2') The vm start process failed 
>>> (vm:615)
>>> Traceback (most recent call last):
>>>   File "/usr/share/vdsm/virt/vm.py", line 551, in _startUnderlyingVm
>>> self._run()
>>>   File "/usr/share/vdsm/virt/vm.py", line 1991, in _run
>>> self._connection.createXML(domxml, flags),
>>>   File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 
>>> 123, in wrapper
>>> ret = f(*args, **kwargs)
>>>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 941, in 
>>> wrapper
>>> return func(inst, *args, **kwargs)
>>>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3782, in 
>>> createXML
>>> if ret is None:raise libvirtError('virDomainCreateXML() failed', 
>>> conn=self)
>>> libvirtError: internal error: process exited while connecting to monitor: 
>>> 2016-12-20T21:54:43.044971Z qemu-kvm: warning: CPU(s) not present in any 
>>> NUMA nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
>>> 2016-12-20T21:54:43.045164Z qemu-kvm: warning: All CPU(s) up to maxcpus 
>>> should be described in NUMA config
>>> 2016-12-20T21:54:43.101886Z qemu-kvm: -device 
>>> usb-ccid,id=ccid0,bus=usb.0,port=1: Warning: speed mismatch trying to 
>>> attach usb device "QEMU USB CCID" (full speed) to bus "usb.0", port "1" 
>>> (high speed)
>>>
>>>
>> it is likely related to the recent USB patches
>> investigating
>>
>>
>> hm, there are multiple problems (features/bugs depending on preferred
>> point of view:)
>> but there is an easy “fix” taking care of this particular problem, so we
>> can start with that and figure out the proper approach later
>> arik will push that and merge it soon, likely today
>>
>
> Thanks - if there is a quicker way to resolve this by reverting, I think
> it's a better option.
>
>
> I really need to talk you out of this approach:-)
> It does sound tempting and logical, but with our development model of
> large patch series combined with late detection it really is quite risky.
> Here it wouldn’t help much…and figuring out the right revert patch is more
> complicated than fixing it.
>

Can we start asking developers to run OST before they merge so it will be
early detection and not late detection?
We have video sessions on how to use OST, so there shouldn't be any issues
running it on a patch.


> I believe the best is to identify it early and notify the maintainer who
> merged that patch ASAP, as that person is in the best position to assess if
> revert is safe or if there is a simple follow up patch he can push right
> away
>
> We can surely improve on reporting, so Barak, how/why did you point to
> that particular patch in your email? It should start failing on
> 16c2ec236184b3152f1df8e874b43115f78d0989 (CommitDate: Fri Dec 16 01:56:07
> 2016 -0500)
> Even though it may be that it was hidden because of
> c46f653a7846c3c2a76507b8dcf5bc0391ec5709 (CommitDate: Mon Dec 19 15:16:40
> 2016 -0500)
>
> (fix is ready, waiting on CI now)
>
> Thanks,
> michal
>
> Y.
>
>
>>
>> 2016-12-20 16:54:43,550 INFO  (vm/d299ab29) [virt.vm] 
>> (vmId='d299ab29-284a-435c-a50f-183a6e54def2') Changed state to Down: 
>> internal error: process exited while connecting to monitor: 
>> 2016-12-20T21:54:43.044971Z qemu-kvm: warning: CPU(s) not present in any 
>> NUMA nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
>>> 2016-12-20T21:54:43.045164Z qemu-kvm: warning: All CPU(s) up to maxcpus 
>>> should be described in NUMA config
>>> 2016-12-20T21:54:43.101886Z qemu-kvm: -device 
>>> usb-ccid,id=ccid0,bus=usb.0,port=1: Warning: speed mismatch trying to 
>>> attach usb device "QEMU USB CCID" (full speed) to bus "usb.0", port "1" 
>>> (high speed) (code=1) (vm:1197)
>>> 2016-12-20 16:54:43,550 INFO  (vm/d299ab29) [virt.vm] 
>>> (vmId='d299ab29-284a-435c-a50f-183a6e54def2') Stopping connection 
>>> (guestagent:430)
>>>
>>>
>>>
>>> And on the engine, loads of these:

Re: [ovirt-devel] System tests for 4.1 currently failing to run VMs!

2016-12-21 Thread Michal Skrivanek

> On 21 Dec 2016, at 16:25, Yaniv Kaul  wrote:
> 
> 
> 
> On Wed, Dec 21, 2016 at 5:19 PM, Michal Skrivanek 
> > wrote:
> 
>> On 21 Dec 2016, at 14:56, Michal Skrivanek > > wrote:
>> 
>> 
>>> On 21 Dec 2016, at 12:19, Eyal Edri >> > wrote:
>>> 
>>> 
>>> 
>>> On Wed, Dec 21, 2016 at 12:56 PM, Vinzenz Feenstra >> > wrote:
>>> 
 On Dec 21, 2016, at 11:17 AM, Barak Korren > wrote:
 
 The test for running VMs had been failing since yesterday.
 
 The patch merged before the failures started was:
 https://gerrit.ovirt.org/#/c/68826/ 
>>> 
>>> 
>>> 
 
 The error we`re seeing is a time-out (after two minutes) while running
 this API call:
 
 api.vms.get(VM0_NAME).status.state == ‘up'
>>> 
>>> This is a REST API call, the patch above is Frontend. So this is unrelated.
>>> 
>>> However on Host 0 I can see this:
>>> 
>>> 2016-12-20 16:54:43,544 ERROR (vm/d299ab29) [virt.vm] 
>>> (vmId='d299ab29-284a-435c-a50f-183a6e54def2') The vm start process failed 
>>> (vm:615)
>>> Traceback (most recent call last):
>>>   File "/usr/share/vdsm/virt/vm.py", line 551, in _startUnderlyingVm
>>> self._run()
>>>   File "/usr/share/vdsm/virt/vm.py", line 1991, in _run
>>> self._connection.createXML(domxml, flags),
>>>   File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 
>>> 123, in wrapper
>>> ret = f(*args, **kwargs)
>>>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 941, in 
>>> wrapper
>>> return func(inst, *args, **kwargs)
>>>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3782, in 
>>> createXML
>>> if ret is None:raise libvirtError('virDomainCreateXML() failed', 
>>> conn=self)
>>> libvirtError: internal error: process exited while connecting to monitor: 
>>> 2016-12-20T21:54:43.044971Z qemu-kvm: warning: CPU(s) not present in any 
>>> NUMA nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
>>> 2016-12-20T21:54:43.045164Z qemu-kvm: warning: All CPU(s) up to maxcpus 
>>> should be described in NUMA config
>>> 2016-12-20T21:54:43.101886Z qemu-kvm: -device 
>>> usb-ccid,id=ccid0,bus=usb.0,port=1: Warning: speed mismatch trying to 
>>> attach usb device "QEMU USB CCID" (full speed) to bus "usb.0", port "1" 
>>> (high speed)
>> 
>> it is likely related to the recent USB patches 
>> investigating
> 
> hm, there are multiple problems (features/bugs depending on preferred point
> of view:)
> but there is an easy “fix” taking care of this particular problem, so we can 
> start with that and figure out the proper approach later
> arik will push that and merge it soon, likely today
> 
> Thanks - if there is a quicker way to resolve this by reverting, I think it's 
> a better option.

I really need to talk you out of this approach:-)
It does sound tempting and logical, but with our development model of large 
patch series combined with late detection it really is quite risky. Here it 
wouldn’t help much…and figuring out the right revert patch is more complicated 
than fixing it.
I believe the best is to identify it early and notify the maintainer who merged 
that patch ASAP, as that person is in the best position to assess if revert is
safe or if there is a simple follow up patch he can push right away

We can surely improve on reporting, so Barak, how/why did you point to that 
particular patch in your email? It should start failing on 
16c2ec236184b3152f1df8e874b43115f78d0989 (CommitDate: Fri Dec 16 01:56:07 2016 
-0500)
Even though it may be that it was hidden because of 
c46f653a7846c3c2a76507b8dcf5bc0391ec5709 (CommitDate: Mon Dec 19 15:16:40 2016 
-0500)

(fix is ready, waiting on CI now)

Thanks,
michal

> Y.
>  
> 
>>> 2016-12-20 16:54:43,550 INFO  (vm/d299ab29) [virt.vm] 
>>> (vmId='d299ab29-284a-435c-a50f-183a6e54def2') Changed state to Down: 
>>> internal error: process exited while connecting to monitor: 
>>> 2016-12-20T21:54:43.044971Z qemu-kvm: warning: CPU(s) not present in any 
>>> NUMA nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
>>> 2016-12-20T21:54:43.045164Z qemu-kvm: warning: All CPU(s) up to maxcpus 
>>> should be described in NUMA config
>>> 2016-12-20T21:54:43.101886Z qemu-kvm: -device 
>>> usb-ccid,id=ccid0,bus=usb.0,port=1: Warning: speed mismatch trying to 
>>> attach usb device "QEMU USB CCID" (full speed) to bus "usb.0", port "1" 
>>> (high speed) (code=1) (vm:1197)
>>> 2016-12-20 16:54:43,550 INFO  (vm/d299ab29) [virt.vm] 
>>> (vmId='d299ab29-284a-435c-a50f-183a6e54def2') Stopping connection 
>>> (guestagent:430)
>>> 
>>> 
>>> And on the engine, loads of these:
>>> 
>>> 2016-12-20 16:53:57,844-05 ERROR 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand] (default 

Re: [ovirt-devel] System tests for 4.1 currently failing to run VMs!

2016-12-21 Thread Michal Skrivanek
oh, and Barak, thanks a lot for a very nice descriptive report with all 
relevant links
That greatly improves the chance that random people like me take a look and
increases the chance that the problem is identified

> On 21 Dec 2016, at 16:19, Michal Skrivanek  
> wrote:
> 
> 
>> On 21 Dec 2016, at 14:56, Michal Skrivanek > > wrote:
>> 
>> 
>>> On 21 Dec 2016, at 12:19, Eyal Edri >> > wrote:
>>> 
>>> 
>>> 
>>> On Wed, Dec 21, 2016 at 12:56 PM, Vinzenz Feenstra >> > wrote:
>>> 
 On Dec 21, 2016, at 11:17 AM, Barak Korren > wrote:
 
 The test for running VMs had been failing since yesterday.
 
 The patch merged before the failures started was:
 https://gerrit.ovirt.org/#/c/68826/ 
>>> 
>>> 
>>> 
 
 The error we`re seeing is a time-out (after two minutes) while running
 this API call:
 
 api.vms.get(VM0_NAME).status.state == ‘up'
>>> 
>>> This is a REST API call, the patch above is Frontend. So this is unrelated.
>>> 
>>> However on Host 0 I can see this:
>>> 
>>> 2016-12-20 16:54:43,544 ERROR (vm/d299ab29) [virt.vm] 
>>> (vmId='d299ab29-284a-435c-a50f-183a6e54def2') The vm start process failed 
>>> (vm:615)
>>> Traceback (most recent call last):
>>>   File "/usr/share/vdsm/virt/vm.py", line 551, in _startUnderlyingVm
>>> self._run()
>>>   File "/usr/share/vdsm/virt/vm.py", line 1991, in _run
>>> self._connection.createXML(domxml, flags),
>>>   File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 
>>> 123, in wrapper
>>> ret = f(*args, **kwargs)
>>>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 941, in 
>>> wrapper
>>> return func(inst, *args, **kwargs)
>>>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3782, in 
>>> createXML
>>> if ret is None:raise libvirtError('virDomainCreateXML() failed', 
>>> conn=self)
>>> libvirtError: internal error: process exited while connecting to monitor: 
>>> 2016-12-20T21:54:43.044971Z qemu-kvm: warning: CPU(s) not present in any 
>>> NUMA nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
>>> 2016-12-20T21:54:43.045164Z qemu-kvm: warning: All CPU(s) up to maxcpus 
>>> should be described in NUMA config
>>> 2016-12-20T21:54:43.101886Z qemu-kvm: -device 
>>> usb-ccid,id=ccid0,bus=usb.0,port=1: Warning: speed mismatch trying to 
>>> attach usb device "QEMU USB CCID" (full speed) to bus "usb.0", port "1" 
>>> (high speed)
>> 
>> it is likely related to the recent USB patches 
>> investigating
> 
> hm, there are multiple problems (features/bugs depending on preferred point
> of view:)
> but there is an easy “fix” taking care of this particular problem, so we can 
> start with that and figure out the proper approach later
> arik will push that and merge it soon, likely today
> 
>>> 2016-12-20 16:54:43,550 INFO  (vm/d299ab29) [virt.vm] 
>>> (vmId='d299ab29-284a-435c-a50f-183a6e54def2') Changed state to Down: 
>>> internal error: process exited while connecting to monitor: 
>>> 2016-12-20T21:54:43.044971Z qemu-kvm: warning: CPU(s) not present in any 
>>> NUMA nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
>>> 2016-12-20T21:54:43.045164Z qemu-kvm: warning: All CPU(s) up to maxcpus 
>>> should be described in NUMA config
>>> 2016-12-20T21:54:43.101886Z qemu-kvm: -device 
>>> usb-ccid,id=ccid0,bus=usb.0,port=1: Warning: speed mismatch trying to 
>>> attach usb device "QEMU USB CCID" (full speed) to bus "usb.0", port "1" 
>>> (high speed) (code=1) (vm:1197)
>>> 2016-12-20 16:54:43,550 INFO  (vm/d299ab29) [virt.vm] 
>>> (vmId='d299ab29-284a-435c-a50f-183a6e54def2') Stopping connection 
>>> (guestagent:430)
>>> 
>>> 
>>> And on the engine, loads of these:
>>> 
>>> 2016-12-20 16:53:57,844-05 ERROR 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand] (default 
>>> task-17) [5ecd5a55-2b7a-4dd6-b42b-cc49bbfb3962] Command 
>>> 'PollVDSCommand(HostName = lago-basic-suite-4-1-host0, 
>>> VdsIdVDSCommandParametersBase:{runAsync='true', 
>>> hostId='994b5d79-605f-4415-94f2-02c79cfa246e'})' execution failed: 
>>> VDSGenericException: VDSNetworkException: Timeout during rpc call
>>> 2016-12-20 16:53:57,849-05 DEBUG 
>>> [org.ovirt.vdsm.jsonrpc.client.reactors.stomp.impl.Message] (SSL Stomp 
>>> Reactor) [7971dfb4] MESSAGE
>>> content-length:80
>>> destination:jms.topic.vdsm_responses
>>> content-type:application/json
>>> subscription:5b6494d5-d5a0-4771-941c-a8be70f72450
>>> 
>>> {"jsonrpc": "2.0", "id": "3c95fdb0-5b77-4927-9f6e-adc7395c122d", "result": 
>>> true}
>>> 2016-12-20 16:53:57,850-05 DEBUG 
>>> [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker] (ResponseWorker) [] 
>>> Message received: {"jsonrpc": "2.0", "id": 
>>> "3c95fdb0-5b77-4927-9f6e-adc7395c122d", "result": true}
>>> 2016-12-20 

Re: [ovirt-devel] System tests for 4.1 currently failing to run VMs!

2016-12-21 Thread Yaniv Kaul
On Wed, Dec 21, 2016 at 5:19 PM, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:

>
> On 21 Dec 2016, at 14:56, Michal Skrivanek 
> wrote:
>
>
> On 21 Dec 2016, at 12:19, Eyal Edri  wrote:
>
>
>
> On Wed, Dec 21, 2016 at 12:56 PM, Vinzenz Feenstra 
> wrote:
>
>>
>> On Dec 21, 2016, at 11:17 AM, Barak Korren  wrote:
>>
>> The test for running VMs had been failing since yesterday.
>>
>> The patch merged before the failures started was:
>> https://gerrit.ovirt.org/#/c/68826/
>>
>>
>>
>>
>>
>> The error we`re seeing is a time-out (after two minutes) while running
>> this API call:
>>
>> api.vms.get(VM0_NAME).status.state == ‘up'
>>
>>
>> This is a REST API call, the patch above is Frontend. So this is
>> unrelated.
>>
>> However on Host 0 I can see this:
>>
>> 2016-12-20 16:54:43,544 ERROR (vm/d299ab29) [virt.vm] 
>> (vmId='d299ab29-284a-435c-a50f-183a6e54def2') The vm start process failed 
>> (vm:615)
>> Traceback (most recent call last):
>>   File "/usr/share/vdsm/virt/vm.py", line 551, in _startUnderlyingVm
>> self._run()
>>   File "/usr/share/vdsm/virt/vm.py", line 1991, in _run
>> self._connection.createXML(domxml, flags),
>>   File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 
>> 123, in wrapper
>> ret = f(*args, **kwargs)
>>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 941, in wrapper
>> return func(inst, *args, **kwargs)
>>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3782, in 
>> createXML
>> if ret is None:raise libvirtError('virDomainCreateXML() failed', 
>> conn=self)
>> libvirtError: internal error: process exited while connecting to monitor: 
>> 2016-12-20T21:54:43.044971Z qemu-kvm: warning: CPU(s) not present in any 
>> NUMA nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
>> 2016-12-20T21:54:43.045164Z qemu-kvm: warning: All CPU(s) up to maxcpus 
>> should be described in NUMA config
>> 2016-12-20T21:54:43.101886Z qemu-kvm: -device 
>> usb-ccid,id=ccid0,bus=usb.0,port=1: Warning: speed mismatch trying to attach 
>> usb device "QEMU USB CCID" (full speed) to bus "usb.0", port "1" (high speed)
>>
>>
> it is likely related to the recent USB patches
> investigating
>
>
> hm, there are multiple problems (features/bugs depending on preferred
> point of view:)
> but there is an easy “fix” taking care of this particular problem, so we
> can start with that and figure out the proper approach later
> arik will push that and merge it soon, likely today
>

Thanks - if there is a quicker way to resolve this by reverting, I think
it's a better option.
Y.


>
> 2016-12-20 16:54:43,550 INFO  (vm/d299ab29) [virt.vm] 
> (vmId='d299ab29-284a-435c-a50f-183a6e54def2') Changed state to Down: internal 
> error: process exited while connecting to monitor: 
> 2016-12-20T21:54:43.044971Z qemu-kvm: warning: CPU(s) not present in any NUMA 
> nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
>> 2016-12-20T21:54:43.045164Z qemu-kvm: warning: All CPU(s) up to maxcpus 
>> should be described in NUMA config
>> 2016-12-20T21:54:43.101886Z qemu-kvm: -device 
>> usb-ccid,id=ccid0,bus=usb.0,port=1: Warning: speed mismatch trying to attach 
>> usb device "QEMU USB CCID" (full speed) to bus "usb.0", port "1" (high 
>> speed) (code=1) (vm:1197)
>> 2016-12-20 16:54:43,550 INFO  (vm/d299ab29) [virt.vm] 
>> (vmId='d299ab29-284a-435c-a50f-183a6e54def2') Stopping connection 
>> (guestagent:430)
>>
>>
>>
>> And on the engine, loads of these:
>>
>> 2016-12-20 16:53:57,844-05 ERROR 
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand] (default task-17) 
>> [5ecd5a55-2b7a-4dd6-b42b-cc49bbfb3962] Command 'PollVDSCommand(HostName = 
>> lago-basic-suite-4-1-host0, VdsIdVDSCommandParametersBase:{runAsync='true', 
>> hostId='994b5d79-605f-4415-94f2-02c79cfa246e'})' execution failed: 
>> VDSGenericException: VDSNetworkException: Timeout during rpc call
>> 2016-12-20 16:53:57,849-05 DEBUG 
>> [org.ovirt.vdsm.jsonrpc.client.reactors.stomp.impl.Message] (SSL Stomp 
>> Reactor) [7971dfb4] MESSAGE
>> content-length:80
>> destination:jms.topic.vdsm_responses
>> content-type:application/json
>> subscription:5b6494d5-d5a0-4771-941c-a8be70f72450
>>
>> {"jsonrpc": "2.0", "id": "3c95fdb0-5b77-4927-9f6e-adc7395c122d", "result": 
>> true}
>> 2016-12-20 16:53:57,850-05 DEBUG 
>> [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker] (ResponseWorker) [] 
>> Message received: {"jsonrpc": "2.0", "id": 
>> "3c95fdb0-5b77-4927-9f6e-adc7395c122d", "result": true}
>> 2016-12-20 16:53:57,850-05 ERROR 
>> [org.ovirt.vdsm.jsonrpc.client.JsonRpcClient] (ResponseWorker) [] Not able 
>> to update response for "3c95fdb0-5b77-4927-9f6e-adc7395c122d"
>> 2016-12-20 16:53:57,844-05 DEBUG 
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand] (default task-17) 
>> [5ecd5a55-2b7a-4dd6-b42b-cc49bbfb3962] Exception: 
>> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: 
>> VDSGenericException: 

Re: [ovirt-devel] [RFE] treat local NFS storage as localfs

2016-12-21 Thread Martin Sivak
Hi,

> Hope this gets in. This seems like less overhead than a complete
> hyperconverged gluster setup.

But NFS still is a single point of failure. Hyperconverged is supposed
to address that.

>> In order to improve performance, disk I/O bound VMs can be pinned to
>> a host with local storage. However, there is still a performance
>> overhead from the NFS layers. Treating local NFS storage as local storage
>> improves performance for VMs pinned to that host.

So VMs on one host will get better IO performance and the others will
still use NFS as they do now.

It is an interesting idea, I am just not sure if having poor-man's
hyperconverged setup with all the drawbacks of NFS is worth it.
Imagine for example what happens when that storage provider host needs
to be fenced or put into maintenance. The whole cluster would go down
(all VMs would lose storage connection, not just the VMs from the
affected host).

I will let someone from the storage team respond to this, but I do
not think that trading performance (each host has its own local
storage) and resilience (well, at least one failing host does not
affect the others) for migrations is a good deal.

--
Martin Sivak
SLA / oVirt

On Wed, Dec 21, 2016 at 2:18 PM, Sven Kieske  wrote:
> On 21/12/16 11:44, Pavel Gashev wrote:
>> Hello,
>>
>> I'd like to introduce an RFE that allows using local storage in multi-server
>> environments: https://bugzilla.redhat.com/show_bug.cgi?id=1406412
>>
>> Most servers have local storage. Some servers have very reliable
>> storage with hardware RAID controllers and battery backup units.
>>
>> Example use cases:
>> https://www.mail-archive.com/users@ovirt.org/msg36719.html
>> https://www.mail-archive.com/users@ovirt.org/msg36772.html
>>
>> The best way to use local storage in multi-server "shared" datacenters
>> is exporting it over NFS. Using NFS allows moving disks and VMs among
>> servers.
>>
>> In order to improve performance, disk I/O bound VMs can be pinned to
>> a host with local storage. However, there is still a performance
>> overhead from the NFS layers. Treating local NFS storage as local storage
>> improves performance for VMs pinned to that host.
>>
>> Currently, setting up NFS exports is out of scope for oVirt. However,
>> this would be a way to get rid of the "Local/Shared" datacenter storage
>> types, so that all storage is shared but local storage is
>> used as local.
>>
>> Any questions/comments are welcome.
>>
>> Specifically, I'd like to request comments on potential data
>> integrity issues during online VM or disk migration between NFS and
>> localfs.
>>
>
> Just let me say that I really like this as an end user.
>
> Hope this gets in. This seems like less overhead than a complete
> hyperconverged gluster setup.
>
>
> --
> Mit freundlichen Grüßen / Regards
>
> Sven Kieske
>
> Systemadministrator
> Mittwald CM Service GmbH & Co. KG
> Königsberger Straße 6
> 32339 Espelkamp
> T: +495772 293100
> F: +495772 29
> https://www.mittwald.de
> Geschäftsführer: Robert Meyer
> St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
> Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel
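
As a side note on the pinning mentioned in the RFE, pinning a disk I/O bound VM
to the host that exports the storage can be scripted through the Python SDK as
well. A rough sketch, assuming the legacy ovirtsdk (v3) params module; the
VmPlacementPolicy class and the exact setter names are an assumption and may
differ between SDK versions:

    from ovirtsdk.api import API
    from ovirtsdk.xml import params

    # Connect to the engine (URL and credentials are placeholders).
    api = API(url='https://engine.example.com/ovirt-engine/api',
              username='admin@internal', password='secret', insecure=True)

    # Pin the VM to the host that exports the local NFS storage, so its disk
    # I/O stays on the local path described in the RFE.
    vm = api.vms.get(name='io-bound-vm')
    vm.set_placement_policy(params.VmPlacementPolicy(
        host=params.Host(name='host-with-local-storage'),
        affinity='pinned'))
    vm.update()

The trade-off Martin describes still applies: only VMs pinned this way see the
local-storage performance, and the exporting host remains a single point of
failure for everything else that uses that storage.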

Re: [ovirt-devel] System tests for 4.1 currently failing to run VMs!

2016-12-21 Thread Michal Skrivanek

> On 21 Dec 2016, at 14:56, Michal Skrivanek  
> wrote:
> 
> 
>> On 21 Dec 2016, at 12:19, Eyal Edri > > wrote:
>> 
>> 
>> 
>> On Wed, Dec 21, 2016 at 12:56 PM, Vinzenz Feenstra > > wrote:
>> 
>>> On Dec 21, 2016, at 11:17 AM, Barak Korren >> > wrote:
>>> 
>>> The test for running VMs had been failing since yesterday.
>>> 
>>> The patch merged before the failures started was:
>>> https://gerrit.ovirt.org/#/c/68826/ 
>> 
>> 
>> 
>>> 
>>> The error we`re seeing is a time-out (after two minutes) while running
>>> this API call:
>>> 
>>> api.vms.get(VM0_NAME).status.state == ‘up'
>> 
>> This is a REST API call, the patch above is Frontend. So this is unrelated.
>> 
>> However on Host 0 I can see this:
>> 
>> 2016-12-20 16:54:43,544 ERROR (vm/d299ab29) [virt.vm] 
>> (vmId='d299ab29-284a-435c-a50f-183a6e54def2') The vm start process failed 
>> (vm:615)
>> Traceback (most recent call last):
>>   File "/usr/share/vdsm/virt/vm.py", line 551, in _startUnderlyingVm
>> self._run()
>>   File "/usr/share/vdsm/virt/vm.py", line 1991, in _run
>> self._connection.createXML(domxml, flags),
>>   File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 
>> 123, in wrapper
>> ret = f(*args, **kwargs)
>>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 941, in wrapper
>> return func(inst, *args, **kwargs)
>>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3782, in 
>> createXML
>> if ret is None:raise libvirtError('virDomainCreateXML() failed', 
>> conn=self)
>> libvirtError: internal error: process exited while connecting to monitor: 
>> 2016-12-20T21:54:43.044971Z qemu-kvm: warning: CPU(s) not present in any 
>> NUMA nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
>> 2016-12-20T21:54:43.045164Z qemu-kvm: warning: All CPU(s) up to maxcpus 
>> should be described in NUMA config
>> 2016-12-20T21:54:43.101886Z qemu-kvm: -device 
>> usb-ccid,id=ccid0,bus=usb.0,port=1: Warning: speed mismatch trying to attach 
>> usb device "QEMU USB CCID" (full speed) to bus "usb.0", port "1" (high speed)
> 
> it is likely related to the recent USB patches; investigating.

hm, there are multiple problems (features/bugs depending on the preferred point of
view :)
but there is an easy “fix” taking care of this particular problem, so we can
start with that and figure out the proper approach later.
Arik will push that and merge it soon, likely today.

>> 2016-12-20 16:54:43,550 INFO  (vm/d299ab29) [virt.vm] 
>> (vmId='d299ab29-284a-435c-a50f-183a6e54def2') Changed state to Down: 
>> internal error: process exited while connecting to monitor: 
>> 2016-12-20T21:54:43.044971Z qemu-kvm: warning: CPU(s) not present in any 
>> NUMA nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
>> 2016-12-20T21:54:43.045164Z qemu-kvm: warning: All CPU(s) up to maxcpus 
>> should be described in NUMA config
>> 2016-12-20T21:54:43.101886Z qemu-kvm: -device 
>> usb-ccid,id=ccid0,bus=usb.0,port=1: Warning: speed mismatch trying to attach 
>> usb device "QEMU USB CCID" (full speed) to bus "usb.0", port "1" (high 
>> speed) (code=1) (vm:1197)
>> 2016-12-20 16:54:43,550 INFO  (vm/d299ab29) [virt.vm] 
>> (vmId='d299ab29-284a-435c-a50f-183a6e54def2') Stopping connection 
>> (guestagent:430)
>> 
>> 
>> And on The engine loads of these:
>> 
>> 2016-12-20 16:53:57,844-05 ERROR 
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand] (default task-17) 
>> [5ecd5a55-2b7a-4dd6-b42b-cc49bbfb3962] Command 'PollVDSCommand(HostName = 
>> lago-basic-suite-4-1-host0, VdsIdVDSCommandParametersBase:{runAsync='true', 
>> hostId='994b5d79-605f-4415-94f2-02c79cfa246e'})' execution failed: 
>> VDSGenericException: VDSNetworkException: Timeout during rpc call
>> 2016-12-20 16:53:57,849-05 DEBUG 
>> [org.ovirt.vdsm.jsonrpc.client.reactors.stomp.impl.Message] (SSL Stomp 
>> Reactor) [7971dfb4] MESSAGE
>> content-length:80
>> destination:jms.topic.vdsm_responses
>> content-type:application/json
>> subscription:5b6494d5-d5a0-4771-941c-a8be70f72450
>> 
>> {"jsonrpc": "2.0", "id": "3c95fdb0-5b77-4927-9f6e-adc7395c122d", "result": 
>> true}�
>> 2016-12-20 16:53:57,850-05 DEBUG 
>> [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker] (ResponseWorker) [] 
>> Message received: {"jsonrpc": "2.0", "id": 
>> "3c95fdb0-5b77-4927-9f6e-adc7395c122d", "result": true}
>> 2016-12-20 16:53:57,850-05 ERROR 
>> [org.ovirt.vdsm.jsonrpc.client.JsonRpcClient] (ResponseWorker) [] Not able 
>> to update response for "3c95fdb0-5b77-4927-9f6e-adc7395c122d"
>> 2016-12-20 16:53:57,844-05 DEBUG 
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand] (default task-17) 
>> [5ecd5a55-2b7a-4dd6-b42b-cc49bbfb3962] Exception: 
>> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: 
>> VDSGenericException: VDSNetworkException: Timeout during rpc call
>>

Re: [ovirt-devel] System tests for 4.1 currently failing to run VMs!

2016-12-21 Thread Michal Skrivanek

> On 21 Dec 2016, at 12:19, Eyal Edri  wrote:
> 
> 
> 
> On Wed, Dec 21, 2016 at 12:56 PM, Vinzenz Feenstra  wrote:
> 
>> On Dec 21, 2016, at 11:17 AM, Barak Korren  wrote:
>> 
>> The test for running VMs has been failing since yesterday.
>> 
>> The patch merged before the failures started was:
>> https://gerrit.ovirt.org/#/c/68826/ 
> 
> 
> 
>> 
>> The error we're seeing is a time-out (after two minutes) while running
>> this API call:
>> 
>> api.vms.get(VM0_NAME).status.state == 'up'
> 
> This is a REST API call, the patch above is Frontend. So this is unrelated.
> 
> However on Host 0 I can see this:
> 
> 2016-12-20 16:54:43,544 ERROR (vm/d299ab29) [virt.vm] 
> (vmId='d299ab29-284a-435c-a50f-183a6e54def2') The vm start process failed 
> (vm:615)
> Traceback (most recent call last):
>   File "/usr/share/vdsm/virt/vm.py", line 551, in _startUnderlyingVm
> self._run()
>   File "/usr/share/vdsm/virt/vm.py", line 1991, in _run
> self._connection.createXML(domxml, flags),
>   File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 
> 123, in wrapper
> ret = f(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 941, in wrapper
> return func(inst, *args, **kwargs)
>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3782, in 
> createXML
> if ret is None:raise libvirtError('virDomainCreateXML() failed', 
> conn=self)
> libvirtError: internal error: process exited while connecting to monitor: 
> 2016-12-20T21:54:43.044971Z qemu-kvm: warning: CPU(s) not present in any NUMA 
> nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
> 2016-12-20T21:54:43.045164Z qemu-kvm: warning: All CPU(s) up to maxcpus 
> should be described in NUMA config
> 2016-12-20T21:54:43.101886Z qemu-kvm: -device 
> usb-ccid,id=ccid0,bus=usb.0,port=1: Warning: speed mismatch trying to attach 
> usb device "QEMU USB CCID" (full speed) to bus "usb.0", port "1" (high speed)

it is likely related to the recent USB patches; investigating.
> 2016-12-20 16:54:43,550 INFO  (vm/d299ab29) [virt.vm] 
> (vmId='d299ab29-284a-435c-a50f-183a6e54def2') Changed state to Down: internal 
> error: process exited while connecting to monitor: 
> 2016-12-20T21:54:43.044971Z qemu-kvm: warning: CPU(s) not present in any NUMA 
> nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
> 2016-12-20T21:54:43.045164Z qemu-kvm: warning: All CPU(s) up to maxcpus 
> should be described in NUMA config
> 2016-12-20T21:54:43.101886Z qemu-kvm: -device 
> usb-ccid,id=ccid0,bus=usb.0,port=1: Warning: speed mismatch trying to attach 
> usb device "QEMU USB CCID" (full speed) to bus "usb.0", port "1" (high speed) 
> (code=1) (vm:1197)
> 2016-12-20 16:54:43,550 INFO  (vm/d299ab29) [virt.vm] 
> (vmId='d299ab29-284a-435c-a50f-183a6e54def2') Stopping connection 
> (guestagent:430)
> 
> 
> And on The engine loads of these:
> 
> 2016-12-20 16:53:57,844-05 ERROR 
> [org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand] (default task-17) 
> [5ecd5a55-2b7a-4dd6-b42b-cc49bbfb3962] Command 'PollVDSCommand(HostName = 
> lago-basic-suite-4-1-host0, VdsIdVDSCommandParametersBase:{runAsync='true', 
> hostId='994b5d79-605f-4415-94f2-02c79cfa246e'})' execution failed: 
> VDSGenericException: VDSNetworkException: Timeout during rpc call
> 2016-12-20 16:53:57,849-05 DEBUG 
> [org.ovirt.vdsm.jsonrpc.client.reactors.stomp.impl.Message] (SSL Stomp 
> Reactor) [7971dfb4] MESSAGE
> content-length:80
> destination:jms.topic.vdsm_responses
> content-type:application/json
> subscription:5b6494d5-d5a0-4771-941c-a8be70f72450
> 
> {"jsonrpc": "2.0", "id": "3c95fdb0-5b77-4927-9f6e-adc7395c122d", "result": 
> true}�
> 2016-12-20 16:53:57,850-05 DEBUG 
> [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker] (ResponseWorker) [] 
> Message received: {"jsonrpc": "2.0", "id": 
> "3c95fdb0-5b77-4927-9f6e-adc7395c122d", "result": true}
> 2016-12-20 16:53:57,850-05 ERROR 
> [org.ovirt.vdsm.jsonrpc.client.JsonRpcClient] (ResponseWorker) [] Not able to 
> update response for "3c95fdb0-5b77-4927-9f6e-adc7395c122d"
> 2016-12-20 16:53:57,844-05 DEBUG 
> [org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand] (default task-17) 
> [5ecd5a55-2b7a-4dd6-b42b-cc49bbfb3962] Exception: 
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: 
> VDSGenericException: VDSNetworkException: Timeout during rpc call
>   at 
> org.ovirt.engine.core.vdsbroker.vdsbroker.FutureVDSCommand.get(FutureVDSCommand.java:73)
>  [vdsbroker.jar:]
>   at 
> org.ovirt.engine.core.bll.network.host.HostSetupNetworkPoller.getValue(HostSetupNetworkPoller.java:56)
>  [bll.jar:]
>   at 
> org.ovirt.engine.core.bll.network.host.HostSetupNetworkPoller.poll(HostSetupNetworkPoller.java:41)
>  [bll.jar:]
>   at 
> 

Re: [ovirt-devel] [RFE] treat local NFS storage as localfs

2016-12-21 Thread Sven Kieske
On 21/12/16 11:44, Pavel Gashev wrote:
> Hello,
> 
> I'd like to introduce an RFE that allows using local storage in multi-server
> environments: https://bugzilla.redhat.com/show_bug.cgi?id=1406412
> 
> Most servers have local storage. Some servers have very reliable
> storage with hardware RAID controllers and battery backup units.
> 
> Example use cases:
> https://www.mail-archive.com/users@ovirt.org/msg36719.html
> https://www.mail-archive.com/users@ovirt.org/msg36772.html
> 
> The best way to use local storage in multi-server "shared" datacenters
> is exporting it over NFS. Using NFS allows moving disks and VMs among
> servers.
> 
> In order to improve performance, disk I/O bound VMs can be pinned to
> a host with local storage. However, there is still a performance
> penalty from the NFS layers. Treating a locally exported NFS storage
> as local storage improves performance for the VMs pinned to that host.
> 
> Currently, setting up NFS exports is out of scope for oVirt. However,
> this would be a way to get rid of the "Local/Shared" datacenter
> storage types, so that all storage is shared, but local storage is
> accessed as local.
> 
> Any questions/comments are welcome.
> 
> Specifically, I'd like to request comments on potential data
> integrity issues during online VM or disk migration between NFS and
> localfs.
> 

Just let me say that I really like this as an end user.

Hope this gets in. This seems like less overhead than a complete
hyperconverged Gluster setup.


-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +495772 293100
F: +495772 29
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen



signature.asc
Description: OpenPGP digital signature
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] System tests for 4.1 currently failing to run VMs!

2016-12-21 Thread Eyal Edri
On Wed, Dec 21, 2016 at 12:56 PM, Vinzenz Feenstra 
wrote:

>
> On Dec 21, 2016, at 11:17 AM, Barak Korren  wrote:
>
> The test for running VMs has been failing since yesterday.
>
> The patch merged before the failures started was:
> https://gerrit.ovirt.org/#/c/68826/
>
>
>
>
>
> The error we're seeing is a time-out (after two minutes) while running
> this API call:
>
> api.vms.get(VM0_NAME).status.state == 'up'
>
>
> This is a REST API call, the patch above is Frontend. So this is unrelated.
>
> However on Host 0 I can see this:
>
> 2016-12-20 16:54:43,544 ERROR (vm/d299ab29) [virt.vm] 
> (vmId='d299ab29-284a-435c-a50f-183a6e54def2') The vm start process failed 
> (vm:615)
> Traceback (most recent call last):
>   File "/usr/share/vdsm/virt/vm.py", line 551, in _startUnderlyingVm
> self._run()
>   File "/usr/share/vdsm/virt/vm.py", line 1991, in _run
> self._connection.createXML(domxml, flags),
>   File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 
> 123, in wrapper
> ret = f(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 941, in wrapper
> return func(inst, *args, **kwargs)
>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3782, in 
> createXML
> if ret is None:raise libvirtError('virDomainCreateXML() failed', 
> conn=self)
> libvirtError: internal error: process exited while connecting to monitor: 
> 2016-12-20T21:54:43.044971Z qemu-kvm: warning: CPU(s) not present in any NUMA 
> nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
> 2016-12-20T21:54:43.045164Z qemu-kvm: warning: All CPU(s) up to maxcpus 
> should be described in NUMA config
> 2016-12-20T21:54:43.101886Z qemu-kvm: -device 
> usb-ccid,id=ccid0,bus=usb.0,port=1: Warning: speed mismatch trying to attach 
> usb device "QEMU USB CCID" (full speed) to bus "usb.0", port "1" (high speed)
> 2016-12-20 16:54:43,550 INFO  (vm/d299ab29) [virt.vm] 
> (vmId='d299ab29-284a-435c-a50f-183a6e54def2') Changed state to Down: internal 
> error: process exited while connecting to monitor: 
> 2016-12-20T21:54:43.044971Z qemu-kvm: warning: CPU(s) not present in any NUMA 
> nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
> 2016-12-20T21:54:43.045164Z qemu-kvm: warning: All CPU(s) up to maxcpus 
> should be described in NUMA config
> 2016-12-20T21:54:43.101886Z qemu-kvm: -device 
> usb-ccid,id=ccid0,bus=usb.0,port=1: Warning: speed mismatch trying to attach 
> usb device "QEMU USB CCID" (full speed) to bus "usb.0", port "1" (high speed) 
> (code=1) (vm:1197)
> 2016-12-20 16:54:43,550 INFO  (vm/d299ab29) [virt.vm] 
> (vmId='d299ab29-284a-435c-a50f-183a6e54def2') Stopping connection 
> (guestagent:430)
>
>
>
> And on The engine loads of these:
>
> 2016-12-20 16:53:57,844-05 ERROR 
> [org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand] (default task-17) 
> [5ecd5a55-2b7a-4dd6-b42b-cc49bbfb3962] Command 'PollVDSCommand(HostName = 
> lago-basic-suite-4-1-host0, VdsIdVDSCommandParametersBase:{runAsync='true', 
> hostId='994b5d79-605f-4415-94f2-02c79cfa246e'})' execution failed: 
> VDSGenericException: VDSNetworkException: Timeout during rpc call
> 2016-12-20 16:53:57,849-05 DEBUG 
> [org.ovirt.vdsm.jsonrpc.client.reactors.stomp.impl.Message] (SSL Stomp 
> Reactor) [7971dfb4] MESSAGE
> content-length:80
> destination:jms.topic.vdsm_responses
> content-type:application/json
> subscription:5b6494d5-d5a0-4771-941c-a8be70f72450
>
> {"jsonrpc": "2.0", "id": "3c95fdb0-5b77-4927-9f6e-adc7395c122d", "result": 
> true}�
> 2016-12-20 16:53:57,850-05 DEBUG 
> [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker] (ResponseWorker) [] 
> Message received: {"jsonrpc": "2.0", "id": 
> "3c95fdb0-5b77-4927-9f6e-adc7395c122d", "result": true}
> 2016-12-20 16:53:57,850-05 ERROR 
> [org.ovirt.vdsm.jsonrpc.client.JsonRpcClient] (ResponseWorker) [] Not able to 
> update response for "3c95fdb0-5b77-4927-9f6e-adc7395c122d"
> 2016-12-20 16:53:57,844-05 DEBUG 
> [org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand] (default task-17) 
> [5ecd5a55-2b7a-4dd6-b42b-cc49bbfb3962] Exception: 
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: 
> VDSGenericException: VDSNetworkException: Timeout during rpc call
>   at 
> org.ovirt.engine.core.vdsbroker.vdsbroker.FutureVDSCommand.get(FutureVDSCommand.java:73)
>  [vdsbroker.jar:]
>   at 
> org.ovirt.engine.core.bll.network.host.HostSetupNetworkPoller.getValue(HostSetupNetworkPoller.java:56)
>  [bll.jar:]
>   at 
> org.ovirt.engine.core.bll.network.host.HostSetupNetworkPoller.poll(HostSetupNetworkPoller.java:41)
>  [bll.jar:]
>   at 
> org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand.invokeSetupNetworksCommand(HostSetupNetworksCommand.java:426)
>  [bll.jar:]
>   at 
> org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand.executeCommand(HostSetupNetworksCommand.java:287)
>  [bll.jar:]
>   at 
> 

Re: [ovirt-devel] System tests for 4.1 currently failing to run VMs!

2016-12-21 Thread Vinzenz Feenstra

> On Dec 21, 2016, at 11:17 AM, Barak Korren  wrote:
> 
> The test for running VMs has been failing since yesterday.
> 
> The patch merged before the failures started was:
> https://gerrit.ovirt.org/#/c/68826/ 



> 
> The error we're seeing is a time-out (after two minutes) while running
> this API call:
> 
> api.vms.get(VM0_NAME).status.state == 'up'

This is a REST API call, the patch above is Frontend. So this is unrelated.

However on Host 0 I can see this:

2016-12-20 16:54:43,544 ERROR (vm/d299ab29) [virt.vm] 
(vmId='d299ab29-284a-435c-a50f-183a6e54def2') The vm start process failed 
(vm:615)
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 551, in _startUnderlyingVm
self._run()
  File "/usr/share/vdsm/virt/vm.py", line 1991, in _run
self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, 
in wrapper
ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 941, in wrapper
return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3782, in createXML
if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: internal error: process exited while connecting to monitor: 
2016-12-20T21:54:43.044971Z qemu-kvm: warning: CPU(s) not present in any NUMA 
nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
2016-12-20T21:54:43.045164Z qemu-kvm: warning: All CPU(s) up to maxcpus should 
be described in NUMA config
2016-12-20T21:54:43.101886Z qemu-kvm: -device 
usb-ccid,id=ccid0,bus=usb.0,port=1: Warning: speed mismatch trying to attach 
usb device "QEMU USB CCID" (full speed) to bus "usb.0", port "1" (high speed)
2016-12-20 16:54:43,550 INFO  (vm/d299ab29) [virt.vm] 
(vmId='d299ab29-284a-435c-a50f-183a6e54def2') Changed state to Down: internal 
error: process exited while connecting to monitor: 2016-12-20T21:54:43.044971Z 
qemu-kvm: warning: CPU(s) not present in any NUMA nodes: 1 2 3 4 5 6 7 8 9 10 
11 12 13 14 15
2016-12-20T21:54:43.045164Z qemu-kvm: warning: All CPU(s) up to maxcpus should 
be described in NUMA config
2016-12-20T21:54:43.101886Z qemu-kvm: -device 
usb-ccid,id=ccid0,bus=usb.0,port=1: Warning: speed mismatch trying to attach 
usb device "QEMU USB CCID" (full speed) to bus "usb.0", port "1" (high speed) 
(code=1) (vm:1197)
2016-12-20 16:54:43,550 INFO  (vm/d299ab29) [virt.vm] 
(vmId='d299ab29-284a-435c-a50f-183a6e54def2') Stopping connection 
(guestagent:430)


And on The engine loads of these:

2016-12-20 16:53:57,844-05 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand] (default task-17) 
[5ecd5a55-2b7a-4dd6-b42b-cc49bbfb3962] Command 'PollVDSCommand(HostName = 
lago-basic-suite-4-1-host0, VdsIdVDSCommandParametersBase:{runAsync='true', 
hostId='994b5d79-605f-4415-94f2-02c79cfa246e'})' execution failed: 
VDSGenericException: VDSNetworkException: Timeout during rpc call
2016-12-20 16:53:57,849-05 DEBUG 
[org.ovirt.vdsm.jsonrpc.client.reactors.stomp.impl.Message] (SSL Stomp Reactor) 
[7971dfb4] MESSAGE
content-length:80
destination:jms.topic.vdsm_responses
content-type:application/json
subscription:5b6494d5-d5a0-4771-941c-a8be70f72450

{"jsonrpc": "2.0", "id": "3c95fdb0-5b77-4927-9f6e-adc7395c122d", "result": 
true}�
2016-12-20 16:53:57,850-05 DEBUG 
[org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker] (ResponseWorker) [] 
Message received: {"jsonrpc": "2.0", "id": 
"3c95fdb0-5b77-4927-9f6e-adc7395c122d", "result": true}
2016-12-20 16:53:57,850-05 ERROR [org.ovirt.vdsm.jsonrpc.client.JsonRpcClient] 
(ResponseWorker) [] Not able to update response for 
"3c95fdb0-5b77-4927-9f6e-adc7395c122d"
2016-12-20 16:53:57,844-05 DEBUG 
[org.ovirt.engine.core.vdsbroker.vdsbroker.PollVDSCommand] (default task-17) 
[5ecd5a55-2b7a-4dd6-b42b-cc49bbfb3962] Exception: 
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: 
VDSGenericException: VDSNetworkException: Timeout during rpc call
at 
org.ovirt.engine.core.vdsbroker.vdsbroker.FutureVDSCommand.get(FutureVDSCommand.java:73)
 [vdsbroker.jar:]
at 
org.ovirt.engine.core.bll.network.host.HostSetupNetworkPoller.getValue(HostSetupNetworkPoller.java:56)
 [bll.jar:]
at 
org.ovirt.engine.core.bll.network.host.HostSetupNetworkPoller.poll(HostSetupNetworkPoller.java:41)
 [bll.jar:]
at 
org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand.invokeSetupNetworksCommand(HostSetupNetworksCommand.java:426)
 [bll.jar:]
at 
org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand.executeCommand(HostSetupNetworksCommand.java:287)
 [bll.jar:]
at 
org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1249)
 [bll.jar:]
at 
org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:1389)
 [bll.jar:]
at 

[ovirt-devel] [RFE] treat local NFS storage as localfs

2016-12-21 Thread Pavel Gashev
Hello,

I'd like to introduce an RFE that allows using local storage in multi-server
environments: https://bugzilla.redhat.com/show_bug.cgi?id=1406412

Most servers have local storage. Some servers have very reliable
storage with hardware RAID controllers and battery backup units.

Example use cases:
https://www.mail-archive.com/users@ovirt.org/msg36719.html
https://www.mail-archive.com/users@ovirt.org/msg36772.html

The best way to use local storage in multi-server "shared" datacenters
is exporting it over NFS. Using NFS allows moving disks and VMs among
servers.

In order to improve performance, disk I/O bound VMs can be pinned to
a host with local storage. However, there is still a performance
penalty from the NFS layers. Treating a locally exported NFS storage
as local storage improves performance for the VMs pinned to that host.

Currently, setting up NFS exports is out of scope for oVirt. However,
this would be a way to get rid of the "Local/Shared" datacenter
storage types, so that all storage is shared, but local storage is
accessed as local.
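
As a rough illustration of the idea (not an actual implementation; the helper
below and its names are made up), a host could detect that an NFS export it
mounts is in fact served locally, and only then bypass the NFS client for the
disks of VMs pinned to it:

import socket

def export_is_local(nfs_spec):
    """Return True if the 'server:/path' NFS export is served by this host.

    Illustration only: a real check would need to handle multiple NICs,
    aliases, IPv6 and so on.
    """
    server = nfs_spec.split(':', 1)[0]
    try:
        server_addrs = {info[4][0] for info in socket.getaddrinfo(server, None)}
        local_addrs = {info[4][0]
                       for info in socket.getaddrinfo(socket.gethostname(), None)}
    except socket.gaierror:
        return False
    return bool(server_addrs & local_addrs)

# Example (hypothetical export): VMs pinned to the serving host would use the
# backing path directly, while other hosts keep mounting it over NFS.
print(export_is_local('storage01.example.com:/exports/vmdata'))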

Any questions/comments are welcome.

Specifically, I'd like to request comments on potential data
integrity issues during online VM or disk migration between NFS and
localfs.

Thank you
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] System tests for 4.1 currently failing to run VMs!

2016-12-21 Thread Barak Korren
The test for running VMs has been failing since yesterday.

The patch merged before the failures started was:
https://gerrit.ovirt.org/#/c/68826/

The error we're seeing is a time-out (after two minutes) while running
this API call:

api.vms.get(VM0_NAME).status.state == 'up'

Full test code can be seen here:
https://gerrit.ovirt.org/gitweb?p=ovirt-system-tests.git;a=blob;f=basic-suite-4.1/test-scenarios/004_basic_sanity.py;hb=refs/heads/master#l291

Full test exception can be seen here:
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_4.1/13/testReport/junit/%28root%29/004_basic_sanity/vm_run/

Further logs can be seen in Jenkins:
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_4.1/13/artifact/exported-artifacts/basic_suite_4.1.sh-el7/exported-artifacts/test_logs/basic-suite-4.1/post-004_basic_sanity.py/
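
For reference, the assertion above is just a poll loop around that REST API
call; a minimal sketch of it (the helper name and structure are mine, not the
exact OST code, which is linked above) looks like this:

import time

def wait_for_vm_up(api, vm_name, timeout=120, interval=3):
    # Poll the engine REST API (ovirt-engine-sdk, as used by the suite) until
    # the VM reports 'up', or give up after 'timeout' seconds.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if api.vms.get(vm_name).status.state == 'up':
            return True
        time.sleep(interval)
    raise AssertionError('%s did not reach state "up" within %s seconds'
                         % (vm_name, timeout))

The two-minute timeout mentioned above is what this loop is hitting.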

-- 
Barak Korren
bkor...@redhat.com
RHCE, RHCi, RHV-DevOps Team
https://ifireball.wordpress.com/
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


[ovirt-devel] News from oVirt CI: Introducing 'build-on-demand'

2016-12-21 Thread Eyal Edri
FYI,

Following the last announcement about the manual build-from-patch job [1],
we got some feedback and requests from developers about improving the flow
of building artifacts from a patch.

I'm happy to announce that after some coding, the infra team was able to
add a new feature to the 'standard CI' framework that allows any oVirt
project to build RPMs for any VERSION or OS DISTRO using a single comment
on the patch.

Full details can be found in the new oVirt blog post 'ci please build' [2],
but to give the TL;DR version here: all you have to do is write
'*ci please build*' in a comment and CI will trigger a job for you with
new RPMs (or tarballs).

The projects which already have this feature enabled are:

   - ovirt-engine
   - vdsm
   - vdsm-jsonrpc-java
   - ovirt-engine-dashboard

Adding a new project is a single line of code in the project YAML file, and
it's fully described in the blog post [2], so feel free to add your project
as well.

So let the builds roll...

Happy Xmas!


[1] http://lists.phx.ovirt.org/pipermail/devel/2016-December/028967.html
[2] https://www.ovirt.org/blog/2016/12/ci-please-build/

-- 
Eyal Edri
Associate Manager
RHV DevOps
EMEA ENG Virtualization R&D
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ACTION REQUIRED] [URGENT] ovirt-4.1-snapshot repoclosure is failing due to ovirt-provider-ovn and vdsm

2016-12-21 Thread Eyal Edri
On Wed, Dec 21, 2016 at 10:55 AM, Sandro Bonazzola 
wrote:

>
>
> On Wed, Dec 21, 2016 at 9:51 AM, Dan Kenigsberg  wrote:
>
>> On Wed, Dec 21, 2016 at 10:17 AM, Sandro Bonazzola 
>> wrote:
>> > 00:00:31.874 Num Packages in Repos: 22534
>> > 00:00:31.875 package:
>> > ovirt-provider-ovn-1.0-1.20161219125609.git.el7.centos.noarch from
>> > check-custom-el7
>> > 00:00:31.876   unresolved deps:
>> > 00:00:31.876  python-openvswitch >= 0:2.6
>> > 00:00:31.876  openvswitch-ovn-central >= 0:2.6
>> > 00:00:31.876 package:
>> > ovirt-provider-ovn-driver-1.0-1.20161219125609.git.el7.centos.noarch
>> from
>>
>> It's good we have repoclosure, as it reminded us we cannot ship
>> ovirt-provider-ovn unless we build and ship a version of openvswitch
>> from their master branch, at least until they ship ovs-2.7.
>>
>> Sandro, Marcin: can we do it? Can we supply our own build of
>> openvswitch, like we did for Marcin's blog?
>>
>> > check-custom-el7
>> > 00:00:31.876   unresolved deps:
>> > 00:00:31.876  python-openvswitch >= 0:2.6
>> > 00:00:31.876  openvswitch-ovn-host >= 0:2.6
>> > 00:00:31.877  openvswitch >= 0:2.6
>> > 00:00:31.877 package:
>> > vdsm-gluster-4.18.999-1162.gite95442e.el7.centos.noarch
>> from
>> > check-custom-el7
>> > 00:00:31.877   unresolved deps:
>> > 00:00:31.877  vdsm = 0:4.18.999-1162.gite95442e.el7.centos
>>
>> All of these seem like repoclosure false warnings.
>>
>> After all, vdsm = 0:4.18.999-1162.gite95442e.el7.centos is the exact
>> version of vdsm that is in the repo, right?
>>
>
> I can't see it in
> http://resources.ovirt.org/pub/ovirt-4.1-snapshot/rpm/el7/x86_64/ while I
> see it in http://resources.ovirt.org/pub/ovirt-4.1-snapshot/rpm/el7/ppc64le/,
> so it looks like vdsm is building different versions of the arch packages.
> This shouldn't happen.
> Please check the vdsm builders / publishers. They should deliver the same
> version for both arches or the noarch packages will fail dependencies.
>

FYI,
IIRC VDSM ppc64le isn't deployed to experimental because it fails CI due to
mixing noarch packages built by both ppc64le and x86_64. That will remain the
case until the issue is resolved on the VDSM side (it was resolved by a spec
change and then reverted), or until the ppc64le build-artifacts job builds
only the ppc64le RPMs and not the noarch RPMs.

Another possible option, which is more complex and requires a major change
in the way we use repoman, is to keep more versions back or to stop using
the 'only-missing' option, so that keeping a few versions of VDSM around
would solve it. This doesn't affect snapshot repos AFAIK.
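
To make the mismatch easier to catch, something along these lines could be
run against a repo tree (a rough sketch, not an existing CI script; the
mirror path and directory layout are assumptions):

import os
import re
from collections import defaultdict

# Flag any vdsm* package whose version-release differs between the published
# directories; the noarch subpackages only resolve when they all match.
RPM_RE = re.compile(r'^(?P<name>[a-z0-9._+-]+?)-(?P<vr>[^-]+-[^-]+)\.(?P<arch>[^.]+)\.rpm$')

def collect_versions(repo_root, subdirs=('x86_64', 'ppc64le', 'noarch')):
    versions = defaultdict(dict)     # {package name: {subdir: version-release}}
    for subdir in subdirs:
        path = os.path.join(repo_root, subdir)
        if not os.path.isdir(path):
            continue
        for rpm in os.listdir(path):
            match = RPM_RE.match(rpm)
            if match and match.group('name').startswith('vdsm'):
                versions[match.group('name')][subdir] = match.group('vr')
    return versions

def report_mismatches(repo_root):
    for name, per_dir in sorted(collect_versions(repo_root).items()):
        if len(set(per_dir.values())) > 1:
            print('MISMATCH %s: %s' % (name, per_dir))

report_mismatches('/srv/mirror/ovirt-4.1-snapshot/rpm/el7')  # hypothetical mirror path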


>
>
>
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>



-- 
Eyal Edri
Associate Manager
RHV DevOps
EMEA ENG Virtualization R&D
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ACTION REQUIRED] [URGENT] ovirt-4.1-snapshot repoclosure is failing due to ovirt-provider-ovn and vdsm

2016-12-21 Thread Sandro Bonazzola
On Wed, Dec 21, 2016 at 9:51 AM, Dan Kenigsberg  wrote:

> On Wed, Dec 21, 2016 at 10:17 AM, Sandro Bonazzola 
> wrote:
> > 00:00:31.874 Num Packages in Repos: 22534
> > 00:00:31.875 package:
> > ovirt-provider-ovn-1.0-1.20161219125609.git.el7.centos.noarch from
> > check-custom-el7
> > 00:00:31.876   unresolved deps:
> > 00:00:31.876  python-openvswitch >= 0:2.6
> > 00:00:31.876  openvswitch-ovn-central >= 0:2.6
> > 00:00:31.876 package:
> > ovirt-provider-ovn-driver-1.0-1.20161219125609.git.el7.centos.noarch
> from
>
> It's good we have repoclosure, as it reminded us we cannot ship
> ovirt-provider-ovn unless we build and ship a version of openvswitch
> from their master branch, at least until they ship ovs-2.7.
>
> Sandro, Marcin: can we do it? Can we supply our own build of
> openvswitch, like we did for Marcin's blog?
>
> > check-custom-el7
> > 00:00:31.876   unresolved deps:
> > 00:00:31.876  python-openvswitch >= 0:2.6
> > 00:00:31.876  openvswitch-ovn-host >= 0:2.6
> > 00:00:31.877  openvswitch >= 0:2.6
> > 00:00:31.877 package:
> > vdsm-gluster-4.18.999-1162.gite95442e.el7.centos.noarch
> from
> > check-custom-el7
> > 00:00:31.877   unresolved deps:
> > 00:00:31.877  vdsm = 0:4.18.999-1162.gite95442e.el7.centos
>
> All of these seem like repoclosure false warnings.
>
> After all, vdsm = 0:4.18.999-1162.gite95442e.el7.centos is the exact
> version of vdsm that is in the repo, right?
>

I can't see it in
http://resources.ovirt.org/pub/ovirt-4.1-snapshot/rpm/el7/x86_64/ while I
see it in http://resources.ovirt.org/pub/ovirt-4.1-snapshot/rpm/el7/ppc64le/,
so it looks like vdsm is building different versions of the arch packages.
This shouldn't happen.
Please check the vdsm builders / publishers. They should deliver the same
version for both arches or the noarch packages will fail dependencies.



-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ACTION REQUIRED] [URGENT] ovirt-4.1-snapshot repoclosure is failing due to ovirt-provider-ovn and vdsm

2016-12-21 Thread Dan Kenigsberg
On Wed, Dec 21, 2016 at 10:17 AM, Sandro Bonazzola  wrote:
> 00:00:31.874 Num Packages in Repos: 22534
> 00:00:31.875 package:
> ovirt-provider-ovn-1.0-1.20161219125609.git.el7.centos.noarch from
> check-custom-el7
> 00:00:31.876   unresolved deps:
> 00:00:31.876  python-openvswitch >= 0:2.6
> 00:00:31.876  openvswitch-ovn-central >= 0:2.6
> 00:00:31.876 package:
> ovirt-provider-ovn-driver-1.0-1.20161219125609.git.el7.centos.noarch from

It's good we have repoclosure, as it reminded us we cannot ship
ovirt-provider-ovn unless we build and ship a version of openvswitch
from their master branch, at least until they ship ovs-2.7.

Sandro, Marcin: can we do it? Can we supply our own build of
openvswitch, like we did for Marcin's blog?

> check-custom-el7
> 00:00:31.876   unresolved deps:
> 00:00:31.876  python-openvswitch >= 0:2.6
> 00:00:31.876  openvswitch-ovn-host >= 0:2.6
> 00:00:31.877  openvswitch >= 0:2.6
> 00:00:31.877 package:
> vdsm-gluster-4.18.999-1162.gite95442e.el7.centos.noarch from
> check-custom-el7
> 00:00:31.877   unresolved deps:
> 00:00:31.877  vdsm = 0:4.18.999-1162.gite95442e.el7.centos

All of these seem like repoclosure false warnings.

After all, vdsm = 0:4.18.999-1162.gite95442e.el7.centos is the exact
version of vdsm that is in the repo, right?
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


[ovirt-devel] [ACTION REQUIRED] [URGENT] ovirt-4.1-snapshot repoclosure is failing due to ovirt-provider-ovn and vdsm

2016-12-21 Thread Sandro Bonazzola
00:00:31.874 Num Packages in Repos: 22534
00:00:31.875 package: ovirt-provider-ovn-1.0-1.20161219125609.git.el7.centos.noarch from check-custom-el7
00:00:31.876   unresolved deps:
00:00:31.876      python-openvswitch >= 0:2.6
00:00:31.876      openvswitch-ovn-central >= 0:2.6
00:00:31.876 package: ovirt-provider-ovn-driver-1.0-1.20161219125609.git.el7.centos.noarch from check-custom-el7
00:00:31.876   unresolved deps:
00:00:31.876      python-openvswitch >= 0:2.6
00:00:31.876      openvswitch-ovn-host >= 0:2.6
00:00:31.877      openvswitch >= 0:2.6
00:00:31.877 package: vdsm-gluster-4.18.999-1162.gite95442e.el7.centos.noarch from check-custom-el7
00:00:31.877   unresolved deps:
00:00:31.877      vdsm = 0:4.18.999-1162.gite95442e.el7.centos
00:00:31.877 package: vdsm-hook-ethtool-options-4.18.999-1162.gite95442e.el7.centos.noarch from check-custom-el7
00:00:31.877   unresolved deps:
00:00:31.878      vdsm = 0:4.18.999-1162.gite95442e.el7.centos
00:00:31.878 package: vdsm-hook-extnet-4.18.999-1162.gite95442e.el7.centos.noarch from check-custom-el7
00:00:31.878   unresolved deps:
00:00:31.878      vdsm = 0:4.18.999-1162.gite95442e.el7.centos
00:00:31.878 package: vdsm-hook-fcoe-4.18.999-1162.gite95442e.el7.centos.noarch from check-custom-el7
00:00:31.879   unresolved deps:
00:00:31.879      vdsm = 0:4.18.999-1162.gite95442e.el7.centos
00:00:31.879 package: vdsm-hook-ovs-4.18.999-1162.gite95442e.el7.centos.noarch from check-custom-el7
00:00:31.879   unresolved deps:
00:00:31.879      vdsm = 0:4.18.999-1162.gite95442e.el7.centos
00:00:31.879 package: vdsm-hook-vmfex-dev-4.18.999-1162.gite95442e.el7.centos.noarch from check-custom-el7
00:00:31.880   unresolved deps:
00:00:31.880      vdsm = 0:4.18.999-1162.gite95442e.el7.centos
00:00:31.880 package: vdsm-tests-4.18.999-1162.gite95442e.el7.centos.noarch from check-custom-el7
00:00:31.880   unresolved deps:
00:00:31.880      vdsm = 0:4.18.999-1162.gite95442e.el7.centos


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel