[ovirt-devel] Proposal: moving to building oVirt RPMs via containers

2021-12-06 Thread Yaniv Kaul
With the move to GitHub, I think we have an opportunity to move to building
oVirt packages via containers.

You may want to look at https://github.com/gluster/Gluster-Builds -
building Gluster RPMs by running a container (the 'build container'), which
has the required dependencies and is updated once a month, or when there's
a new dependency.
It fits nicely with GitHub Actions or any other system that uses
containers, from your laptop to K8s.

What I personally really like about it is the fact that the Dockerfile
becomes the self-describing build process.
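As a rough illustration of the idea (not the actual Gluster-Builds layout; the base
image, package names and make target below are only examples), the build container
and its use could look something like this:

  # Containerfile.build - the self-describing build environment
  FROM quay.io/centos/centos:stream8
  RUN dnf install -y rpm-build rpmdevtools git make dnf-plugins-core
  COPY *.spec /tmp/
  RUN dnf builddep -y /tmp/*.spec

  # Rebuild the build container once a month, or when a dependency changes:
  #   podman build -t ovirt-build -f Containerfile.build .
  # Build the RPMs from any checkout - laptop, GitHub Actions or K8s:
  #   podman run --rm -v $PWD:/src:Z -w /src ovirt-build make rpm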

Thoughts?
Y.


[ovirt-devel] Re: Status update on "Hyperconverged Gluster oVirt support"

2018-09-29 Thread Yaniv Kaul
On Sat, Sep 29, 2018, 5:03 PM Hetz Ben Hamo  wrote:

> /dev/disk/by-id could be problematic: it only shows disks that have been
> formatted.
>
> For example, I've just created a node with 3 disks, and in Anaconda I chose
> only the first disk. After the node installation and reboot, I see in
> /dev/disk/by-id only the DM and the DVD, not the two unformatted disks
> (which can be seen using the lsscsi command).
> Anaconda, however, does see the disks, their details, etc.
>

That's not what I know. It might be something with udev or some filtering, but
I was certainly not aware it's related to formatting.
Y.
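If you want to check whether it's udev link creation rather than formatting,
something along these lines should tell (sdb is just an example device):

  # which by-id symlinks udev would create for the disk, if any
  udevadm info --query=symlink --name=/dev/sdb
  ls -l /dev/disk/by-id/
  # re-run the block-device rules in case the links were simply not created
  udevadm trigger --subsystem-match=block && udevadm settle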


>
> On Sat, Sep 29, 2018 at 12:57 PM Yaniv Kaul  wrote:
>
>>
>>
>> On Fri, Sep 28, 2018, 7:16 PM Hetz Ben Hamo  wrote:
>>
>>> Hi,
>>>
>>> Gobinda, great work!
>>>
>>> One thing though - the device names (sda, sdb etc..)
>>>
>>> On many servers, it's hard to know which disk is which. Let's say I have
>>> 10 spinning disks + 2 SSDs. Which is sda? What about NVMe? Worse,
>>> sometimes replacing disks changes sda to something else. We used to
>>> have the same problem with NICs, and that has been resolved on
>>> CentOS/RHEL 7.x.
>>>
>>> Could the HCI part - the disk selection part specifically - give more
>>> details? maybe Disk ID or WWN, or anything that can identify a disk?
>>>
>>
>> /dev/disk/by-id is the right identifier.
>> During installation, it'd be nice if it could show as much data as
>> possible - sdX, /dev/disk/by-id, size and perhaps manufacturer.
>> Y.
>>
>>
>>> Also - SSD caching, most of the time it is recommended to use 2 drives
>>> if possible for good performance. Can a user select X number of drives?
>>>
>>> Thanks
>>>
>>>
>>> On Fri, Sep 28, 2018 at 6:43 PM Gobinda Das  wrote:
>>>
>>>> Hi All,
>>>>  Status update on "Hyperconverged Gluster oVirt support"
>>>>
>>>> Features Completed:
>>>> 
>>>>
>>>>   cockpit-ovirt
>>>>   -
>>>>   1- Asymmetric brick configuration. Bricks can be configured on a per-host
>>>> basis, i.e. the user can make use of sdb from host1, sdc from
>>>> host2, and sdd from host3.
>>>>   2- Dedupe and Compression integration via VDO support (see
>>>> https://github.com/dm-vdo/kvdo). Gluster bricks are created on vdo
>>>> devices
>>>>   3- LVM cache configuration support (configure a cache using a fast
>>>> block device such as an SSD to improve the performance of larger and
>>>> slower logical volumes)
>>>>   4- Auto addition of 2nd and 3rd hosts in a 3 node setup during
>>>> deployment
>>>>   5- Auto creation of storage domains based on gluster volumes created
>>>> during setup
>>>>   6- Single node deployment support via Cockpit UI. For details on
>>>> single node deployment -
>>>> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Single_node_hyperconverged/
>>>>   7- Gluster Management Dashboard (the dashboard shows the nodes in the
>>>> cluster, volumes and bricks. The user can expand the cluster and also create
>>>> new volumes on existing cluster nodes)
>>>>
>>>>   oVirt
>>>>   ---
>>>>   1- Reset brick support from UI to allow users to replace a faulty
>>>> brick
>>>>   2- Create brick from engine now supports configuring an SSD device as
>>>> lvmcache device when bricks are created on spinning disks
>>>>   3- VDO monitoring
>>>>
>>>>  GlusterFS
>>>> ---
>>>>  Enhancements to fuse performance by 15x:
>>>>  1. Cluster after eager lock change for better detection of multiple
>>>> clients
>>>>  2. Changing qemu option aio to "native" instead of "threads".
>>>>
>>>>  end-to-end deployment:
>>>>  
>>>>  1- End-to-end deployment of a Gluster + oVirt hyperconverged
>>>> environment using ansible roles (
>>>> https://github.com/gluster/gluster-ansible/tree/master/playbooks ).
>>>> The only prerequisite is a CentOS node/oVirt Node
>>>>
>>>> Future Plan:
>>>> ==
>>>>  cockpit-ovirt:
>>>>
>>>>   1- ansible-roles integration for deployment
>>>>   2- Support for different volume types

[ovirt-devel] Re: Status update on "Hyperconverged Gluster oVirt support"

2018-09-29 Thread Yaniv Kaul
On Fri, Sep 28, 2018, 7:16 PM Hetz Ben Hamo  wrote:

> Hi,
>
> Gobinda, great work!
>
> One thing though - the device names (sda, sdb etc..)
>
> On many servers, it's hard to know which disk is which. Let's say I have 10
> spinning disks + 2 SSDs. Which is sda? What about NVMe? Worse, sometimes
> replacing disks changes sda to something else. We used to have the
> same problem with NICs, and that has been resolved on CentOS/RHEL 7.x.
>
> Could the HCI part - the disk selection part specifically - give more
> details? maybe Disk ID or WWN, or anything that can identify a disk?
>

/dev/disk/by-id is the right identifier.
During installation, it'd be nice if it could show as much data as possible
- sdX, /dev/disk/by-id, size and perhaps manufacturer.
Y.
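Something along these lines would already gather most of that data per disk
(standard util-linux tooling; the columns shown are just examples):

  # stable identifiers for each disk
  ls -l /dev/disk/by-id/
  # kernel name, size, vendor/model, serial and WWN in one view
  lsblk -d -o NAME,SIZE,VENDOR,MODEL,SERIAL,WWN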


> Also - SSD caching, most of the time it is recommended to use 2 drives if
> possible for good performance. Can a user select X number of drives?
>
> Thanks
>
>
> On Fri, Sep 28, 2018 at 6:43 PM Gobinda Das  wrote:
>
>> Hi All,
>>  Status update on "Hyperconverged Gluster oVirt support"
>>
>> Features Completed:
>> 
>>
>>   cockpit-ovirt
>>   -
>>   1- Asymmetric brick configuration. Bricks can be configured on a per-host
>> basis, i.e. the user can make use of sdb from host1, sdc from
>> host2, and sdd from host3.
>>   2- Dedupe and Compression integration via VDO support (see
>> https://github.com/dm-vdo/kvdo). Gluster bricks are created on vdo
>> devices
>>   3- LVM cache configuration support (configure a cache using a fast block
>> device such as an SSD to improve the performance of larger and slower
>> logical volumes)
>>   4- Auto addition of 2nd and 3rd hosts in a 3 node setup during
>> deployment
>>   5- Auto creation of storage domains based on gluster volumes created
>> during setup
>>   6- Single node deployment support via Cockpit UI. For details on single
>> node deployment -
>> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Single_node_hyperconverged/
>>   7- Gluster Management Dashboard (the dashboard shows the nodes in the
>> cluster, volumes and bricks. The user can expand the cluster and also create
>> new volumes on existing cluster nodes)
>>
>>   oVirt
>>   ---
>>   1- Reset brick support from UI to allow users to replace a faulty brick
>>   2- Create brick from engine now supports configuring an SSD device as
>> lvmcache device when bricks are created on spinning disks
>>   3- VDO monitoring
>>
>>  GlusterFS
>> ---
>>  Enhancements to fuse performance by 15x:
>>  1. Cluster after eager lock change for better detection of multiple
>> clients
>>  2. Changing qemu option aio to "native" instead of "threads".
>>
>>  end-to-end deployment:
>>  
>>  1- End-to-end deployment of a Gluster + oVirt hyperconverged environment
>> using ansible roles (
>> https://github.com/gluster/gluster-ansible/tree/master/playbooks ). The
>> only prerequisite is a CentOS node/oVirt Node
>>
>> Future Plan:
>> ==
>>  cockpit-ovirt:
>>
>>   1- ansible-roles integration for deployment
>>   2- Support for different volume types
>>
>>  vdsm:
>>   1- Python3 compatibility of vdsm-gluster
>>   2- Native 4K support
>>
>> --
>> Thanks,
>> Gobinda


[ovirt-devel] Re: guest agent info about disks and filesystems

2018-08-02 Thread Yaniv Kaul
On Thu, Aug 2, 2018 at 6:39 PM, Scott Dickerson  wrote:

> I have a pair of questions I could use some help with.
>
> 1. Does the oVirt guest agent collect filesystem level information?  I'm
> hoping something like what `df -h` would report.  File systems, free vs
> used space, etc.  Bonus points if we can get that kind of info from linux
> and windows agents.
>

It did in the past, I hope we've removed it!


>
> 2. Assuming that info is available, where can I access it?  I assume it
> isn't surfaced via the REST api (yet), but it would be good to know where
> it is available.
>

Somewhere in DWH.


>
>
> Why? I'm updating VM Portal (web-ui), and it would be great to
> chart/report guest derived data rather than infrastructure disk provisioned
> vs allocated space.
>

In theory. In practice, it was never accurate (LVM layers, not to mention
block, etc.)
Y.
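For what it's worth, the qemu guest agent (as opposed to the old oVirt guest
agent) can report per-filesystem data; assuming qemu-guest-agent is running in
the guest, something like this on the host shows what it exposes (the VM name is
an example, and usage bytes require a reasonably recent agent):

  virsh -c qemu:///system qemu-agent-command my-vm \
      '{"execute":"guest-get-fsinfo"}' --pretty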


>
> Thanks!
>
> --
> Scott Dickerson
> Senior Software Engineer
> RHV-M Engineering - UX Team
> Red Hat, Inc
>


[ovirt-devel] Re: ovirt-system-tests_hc-basic-suite failing due to host not in cluster and incorrect content served by the engine to SDK.

2018-07-08 Thread Yaniv Kaul
On Fri, Jul 6, 2018 at 1:01 PM, Sandro Bonazzola 
wrote:

> https://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-4.2/326
>
> fails on add host test with:
>
> Error: The response content type 'text/html; charset=iso-8859-1' isn't the 
> expected XML
>
>
> Something bad happened during the deployment because the engine complains
> about a host not included in the cluster:
>
> 2018-07-05 21:34:47,768-04 WARN  
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] 
> (DefaultQuartzScheduler6) [3009952a] Could not add brick 
> 'lago-hc-basic-suite-4-2-host1:/rhs/brick1/engine' to volume 
> 'c1146520-3bf7-4b81-b31a-7cc5475b6438' - server uuid 
> '50e37ed8-86f3-4b50-9258-f516169025ea' not found in cluster 
> '3125aa60-80bb-11e8-a143-00163e24d363'
>
>
In [2] we can see:
2018-07-05 22:03:42,975-0400 ERROR (monitor/f6c4ab4) [storage.Monitor]
Error checking domain f6c4ab4a-005d-4ab7-acda-03810014c841 (monitor:424)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
405, in _checkDomainStatus
self.domain.selftest()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 48, in
__getattr__
return getattr(self.getRealDomain(), attrName)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 51, in
getRealDomain
return self._cache._realProduce(self._sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 134, in
_realProduce
domain = self._findDomain(sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 151, in
_findDomain
return findMethod(sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/glusterSD.py", line
55, in findDomain
return GlusterStorageDomain(GlusterStorageDomain.findDomainPath(sdUUID))
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 391,
in __init__
validateFileSystemFeatures(manifest.sdUUID, manifest.mountpoint)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 104,
in validateFileSystemFeatures
oop.getProcessPool(sdUUID).directTouch(testFilePath)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/outOfProcess.py",
line 320, in directTouch
ioproc.touch(path, flags, mode)
  File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 567,
in touch
self.timeout)
  File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 451,
in _sendCommand
raise OSError(errcode, errstr)
OSError: [Errno 30] Read-only file system

And just before that:

2018-07-05 22:03:33,214-0400 INFO  (libvirt/events) [virt.vm]
(vmId='a2f514e6-81ca-4d41-acf9-77cc910f6eaf') abnormal vm stop device
ua-c0592bd6-20e6-4dbf-9610-9a35e3f566ab error eother (vm:5116)
2018-07-05 22:03:33,214-0400 INFO  (libvirt/events) [virt.vm]
(vmId='a2f514e6-81ca-4d41-acf9-77cc910f6eaf') CPU stopped: onIOError
(vm:6157)
2018-07-05 22:03:33,222-0400 INFO  (libvirt/events) [virt.vm]
(vmId='a2f514e6-81ca-4d41-acf9-77cc910f6eaf') CPU stopped: onSuspend
(vm:6157)
2018-07-05 22:03:33,225-0400 WARN  (libvirt/events) [virt.vm]
(vmId='a2f514e6-81ca-4d41-acf9-77cc910f6eaf') device vda reported I/O
error (vm:4065)


And indeed, at [3]:

[2018-07-05 22:04:38,936] WARNING [utils - 298:publish_to_webhook] -
Event push failed to URL:
http://hc-engine:80/ovirt-engine/services/glusterevents, Event:
{"event": "QUORUM_LOST", "message": {"volume": "vmstore"}, "nodeid":
"59bf7956-60a4-4152-9cf9-99fcdccb211f", "ts": 1530842614}, Status:
('Connection aborted.', error(113, 'No route to host'))


And we can also see https://bugzilla.redhat.com/show_bug.cgi?id=1595436
there as well.
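For this kind of failure, a quick way to confirm the quorum loss and the
resulting read-only mount on the affected host would be something like the
following (the volume name is taken from the event above):

  gluster volume status vmstore
  gluster volume heal vmstore info
  # the storage domain mount should show up with the 'ro' flag here
  grep vmstore /proc/mounts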


Sahina, Gobinda, can you please investigate?
>
> Ondra, no idea why the engine is returning text/html instead of xml here,
> can you please check?
>

Because of the exception[1].
Y.

[1]
https://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-4.2/326/artifact/exported-artifacts/test_logs/hc-basic-suite-4.2/post-002_bootstrap.py/lago-hc-basic-suite-4-2-engine/_var_log/ovirt-engine/server.log
[2]
https://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-4.2/326/artifact/exported-artifacts/test_logs/hc-basic-suite-4.2/post-002_bootstrap.py/lago-hc-basic-suite-4-2-host0/_var_log/vdsm/vdsm.log
[3]
https://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-4.2/326/artifact/exported-artifacts/test_logs/hc-basic-suite-4.2/post-002_bootstrap.py/lago-hc-basic-suite-4-2-host0/_var_log/glusterfs/events.log


>
>
> --
>
> SANDRO BONAZZOLA
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> 
>

[ovirt-devel] Re: [ovirt-users] Re: Re: oVirt HCI point-to-point interconnection

2018-06-29 Thread Yaniv Kaul
On Fri, Jun 29, 2018 at 12:24 PM, Stefano Zappa 
wrote:

> Hi Alex,
> I have evaluated this approach using the latest version of oVirt Node.
>
> In principle I would like to follow the guidelines of oVirt development
> team, and adopt its best practices, trying to limit excessive
> customizations.
>
> So I installed oVirt Node on 3 hosts and manually configured the two
> additional interfaces for the point-to-point interconnections, on which I
> then configured Gluster.
> This was done very easily, and it works!
>

Sounds like an interesting item for ovirt.org blog!
Y.


>
> I know that SDN is supported by oVirt, even if by default it bridges
> to the outside.
> I have no experience with SDN, but I realized that VxLAN or maybe
> Geneve was the way to go, and now I also have your confirmation:
> create Geneve point-to-point tunnels between the nodes over L3, configure
> on top of them an L2 overlay network shared by the oVirt pool hosts, and
> then move all internal oVirt communications onto it.
>
> On first analysis this change seems structural and not trivial. I think
> it would be better to evaluate it carefully and, if interesting, write a
> best practice or, better, integrate the functionality into the solution.
> In the end the system must be stable and, above all, upgradable with
> traditional procedures.
>
> Stefano.
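For reference, one of those Geneve point-to-point links plus the shared L2
overlay could be sketched roughly like this on each node (interface names, VNI
and peer address are purely illustrative):

  # node1 side; node2 is reachable at 10.10.12.2 on the direct link
  ip link add gnv-n2 type geneve id 12 remote 10.10.12.2
  ip link add br-ovl type bridge
  ip link set gnv-n2 master br-ovl
  ip link set gnv-n2 up
  ip link set br-ovl up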
>
>
>
>
> Stefano Zappa
> IT Specialist CAE - TT-3
> Technical Department
>
> Industrie Saleri Italo S.p.A.
> Phone:+39 0308250480
> Fax:+39 0308250466
>
> 
> From: Alex K
> Sent: Thursday, June 28, 2018 21:34
> To: Stefano Zappa
> Cc: Yaniv Kaul; devel; us...@ovirt.org
> Subject: Re: [ovirt-users] Re: [ovirt-devel] Re: oVirt HCI point-to-point
> interconnection
>
> Hi,
>
> Network virtualization is already here and widely used. I am still not
> sure what is the gain of this approach. You can do the same with VxLANs or
> Geneve. And how do you scale this? Is it only for 3 node clusters?
>
> Alex
>
> On Thu, Jun 28, 2018, 11:47 Stefano Zappa <stefano.za...@saleri.it> wrote:
> Hi Yaniv,
> no, the final purpose is not to save a couple of ports.
>
> The benefit is the trend towards convergence and consolidation of the
> networking, as already done for the storage.
> The external interconnection is a critical element of the HCI solution; a
> configuration without switches would give more robustness and more
> independence.
>
> If we think about HCI as the transition of IT infrastructure from
> hardware-defined to software-defined, the first step was virtualization of
> the servers, the second step was storage virtualization, the third step
> will be virtualization of networking.
>
> Of course this also has an economic benefit, reduce the total cost of
> ownership and traditional data-center inefficiencies.
>
> We assume a very traditional and simple external uplink; we can use the
> interfaces integrated into the motherboard.
> This will not be critical for the integrity of our HCI solution: it does
> not require high bandwidth or low latency, and it will be used
> exclusively to interconnect the solution with the outside world, via
> a traditional Ethernet switch.
>
> We plan to give more attention to the point-to-point interconnection of
> the three nodes, using 100 or 200 GbE cards for direct interconnection
> without switches.
> The hypothetical purchase of these 3 cards will not be cheap, sure, but
> it will cost twenty times less than introducing expensive switches that will
> certainly have more ports than necessary.
>
> Thanks for your attention,
> Stefano.
>
>
>
>
> Stefano Zappa
> IT Specialist CAE - TT-3

[ovirt-devel] Re: [ovirt-users] oVirt HCI point-to-point interconnection

2018-06-27 Thread Yaniv Kaul
On Wed, Jun 27, 2018 at 11:26 AM, Stefano Zappa 
wrote:

> The final purpose is strictly targeted to your HCI solution with 3-node
> gluster replication.
>

Can you explain to me what the benefit is? You need a switch anyway (for
uplink), so the 'cost saving' is that you can use a 4 port (?) switch and
do not need 6 ports?
Y.


>
>
>
>
> Stefano Zappa
> IT Specialist CAE - TT-3
> Technical Department
>
> Industrie Saleri Italo S.p.A.
> Phone:+39 0308250480
> Fax:+39 0308250466
>
> 
> From: Sahina Bose
> Sent: Wednesday, June 27, 2018 10:15
> To: Sandro Bonazzola
> Cc: Stefano Zappa; Simone Tiraboschi; us...@ovirt.org; devel@ovirt.org
> Subject: Re: [ovirt-users] oVirt HCI point-to-point interconnection
>
> The point to point interconnect is something we have not explored - I
> think this limits the solution from scaling out to more nodes.
>
> On Wed, Jun 27, 2018 at 1:22 PM, Sandro Bonazzola wrote:
> Simone, Sahina, can you please have a look?
>
> 2018-06-07 9:59 GMT+02:00 Stefano Zappa <stefano.za...@saleri.it>:
>
> Good morning,
> I would like to kindly ask you a question about the feasibility of
> defining a point-to-point interconnection between three ovirt nodes.
>
> Initially the idea is to optimize the direct communications between
> the nodes, especially the gluster traffic, which seems
> quite easy; then to evaluate a more complex configuration that
> creates an overlay L2 network on top of the three L3 point-to-point links,
> using techniques like Geneve, of which at the moment I have no mastery.
>
> If the direct routing of three nodes to interconnect the public network
> with the private overlay network was not easily doable, we could leave the
> private overlay network isolated from the outside world and connect the VM
> hosted engine directly to the two networks with two adapters.
>
> Could this layout, with direct interconnection of the nodes without switches
> and a shared L2 overlay network between the nodes, be contemplated
> in future releases of your HCI solution?
>
> Thank you for your attention, have a nice day!
>
> Stefano Zappa.
>
>
>
>
>
>
> Stefano Zappa
>
> IT Specialist CAE - TT-3
>
>
>
> Industrie Saleri Italo S.p.A.
>
> Phone:  +39 0308250480
> Fax:+39 0308250466
>
>
>

[ovirt-devel] Re: Can't install latest master (OST) - due to deps failure

2018-06-21 Thread Yaniv Kaul
Ignore - seems to be fixed with https://gerrit.ovirt.org/#/c/92405/

2018-06-21 11:47 GMT+03:00 Yaniv Kaul :

>
> Error: Package: 
> ovirt-host-deploy-java-1.8.0-0.0.master.20180620111438.git925eabd.el7.noarch
> (alocalsync)
>Requires: ovirt-host-deploy = 1.8.0-0.0.master.
> 20180620111438.git925eabd.el7
>Available: 
> ovirt-host-deploy-1.8.0-0.0.master.20180531090832.git9811a30.el7.noarch
> (alocalsync)
>ovirt-host-deploy = 1.8.0-0.0.master.
> 20180531090832.git9811a30.el7
>
>


[ovirt-devel] Can't install latest master (OST) - due to deps failure

2018-06-21 Thread Yaniv Kaul
Error: Package:
ovirt-host-deploy-java-1.8.0-0.0.master.20180620111438.git925eabd.el7.noarch
(alocalsync)
   Requires: ovirt-host-deploy =
1.8.0-0.0.master.20180620111438.git925eabd.el7
   Available:
ovirt-host-deploy-1.8.0-0.0.master.20180531090832.git9811a30.el7.noarch
(alocalsync)
   ovirt-host-deploy =
1.8.0-0.0.master.20180531090832.git9811a30.el7


[ovirt-devel] Re: ovirt-engine travis CI

2018-05-16 Thread Yaniv Kaul
On Wed, May 16, 2018 at 5:12 PM, Sandro Bonazzola <sbona...@redhat.com>
wrote:

>
>
> 2018-05-16 15:56 GMT+02:00 Yaniv Kaul <yk...@redhat.com>:
>
>>
>>
>> On Wed, May 16, 2018 at 4:04 PM, Barak Korren <bkor...@redhat.com> wrote:
>>
>>>
>>>
>>> On 16 May 2018 at 11:03, Sandro Bonazzola <sbona...@redhat.com> wrote:
>>>
>>>> Hi,
>>>> today I pushed a fix for Travis CI on oVirt Engine. While discussing
>>>> the fix, it was asked why we need Travis CI in the first place; isn't our
>>>> Jenkins testing enough?
>>>>
>>>>
>>> I'm wondering that as well...
>>>
>>
>> What does it do that we don't do on our CI?
>>
>
> commit 0734e8cb9c7d1411b613f1262e81c6845b18f5d9
> Author: Roman Mohr <rm...@redhat.com>
> Date:   Mon Jul 18 12:08:59 2016 +0200
>
> Sonarqube moved to https://sonarqube.com
>
> Change-Id: I1ce975222845bed6836464bbf73732bbcb9eff1d
> Signed-off-by: Roman Mohr <rm...@redhat.com>
>
> commit c703d194a4ed84541fb16b0aec73f1597ceaa05f
> Author: Roman Mohr <rm...@redhat.com>
> Date:   Fri Jun 3 08:28:56 2016 +0200
>
> Print a keep alive message every minute on travis builds
>
> For one hour print every minute a keep alive message to stdout to keep
> the travis job alive.
>
> Sonar analysis takes more than 10 minutes. In this time period no
> output
> is produced. Travis assumes that builds are somehow broken if there was
> no output for more than 10 minutes.
>
> Change-Id: Ia2d048e42ee410eb72a35f2793935757308c7765
> Signed-off-by: Roman Mohr <rm...@redhat.com>
>
> commit fe4743ce5ffadb663d3fda58d09e9906b35b91ee
> Author: Roman Mohr <rm...@redhat.com>
> Date:   Wed May 25 17:05:04 2016 +0200
>
> Limit sonar analysis to master branch
>
> Sonar by default does not differentiate between different branches.
> Limit the analysis to master to get untainted reports.
>
> Change-Id: Icab31f0aaf98adc88214ed70d7276f13eac188ee
> Signed-off-by: Roman Mohr <rm...@redhat.com>
>
> According to commit 768154debecccbc93524cd4e6c5092047d2a4eab by Roman
> Mohr on Sat May 7 05:39:18 2016 +0200,
> it does Sonar code analysis on Travis and then uploads the results to
> sonarqube.org.
> It generates this report:
> https://sonarcloud.io/dashboard?id=org.ovirt.engine%3Aroot
> which was last updated on November 21, 2017, probably due to the
> switch to PostgreSQL 9.5.
> I've no rights on that report; maybe Roman can grant some more privileges
> so I can try to get it back to work if needed.
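The keep-alive mentioned in the commit above is typically just a background loop
that keeps printing while the otherwise silent Sonar step runs, so Travis does
not abort the job; a rough shell sketch of that pattern, not the exact script
used here (the mvn goal is only illustrative):

  # keep Travis from killing the quiet step: print once a minute, up to an hour
  ( for i in $(seq 60); do sleep 60; echo "keep-alive: minute $i"; done ) &
  keepalive_pid=$!
  mvn sonar:sonar   # the long, otherwise silent step
  kill "$keepalive_pid" 2>/dev/null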
>
>
>
>> What was broken there?
>>
>
> It's failing on DAO tests.
> It was failing to initialize the database because it was still using
> PostgreSQL 9.2 instead of 9.5.
> Now it's failing on the missing uuid_generate_v1() function.
>
>
>
>> Can it replace some of what we do in our CI (and does it have value, as
>> it is a mirror of published code already) ?
>>
>
> As far as I can tell the only gain is the sonarqube analysis of the code.
> I'm not sure if we can integrate this with Jenkins.
>

Who's looking at the analysis results?
Y.


>
>
>
>
>
>> Y.
>>
>>
>>>
>>>
>>>
>>>
>>> --
>>> Barak Korren
>>> RHV DevOps team , RHCE, RHCi
>>> Red Hat EMEA
>>> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>>>
>>>
>>>
>>
>
>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>
> Red Hat EMEA <https://www.redhat.com/>
>
> sbona...@redhat.com
> <https://red.ht/sig>
> <https://redhat.com/summit>
>


[ovirt-devel] Re: ovirt-engine travis CI

2018-05-16 Thread Yaniv Kaul
On Wed, May 16, 2018 at 4:04 PM, Barak Korren  wrote:

>
>
> On 16 May 2018 at 11:03, Sandro Bonazzola  wrote:
>
>> Hi,
>> today I pushed a fix for Travis CI on oVirt Engine. While discussing the
>> fix, it was asked why we need Travis CI in the first place; isn't our
>> Jenkins testing enough?
>>
>>
> I'm wondering that as well...
>

What does it do that we don't do on our CI? What was broken there?
Can it replace some of what we do in our CI (and does it have value, as it
is a mirror of published code already) ?
Y.


>
>
>
>
> --
> Barak Korren
> RHV DevOps team , RHCE, RHCi
> Red Hat EMEA
> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>
>
>


Re: [ovirt-devel] [ OST Failure Report ] [ oVirt 4.2 ] [ 2018-04-04 ] [006_migrations.prepare_migration_attachments_ipv6]

2018-04-24 Thread Yaniv Kaul
On Tue, Apr 24, 2018 at 2:00 PM, Dan Kenigsberg  wrote:

> Ravi's patch is in, but a similar problem remains, and the test cannot
> be put back into its place.
>
> It seems that while Vdsm was taken down, a couple of getCapsAsync
>

- Why is VDSM down?
Y.


> requests queued up. At one point, the host resumed its connection
> before the requests had been cleared from the queue. After the host is
> up, the following tests resume, and at a pseudorandom point in time,
> an old getCapsAsync request times out and kills our connection.
>
> I believe that as long as ANY request is in flight, the monitoring
> lock should not be released, and the host should not be declared as
> up.
>
>
> On Wed, Apr 11, 2018 at 1:04 AM, Ravi Shankar Nori 
> wrote:
> > This [1] should fix the multiple release lock issue
> >
> > [1] https://gerrit.ovirt.org/#/c/90077/
> >
> > On Tue, Apr 10, 2018 at 3:53 PM, Ravi Shankar Nori 
> wrote:
> >>
> >> Working on a patch will post a fix
> >>
> >> Thanks
> >>
> >> Ravi
> >>
> >> On Tue, Apr 10, 2018 at 9:14 AM, Alona Kaplan 
> wrote:
> >>>
> >>> Hi all,
> >>>
> >>> Looking at the log it seems that the new GetCapabilitiesAsync is
> >>> responsible for the mess.
> >>>
> >>> - 08:29:47 - engine loses connectivity to host
> >>> 'lago-basic-suite-4-2-host-0'.
> >>>
> >>> - Every 3 seconds a getCapabilitiesAsync request is sent to the host
> >>> (unsuccessfully).
> >>>
> >>>  * before each "getCapabilitiesAsync" the monitoring lock is taken
> >>> (VdsManager,refreshImpl)
> >>>
> >>>  * "getCapabilitiesAsync" immediately fails and throws
> >>> 'VDSNetworkException: java.net.ConnectException: Connection refused'.
> The
> >>> exception is caught by
> >>> 'GetCapabilitiesAsyncVDSCommand.executeVdsBrokerCommand' which calls
> >>> 'onFailure' of the callback and re-throws the exception.
> >>>
> >>>  catch (Throwable t) {
> >>> getParameters().getCallback().onFailure(t);
> >>> throw t;
> >>>  }
> >>>
> >>> * The 'onFailure' of the callback releases the "monitoringLock"
> >>> ('postProcessRefresh()->afterRefreshTreatment()-> if (!succeeded)
> >>> lockManager.releaseLock(monitoringLock);')
> >>>
> >>> * 'VdsManager,refreshImpl' catches the network exception, marks
> >>> 'releaseLock = true' and tries to release the already released lock.
> >>>
> >>>   The following warning is printed to the log -
> >>>
> >>>   WARN  [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
> >>> (EE-ManagedThreadFactory-engineScheduled-Thread-53) [] Trying to
> release
> >>> exclusive lock which does not exist, lock key:
> >>> 'ecf53d69-eb68-4b11-8df2-c4aa4e19bd93VDS_INIT'
> >>>
> >>>
> >>> - 08:30:51 a successful getCapabilitiesAsync is sent.
> >>>
> >>> - 08:32:55 - The failing test starts (Setup Networks for setting ipv6).
> >>>
> >>>
> >>> * SetupNetworks takes the monitoring lock.
> >>>
> >>> - 08:33:00 - ResponseTracker cleans the getCapabilitiesAsync requests
> >>> from 4 minutes ago from its queue and prints a VDSNetworkException: Vds
> >>> timeout occured.
> >>>
> >>>   * When the first request is removed from the queue
> >>> ('ResponseTracker.remove()'), the 'Callback.onFailure' is invoked (for
> the
> >>> second time) -> monitoring lock is released (the lock taken by the
> >>> SetupNetworks!).
> >>>
> >>>   * The other requests removed from the queue also try to release
> the
> >>> monitoring lock, but there is nothing to release.
> >>>
> >>>   * The following warning log is printed -
> >>> WARN  [org.ovirt.engine.core.bll.lock.InMemoryLockManager]
> >>> (EE-ManagedThreadFactory-engineScheduled-Thread-14) [] Trying to
> release
> >>> exclusive lock which does not exist, lock key:
> >>> 'ecf53d69-eb68-4b11-8df2-c4aa4e19bd93VDS_INIT'
> >>>
> >>> - 08:33:00 - SetupNetworks fails on timeout ~4 seconds after it started.
> >>> Why? I'm not 100% sure, but I guess the root cause is the late processing
> >>> of the 'getCapabilitiesAsync' requests, which causes the loss of the
> >>> monitoring lock, plus the late and multiple processing of the failure.
> >>>
> >>>
> >>> Ravi, the 'getCapabilitiesAsync' failure is handled twice and the lock
> >>> release is attempted three times. Please share your opinion regarding how
> >>> it should be fixed.
> >>>
> >>>
> >>> Thanks,
> >>>
> >>> Alona.
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> On Sun, Apr 8, 2018 at 1:21 PM, Dan Kenigsberg 
> wrote:
> 
>  On Sun, Apr 8, 2018 at 9:21 AM, Edward Haas  wrote:
> >
> >
> >
> > On Sun, Apr 8, 2018 at 9:15 AM, Eyal Edri  wrote:
> >>
> >> Was already done by Yaniv - https://gerrit.ovirt.org/#/c/89851.
> >> Is it still failing?
> >>
> >> On Sun, Apr 8, 2018 at 8:59 AM, Barak Korren 
> >> wrote:
> >>>
> >>> On 7 April 2018 at 00:30, Dan Kenigsberg 
> wrote:
> 

Re: [ovirt-devel] OST ha_recovery test failing

2018-04-23 Thread Yaniv Kaul
On Mon, Apr 23, 2018 at 3:10 PM, Dafna Ron  wrote:

> Tal, can you have a look at the logs?
> http://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/363/artifact/exported-artifacts/check-patch.basic_suite_master.el7.x86_64/test_logs/basic-suite-master/post-004_basic_sanity.py/
>
> I can see in the logs that the VM fails to restart with "vm path does not
> exist".
> What I found odd is that we try to start the VM on the same host that
> it ran on before, and I think the issue is actually that we set the HA
> VM as a pinned-to-host VM (which should not be possible, since HA should
> start the VM on a different host).
>
> Can you please have a look?
>

Is it still the issue around the high-perf VM? On one hand it has to be
pinned to a single host (if we define a NUMA topology for it), and on the other
hand it cannot be used for the HA recovery test...
Can you revert the whole commit around the high-perf VM? I assume we'll need to
rethink how to do it.
Y.


> Thanks.
> Dafna
>
>
> On Mon, Apr 23, 2018 at 12:34 PM, Greg Sheremeta 
> wrote:
>
>> I'm seeing this fail periodically on a patch I'm working on, and it's not
>> related to my work. Any ideas?
>>
>> 
>>
>> > in ha_recovery
>> lambda:
>>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
>> 271, in assert_true_within_long
>> assert_equals_within_long(func, True, allowed_exceptions)
>>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
>> 258, in assert_equals_within_long
>> func, value, LONG_TIMEOUT, allowed_exceptions=allowed_exceptions
>>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
>> 237, in assert_equals_within
>> '%s != %s after %s seconds' % (res, value, timeout)
>> 'False != True after 600 seconds
>>
>>
>> http://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/363/artifact/exported-artifacts/check-patch.basic_suite_master.el7.x86_64/004_basic_sanity.py.junit.xml
>>
>>
>>
>> --
>>
>> GREG SHEREMETA
>>
>> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>>
>> Red Hat NA
>>
>> 
>>
>> gsher...@redhat.comIRC: gshereme
>> 
>>

Re: [ovirt-devel] OST failure: 002_bootstrap.configure_high_perf_vm2

2018-04-21 Thread Yaniv Kaul
On Fri, Apr 20, 2018 at 10:23 PM, Yaniv Kaul <yk...@redhat.com> wrote:

>
>
> On Fri, Apr 20, 2018 at 9:29 PM, Milan Zamazal <mzama...@redhat.com>
> wrote:
>
>> Hi, I experienced the following OST failure on my OST patch:
>>
>> http://jenkins.ovirt.org/job/ovirt-system-tests_master_check-patch-el7-x86_64/5217/
>>
>> The failure is unrelated to my patch, since it occurred before reaching
>> its changes.  OST passed when I retriggered it later.
>>
>> The error was:
>>
>> 2018-04-20 05:08:46,748-04 WARN  
>> [org.ovirt.engine.core.bll.numa.vm.AddVmNumaNodesCommand]
>> (default task-6) [a8a3d723-655c-43ba-9776-70eb9317e66c] Validation of
>> action 'AddVmNumaNodes' failed for user admin@internal-authz. Reasons:
>> ACTION_TYPE_FAILED_VM_PINNED_TO_MULTIPLE_HOSTS
>> 2018-04-20 05:08:46,753-04 ERROR [org.ovirt.engine.api.restapi.
>> resource.AbstractBackendResource] (default task-6) [] Operation Failed:
>> [Cannot ${action} ${type}. VM must be pinned to a single host.]
>>
>
> If this is indeed a limitation (not sure why), I assume it's a race - I
> take all hosts that are up and pin the VM to them. Sometimes there's only 1
> up, sometimes more than 1. I'll post a patch.
>

I've sent a patch[1] - now obviously it's failing with ha_recovery test...
Y.

[1] https://gerrit.ovirt.org/#/c/90500/

> Y.
>
>
>

Re: [ovirt-devel] OST failure: 002_bootstrap.configure_high_perf_vm2

2018-04-20 Thread Yaniv Kaul
On Fri, Apr 20, 2018 at 9:29 PM, Milan Zamazal  wrote:

> Hi, I experienced the following OST failure on my OST patch:
>
> http://jenkins.ovirt.org/job/ovirt-system-tests_master_check-patch-el7-x86_64/5217/
>
> The failure is unrelated to my patch, since it occurred before reaching
> its changes.  OST passed when I retriggered it later.
>
> The error was:
>
> 2018-04-20 05:08:46,748-04 WARN  
> [org.ovirt.engine.core.bll.numa.vm.AddVmNumaNodesCommand]
> (default task-6) [a8a3d723-655c-43ba-9776-70eb9317e66c] Validation of
> action 'AddVmNumaNodes' failed for user admin@internal-authz. Reasons:
> ACTION_TYPE_FAILED_VM_PINNED_TO_MULTIPLE_HOSTS
> 2018-04-20 05:08:46,753-04 ERROR 
> [org.ovirt.engine.api.restapi.resource.AbstractBackendResource]
> (default task-6) [] Operation Failed: [Cannot ${action} ${type}. VM must be
> pinned to a single host.]
>

If this is indeed a limitation (not sure why), I assume it's a race - I take
all hosts that are up and pin the VM to them. Sometimes there's only 1 up,
sometimes more than 1. I'll post a patch.
Y.


Re: [ovirt-devel] dynamic ownership changes

2018-04-11 Thread Yaniv Kaul
On Wed, Apr 11, 2018 at 3:27 PM, Nir Soffer  wrote:

> On Wed, Apr 11, 2018 at 12:38 PM Eyal Edri  wrote:
>
>> On Wed, Apr 11, 2018 at 12:34 PM, Nir Soffer  wrote:
>>
>>> On Wed, Apr 11, 2018 at 12:31 PM Eyal Edri  wrote:
>>>
Please make sure to run as many OST suites on this patch as possible
 before merging (using 'ci please build')

>>>
>>> But note that OST is not a way to verify the patch.
>>>
>>> Such changes require testing with all storage types we support.
>>>
>>
>> Well, we already have the HE suite that runs on iSCSI, so at least we have
>> NFS+iSCSI on nested;
>> for real storage testing, you'll have to do it manually.
>>
>
> We need glusterfs (both native and fuse based), and cinder/ceph storage.
>

We have Gluster in o-s-t as well, as part of the HC suite. It doesn't use
Fuse though.


>
> But we cannot practically test all flows with all types of storage for
> every patch.
>

Indeed. But we could easily do some, and we should at least execute the
minimal set that we are able to via o-s-t.
Y.

>
> Nir
>
>
>>
>>
>>>
>>> Nir
>>>
>>> On Tue, Apr 10, 2018 at 4:09 PM, Martin Polednik 
 wrote:

> Hey,
>
> I've created a patch[0] that is finally able to activate libvirt's
> dynamic_ownership for VDSM while not negatively affecting
> functionality of our storage code.
>
> That of course comes with quite a bit of code removal, mostly in the
> area of host devices, hwrng and anything that touches devices; bunch
> of test changes and one XML generation caveat (storage is handled by
> VDSM, therefore disk relabelling needs to be disabled on the VDSM
> level).
>
> Because of the scope of the patch, I welcome storage/virt/network
> people to review the code and consider the implication this change has
> on current/future features.
>
> [0] https://gerrit.ovirt.org/#/c/89830/
>
> mpolednik
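For context, dynamic_ownership is a setting of libvirt's qemu driver; a quick,
hedged way to see how it is currently configured on a host (the commented
defaults may vary by distribution):

  # show the ownership-related settings in libvirt's qemu driver config
  grep -E '^#?(dynamic_ownership|user|group)' /etc/libvirt/qemu.conf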
>



 --

 Eyal edri


 MANAGER

 RHV DevOps

EMEA VIRTUALIZATION R&D


 Red Hat EMEA 
  TRIED. TESTED. TRUSTED.
 
 phone: +972-9-7692018 <+972%209-769-2018>
 irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>>
>>>
>>
>>
>> --
>>
>> Eyal edri
>>
>>
>> MANAGER
>>
>> RHV DevOps
>>
>> EMEA VIRTUALIZATION R&D
>>
>>
>> Red Hat EMEA 
>>  TRIED. TESTED. TRUSTED. 
>> phone: +972-9-7692018 <+972%209-769-2018>
>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>
>
>

Re: [ovirt-devel] oVirt log analyzer

2018-04-02 Thread Yaniv Kaul
On Thu, Mar 29, 2018 at 9:11 PM, Tomas Jelinek  wrote:

>
>
> On Thu, Mar 29, 2018 at 7:55 PM, Greg Sheremeta 
> wrote:
>
>> Nice! I think a nice RFE would be to surface this info in the UI.
>>
>> On Thu, Mar 29, 2018 at 8:30 AM, Milan Zamazal 
>> wrote:
>>
>>> Hi, during last year Outreachy internship a tool for analyzing oVirt
>>> logs was created.  When it is provided with oVirt logs (such as SOS
>>> reports, logs gathered by Lago, single or multiple log files) it tries
>>> to identify and classify important lines from the logs and present them
>>> in a structured form.  Its primary purpose is to get a quick and easy
>>> overview of actions and errors.
>>>
>>
> I would add that it can correlate multiple log files (from
> engine/vdsm/libvirt/qemu) and show a unified view of them.
> It can follow the life of one entity (such as a VM) and show what was
> going on with it across the system. I have used it a lot to look for races
> and it was pretty useful for that.
>

This is not very clear from the readme, which only says 'Assuming your oVirt
logs are stored in DIRECTORY' - what logs exactly are in that directory?
Is that the output of ovirt-log-collector?
Y.


>
>>
>>> The tool analyses given logs and produces text files with the extracted
>>> information.  There is an Emacs user interface that presents the output
>>> in a nice way with added functionality such as filtering.  Emacs haters
>>> can use the plain text files or write another user interface. :-)
>>>
>>> You can get ovirt-log-analyzer from
>>> https://github.com/mz-pdm/ovirt-log-analyzer
>>> README.md explains how to use it.
>>>
>>> Note that ovirt-log-analyzer has been created within the limited
>>> resources of an Outreachy internship with some additional work and not
>>> everything is perfect.  Feel free to make improvements.
>>>
>>> Regards,
>>> Milan
>>>
>>
>>
>>
>> --
>>
>> GREG SHEREMETA
>>
>> SENIOR SOFTWARE ENGINEER - TEAM LEAD - RHV UX
>>
>> Red Hat NA
>>
>> 
>>
>> gsher...@redhat.comIRC: gshereme
>> 
>>
>>
>
>
>

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master (Ovirt-ngine) ] [ 19-03-2018 ] [ 002_bootstrap.verify_notifier ]

2018-03-20 Thread Yaniv Kaul
On Tue, Mar 20, 2018 at 9:49 AM, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:

> I'm not sure what that test is actually testing. If it depends on the
> previous host action, which fails but is not verified, it still may be
> relevant to Shmuel's patch.
> Adding the author of the test and the notifier owner.
>

It is checking that the notifier works - the notifier sends SNMP notifications
on our events. I happened to pick an event, VDC_STOP, which happens
when the engine is restarted - which happens earlier, when we configure it.
Y.


>
> On 19 Mar 2018, at 13:06, Dafna Ron  wrote:
>
> Hi,
>
> We had a failure in test 002_bootstrap.verify_notifier.
> I can't see anything wrong with the notifier and I don't think it should
> be related to the change that was reported.
>
> The test itself is looking for VDC_STOP in /var/log/messages, which indeed I
> do not see, but I am not sure what the cause is or how the reported change
> relates to the failure.
>
> Can you please take a look?
>
>
>
> Link and headline of suspected patches:
> core: USB in osinfo configuration depends on chipset - https://gerrit.ovirt.org/#/c/88777/
>
> Link to Job:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/6429/
>
> Link to all logs:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/6429/artifacts
>
> (Relevant) error snippet from the log:
> This is the error from the api:
>
>
> Error Message
>
> Failed grep for VDC_STOP with code 1. Output:
>  >> begin captured logging << 
> lago.ssh: DEBUG: start task:f1231b27-f796-406c-8618-17b0868725bc:Get ssh 
> client for lago-basic-suite-master-engine:
> lago.ssh: DEBUG: end task:f1231b27-f796-406c-8618-17b0868725bc:Get ssh client 
> for lago-basic-suite-master-engine:
> lago.ssh: DEBUG: Running 1cce7c0c on lago-basic-suite-master-engine: grep 
> VDC_STOP /var/log/messages
> lago.ssh: DEBUG: Command 1cce7c0c on lago-basic-suite-master-engine returned 
> with 1
> - >> end captured logging << -
>
> Stacktrace
>
>   File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
> testMethod()
>   File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
> self.test(*self.arg)
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 129, in 
> wrapped_test
> test()
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 59, in 
> wrapper
> return func(get_test_prefix(), *args, **kwargs)
>   File 
> "/home/jenkins/workspace/ovirt-master_change-queue-tester/ovirt-system-tests/basic-suite-master/test-scenarios/002_bootstrap.py",
>  line 1456, in verify_notifier
> 'Failed grep for VDC_STOP with code {0}. Output: {1}'.format(result.code, 
> result.out)
>   File "/usr/lib/python2.7/site-packages/nose/tools/trivial.py", line 29, in 
> eq_
> raise AssertionError(msg or "%r != %r" % (a, b))
> 'Failed grep for VDC_STOP with code 1. Output: \n >> 
> begin captured logging << \nlago.ssh: DEBUG: start 
> task:f1231b27-f796-406c-8618-17b0868725bc:Get ssh client for 
> lago-basic-suite-master-engine:\nlago.ssh: DEBUG: end 
> task:f1231b27-f796-406c-8618-17b0868725bc:Get ssh client for 
> lago-basic-suite-master-engine:\nlago.ssh: DEBUG: Running 1cce7c0c on 
> lago-basic-suite-master-engine: grep VDC_STOP /var/log/messages\nlago.ssh: 
> DEBUG: Command 1cce7c0c on lago-basic-suite-master-engine returned with 
> 1\n- >> end captured logging << -'
>
>
>
>
>

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master (otopi) ] [ 13-03-2018 ] [ 002_bootstrap.verify_add_hosts + 002_bootstrap.add_hosts ]

2018-03-13 Thread Yaniv Kaul
On Mar 13, 2018 6:11 PM, "Dafna Ron"  wrote:

Hi,

CQ reported failure on both basic and upgrade suites in otpi project.

Link and headline of suspected patches:
core: Use python3 when possible - https://gerrit.ovirt.org/#/c/87276/


Please revert.
Y.





Link to Job:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/6285/
Link to all logs:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/6285/artifacts/

(Relevant) error snippet from the log:

at org.ovirt.engine.core.bll.hostdeploy.AddVdsCommand.lambda$executeCommand$2(AddVdsCommand.java:217) [bll.jar:]
at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalWrapperRunnable.run(ThreadPoolUtil.java:96) [utils.jar:]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [rt.jar:1.8.0_161]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [rt.jar:1.8.0_161]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_161]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_161]
at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_161]
at org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250) [javax.enterprise.concurrent-1.0.jar:]
at org.jboss.as.ee.concurrent.service.ElytronManagedThreadFactory$ElytronManagedThread.run(ElytronManagedThreadFactory.java:78)

2018-03-13 09:22:40,287-04 ERROR [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase] (VdsDeploy) [7e847b89] Error during deploy dialog
2018-03-13 09:22:40,288-04 DEBUG [org.ovirt.engine.core.aaa.filters.SsoRestApiAuthFilter] (default task-25) [] Entered SsoRestApiAuthFilter
2018-03-13 09:22:40,288-04 DEBUG [org.ovirt.engine.core.aaa.filters.SsoRestApiAuthFilter] (default task-25) [] SsoRestApiAuthFilter authenticating with sso
2018-03-13 09:22:40,288-04 DEBUG [org.ovirt.engine.core.aaa.filters.SsoRestApiAuthFilter] (default task-25) [] SsoRestApiAuthFilter authenticating using BEARER header
2018-03-13 09:22:40,290-04 DEBUG [org.ovirt.engine.core.aaa.filters.SsoRestApiAuthFilter] (default task-25) [] SsoRestApiAuthFilter successfully authenticated using BEARER header
2018-03-13 09:22:40,290-04 DEBUG [org.ovirt.engine.core.aaa.filters.SsoRestApiNegotiationFilter] (default task-25) [] Entered SsoRestApiNegotiationFilter
2018-03-13 09:22:40,292-04 DEBUG [org.ovirt.engine.core.aaa.filters.SsoRestApiNegotiationFilter] (default task-25) [] SsoRestApiNegotiationFilter Not performing Negotiate Auth
2018-03-13 09:22:40,297-04 ERROR [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase] (EE-ManagedThreadFactory-engine-Thread-2) [7e847b89] Error during host lago-basic-suite-master-host-1 install
2018-03-13 09:22:40,311-04 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-2) [7e847b89] EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511), An error has occurred during installation of Host lago-basic-suite-master-host-1: Unexpected error during execution: /tmp/ovirt-8LoLZqxUw8/otopi: line 29: /tmp/ovirt-8LoLZqxUw8/otopi-functions: No such file or directory.
2018-03-13 09:22:40,311-04 ERROR [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase] (EE-ManagedThreadFactory-engine-Thread-2) [7e847b89] Error during host lago-basic-suite-master-host-1 install, preferring first exception: Unexpected connection termination
2018-03-13 09:22:40,311-04 ERROR [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (EE-ManagedThreadFactory-engine-Thread-2) [7e847b89] Host installation failed for host '9cca064c-4034-4428-9cc5-d4e902083dfe', 'lago-basic-suite-master-host-1': Unexpected connection termination


Re: [ovirt-devel] [ovirt-users] ovirt-system-tests hackathon report

2018-03-13 Thread Yaniv Kaul
On Mar 13, 2018 6:27 PM, "Sandro Bonazzola" <sbona...@redhat.com> wrote:

4 people accepted calendar invite:
- Devin A. Bougie
- Francesco Romani
- Jiri Belka
- suporte, logicworks

4 people tentatively accepted calendar invite:
- Amnon Maimon
- Andreas Bleischwitz
- Arnaud Lauriou
- Stephen Pesini

2 mailing lists accepted calendar invite: us...@ovirt.org, devel@ovirt.org
(don't ask me how) so I may have missed someone in above list


4 patches got merged:
Add check for host update to the 1st host. <https://gerrit.ovirt.org/88767>
Merged Yaniv Kaul
<https://gerrit.ovirt.org/#/q/owner:ykaul%2540redhat.com+status:merged>
ovirt-system-tests
<https://gerrit.ovirt.org/#/projects/ovirt-system-tests,dashboards/default>
master
(add_upgrade_check)
<https://gerrit.ovirt.org/#/q/status:merged+project:ovirt-system-tests+branch:master+topic:add_upgrade_check>
4:10
PM
basic-suite-master: add vnic_profile_mappings to register vm
<https://gerrit.ovirt.org/87438> Merged Eitan Raviv
<https://gerrit.ovirt.org/#/q/owner:eraviv%2540redhat.com+status:merged>
ovirt-system-tests
<https://gerrit.ovirt.org/#/projects/ovirt-system-tests,dashboards/default>
master
(register-template-vnic-mapping)
<https://gerrit.ovirt.org/#/q/status:merged+project:ovirt-system-tests+branch:master+topic:register-template-vnic-mapping>
2:50
PM
Revert "ovirt-4.2: Skipping 002_bootstrap.update_default_cluster"
<https://gerrit.ovirt.org/1> Merged Eyal Edri
<https://gerrit.ovirt.org/#/q/owner:eedri%2540redhat.com+status:merged>
ovirt-system-tests
<https://gerrit.ovirt.org/#/projects/ovirt-system-tests,dashboards/default>
master
<https://gerrit.ovirt.org/#/q/status:merged+project:ovirt-system-tests+branch:master>
11:36 AM
seperate 4.2 tests and utils from master <https://gerrit.ovirt.org/88878>
Merged Eyal Edri
<https://gerrit.ovirt.org/#/q/owner:eedri%2540redhat.com+status:merged>
ovirt-system-tests
<https://gerrit.ovirt.org/#/projects/ovirt-system-tests,dashboards/default>
master
<https://gerrit.ovirt.org/#/q/status:merged+project:ovirt-system-tests+branch:master>
11:35 AM

13 patches have been pushed / reviewed / rebased

Add gdeploy to ovirt-4.2.repo <https://gerrit.ovirt.org/88929>
Daniel Belenky
<https://gerrit.ovirt.org/#/q/owner:dbelenky%2540redhat.com+status:open>
ovirt-system-tests
<https://gerrit.ovirt.org/#/projects/ovirt-system-tests,dashboards/default>
master
<https://gerrit.ovirt.org/#/q/status:open+project:ovirt-system-tests+branch:master>
4:53 PM
Cleanup of test code - next() replaced with any()
<https://gerrit.ovirt.org/88928>
Martin Sivák
<https://gerrit.ovirt.org/#/q/owner:msivak%2540redhat.com+status:open>
ovirt-system-tests
<https://gerrit.ovirt.org/#/projects/ovirt-system-tests,dashboards/default>
master
<https://gerrit.ovirt.org/#/q/status:open+project:ovirt-system-tests+branch:master>
4:51 PM
Add network queues custom property and use it in the vnic profile for VM0
<https://gerrit.ovirt.org/88829>
Yaniv Kaul
<https://gerrit.ovirt.org/#/q/owner:ykaul%2540redhat.com+status:open>
ovirt-system-tests
<https://gerrit.ovirt.org/#/projects/ovirt-system-tests,dashboards/default>
master
(multi_queue_config)
<https://gerrit.ovirt.org/#/q/status:open+project:ovirt-system-tests+branch:master+topic:multi_queue_config>
4:49 PM
new suite: he-basic-iscsi-suite-master <https://gerrit.ovirt.org/85838>
Yuval Turgeman
<https://gerrit.ovirt.org/#/q/owner:yturgema%2540redhat.com+status:open>
ovirt-system-tests
<https://gerrit.ovirt.org/#/projects/ovirt-system-tests,dashboards/default>
master
(he-basic-iscsi-suite-master)
<https://gerrit.ovirt.org/#/q/status:open+project:ovirt-system-tests+branch:master+topic:he-basic-iscsi-suite-master>
4:47 PM
Collect host-deploy bundle from the engine <https://gerrit.ovirt.org/88925>
Yedidyah Bar David
<https://gerrit.ovirt.org/#/q/owner:didi%2540redhat.com+status:open>
ovirt-system-tests
<https://gerrit.ovirt.org/#/projects/ovirt-system-tests,dashboards/default>
master
<https://gerrit.ovirt.org/#/q/status:open+project:ovirt-system-tests+branch:master>
4:41 PM
network-suite-master: Make openstack_client_config fixture available to all
... <https://gerrit.ovirt.org/88029> Merge Conflict Marcin Mirecki
<https://gerrit.ovirt.org/#/q/owner:mmirecki%2540redhat.com+status:open>
ovirt-system-tests
<https://gerrit.ovirt.org/#/projects/ovirt-system-tests,dashboards/default>
master
<https://gerrit.ovirt.org/#/q/status:open+project:ovirt-system-tests+branch:master>
3:39 PM
new suite: he-basic-ng-ansible-suite-master <https://gerrit.ovirt.org/88901>
Sandro Bonazzola
<https://gerrit.ovirt.org/#/q/owner:sbonazzo%2540redhat.com+status:open>
ovirt-system-tests
<https://gerrit.ovirt.org/#/projects/ovirt-system-tests,dashboards/default>
master
(he-bas

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt 4.2 (ovirt-hosted-engine-setup) ] [ 08-03-2018 ] [ 004_basic_sanity.run_vms ]

2018-03-08 Thread Yaniv Kaul
On Thu, Mar 8, 2018 at 1:53 PM, Dafna Ron  wrote:

> Hi,
>
> We have a failed test on ovirt-hosted-engine-setup basic suite.
> We failed to run a vm with internal engine error.
>
> Link and headline of suspected patches: Add engine fqdn to inventory and
> allow dynamic inventory scripts - https://gerrit.ovirt.org/#/c/88622/
> Link to Job: http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/1064/
> Link to all logs: http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/1064/artifact/
> (Relevant) error snippet from the log:
> 2018-03-07 17:03:38,374-05 INFO
> [org.ovirt.engine.core.bll.RunVmOnceCommand] (default task-21)
> [a514da44-d44f-46b9-8bdf-08ba2d513929] Running command: RunVmOnceCommand
> internal: false. Entities affected :  ID:
> 634c6a46-d057-4509-be3b-710716cbd56d Type: VMAction group RUN_VM with role
> type USER,  ID: 634c6a46-d057-4509-be3b-710716cbd56d Type: VMAction group
> EDIT_ADMIN_VM_PROPERTIES with role type ADMIN2018-03-07 17:03:38,379-05
> DEBUG [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
> (default task-21) [a514da44-d44f-46b9-8bdf-08ba2d513929] method:
> getVmManager, params: [634c6a46-d057-4509-be3b-710716cbd56d], timeElapsed:
> 5ms2018-03-07 17:03:38,391-05 DEBUG
> [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
> (default task-21) [a514da44-d44f-46b9-8bdf-08ba2d513929] method:
> getAllForClusterWithStatus, params: [14cad400-49a0-44e0-ab15-9da778f08082,
> Up], timeElapsed: 7ms2018-03-07 17:03:38,408-05 DEBUG
> [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
> (default task-21) [a514da44-d44f-46b9-8bdf-08ba2d513929] method:
> getVdsManager, params: [8ef2f490-1b76-46e0-b9fe-f3412a36e03b], timeElapsed:
> 0ms2018-03-07 17:03:38,419-05 DEBUG
> [org.ovirt.engine.core.dal.dbbroker.CustomSQLErrorCodeSQLExceptionTranslator]
> (default task-21) [a514da44-d44f-46b9-8bdf-08ba2d513929] Translating
> SQLException with SQL state '23505', error code '0', message [ERROR:
> duplicate key value violates unique constraint "name_server_pkey"*
>

A quick Google search on the above seems to point to
https://bugzilla.redhat.com/show_bug.cgi?id=1547070 , which says:
Steps to Reproduce:
1. Refresh caps during start VM operation

So might be relevant, I think.
Y.


> *  Detail: Key (dns_resolver_configuration_id,
> address)=(8a941b6a-83aa-44de-9800-2a1ea6e8e029, 192.168.200.1) already
> exists.  Where: SQL statement "INSERT INTOname_server(
> address,  position,  dns_resolver_configuration_id)VALUES
> (  v_address,  v_position,
> v_dns_resolver_configuration_id)"PL/pgSQL function
> insertnameserver(uuid,character varying,smallint) line 3 at SQL statement];
> SQL was [{call insertnameserver(?, ?, ?)}] for task
> [CallableStatementCallback]2018-03-07 17:03:38,420-05 ERROR
> [org.ovirt.engine.core.bll.RunVmOnceCommand] (default task-21)
> [a514da44-d44f-46b9-8bdf-08ba2d513929] Command
> 'org.ovirt.engine.core.bll.RunVmOnceCommand' failed:
> CallableStatementCallback; SQL [{call insertnameserver(?, ?, ?)}]; ERROR:
> duplicate key value violates unique constraint "name_server_pkey"  Detail:
> Key (dns_resolver_configuration_id,
> address)=(8a941b6a-83aa-44de-9800-2a1ea6e8e029, 192.168.200.1) already
> exists.  Where: SQL statement "INSERT INTOname_server(
> address,  position,  dns_resolver_configuration_id)VALUES
> (  v_address,  v_position,
> v_dns_resolver_configuration_id)"PL/pgSQL function
> insertnameserver(uuid,character varying,smallint) line 3 at SQL statement;
> nested exception is org.postgresql.util.PSQLException: ERROR: duplicate key
> value violates unique constraint "name_server_pkey"  Detail: Key
> (dns_resolver_configuration_id,
> address)=(8a941b6a-83aa-44de-9800-2a1ea6e8e029, 192.168.200.1) already
> exists.  Where: SQL statement "INSERT INTOname_server(
> address,  position,  dns_resolver_configuration_id)VALUES
> (  v_address,  v_position,
> v_dns_resolver_configuration_id)"PL/pgSQL function
> insertnameserver(uuid,character varying,smallint) line 3 at SQL
> 

Re: [ovirt-devel] hosted-engine ovirt-system-tests

2018-03-05 Thread Yaniv Kaul
On Mon, Mar 5, 2018 at 3:01 PM, Yedidyah Bar David  wrote:

> Hi all,
>
> Recently I have been pushing various patches to OST in order to verify
> specific bugs/fixes I was working on, using them with the "manual"
> jenkins job but with no immediate intention to get them merged. Main
> reason is that it's not clear what's the best approach to get such
> tests merged. Do we want a new suite? How often does it run? How long
> does it take? Do we have enough resources? etc.
>

I'll answer generally, since specifically for each test we'll need to see.
The guidelines that I have (in my head, not written down anywhere) are
really:
1. Be efficient in resources - HW mainly. My guideline is quite simple - can
it run on my laptop (8GB of RAM)?
Not everything can fit on my laptop, of course (the performance suite, the
upcoming OpenShift on oVirt suite, etc.), but I try.
2. Be quick - whatever we can test in parallel to other tests is much
preferred. We have quite a bit of 'dead time' between tests; we need to use
it.
3. Bend the rules, but don't cheat - I move the yum cache repo to /dev/shm,
etc. - to make things quicker, but I don't cheat and install deps. ahead of
time, etc.
4. Most suites do not run too often (several times a day) - I think it's OK
to add to those that run once a day or so.
5. Strive for others (QE!) to contribute to the suite. The more we can
collaborate, the better.
6. Generally, we have enough gaps in our positive tests that I'd rather not
introduce negative tests.
7. Whatever we can do in order to ensure QE does not get a dead-on-arrival
or broken functionality build - the better.



>
> Do we have plans re this? Bugs/tickets/etc.?
>

Bugzilla, but I did not see anything there for quite some time.


>
> Do we want to do something?
>
> Specifically, do we want (eventually) many more suites? This seems
> unmanageable. A few suites, perhaps (also) using lago snapshots?
> Didn't try that myself, might be useful and relevant.
>
> Some examples:
> https://gerrit.ovirt.org/79203
> https://gerrit.ovirt.org/79215
> https://gerrit.ovirt.org/84813
> https://gerrit.ovirt.org/88201
> https://gerrit.ovirt.org/88234
>
> And current one - this and the next ones in the stack:
> https://gerrit.ovirt.org/88331


Many of those change the default installation and test the 'interesting'
combinations - which I think is great.
I'd be happy for a setup with custom '3rd party' certificate, Kerberos
auth., ovirtmgmt on a VLAN interface, etc.
Y.


>
> Best regards,
> --
> Didi
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt 4.2 (ovirt-engine) ] [ 04-03-2018 ] [ 004_basic_sanity.disk_operations ]

2018-03-04 Thread Yaniv Kaul
On Sun, Mar 4, 2018 at 5:48 PM, Nir Soffer <nsof...@redhat.com> wrote:

> On Sun, Mar 4, 2018 at 5:31 PM Yaniv Kaul <yk...@redhat.com> wrote:
>
>> On Sun, Mar 4, 2018 at 5:18 PM, Daniel Belenky <dbele...@redhat.com>
>> wrote:
>>
>>> Hi,
>>>
>>> The following test failed OST: 004_basic_sanity.disk_operations.
>>>
>>> Link to suspected patch: https://gerrit.ovirt.org/c/88404/
>>> Link to the failed job: http://jenkins.ovirt.org/
>>> job/ovirt-4.2_change-queue-tester/1019/
>>> Link to all test logs:
>>>
>>>- engine
>>>
>>> <http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/1019/artifact/exported-artifacts/basic-suit-4.2-el7/test_logs/basic-suite-4.2/post-004_basic_sanity.py/lago-basic-suite-4-2-engine>
>>>- host 0
>>>
>>> <http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/1019/artifact/exported-artifacts/basic-suit-4.2-el7/test_logs/basic-suite-4.2/post-004_basic_sanity.py/lago-basic-suite-4-2-host-0/_var_log>
>>>- host 1
>>>
>>> <http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/1019/artifact/exported-artifacts/basic-suit-4.2-el7/test_logs/basic-suite-4.2/post-004_basic_sanity.py/lago-basic-suite-4-2-host-1/_var_log>
>>>
>>> Error snippet from engine:
>>>
>>> 2018-03-04 09:50:14,823-05 ERROR 
>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
>>> (ForkJoinPool-1-worker-12) [] EVENT_ID: VM_DOWN_ERROR(119), VM vm2 is down 
>>> with error. Exit message: Lost connection with qemu process.
>>>
>>>
>>> Error snippet from host:
>>>
>>> Mar  4 09:56:27 lago-basic-suite-4-2-host-1 libvirtd: 2018-03-04 
>>> 14:56:27.831+: 1189: error : qemuDomainAgentAvailable:6010 : Guest 
>>> agent is not responding: QEMU guest agent is not connected
>>>
>>>
>> That's not surprising - there's no guest agent there.
>>
>
> There are 2 issues here:
> - we are testing without guest agent when this is the recommended
> configuration
>   (snapshots may not be consistent without guest agent)
>

We are still using Cirros. I need to get a CentOS with cloud-init uploaded
(WIP...)


> - vdsm should not report errors about guest agent since it does not know if
> guest agent is installed or not. This message should be an INFO message
> like
> "could not stop the vm using guest agent, falling back to ..."
>
> Generally we should not see ERROR or WARN message in OST. Any repeating
> error or warning should be reported as a bug.
>

There are several on storage... Should we file BZs on them?
Y.


>
> Nir
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] FC27 updates broken for ovirt-4.2

2018-02-26 Thread Yaniv Kaul
On Mon, Feb 26, 2018 at 7:17 PM, Viktor Mihajlovski <
mihaj...@linux.vnet.ibm.com> wrote:

> I just tried to update the ovirt packages on my FC27 host, but failed
> due to https://gerrit.ovirt.org/#/c/87628/
>
> vdsm now requires libvirt >= 3.10.0-132 but Fedora 27 has only 3.7.0-4
> the moment.
>
> It's generic Fedora 27, but since I run on s390, cross-posting to s390
> list.
>
> I guess there's good reason to require libvirt 3.10. Is there any chance
> that we can get libvirt updated for Fedora 27?
>

Perhaps use the virt-preview[1] repo for now?
Y.

[1] https://fedoraproject.org/wiki/Virtualization_Preview_Repository


>
> --
> Regards,
>  Viktor Mihajlovski
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] Why do we need ovirt-engine-sdk-python on a host?

2018-02-26 Thread Yaniv Kaul
Installed hosted-engine on a host, and was surprised to find
ovirt-engine-sdk-python there.
Trying to remove it drags in several dependencies that I don't understand:
cockpit-ovirt-dashboard
ovirt-engine-appliance
ovirt-host
ovirt-hosted-engine-setup

Is any of them using (still?) the v3 Python API?

TIA,
Y.
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] The importance of fixing failed build-artifacts jobs

2018-02-22 Thread Yaniv Kaul
I think there's a rush to add FC27 and S390 (unrelated?) to the build. If
either fails right now, I don't think we should be too concerned with them.
In the very near future we should be, though.
Y.

On Thu, Feb 22, 2018 at 9:06 PM, Dafna Ron  wrote:

> Hi All,
>
> We have been seeing a large amount of changes that are not deployed into
> tested lately because of failed build-artifacts jobs so we decided that
> perhaps we need to explain the importance of fixing a failed
> build-artifacts job.
>
> If a change failed a build-artifacts job, no matter what platform/arch it
> failed in, the change will not be deployed to tested.
>
> Here is an example of a change that will not be added to tested:
>
> [image: Inline image 1]
>
> As you can see, only one of the build-artifacts jobs failed but since the
> project specify that it requires all of these arches/platforms, the change
> will not be added to tested until all of the jobs are fixed.
>
> So what can we do?
>
> 1. Add the code which builds-artifacts to 'check-patch' so you'll get a -1
> if a build failed (assuming you will not merge with -1 from CI).
> 2. post merge - look for emails on failed artifacts on your change (you
> will have to fix the job and then re-trigger the change)
> 3. you can see all current broken failed artifacts jobs in jenkins under
> 'unstable critical' view [1] and you will know if your project is being
> deployed.
> 4. Remove the broken OS from your project ( either from Jenkins or from
> your automation dir if you're using V2 ) - ask us for help! this should be
> an easy patch
> 5. Don't add new OS builds until you're absolutely sure they work ( you can
> add check-patch to keep testing it, but don't add build-artifacts until its
> stable ).
>
> Please contact myself or anyone else from the CI team for assistance or
> questions and we would be happy to help.
>
> [1] http://jenkins.ovirt.org/
>
> Thank you,
>
> Dafna
>
>
>
>
>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master (ovirt-engine-metrics) ] [ 22-02-2018 ] [ 003_00_metrics_bootstrap.metrics_and_log_collector ]

2018-02-22 Thread Yaniv Kaul
On Thu, Feb 22, 2018 at 2:46 PM, Dafna Ron  wrote:

> hi,
>
> We are failing test 003_00_metrics_bootstrap.metrics_and_log_collector
> for basic suite.
>
> Link and headline of suspected patches:
> ansible: End playbook based on initial validations - https://gerrit.ovirt.org/#/c/88062/
> Link to Job: http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/5829/
> Link to all logs: http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/5829/artifacts
> (Relevant) error snippet from the log:
>
> /var/tmp:
> drwxr-x--x. root abrt system_u:object_r:abrt_var_cache_t:s0 abrt
> -rw---. root root unconfined_u:object_r:user_tmp_t:s0 rpm-tmp.aLitM7
> -rw---. root root unconfined_u:object_r:user_tmp_t:s0 rpm-tmp.G2r7IM
> -rw---. root root unconfined_u:object_r:user_tmp_t:s0 rpm-tmp.kVymZE
> -rw---. root root unconfined_u:object_r:user_tmp_t:s0 rpm-tmp.uPDvvU
> drwx--. root root system_u:object_r:tmp_t:s0   
> systemd-private-cd49c74726d5463f8d6f6502380e5e12-chronyd.service-i1T5IE
> drwx--. root root system_u:object_r:tmp_t:s0   
> systemd-private-cd49c74726d5463f8d6f6502380e5e12-systemd-timedated.service-lhoUsS
>
> /var/tmp/abrt:
> -rw---. root root system_u:object_r:abrt_var_cache_t:s0 last-via-server
>
> /var/tmp/systemd-private-cd49c74726d5463f8d6f6502380e5e12-chronyd.service-i1T5IE:
> drwxrwxrwt. root root system_u:object_r:tmp_t:s0   tmp
>
> /var/tmp/systemd-private-cd49c74726d5463f8d6f6502380e5e12-chronyd.service-i1T5IE/tmp:
>
> /var/tmp/systemd-private-cd49c74726d5463f8d6f6502380e5e12-systemd-timedated.service-lhoUsS:
> drwxrwxrwt. root root system_u:object_r:tmp_t:s0   tmp
>
> /var/tmp/systemd-private-cd49c74726d5463f8d6f6502380e5e12-systemd-timedated.service-lhoUsS/tmp:
>
> /var/yp:
> )
> 2018-02-22 07:24:05::DEBUG::__main__::251::root:: STDERR(/bin/ls: cannot open 
> directory 
> /rhev/data-center/mnt/blockSD/6babba93-09c8-4846-9ccb-07728f72eecb/master/tasks/bd563276-5092-4d28-86c4-63aa6c0b4344.temp:
>  No such file or directory
> )
> 2018-02-22 07:24:05::ERROR::__main__::832::root:: Failed to collect logs 
> from: lago-basic-suite-master-host-0; /bin/ls: cannot open directory 
> /rhev/data-center/mnt/blockSD/6babba93-09c8-4846-9ccb-07728f72eecb/master/tasks/bd563276-5092-4d28-86c4-63aa6c0b4344.temp:
>  No such file or directory
>
>
Is that reproducible? It's a log collector bug anyway, but I assume it's a
race between some task (for example, downloading images from Glance) and
log collector collecting logs.
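In the meantime the collector could simply tolerate paths that vanish between
discovery and collection. A rough sketch of the idea (illustrative only - this
is not the ovirt-log-collector code, and the helper name is made up):

import errno
import os


def list_dir_tolerant(path):
    # Tolerate directories removed by a racing task (e.g. SPM task temp
    # dirs under /rhev/data-center) instead of failing the whole collection.
    try:
        return os.listdir(path)
    except OSError as e:
        if e.errno == errno.ENOENT:
            return []
        raise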
Can you open a bug on log collector?
TIA,
Y.


>
> **
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master (vdsm) ] [ 08-02-2018 ] [ 002_bootstrap.get_host_numa_nodes+ 004_basic_sanity.vm_run]

2018-02-08 Thread Yaniv Kaul
On Thu, Feb 8, 2018 at 3:14 PM, Dafna Ron <d...@redhat.com> wrote:

> I don't recall seeing it fail before this time.
> We can re-trigger the patch and see if it passes a second time or fails on
> the same issue.
>
1. Sent a patch to disable the test for the time being.
2. Having spent 3 seconds on the Lago code, I have some doubt that it
advertises the topology of the L1 hosts with multiple sockets (= NUMA) in
certain conditions. Looking into it.
Can you share with me which host the test was running on, what CPU is
exposed and so on?
(Specifically, if it's using host pass-through I think we are OK; if we are
using a custom CPU type, not so sure.)
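If it helps, something along these lines against the v4 Python SDK should
show what the host actually reports (illustrative sketch - the engine URL,
credentials and host name are placeholders, not taken from the failing run):

import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example/ovirt-engine/api',  # placeholder
    username='admin@internal',
    password='secret',
    insecure=True,
)
hosts_service = connection.system_service().hosts_service()
host = hosts_service.list(search='name=lago-basic-suite-master-host-0')[0]
numa_nodes = hosts_service.host_service(host.id).numa_nodes_service().list()
print('%s reports %d NUMA node(s)' % (host.name, len(numa_nodes)))
for node in numa_nodes:
    print('  index=%s memory=%s' % (node.index, node.memory))
connection.close()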
Y.

On Thu, Feb 8, 2018 at 1:11 PM, Yaniv Kaul <yk...@redhat.com> wrote:
>
>>
>>
>> On Thu, Feb 8, 2018 at 12:46 PM, Dafna Ron <d...@redhat.com> wrote:
>>
>>> Hi,
>>>
>>> We have a failure on 002_bootstrap.get_host_numa_nodes in basic suite
>>> and  004_basic_sanity.vm_run on upgrade from release suite.
>>>
>>
>> The numa test is a new test, so would actually suspect the test.
>> Did it pass earlier? I have some suspicion it fails from time to time...
>> Y.
>>
>>
>>>
>>> Eli, can you please take a look? it seems to be the same reason - host
>>> is down or not suitable
>>>
>>>
>>>
>>>
>>>
>>> *Link and headline of suspected patches: Link to Job:Link to all
>>> logs:(Relevant) error snippet from the log: *
>>> *basic suite *:
>>>
>>> 2018-02-08 00:14:23,852-05 DEBUG [org.ovirt.engine.core.dal.dbb
>>> roker.PostgresDbEngineDialect$PostgresSimpleJdbcCall] (ServerService
>>> Thread Pool -- 41) [] SqlCall for procedure [GetAllFromVdcOption] compiled
>>> 2018-02-08 00:14:23,864-05 WARN  
>>> [org.ovirt.engine.core.utils.ConfigUtilsBase]
>>> (ServerService Thread Pool -- 41) [] Could not find enum value for option:
>>> 'ConfigDir'
>>> 2018-02-08 00:14:23,869-05 WARN  
>>> [org.ovirt.engine.core.utils.ConfigUtilsBase]
>>> (ServerService Thread Pool -- 41) [] Could not find enum value for option:
>>> 'DbJustRestored'
>>> 2018-02-08 00:14:23,871-05 WARN  
>>> [org.ovirt.engine.core.utils.ConfigUtilsBase]
>>> (ServerService Thread Pool -- 41) [] Could not find enum value for option:
>>> 'ConfigDir'
>>> 2018-02-08 00:14:23,881-05 WARN  
>>> [org.ovirt.engine.core.utils.ConfigUtilsBase]
>>> (ServerService Thread Pool -- 41) [] Could not find enum value for option:
>>> 'DbJustRestored'
>>> 2018-02-08 00:14:23,882-05 WARN  
>>> [org.ovirt.engine.core.utils.ConfigUtilsBase]
>>> (ServerService Thread Pool -- 41) [] Could not find enum value for option:
>>> 'ConfigDir'
>>> 2018-02-08 00:14:23,915-05 WARN  
>>> [org.ovirt.engine.core.utils.ConfigUtilsBase]
>>> (ServerService Thread Pool -- 41) [] Could not find enum value for option:
>>> 'DbJustRestored'
>>> 2018-02-08 00:14:23,919-05 INFO  
>>> [org.ovirt.engine.core.utils.osinfo.OsInfoPreferencesLoader]
>>> (ServerService Thread Pool -- 41) [] Loading file
>>> '/etc/ovirt-engine/osinfo.conf.d/00-defaults.properties'
>>> 2018-02-08 00:14:23,971-05 DEBUG 
>>> [org.ovirt.engine.core.utils.OsRepositoryImpl]
>>> (ServerService Thread Pool -- 41) [] Osinfo Repository:
>>> backwardCompatibility
>>> OtherLinux=5
>>> Windows2008R2x64=17
>>> Windows2003=3
>>> Windows2003x64=10
>>> RHEL3x64=15
>>> Windows8x64=21
>>> Windows8=20
>>> Windows7=11
>>> Windows7x64=12
>>> Windows2008=4
>>> RHEL4=8
>>> RHEL6x64=19
>>> RHEL5x64=13
>>> Windows2012x64=23
>>> WindowsXP=1
>>> RHEL4x64=14
>>> Unassigned=0
>>> Windows2008x64=16
>>> RHEL6=18
>>> RHEL5=7
>>> Other=0
>>> REHL3=9
>>> emptyNode
>>> os.debian_7.derivedFrom
>>> value=ubuntu_12_04
>>> os.debian_7.id
>>> value=1300
>>> os.debian_7.name
>>> value=Debian 7
>>> os.freebsd.bus
>>> value=32
>>> os.freebsd.derivedFrom
>>> value=other
>>> os.freebsd.id
>>> value=1500
>>> os.freebsd.name
>>> value=FreeBSD 9.2
>>> os.freebsdx64.bus
>>> value=64
>>> os.free

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master (vdsm) ] [ 08-02-2018 ] [ 002_bootstrap.get_host_numa_nodes+ 004_basic_sanity.vm_run]

2018-02-08 Thread Yaniv Kaul
On Thu, Feb 8, 2018 at 12:46 PM, Dafna Ron  wrote:

> Hi,
>
> We have a failure on 002_bootstrap.get_host_numa_nodes in basic suite
> and  004_basic_sanity.vm_run on upgrade from release suite.
>

The numa test is a new test, so would actually suspect the test.
Did it pass earlier? I have some suspicion it fails from time to time...
Y.


>
> Eli, can you please take a look? it seems to be the same reason - host is
> down or not suitable
>
>
>
>
>
> *Link and headline of suspected patches: Link to Job:Link to all
> logs:(Relevant) error snippet from the log: *
> *basic suite *:
>
> 2018-02-08 00:14:23,852-05 DEBUG [org.ovirt.engine.core.dal.dbbroker.
> PostgresDbEngineDialect$PostgresSimpleJdbcCall] (ServerService Thread
> Pool -- 41) [] SqlCall for procedure [GetAllFromVdcOption] compiled
> 2018-02-08 00:14:23,864-05 WARN  [org.ovirt.engine.core.utils.ConfigUtilsBase]
> (ServerService Thread Pool -- 41) [] Could not find enum value for option:
> 'ConfigDir'
> 2018-02-08 00:14:23,869-05 WARN  [org.ovirt.engine.core.utils.ConfigUtilsBase]
> (ServerService Thread Pool -- 41) [] Could not find enum value for option:
> 'DbJustRestored'
> 2018-02-08 00:14:23,871-05 WARN  [org.ovirt.engine.core.utils.ConfigUtilsBase]
> (ServerService Thread Pool -- 41) [] Could not find enum value for option:
> 'ConfigDir'
> 2018-02-08 00:14:23,881-05 WARN  [org.ovirt.engine.core.utils.ConfigUtilsBase]
> (ServerService Thread Pool -- 41) [] Could not find enum value for option:
> 'DbJustRestored'
> 2018-02-08 00:14:23,882-05 WARN  [org.ovirt.engine.core.utils.ConfigUtilsBase]
> (ServerService Thread Pool -- 41) [] Could not find enum value for option:
> 'ConfigDir'
> 2018-02-08 00:14:23,915-05 WARN  [org.ovirt.engine.core.utils.ConfigUtilsBase]
> (ServerService Thread Pool -- 41) [] Could not find enum value for option:
> 'DbJustRestored'
> 2018-02-08 00:14:23,919-05 INFO  
> [org.ovirt.engine.core.utils.osinfo.OsInfoPreferencesLoader]
> (ServerService Thread Pool -- 41) [] Loading file '/etc/ovirt-engine/osinfo.
> conf.d/00-defaults.properties'
> 2018-02-08 00:14:23,971-05 DEBUG 
> [org.ovirt.engine.core.utils.OsRepositoryImpl]
> (ServerService Thread Pool -- 41) [] Osinfo Repository:
> backwardCompatibility
> OtherLinux=5
> Windows2008R2x64=17
> Windows2003=3
> Windows2003x64=10
> RHEL3x64=15
> Windows8x64=21
> Windows8=20
> Windows7=11
> Windows7x64=12
> Windows2008=4
> RHEL4=8
> RHEL6x64=19
> RHEL5x64=13
> Windows2012x64=23
> WindowsXP=1
> RHEL4x64=14
> Unassigned=0
> Windows2008x64=16
> RHEL6=18
> RHEL5=7
> Other=0
> REHL3=9
> emptyNode
> os.debian_7.derivedFrom
> value=ubuntu_12_04
> os.debian_7.id
> value=1300
> os.debian_7.name
> value=Debian 7
> os.freebsd.bus
> value=32
> os.freebsd.derivedFrom
> value=other
> os.freebsd.id
> value=1500
> os.freebsd.name
> value=FreeBSD 9.2
> os.freebsdx64.bus
> value=64
> os.freebsdx64.derivedFrom
> :
>
> upgrade suite:
> 2018-02-08 00:12:07,451-05 WARN  [org.ovirt.engine.core.utils.ConfigUtilsBase]
> (ServerService Thread Pool -- 51) [] Could not find enum value for option:
> 'DbJustRestored'
> 2018-02-08 00:12:07,451-05 WARN  [org.ovirt.engine.core.utils.ConfigUtilsBase]
> (ServerService Thread Pool -- 51) [] Could not find enum value for option:
> 'ConfigDir'
> 2018-02-08 00:12:07,452-05 INFO  
> [org.ovirt.engine.core.utils.osinfo.OsInfoPreferencesLoader]
> (ServerService Thread Pool -- 51) [] Loading file '/etc/ovirt-engine/osinfo.
> conf.d/00-defaults.properties'
> 2018-02-08 00:12:07,505-05 DEBUG 
> [org.ovirt.engine.core.utils.OsRepositoryImpl]
> (ServerService Thread Pool -- 51) [] Osinfo Repository:
> backwardCompatibility
> OtherLinux=5
> Windows2008R2x64=17
> Windows2003=3
> Windows2003x64=10
> RHEL3x64=15
> Windows8x64=21
> Windows8=20
> Windows7=11
> Windows7x64=12
> Windows2008=4
> RHEL4=8
> RHEL6x64=19
> RHEL5x64=13
> Windows2012x64=23
> WindowsXP=1
> RHEL4x64=14
> Unassigned=0
> Windows2008x64=16
> RHEL6=18
> RHEL5=7
> Other=0
> REHL3=9
> emptyNode
> os.debian_7.derivedFrom
> value=ubuntu_12_04
> os.debian_7.id
> value=1300
> os.debian_7.name
> value=Debian 7
> os.freebsd.bus
> value=32
> os.freebsd.derivedFrom
> value=other
> os.freebsd.id
> value=1500
> os.freebsd.name
> value=FreeBSD 9.2
> os.freebsdx64.bus
> value=64
> os.freebsdx64.derivedFrom
> value=freebsd
> os.freebsdx64.id
> value=1501
> os.freebsdx64.name
> value=FreeBSD 9.2 x64
> os.other.bus
> value=64
> os.other.cpu.hotplugSupport
> value=true
> os.other.cpu.hotunplugSupport
>  

Re: [ovirt-devel] [vdsm] stable branch ovirt-4.2 created

2018-02-08 Thread Yaniv Kaul
On Thu, Feb 8, 2018 at 9:17 AM, Dan Kenigsberg  wrote:

> We still do not have vdsm-4.30 in http://plain.resources.ovirt.
> org/pub/ovirt-master-snapshot/rpm/el7/noarch/ . Since the current version
> there does not support 4.3 cluster level, we're constantly getting
>
> (EE-ManagedThreadFactory-engineScheduled-Thread-2) [239bce23] EVENT_ID:
> VDS_CLUSTER_VERSION_NOT_SUPPORTED(154), Host lago-network-suite-master-host-0
> is compatible with versions (3.6,4.0,4.1,4.2) and cannot join Cluster
> Default which is set to version 4.3.
>
> assistance to straighten this up is most welcome.
>

Wouldn't a patch that explicitly sets the cluster level to 4.2 solve it?
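Something along these lines in the suite's bootstrap should do it (untested
sketch against the v4 Python SDK - the engine URL and credentials are
placeholders):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example/ovirt-engine/api',  # placeholder
    username='admin@internal',
    password='secret',
    insecure=True,
)
clusters_service = connection.system_service().clusters_service()
cluster = clusters_service.list(search='name=Default')[0]
# Pin the cluster to 4.2 until vdsm-4.30 builds land in the snapshot repo.
clusters_service.cluster_service(cluster.id).update(
    types.Cluster(version=types.Version(major=4, minor=2)),
)
connection.close()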
Y.


>
> On Wed, Feb 7, 2018 at 2:06 PM, Sandro Bonazzola 
> wrote:
>
>>
>>
>> 2018-02-07 12:58 GMT+01:00 Dan Kenigsberg :
>>
>>> On Wed, Feb 7, 2018 at 10:14 AM, Francesco Romani 
>>> wrote:
>>> > On 02/07/2018 08:46 AM, Dan Kenigsberg wrote:
>>> >> On Tue, Feb 6, 2018 at 10:28 PM, Francesco Romani 
>>> wrote:
>>> >>> Hi all,
>>> >>>
>>> >>>
>>> >>> With the help of Sandro (many thanks @sbonazzo !), we created minutes
>>> >>> ago the ovirt-4.2 stable branch:
>>> >>>
>>> >>>
>>> >>> Steps performed:
>>> >>>
>>> >>> 1. merged https://gerrit.ovirt.org/#/c/87070/
>>> >>>
>>> >>> 2. branched out ovirt-4.2 from git master
>>> >>>
>>> >>> 3. merged https://gerrit.ovirt.org/#/c/87181/ to add support for
>>> 4.3 level
>>> >>>
>>> >>> 4. createed and pushed the tag v4.30.0 from master, to make sure the
>>> >>> version number is greater of the stable versions, and to (somehow :))
>>> >>> align with oVirt versioning
>>> >>>
>>> >>> 5. tested make dist/make rpm on both new branch ovirt-4.2 and master,
>>> >>> both looks good and use the right version
>>> >>>
>>> >>>
>>> >>> Maintainers, please check it looks right for you before merging any
>>> new
>>> >>> patch to master branch.
>>> >>>
>>> >>>
>>> >>> Please let me know about any issue!
>>> >> Thank you Francesco (and Sandro).
>>> >>
>>> >> Any idea why http://plain.resources.ovirt.o
>>> rg/pub/ovirt-4.2-snapshot/rpm/el7/noarch/
>>> >> still does not hold any vdsm-4.20 , and
>>> >> http://plain.resources.ovirt.org/pub/ovirt-master-snapshot/r
>>> pm/el7/noarch/
>>> >> does not have the new vdsm-4.30 ?
>>> >>
>>> >> ?
>>> >
>>> > Uhm, maybe related to CQ (Change Queue), because git state looks ok,
>>> one
>>> > data point:
>>> > http://jenkins.ovirt.org/job/vdsm_master_build-artifacts-on-
>>> demand-el7-x86_64/772/artifact/exported-artifacts/
>>> > built from this patch https://gerrit.ovirt.org/#/c/87213/
>>> >
>>> > in turn based on top of current master
>>>
>>> Maybe Barak knows? Making GQ tick is the intention of the jenkins
>>> patch, isn't it?
>>>
>>
>> I think it may be caused by a failure trying to build on fcraw for s390x.
>> I removed the failing jobs until we fix the issue on jenkins side.
>>
>>
>>
>>
>>> ___
>>> Devel mailing list
>>> Devel@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/devel
>>>
>>
>>
>>
>> --
>>
>> SANDRO BONAZZOLA
>>
>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>>
>> Red Hat EMEA 
>> 
>> TRIED. TESTED. TRUSTED. 
>>
>>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [OST][HC] HE fails to deploy

2018-02-05 Thread Yaniv Kaul
On Feb 6, 2018 7:53 AM, "Sahina Bose"  wrote:



On Mon, Feb 5, 2018 at 2:59 PM, Sahina Bose  wrote:

> Hi all,
>
> I see that HE fails to deploy after a task in the ansible playbook
> create_target_vm:
>
> TASK [Wait for the engine to come up on the target VM]",
>
> with Error engine state=EngineUnexpectedlyDown
>
> Is this a known issue that you are working on?
>
>
This does seem like a race, because I see that the HC suite again failed
with the same error after a successful run yesterday.
Do I need to open a bug or do we have one tracking this already?


Please open a bug.
I kind of remember we've had some (infra?) issue where Engine timed out on
HE setup from time to time. Not sure it was solved.
Please attach server.log and engine.log and let's have a look.

Y.



> thanks!
>
> sahina
>
>
> Full HE setup log at 
> http://jenkins.ovirt.org/job/ovirt-system-tests_master_check-patch-el7-x86_64/3875/artifact/exported-artifacts/hc-basic-suite-master__logs/test_logs/hc-basic-suite-master/post-002_bootstrap.py/lago-hc-basic-suite-master-host0/_var_log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180205033809-ybwdxp.log
>
>

___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Error while setting up disk in ovirt-engine.

2018-01-29 Thread Yaniv Kaul
On Jan 29, 2018 12:21 PM, "Avinash Dasoundhi"  wrote:

Hello Everyone

I have set up ovirt-engine for testing purposes, and I am using
ovirt-vdsmfake[1] to provision hosts and VMs on that ovirt-engine. But I am
facing an error while creating a disk.


Depending on your use case, you might want to check ovirt-system-tests
instead.
Y.


I have provisioned 10 hosts; all are up with their own local storage. And
when I try to add a disk in the disk section, an error pops up like "*Error
while executing action Add Disk to VM: General Exception*". I saw the logs,
thought it was some permission error and checked that, but this error still
comes up.

Can somebody provide some details on it?

I am attaching the engine.log with this mail.

Thank You


-- 

AVINASH DASOUNDHI

ASSOCIATE SOFTWARE ENGINEER IN ENG PERF R&D, RHCSA

Red Hat BLR 

IRC - tenstormavi

adaso...@redhat.comM: +91-8653245552


___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Problem when loading images via web interface

2018-01-23 Thread Yaniv Kaul
On Tue, Jan 23, 2018 at 10:39 PM, Dmitry Semenov  wrote:

> While loading disk image (via the web interface) in cluster01 on
> storage01, storage02 - everything is going well.
> While loading disk image (via the web interface) in cluster02 on
> storage03, storage04 - the problem occurs: the image isn't loaded, and the
> process stops at the stage: paused by System (at the same time, loading
> directly through the API works without problems).
>
> screenshot: https://yadi.sk/i/9WtkDlT23Riqxp
>
> Logs are applied (engine.log): https://pastebin.com/54k5j7hC


Can you also share vdsm logs, at least from 01c04x09.unix.local ? It seems
to have failed there.
Y.


>
>
> image size: ~1.3 GB
>
> my scheme:
>
> data_center_01
>   cluster01
> host01  \
> host02  - storage01, storage02
> host03  /
>
>   cluster02
> host04  \
> host05  - storage03, storage04
> host06  /
>
> HostedEngine in cluster01
> oVirt: Version 4.2.0.2-1.el7.centos
>
> --
> Best regards,
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt hc master ] [ 19-01-2018 ] [ 002_bootstrap.add_hosts ]

2018-01-19 Thread Yaniv Kaul
On Fri, Jan 19, 2018 at 5:06 PM, Dafna Ron  wrote:

> Hi,
>
> we are failing hc master basic suite on test: 002_bootstrap.add_hosts
>
>
> Link and headline of suspected patches:
> Link to Job: http://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-master/163/
> Link to all logs: http://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-master/163/artifact/
> (Relevant) error snippet from the log:
> 2018-01-18 22:30:56,141-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (VdsDeploy) [3e58f8ce] EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511), An error has occurred during installation of Host lago_basic_suite_hc_host0: Failed to execute stage 'Closing up': 'Plugin' object has no attribute 'exist'
>
>

Dafna,
The relevant log is[1], which shows:

2018-01-18 22:49:25,385-0500 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:921 execute-output: ('/usr/bin/systemctl', 'start',
'glusterd.service') stdout:
2018-01-18 22:49:25,385-0500 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:926 execute-output: ('/usr/bin/systemctl', 'start',
'glusterd.service') stderr:
2018-01-18 22:49:25,385-0500 DEBUG otopi.context context._executeMethod:143
method exception
Traceback (most recent call last):
  File "/tmp/ovirt-xJomKMYufQ/pythonlib/otopi/context.py", line 133, in
_executeMethod
method['method']()
  File
"/tmp/ovirt-xJomKMYufQ/otopi-plugins/ovirt-host-deploy/gluster/packages.py",
line 95, in _closeup
if self.services.exist('glustereventsd'):
AttributeError: 'Plugin' object has no attribute 'exist'
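
Looks like a plain typo in the new glustereventsd handling - the services
helper has no 'exist' attribute. A minimal standalone sketch of the suspected
one-liner (method names assumed from the otopi services interface; this is
not the merged ovirt-host-deploy patch):

class FakeServices(object):
    # Stands in for the otopi services helper; assumes it exposes exists()
    # and state(), as other host-deploy plugins use.
    def exists(self, name):
        return name == 'glustereventsd'

    def state(self, name, state):
        print('setting %s state to %s' % (name, state))


class Closeup(object):
    services = FakeServices()

    def _closeup(self):
        # was: if self.services.exist('glustereventsd'):  -> AttributeError
        if self.services.exists('glustereventsd'):
            self.services.state('glustereventsd', True)


Closeup()._closeup()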


Y.

[1]
http://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-master/163/artifact/exported-artifacts/test_logs/hc-basic-suite-master/post-002_bootstrap.py/lago-hc-basic-suite-master-engine/_var_log/ovirt-engine/host-deploy/ovirt-host-deploy-20180118224925-192.168.200.4-7bbdac84.log


>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] OST Regression in add cluster (IBRS related)

2018-01-12 Thread Yaniv Kaul
On Fri, Jan 12, 2018 at 6:49 PM, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:

>
>
> On 12 Jan 2018, at 17:32, Yaniv Kaul <yk...@redhat.com> wrote:
>
>
>
> On Fri, Jan 12, 2018 at 1:05 PM, Michal Skrivanek <michal.skrivanek@
> redhat.com> wrote:
>
>>
>>
>> On 12 Jan 2018, at 08:32, Tomas Jelinek <tjeli...@redhat.com> wrote:
>>
>>
>>
>> On Fri, Jan 12, 2018 at 8:18 AM, Yaniv Kaul <yk...@redhat.com> wrote:
>>
>>>
>>>
>>> On Fri, Jan 12, 2018 at 9:06 AM, Yaniv Kaul <yk...@redhat.com> wrote:
>>>
>>>> See[1] - do we need to update Lago / Lago OST plugin?
>>>>
>>>
>>> Something like https://github.com/lago-project/lago-ost-plugin/pull/31 
>>> perhaps
>>> (not tested, don't have the HW).
>>>
>>
>> yes, seems like that should do the trick.
>>
>>
>> sure, though, that list is also difficult to maintain
>> e.g. IvyBridge is not an oVirt supported model, there’s no “Skylake” model
>>
>> Nadav, what’s the exact purpose of that list, and can it be eliminated
>> somehow?
>>
>
> It's to match, as possible, between the host CPU (which is passed to L1)
> so it'll match oVirt’s.
>
>
> getting it from "virsh capabilities" on the host would match it a bit
> better. It would be enough to just make the L1 host report (via fake caps
> hook if needed) the same model_X in getVdsCapabilities as the L0
>

That used to be my initial implementation. I don't recall why it was
changed.
Y.


>
> It's not that difficult to maintain. We add new CPUs once-twice a year…?
>
>
> yes, not often
>
> Y.
>
>
>>
>> Thanks,
>> michal
>>
>>
>>
>>
>>> Y.
>>>
>>>
>>>> Error Message
>>>>
>>>> Unsupported CPU model: Haswell-noTSX-IBRS. Supported models: 
>>>> IvyBridge,Westmere,Skylake,Penryn,Haswell,Broadwell,Nehalem,Skylake-Client,Broadwell-noTSX,Conroe,SandyBridge,Haswell-noTSX
>>>>
>>>> Stacktrace
>>>>
>>>> Traceback (most recent call last):
>>>>   File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
>>>> testMethod()
>>>>   File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in 
>>>> runTest
>>>> self.test(*self.arg)
>>>>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 129, 
>>>> in wrapped_test
>>>> test()
>>>>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 59, 
>>>> in wrapper
>>>> return func(get_test_prefix(), *args, **kwargs)
>>>>   File 
>>>> "/home/jenkins/workspace/ovirt-system-tests_master_check-patch-el7-x86_64/ovirt-system-tests/basic-suite-master/test-scenarios/002_bootstrap.py",
>>>>  line 277, in add_cluster
>>>> add_cluster_4(prefix)
>>>>   File 
>>>> "/home/jenkins/workspace/ovirt-system-tests_master_check-patch-el7-x86_64/ovirt-system-tests/basic-suite-master/test-scenarios/002_bootstrap.py",
>>>>  line 305, in add_cluster_4
>>>> cpu_family = prefix.virt_env.get_ovirt_cpu_family()
>>>>   File "/usr/lib/python2.7/site-packages/ovirtlago/virt.py", line 151, in 
>>>> get_ovirt_cpu_family
>>>> ','.join(cpu_map[host.cpu_vendor].iterkeys())
>>>> RuntimeError: Unsupported CPU model: Haswell-noTSX-IBRS. Supported models: 
>>>> IvyBridge,Westmere,Skylake,Penryn,Haswell,Broadwell,Nehalem,Skylake-Client,Broadwell-noTSX,Conroe,SandyBridge,Haswell-noTSX
>>>>
>>>>
>>>>
>>>> Y.
>>>>
>>>> [1] 
>>>> http://jenkins.ovirt.org/job/ovirt-system-tests_master_check-patch-el7-x86_64/3498/testReport/junit/(root)/002_bootstrap/add_cluster/
>>>>
>>>>
>>>
>>> ___
>>> Devel mailing list
>>> Devel@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/devel
>>>
>>
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>>
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] OST Regression in add cluster (IBRS related)

2018-01-12 Thread Yaniv Kaul
On Fri, Jan 12, 2018 at 1:05 PM, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:

>
>
> On 12 Jan 2018, at 08:32, Tomas Jelinek <tjeli...@redhat.com> wrote:
>
>
>
> On Fri, Jan 12, 2018 at 8:18 AM, Yaniv Kaul <yk...@redhat.com> wrote:
>
>>
>>
>> On Fri, Jan 12, 2018 at 9:06 AM, Yaniv Kaul <yk...@redhat.com> wrote:
>>
>>> See[1] - do we need to update Lago / Lago OST plugin?
>>>
>>
>> Something like https://github.com/lago-project/lago-ost-plugin/pull/31
>> perhaps (not tested, don't have the HW).
>>
>
> yes, seems like that should do the trick.
>
>
> sure, though, that list is also difficult to maintain
> e.g. IvyBridge is not an oVirt supported model, there’s no “Skylake” model
>
> Nadav, what’s the exact purpose of that list, and can it be eliminated
> somehow?
>

It's there to match the host CPU (which is passed to L1) as closely as
possible to a model oVirt knows.
It's not that difficult to maintain. We add new CPUs once or twice a year...?
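Alternatively, the plugin could normalize microcode-suffixed model names
before the cpu_map lookup, so Haswell-noTSX-IBRS and friends resolve to a
family oVirt already knows. A rough sketch of the idea (illustrative only,
not the actual lago-ost-plugin change; the suffix list is an assumption):

SUFFIXES = ('-IBRS', '-IBPB')


def normalize_cpu_model(model):
    # e.g. Haswell-noTSX-IBRS -> Haswell-noTSX
    for suffix in SUFFIXES:
        if model.endswith(suffix):
            return model[:-len(suffix)]
    return model


def get_ovirt_cpu_family(cpu_map, vendor, host_model):
    families = cpu_map[vendor]
    model = normalize_cpu_model(host_model)
    if model not in families:
        raise RuntimeError(
            'Unsupported CPU model: %s. Supported models: %s'
            % (host_model, ','.join(families)))
    return families[model]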
Y.


>
> Thanks,
> michal
>
>
>
>
>> Y.
>>
>>
>>> Error Message
>>>
>>> Unsupported CPU model: Haswell-noTSX-IBRS. Supported models: 
>>> IvyBridge,Westmere,Skylake,Penryn,Haswell,Broadwell,Nehalem,Skylake-Client,Broadwell-noTSX,Conroe,SandyBridge,Haswell-noTSX
>>>
>>> Stacktrace
>>>
>>> Traceback (most recent call last):
>>>   File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
>>> testMethod()
>>>   File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
>>> self.test(*self.arg)
>>>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 129, 
>>> in wrapped_test
>>> test()
>>>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 59, in 
>>> wrapper
>>> return func(get_test_prefix(), *args, **kwargs)
>>>   File 
>>> "/home/jenkins/workspace/ovirt-system-tests_master_check-patch-el7-x86_64/ovirt-system-tests/basic-suite-master/test-scenarios/002_bootstrap.py",
>>>  line 277, in add_cluster
>>> add_cluster_4(prefix)
>>>   File 
>>> "/home/jenkins/workspace/ovirt-system-tests_master_check-patch-el7-x86_64/ovirt-system-tests/basic-suite-master/test-scenarios/002_bootstrap.py",
>>>  line 305, in add_cluster_4
>>> cpu_family = prefix.virt_env.get_ovirt_cpu_family()
>>>   File "/usr/lib/python2.7/site-packages/ovirtlago/virt.py", line 151, in 
>>> get_ovirt_cpu_family
>>> ','.join(cpu_map[host.cpu_vendor].iterkeys())
>>> RuntimeError: Unsupported CPU model: Haswell-noTSX-IBRS. Supported models: 
>>> IvyBridge,Westmere,Skylake,Penryn,Haswell,Broadwell,Nehalem,Skylake-Client,Broadwell-noTSX,Conroe,SandyBridge,Haswell-noTSX
>>>
>>>
>>>
>>> Y.
>>>
>>> [1] 
>>> http://jenkins.ovirt.org/job/ovirt-system-tests_master_check-patch-el7-x86_64/3498/testReport/junit/(root)/002_bootstrap/add_cluster/
>>>
>>>
>>
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
>
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] OST Regression in add cluster (IBRS related)

2018-01-11 Thread Yaniv Kaul
On Fri, Jan 12, 2018 at 9:06 AM, Yaniv Kaul <yk...@redhat.com> wrote:

> See[1] - do we need to update Lago / Lago OST plugin?
>

Something like https://github.com/lago-project/lago-ost-plugin/pull/31
perhaps (not tested, don't have the HW).
Y.


> Error Message
>
> Unsupported CPU model: Haswell-noTSX-IBRS. Supported models: 
> IvyBridge,Westmere,Skylake,Penryn,Haswell,Broadwell,Nehalem,Skylake-Client,Broadwell-noTSX,Conroe,SandyBridge,Haswell-noTSX
>
> Stacktrace
>
> Traceback (most recent call last):
>   File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
> testMethod()
>   File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
> self.test(*self.arg)
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 129, in 
> wrapped_test
> test()
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 59, in 
> wrapper
> return func(get_test_prefix(), *args, **kwargs)
>   File 
> "/home/jenkins/workspace/ovirt-system-tests_master_check-patch-el7-x86_64/ovirt-system-tests/basic-suite-master/test-scenarios/002_bootstrap.py",
>  line 277, in add_cluster
> add_cluster_4(prefix)
>   File 
> "/home/jenkins/workspace/ovirt-system-tests_master_check-patch-el7-x86_64/ovirt-system-tests/basic-suite-master/test-scenarios/002_bootstrap.py",
>  line 305, in add_cluster_4
> cpu_family = prefix.virt_env.get_ovirt_cpu_family()
>   File "/usr/lib/python2.7/site-packages/ovirtlago/virt.py", line 151, in 
> get_ovirt_cpu_family
> ','.join(cpu_map[host.cpu_vendor].iterkeys())
> RuntimeError: Unsupported CPU model: Haswell-noTSX-IBRS. Supported models: 
> IvyBridge,Westmere,Skylake,Penryn,Haswell,Broadwell,Nehalem,Skylake-Client,Broadwell-noTSX,Conroe,SandyBridge,Haswell-noTSX
>
>
>
> Y.
>
> [1] 
> http://jenkins.ovirt.org/job/ovirt-system-tests_master_check-patch-el7-x86_64/3498/testReport/junit/(root)/002_bootstrap/add_cluster/
>
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] OST Regression in add cluster (IBRS related)

2018-01-11 Thread Yaniv Kaul
See[1] - do we need to update Lago / Lago OST plugin?
Error Message

Unsupported CPU model: Haswell-noTSX-IBRS. Supported models:
IvyBridge,Westmere,Skylake,Penryn,Haswell,Broadwell,Nehalem,Skylake-Client,Broadwell-noTSX,Conroe,SandyBridge,Haswell-noTSX

Stacktrace

Traceback (most recent call last):
  File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
testMethod()
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
129, in wrapped_test
test()
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
59, in wrapper
return func(get_test_prefix(), *args, **kwargs)
  File 
"/home/jenkins/workspace/ovirt-system-tests_master_check-patch-el7-x86_64/ovirt-system-tests/basic-suite-master/test-scenarios/002_bootstrap.py",
line 277, in add_cluster
add_cluster_4(prefix)
  File 
"/home/jenkins/workspace/ovirt-system-tests_master_check-patch-el7-x86_64/ovirt-system-tests/basic-suite-master/test-scenarios/002_bootstrap.py",
line 305, in add_cluster_4
cpu_family = prefix.virt_env.get_ovirt_cpu_family()
  File "/usr/lib/python2.7/site-packages/ovirtlago/virt.py", line 151,
in get_ovirt_cpu_family
','.join(cpu_map[host.cpu_vendor].iterkeys())
RuntimeError: Unsupported CPU model: Haswell-noTSX-IBRS. Supported
models: 
IvyBridge,Westmere,Skylake,Penryn,Haswell,Broadwell,Nehalem,Skylake-Client,Broadwell-noTSX,Conroe,SandyBridge,Haswell-noTSX



Y.

[1] 
http://jenkins.ovirt.org/job/ovirt-system-tests_master_check-patch-el7-x86_64/3498/testReport/junit/(root)/002_bootstrap/add_cluster/
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 24/12/2017 ] [use_ovn_provider]

2017-12-29 Thread Yaniv Kaul
On Fri, Dec 29, 2017 at 2:21 PM, Dan Kenigsberg <dan...@redhat.com> wrote:

> top posting is evil.
>
> On Fri, Dec 29, 2017 at 1:00 PM, Marcin Mirecki <mmire...@redhat.com>
> wrote:
> >
> > On Thu, Dec 28, 2017 at 11:48 PM, Yaniv Kaul <yk...@redhat.com> wrote:
> >>
> >>
> >>
> >> On Fri, Dec 29, 2017 at 12:26 AM, Barak Korren <bkor...@redhat.com>
> wrote:
> >>>
> >>> On 29 December 2017 at 00:22, Barak Korren <bkor...@redhat.com> wrote:
> >>> > On 28 December 2017 at 20:02, Dan Kenigsberg <dan...@redhat.com>
> wrote:
> >>> >> Yet
> >>> >> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4559/
> >>> >> (which is the gating job for https://gerrit.ovirt.org/#/c/85797/2 )
> >>> >> still fails.
> >>> >> Could you look into why, Marcin?
> >>> >> The failure seems unrelated to ovn, as it is about a *host* loosing
> >>> >> connectivity. But it reproduces too much, so we need to get to the
> >>> >> bottom of it.
> >>> >>
> >>> >
> >>> > Re sending the change through the gate yielded a different error:
> >>> > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4563/
> >>> >
> >>> > If this is still unrelated, we need to think seriously what is
> raising
> >>> > this large amount of unrelated failures. We cannot do any accurate
> >>> > reporting when failures are sporadic.
> >>> >
> >>>
> >>> And here is yet another host connectivity issue failing a test for a
> >>> change that should have no effect whatsoever (its a tox patch for
> >>> vdsm):
> >>>
> >>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4565/
> >>
> >>
> >> I've added a fair number of changes this week. I doubt they are related,
> >> but the one that stands out
> >> is the addition of a fence-agent to one of the hosts.
> >> https://gerrit.ovirt.org/#/c/85817/ disables this specific test, just
> in
> >> case.
> >>
> >> I don't think it causes an issue, but it's the only one looking at the
> git
> >> log I can suspect.
>
> > Trying to rebuild Barak's build resulted in another fail:
> > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4571/
> > (with the same problem as Dan's build)
> >
> > Engine log contains a few of "IOException: Broken pipe"
> > which seem to correspond to a vdsm restart: "[vds] Exiting (vdsmd:170)"
> > yet looking at my local successful run, I see the same issues in the log.
> > I don't see any other obvious reasons for the problem so far.
>
>
> This actually points back to ykaul's fencing patch. And indeed,
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-
> tester/4571/artifact/exported-artifacts/basic-suit-master-
> el7/test_logs/basic-suite-master/post-005_network_by_
> label.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log
> has
>
> 2017-12-29 05:26:07,712-05 DEBUG
> [org.ovirt.engine.core.uutils.ssh.SSHClient]
> (EE-ManagedThreadFactory-engine-Thread-417) [1a4f9963] Executed:
> '/usr/bin/vdsm-tool service-restart vdsmd'
>
> which means that Engine decided that it wants to kill vdsm. There are
> multiple communication errors prior to the soft fencing, but maybe
> waiting a bit longer would have kept the host alive.
>

Note that there's a test called vdsm recovery, where we actually stop and
start VDSM - perhaps it happened there?
Anyway, I've disabled the test that adds fencing. I don't think this is the
cause, but let's see.
Y.
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 24/12/2017 ] [use_ovn_provider]

2017-12-28 Thread Yaniv Kaul
On Fri, Dec 29, 2017 at 12:26 AM, Barak Korren  wrote:

> On 29 December 2017 at 00:22, Barak Korren  wrote:
> > On 28 December 2017 at 20:02, Dan Kenigsberg  wrote:
> >> Yet http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4559/
> >> (which is the gating job for https://gerrit.ovirt.org/#/c/85797/2 )
> >> still fails.
> >> Could you look into why, Marcin?
> >> The failure seems unrelated to ovn, as it is about a *host* losing
> >> connectivity. But it reproduces too much, so we need to get to the
> >> bottom of it.
> >>
> >
> > Re sending the change through the gate yielded a different error:
> > http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4563/
> >
> > If this is still unrelated, we need to think seriously what is raising
> > this large amount of unrelated failures. We cannot do any accurate
> > reporting when failures are sporadic.
> >
>
> And here is yet another host connectivity issue failing a test for a
> change that should have no effect whatsoever (its a tox patch for
> vdsm):
>
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4565/


I've added a fair number of changes this week. I doubt they are related,
but the one that stands out
is the addition of a fence-agent to one of the hosts.
https://gerrit.ovirt.org/#/c/85817/ disables this specific test, just in
case.

I don't think it causes an issue, but looking at the git log, it's the only
one I can suspect.
Y.


>
>
>
>
> --
> Barak Korren
> RHV DevOps team , RHCE, RHCi
> Red Hat EMEA
> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] vdsm: AttributeError: 'NoneType' object has no attribute 'wrap_command'

2017-12-28 Thread Yaniv Kaul
Just saw this on[1]:
2017-12-28 10:09:24,980-0500 INFO  (MainThread) [vdsm.api] FINISH
prepareForShutdown return=None from=internal,
task_id=1a83a4d3-43e3-4833-883b-dfd2fe4d76be (api:52)
2017-12-28 10:09:24,980-0500 INFO  (MainThread) [vds] Stopping threads
(vdsmd:159)
2017-12-28 10:09:24,980-0500 INFO  (MainThread) [vds] Exiting (vdsmd:170)
2017-12-28 10:09:25,011-0500 INFO  (mailbox-hsm)
[storage.MailBox.HsmMailMonitor] HSM_MailboxMonitor - Incoming mail
monitoring thread stopped, clearing outgoing mail (mailbox:511)
2017-12-28 10:09:25,011-0500 INFO  (mailbox-hsm)
[storage.MailBox.HsmMailMonitor] HSM_MailMonitor sending mail to SPM -
['/usr/bin/dd',
'of=/rhev/data-center/9b97451a-0312-4c69-85e2-f86d6d273636/mastersd/dom_md/inbox',
'iflag=fullblock', 'oflag=direct', 'conv=notrunc', 'bs=4096', 'count=1',
'seek=1'] (mailbox:387)
2017-12-28 10:09:25,012-0500 ERROR (mailbox-hsm)
[storage.MailBox.HsmMailMonitor] FINISH thread  failed (concurrent:201)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/common/concurrent.py", line
194, in run
ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/mailbox.py", line
514, in _run
self._sendMail()  # Clear outgoing mailbox
  File "/usr/lib/python2.7/site-packages/vdsm/storage/mailbox.py", line
394, in _sendMail
_mboxExecCmd(self._outCmd, data=self._outgoingMail)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/mailbox.py", line 84,
in _mboxExecCmd
return misc.execCmd(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/commands.py", line 53,
in execCmd
command = cmdutils.wrap_command(command, with_ioclass=ioclass,
AttributeError: 'NoneType' object has no attribute 'wrap_command'
2017-12-28 10:09:28,965-0500 INFO  (MainThread) [vds] (PID: 23232) I am the
actual vdsm 4.20.9-84.git615770f.el7.centos lago-basic-suite-master-host-0
(3.10.0-693.2.2.el7.x86_64) (vdsmd:148)
2017-12-28 10:09:28,966-0500 INFO  (MainThread) [vds] VDSM will run with
cpu affinity: frozenset([1]) (vdsmd:254)
2017-12-28 10:09:28,970-0500 INFO  (MainThread) [storage.HSM] START HSM
init (hsm:366)


[1]
http://jenkins.ovirt.org/job/ovirt-system-tests_master_check-patch-el7-x86_64/3113/artifact/exported-artifacts/basic-suite-master__logs/test_logs/basic-suite-master/post-005_network_by_label.py/lago-basic-suite-master-host-0/_var_log/vdsm/vdsm.log
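
This looks like the classic interpreter-shutdown pattern: the mailbox-hsm
thread is still executing while vdsm is exiting, and by then the module
globals used in commands.py (cmdutils here) have already been cleared to
None, hence the AttributeError. A tiny illustration of the general pattern
that avoids it - stop and join background workers before the process exits
(sketch only, not vdsm code):

import threading

stop = threading.Event()


def flush_outgoing_mail():
    # placeholder for cleanup that relies on module-level helpers
    print('flushing outgoing mail')


def mail_monitor():
    while not stop.is_set():
        stop.wait(0.1)
    # Runs while modules are still intact, because shutdown() below waits
    # for this thread before the interpreter starts tearing modules down.
    flush_outgoing_mail()


monitor = threading.Thread(target=mail_monitor)
monitor.start()


def shutdown():
    stop.set()
    monitor.join()


shutdown()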
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 24/12/2017 ] [use_ovn_provider]

2017-12-28 Thread Yaniv Kaul
On Dec 27, 2017 12:35 PM, "Barak Korren"  wrote:

On 25 December 2017 at 14:14, Dan Kenigsberg  wrote:
> On Mon, Dec 25, 2017 at 2:09 PM, Dominik Holler 
wrote:
>> A helpful hint is in
>>
>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4492/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-098_ovirt_provider_ovn.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log :
>> Caused by: org.jboss.resteasy.spi.ReaderException:
org.codehaus.jackson.map.JsonMappingException: Can not construct instance
of java.util.Calendar from String value '2017-12-27 13:19:51Z': not a valid
representation (error: Can not parse date "2017-12-27 13:19:51Z": not
compatible with any of standard forms ("-MM-dd'T'HH:mm:ss.SSSZ",
"-MM-dd'T'HH:mm:ss.SSS'Z'", "EEE, dd MMM  HH:mm:ss zzz",
"-MM-dd"))
>>  at [Source: org.jboss.resteasy.client.core.BaseClientResponse$
InputStreamWrapper@72c184c5; line: 1, column: 23] (through reference chain:
com.woorea.openstack.keystone.model.Access["token"]->com.
woorea.openstack.keystone.model.Token["expires"])
>>
>>
>> This problem was introduced by
>> https://gerrit.ovirt.org/#/c/85702/
>>
>> I created a fix:
>> https://gerrit.ovirt.org/85734
>
> Thanks for the quick fix.
>
> Is the new format accpetable to other users of the keystone-like API
> (such at the neutron cli)?

It seems the fix patch itself failed as well:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4539/

The failed test is: 006_migrations.prepare_migration_attachments_ipv6


Any updates on this failure?

Y.

It seems engine has lost the ability to talk to the host.

Logs are here:
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4539/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-006_migrations.py/

--
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] User is not authorized to perform this action. - trying to add SSH Public Key for a user

2017-12-28 Thread Yaniv Kaul
I'm following the example[1] and getting an error:
User is not authorized to perform this action.

What permission do I need (I'm using admin@internal) in order to do this?
Engine.log:
2017-12-28 07:25:35,571-05 WARN
[org.ovirt.engine.core.bll.AddUserProfileCommand] (default task-13)
[5af96d04-f81d-4c76-9f06-ee03a12e2f71] Validation of action
'AddUserProfile' failed for user admin@internal-authz. Reasons:
VAR__ACTION__ADD,VAR__TYPE__USER_PROFILE,USER_NOT_AUTHORIZED_TO_PERFORM_ACTION
2017-12-28 07:25:35,572-05 DEBUG
[org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
(default task-13) [5af96d04-f81d-4c76-9f06-ee03a12e2f71] method: runAction,
params: [AddUserProfile,
UserProfileParameters:{commandId='a57680f0-3926-4808-8452-ea8d67b076a3',
user='null', commandType='Unknown'}], timeElapsed: 41ms
2017-12-28 07:25:35,577-05 ERROR
[org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default
task-13) [] Operation Failed: [User is not authorized to perform this
action.]

TIA,
Y.

[1]
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/add_user_ssh_public_key.py
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Memory size (1024MB) cannot exceed maximum memory size (0MB).] - when trying to create an instance type

2017-12-26 Thread Yaniv Kaul
On Mon, Dec 25, 2017 at 8:29 PM, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:

>
>
> > On 25 Dec 2017, at 18:41, Juan Hernández <jhern...@redhat.com> wrote:
> >
> > On 12/25/2017 05:46 PM, Yaniv Kaul wrote:
> >> While trying to add an instance type, I fail with the error:
> >> Operation Failed". Fault detail is "[Cannot add Template. Memory size
> >> (1024MB) cannot exceed maximum memory size (0MB).]
> >> The code is taken from the example in the SDK, so I'm not sure what I'm
> >> doing wrong here.
> >> Code:
> >> instance_types_service.add(
> >> types.InstanceType(
> >> name='myinstancetype',
> >> description='My instance type',
> >> memory=1 * 2**30,
> >> high_availability=types.HighAvailability(
> >> enabled=True,
> >> ),
> >> cpu=types.Cpu(
> >> topology=types.CpuTopology(
> >> cores=2,
> >> sockets=2,
> >> ),
> >> ),
> >> ),
> >> )
> >> engine.log:
> >> 2017-12-25 10:58:19,825-05 INFO
> >> [org.ovirt.engine.core.bll.AddVmTemplateCommand] (default task-13)
> >> [265ff704-d89f-471b-8207-fc0e1b8816fd] Lock Acquired to object
> >> 'EngineLock:{exclusiveLocks='[myinstancetype=TEMPLATE_NAME,
> >> 703e1265-e160-4a76-82e6-06974156b7b9=TEMPLATE]', sharedLocks='[]'}'
> >> 2017-12-25 10:58:19,831-05 WARN
> >> [org.ovirt.engine.core.bll.AddVmTemplateCommand] (default task-13)
> >> [265ff704-d89f-471b-8207-fc0e1b8816fd] Validation of action
> 'AddVmTemplate'
> >> failed for user admin@internal-authz. Reasons:
> >> VAR__ACTION__ADD,VAR__TYPE__VM_TEMPLATE,ACTION_TYPE_FAILED_
> MAX_MEMORY_CANNOT_BE_SMALLER_THAN_MEMORY_SIZE,$maxMemory
> >> 0,$memory 1024
> >> 2017-12-25 10:58:19,832-05 INFO
> >> [org.ovirt.engine.core.bll.AddVmTemplateCommand] (default task-13)
> >> [265ff704-d89f-471b-8207-fc0e1b8816fd] Lock freed to object
> >> 'EngineLock:{exclusiveLocks='[myinstancetype=TEMPLATE_NAME,
> >> 703e1265-e160-4a76-82e6-06974156b7b9=TEMPLATE]', sharedLocks='[]'}'
> >> 2017-12-25 10:58:19,839-05 DEBUG
> >> [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
> >> (default task-13) [265ff704-d89f-471b-8207-fc0e1b8816fd] method:
> runAction,
> >> params: [AddVmTemplate,
> >> AddVmTemplateParameters:{commandId='179df9ed-209c-4882-a19a-
> b76a4fe1adb8',
> >> user='null', commandType='Unknown'}], timeElapsed: 33ms
> >> 2017-12-25 10:58:19,846-05 ERROR
> >> [org.ovirt.engine.api.restapi.resource.AbstractBackendResource]
> (default
> >> task-13) [] Operation Failed: [Cannot add Template. Memory size (1024MB)
> >> cannot exceed maximum memory size (0MB).]
> >> TIA,
> >> Y.
> >
> > I think this is related to the new `memory_policy.max` attribute that
> was introduced in 4.1. I think that for virtual machines it has a default
> value so that it isn't necessary to explicitly provide it. It may not have
> a default value fro instance types. Can you try adding this to the request
> to create the instance type?
> >
> >  memory_policy=types.MemoryPolicy(
> >max=1 * 2**30
> >  )
> >
> > Then try again. If it works I think that we need to fix the engine so
> that it assigns a default value, like it does for virtual machines.
>

Thanks, that worked.
We also need to fix the example in the SDK...
Y.
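
For reference, a minimal corrected sketch (same ovirtsdk4 types and the same
instance_types_service as in the example above; the values are illustrative):

import ovirtsdk4.types as types

# Sketch only: identical to the failing example, plus an explicit
# memory_policy.max, which instance types currently require because no
# default is inherited from the Blank template.
instance_types_service.add(
    types.InstanceType(
        name='myinstancetype',
        description='My instance type',
        memory=1 * 2**30,
        memory_policy=types.MemoryPolicy(
            max=1 * 2**30,  # must be >= memory
        ),
        high_availability=types.HighAvailability(
            enabled=True,
        ),
        cpu=types.Cpu(
            topology=types.CpuTopology(
                cores=2,
                sockets=2,
            ),
        ),
    ),
)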


>
> The default for VMs comes from the Blank template. There’s no template for
> instance types, so it always needs to be provided
>
> > ___
> > Devel mailing list
> > Devel@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/devel
> >
> >
>
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] Memory size (1024MB) cannot exceed maximum memory size (0MB).] - when trying to create an instance type

2017-12-25 Thread Yaniv Kaul
While trying to add an instance type, I fail with the error:
Operation Failed". Fault detail is "[Cannot add Template. Memory size
(1024MB) cannot exceed maximum memory size (0MB).]

The code is taken from the example in the SDK, so I'm not sure what I'm
doing wrong here.
Code:
instance_types_service.add(
types.InstanceType(
name='myinstancetype',
description='My instance type',
memory=1 * 2**30,
high_availability=types.HighAvailability(
enabled=True,
),
cpu=types.Cpu(
topology=types.CpuTopology(
cores=2,
sockets=2,
),
),
),
)


engine.log:
2017-12-25 10:58:19,825-05 INFO
[org.ovirt.engine.core.bll.AddVmTemplateCommand] (default task-13)
[265ff704-d89f-471b-8207-fc0e1b8816fd] Lock Acquired to object
'EngineLock:{exclusiveLocks='[myinstancetype=TEMPLATE_NAME,
703e1265-e160-4a76-82e6-06974156b7b9=TEMPLATE]', sharedLocks='[]'}'
2017-12-25 10:58:19,831-05 WARN
[org.ovirt.engine.core.bll.AddVmTemplateCommand] (default task-13)
[265ff704-d89f-471b-8207-fc0e1b8816fd] Validation of action 'AddVmTemplate'
failed for user admin@internal-authz. Reasons:
VAR__ACTION__ADD,VAR__TYPE__VM_TEMPLATE,ACTION_TYPE_FAILED_MAX_MEMORY_CANNOT_BE_SMALLER_THAN_MEMORY_SIZE,$maxMemory
0,$memory 1024
2017-12-25 10:58:19,832-05 INFO
[org.ovirt.engine.core.bll.AddVmTemplateCommand] (default task-13)
[265ff704-d89f-471b-8207-fc0e1b8816fd] Lock freed to object
'EngineLock:{exclusiveLocks='[myinstancetype=TEMPLATE_NAME,
703e1265-e160-4a76-82e6-06974156b7b9=TEMPLATE]', sharedLocks='[]'}'
2017-12-25 10:58:19,839-05 DEBUG
[org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
(default task-13) [265ff704-d89f-471b-8207-fc0e1b8816fd] method: runAction,
params: [AddVmTemplate,
AddVmTemplateParameters:{commandId='179df9ed-209c-4882-a19a-b76a4fe1adb8',
user='null', commandType='Unknown'}], timeElapsed: 33ms
2017-12-25 10:58:19,846-05 ERROR
[org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default
task-13) [] Operation Failed: [Cannot add Template. Memory size (1024MB)
cannot exceed maximum memory size (0MB).]


TIA,
Y.
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] upgrade_check and upgrade API calls (v4)

2017-12-25 Thread Yaniv Kaul
I'm trying to check if a certain host can be upgraded.
1. I'm calling it through the host_service, something like:

host_service = connection.system_service().hosts_service().host_service(
host.id)
is_upgrade = host_service.upgrade_check()

To my surprise, is_upgrade is None. I expected a Boolean.

2. In addition, when trying to upgrade via:
host_service.upgrade()

I'm getting Operation Failed - and it complains there are no upgrades
available.
Alas, the UI shows that upgrades are available, and the upgrade does work
through the UI.

Am I misusing the functions?
(The host is a regular host, not ovirt-node btw).

TIA,
Y.
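
For reference, a minimal sketch of what I'm trying to do (ovirtsdk4; it
assumes upgrade_check() only triggers an asynchronous check and that the
result later shows up on the host's update_available flag - that last part is
my assumption, not something I've confirmed):

import time

hosts_service = connection.system_service().hosts_service()
host = hosts_service.list(search='name=myhost')[0]   # illustrative host name
host_service = hosts_service.host_service(host.id)

# Trigger the check; this returns None rather than a boolean.
host_service.upgrade_check()

# Assumption: poll until the engine refreshes update_available, then upgrade.
for _ in range(30):
    host = host_service.get()
    if host.update_available:
        host_service.upgrade()
        break
    time.sleep(10)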
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] ovirt 4.2 host not compatible

2017-12-22 Thread Yaniv Kaul
On Dec 22, 2017 9:46 PM, "Paul Dyer"  wrote:

My setup is RHEL 7.4, with the host separate from the engine.


What do you mean separate?
Why didn't you upgrade it from the engine?


The ovirt-release42 rpm was added to the engine host, but not to the
virtualization host.   The vhost was still running v4.1 rpms.   I installed
ovirt-release42 on the vhost, then updated the rest of the rpms with "yum
update". I am still getting an error on activation of the vhost...

 Host parasol does not comply with the cluster Intel networks, the
following networks are missing on host: 'data30,data40,ovirtmgmt'

It seems like the network bridges are not there anymore??


Perhaps. Restarting vdsm to ensure it is 4.2 and then syncing the
networks might be a good idea.
Y.


Paul



On Fri, Dec 22, 2017 at 12:46 PM, Paul Dyer  wrote:

> Hi,
>
> I have upgraded to ovirt 4.2 without issue.   But I cannot find a way to
> upgrade the host compatibility in the new OVirt Manager.
>
> I get this error when activating the host...
>
> host parasol is compatible with versions (3.6,4.0,4.1) and cannot join
> Cluster Intel which is set to version 4.2.
>
> Thanks,
> Paul
>
>
> --
> Paul Dyer,
> Mercury Consulting Group, RHCE
> 504-302-8750 <(504)%20302-8750>
>



-- 
Paul Dyer,
Mercury Consulting Group, RHCE
504-302-8750 <(504)%20302-8750>

___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Migration failed

2017-12-19 Thread Yaniv Kaul
On Tue, Dec 19, 2017 at 1:36 PM, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:

>
> On 19 Dec 2017, at 10:14, Arik Hadas  wrote:
>
>
>
> On Tue, Dec 19, 2017 at 12:20 AM, Michal Skrivanek <
> michal.skriva...@redhat.com> wrote:
>
>>
>> > On 18 Dec 2017, at 13:21, Milan Zamazal  wrote:
>> >
>> > Yedidyah Bar David  writes:
>> >
>> >> On Mon, Dec 18, 2017 at 10:17 AM, Code Review 
>> wrote:
>> >>> Jenkins CI posted comments on this change.
>> >>>
>> >>
>> >>> View Change
>> >>>
>> >>> Patch set 3:Continuous-Integration -1
>> >>>
>> >>> Build Failed
>> >>>
>> >>> http://jenkins.ovirt.org/job/ovirt-system-tests_master_check-patch-el7-x86_64/2882/
>> >>> : FAILURE
>> >>
>> >> Console output of above job says:
>> >>
>> >> 08:13:34   # migrate_vm:
>> >> 08:16:37 * Collect artifacts:
>> >> 08:16:40 * Collect artifacts: Success (in 0:00:03)
>> >> 08:16:40   # migrate_vm: Success (in 0:03:06)
>> >> 08:16:40   # Results located at
>> >> /dev/shm/ost/deployment-basic-suite-master/default/006_migra
>> tions.py.junit.xml
>> >> 08:16:40 @ Run test: 006_migrations.py: Success (in 0:03:50)
>> >> 08:16:40 Error occured, aborting
>> >>
>> >> The file 006_migrations.py.junit.xml [1] says:
>> >>
>> >> 
>> >
>> > Reading the logs, I can see the VM migrates normally and seems to be
>> > reported to Engine correctly.  When Engine receives end-of-migration
>> > event, it sends Destroy to the source (which is correct), calls dumpxmls
>> > on the destination in the meantime (looks fine to me) and then calls
>>
>> looks like a race between getallvmstats reporting VM as Down (statusTime:
>> 4296271980) being processed, while there is a Down/MigrationSucceeded event
>> arriving (with notify_time 4296273170) at about the same time
>> Unfortunately the vdsm.log is not in DEBUG level so there’s very little
>> information as to why and what exactly did it send out.
>> @infra - can you enable debug log level for vdsm by default?
>
>
>> It does look like a race to me - does it reproduce?
>
>
>> > Destroy on the destination, which is weird and I don't understand why
>> > the Destroy is invoked.
>> >
>> > Arik, would you like to take a look?  Maybe I overlooked something or
>> > maybe there's a bug.  The logs are at
>> > http://jenkins.ovirt.org/job/ovirt-system-tests_master_check
>> -patch-el7-x86_64/2882/artifact/exported-artifacts/basic-
>> suite-master__logs/test_logs/basic-suite-master/post-006_migrations.py/
>> > and the interesting things happen around 2017-12-18 03:13:43,758-05.
>>
>
> So it looks like that:
> 1. the engine polls the VMs from the source host
> 2. right after #1 we get the down event with proper exit reason (=
> migration succeeded) but the engine doesn't process it since the VM is
> being locked by the monitoring as part of processing that polling (to
> prevent two analysis of the same VM simultaneously).
> 3. the result of the polling is a VM in status Down and must probably
> exit_status=Normal
> 4. the engine decides to abort the migration and thus the monitoring
> thread of the source host destroys the VM on the destination host.
>
> Unfortunately we don't have the exit_reason that is returned by the
> polling.
> However, the only option I can think of is that it is different than
> MigrationSucceeded, because otherwise we would have hand-over the VM to the
> destination host rather than aborting the migration [1].
> That part of the code recently changed as part of [2] - we used to
> hand-over the VM when we get from the source host:
> status = Down + exit_status = Normal
> And in the database: previous_status = MigrationFrom
> But after that change we require:
> status = Down + exit_status = Normal ** + exit_reason = MigrationSucceeded
> **
> And in the database: previous_status = MigrationFrom
>
> Long story short, is it possible that VDSM had set the status of the VM to
> Down and exit_status to Normal but the exit_reason was not updated (yet?)
> to MigrationSucceeded?
>
>
> ok, so there might be a plausible explanation
> the guest drive mapping introduced a significant delay into the
> VM.getStats call since it tries to update the mapping when it detects a
> change. That is likely to happen on lifecycle changes. In the OST case it
> took 1.2s to finish the whole call, and in the meantime the migration has
> finished. The getStats() call is not written with possible state change in
> mind, so if it so happens and the state moves from anything to Down in the
> middle of it it returns a Down state without exitCode and exitReason which
> confuses engine. We started to use the exitReason code to differentiate the
> various flavors of Down in engine in ~4.1 and in this case it results in
> misleading “VM powered off by admin” case
>
> we need to fix the VM.getStats() to handle VM state changes in the middle
> we need to fix the guest drive mapping updates to handle cleanly
> situations when the VM is either not ready yet 
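
To make the failure mode above concrete, here is a tiny illustration in plain
Python (hypothetical names, not vdsm code): assembling the stats dict field by
field around a slow operation can yield 'Down' with no exit information, while
a single snapshot of the shared state cannot.

import copy

def get_stats_racy(vm_state, slow_update):
    # vm_state is a shared dict another thread fills in ('Down' + exit info)
    # when the migration finishes.
    stats = {'exitCode': vm_state.get('exitCode'),      # read early: still missing
             'exitReason': vm_state.get('exitReason')}  # read early: still missing
    slow_update()  # e.g. the guest drive mapping refresh (~1.2s in the OST run)
    stats['status'] = vm_state['status']                # read late: may now be 'Down'
    return stats   # 'Down' without exitCode/exitReason -> misleads the caller

def get_stats_consistent(vm_state, slow_update):
    slow_update()
    return copy.deepcopy(vm_state)  # all fields taken from the same moment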

Re: [ovirt-devel] oVirt System Test configuration

2017-12-18 Thread Yaniv Kaul
On Mon, Dec 18, 2017 at 3:01 PM, Sandro Bonazzola <sbona...@redhat.com>
wrote:

>
>
> 2017-12-18 13:57 GMT+01:00 Eyal Edri <ee...@redhat.com>:
>
>>
>>
>> On Mon, Dec 18, 2017 at 2:53 PM, Sandro Bonazzola <sbona...@redhat.com>
>> wrote:
>>
>>>
>>>
>>> 2017-12-18 12:42 GMT+01:00 Yaniv Kaul <yk...@redhat.com>:
>>>
>>>>
>>>>
>>>> On Mon, Dec 18, 2017 at 12:43 PM, Sandro Bonazzola <sbona...@redhat.com
>>>> > wrote:
>>>>
>>>>> Hi, I'd like to discuss what's being tested by oVirt System Test.
>>>>>
>>>>> I'm investigating on a sanlock issue that affects hosted engine hc
>>>>> suite.
>>>>> I installed a CentOS minimal VM and set repositories as in
>>>>> http://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-
>>>>> suite-master/128/artifact/exported-artifacts/reposync-config.repo
>>>>>
>>>>> Upgrade from CentOS 1708 (7.4) minimal is:
>>>>>
>>>>> Aggiornamento:
>>>>>  bind-libs-lite   x86_64
>>>>> 32:9.9.4-51.el7_4.1
>>>>> centos-updates-el7  733 k
>>>>>  bind-license noarch
>>>>> 32:9.9.4-51.el7_4.1
>>>>> centos-updates-el7   84 k
>>>>>  nss  x86_64
>>>>> 3.28.4-15.el7_4
>>>>> centos-updates-el7  849 k
>>>>>  nss-softokn  x86_64
>>>>> 3.28.3-8.el7_4
>>>>>centos-updates-el7  310 k
>>>>>  nss-softokn-freebl   x86_64
>>>>> 3.28.3-8.el7_4
>>>>>centos-updates-el7  214 k
>>>>>  nss-sysinit  x86_64
>>>>> 3.28.4-15.el7_4
>>>>> centos-updates-el7   60 k
>>>>>  nss-toolsx86_64
>>>>> 3.28.4-15.el7_4
>>>>> centos-updates-el7  501 k
>>>>>  selinux-policy   noarch
>>>>> 3.13.1-166.el7_4.7
>>>>>centos-updates-el7  437 k
>>>>>  selinux-policy-targeted  noarch
>>>>> 3.13.1-166.el7_4.7
>>>>>centos-updates-el7  6.5 M
>>>>>  systemd  x86_64
>>>>> 219-42.el7_4.4
>>>>>centos-updates-el7  5.2 M
>>>>>  systemd-libs x86_64
>>>>> 219-42.el7_4.4
>>>>>centos-updates-el7  376 k
>>>>>  systemd-sysv x86_64
>>>>> 219-42.el7_4.4
>>>>>centos-updates-el7   70 k
>>>>>
>>>>> Enabling the CentOS repos:
>>>>>
>>>>>  grub2   x86_64
>>>>>  1:2.02-0.65.el7.centos.2
>>>>> updates 29 k
>>>>>  in sostituzione di grub2.x86_64 1:2.02-0.64.el7.centos
>>>>>  grub2-tools x86_64
>>>>>  1:2.02-0.65.el7.centos.2
>>>>> updates1.8 M
>>>>>  in sostituzione di grub2-tools.x86_64 1:2.02-0.64.el7.centos
>>>>>  grub2-tools-extra   x86_64
>>>>>  1:2.02-0.65.el7.centos.2
>>>>> updates  

Re: [ovirt-devel] oVirt System Test configuration

2017-12-18 Thread Yaniv Kaul
On Mon, Dec 18, 2017 at 12:43 PM, Sandro Bonazzola 
wrote:

> Hi, I'd like to discuss what's being tested by oVirt System Test.
>
> I'm investigating on a sanlock issue that affects hosted engine hc suite.
> I installed a CentOS minimal VM and set repositories as in
> http://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-master/128/artifact/exported-artifacts/reposync-config.repo
>
> Upgrade from CentOS 1708 (7.4) minimal is:
>
> Aggiornamento:
>  bind-libs-lite   x86_64
> 32:9.9.4-51.el7_4.1
> centos-updates-el7  733 k
>  bind-license noarch
> 32:9.9.4-51.el7_4.1
> centos-updates-el7   84 k
>  nss  x86_64
> 3.28.4-15.el7_4
> centos-updates-el7  849 k
>  nss-softokn  x86_64
> 3.28.3-8.el7_4
>centos-updates-el7  310 k
>  nss-softokn-freebl   x86_64
> 3.28.3-8.el7_4
>centos-updates-el7  214 k
>  nss-sysinit  x86_64
> 3.28.4-15.el7_4
> centos-updates-el7   60 k
>  nss-toolsx86_64
> 3.28.4-15.el7_4
> centos-updates-el7  501 k
>  selinux-policy   noarch
> 3.13.1-166.el7_4.7
>centos-updates-el7  437 k
>  selinux-policy-targeted  noarch
> 3.13.1-166.el7_4.7
>centos-updates-el7  6.5 M
>  systemd  x86_64
> 219-42.el7_4.4
>centos-updates-el7  5.2 M
>  systemd-libs x86_64
> 219-42.el7_4.4
>centos-updates-el7  376 k
>  systemd-sysv x86_64
> 219-42.el7_4.4
>centos-updates-el7   70 k
>
> Enabling the CentOS repos:
>
>  grub2   x86_64
>  1:2.02-0.65.el7.centos.2
> updates 29 k
>  in sostituzione di grub2.x86_64 1:2.02-0.64.el7.centos
>  grub2-tools x86_64
>  1:2.02-0.65.el7.centos.2
> updates1.8 M
>  in sostituzione di grub2-tools.x86_64 1:2.02-0.64.el7.centos
>  grub2-tools-extra   x86_64
>  1:2.02-0.65.el7.centos.2
> updates993 k
>  in sostituzione di grub2-tools.x86_64 1:2.02-0.64.el7.centos
>  grub2-tools-minimal x86_64
>  1:2.02-0.65.el7.centos.2
> updates170 k
>  in sostituzione di grub2-tools.x86_64 1:2.02-0.64.el7.centos
>  kernel  x86_64
>  3.10.0-693.11.1.el7
>updates 43 M
> Aggiornamento:
>  NetworkManager  x86_64
>  1:1.8.0-11.el7_4
> updates1.6 M
>  NetworkManager-libnmx86_64
>  1:1.8.0-11.el7_4
> updates1.2 M
>  NetworkManager-team x86_64
>  1:1.8.0-11.el7_4
> updates156 k
>  NetworkManager-tui  x86_64
>  1:1.8.0-11.el7_4
> updates224 k
>  NetworkManager-wifi x86_64
>  1:1.8.0-11.el7_4
> updates184 k
>  bashx86_64
>  4.2.46-29.el7_4
>updates1.0 M
>  bind-libs-lite  x86_64
>  32:9.9.4-51.el7_4.1
>centos-updates-el7 733 k
>  bind-license

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Hosted-Engine ] [ 17-12-2017 ] [ post-002_bootstrap.py ]

2017-12-17 Thread Yaniv Kaul
On Sun, Dec 17, 2017 at 3:36 PM, Yaniv Kaul <yk...@redhat.com> wrote:

>
>
> On Sun, Dec 17, 2017 at 3:12 PM, Eyal Edri <ee...@redhat.com> wrote:
>
>> I believe it correlates with the known bug(s)s we already know about
>> hosted-engine, I think Simone already opened a bug on it, but not sure.
>> Also, all the HE blocker bugs seem to be already on MODIFIED or ON_QA
>> [1], so not sure if its the same issue or not.
>>
>>
>> [1]
>> https://bugzilla.redhat.com/show_bug.cgi?id=1522641
>> https://bugzilla.redhat.com/show_bug.cgi?id=1512534
>>
>
> I thought it is actually https://bugzilla.redhat.com/show_bug.cgi?id=1502768
>

But now that I look at the logs, it is the issue with sanlock - reported in
https://bugzilla.redhat.com/show_bug.cgi?id=1525955
Y.


>
> Anyway, I'm more concerned that:
> 1. It is still failing (and this is the old method of provisioning Hosted
> Engine, which is supposed to be our fallback plan!)
> 2. We are not testing the new Ansible based flows in CI.
> Y.
>
>
>>
>>
>>
>>
>> On Sun, Dec 17, 2017 at 2:02 PM, Dafna Ron <d...@redhat.com> wrote:
>>
>>> Hi,
>>>
>>> We are failing OST for HE deployment.
>>>
>>> The failures started on 12/12 and latest failure is today, 17/12.
>>>
>>> *Link and headline of suspected patches: *
>>>
>>>
>>> * OST: avoid excluding ovirt-engine-appliance from base repo-
>>> https://gerrit.ovirt.org/#/c/85439/ <https://gerrit.ovirt.org/#/c/85439/>
>>> Link to Job:
>>> http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/139/
>>> <http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/139/>*
>>>
>>>
>>> *Link to all logs:*
>>> *
>>> http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/139/artifact/
>>> <http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/139/artifact/>*
>>>
>>>
>>> *(Relevant) error snippet from the log: *
>>>
>>>
>>> *  *
>>>
>>> 2017-12-16 22:29:49,179-0500 INFO otopi.plugins.gr_he_setup.vm.runvm 
>>> mixins._create_vm:151 Creating VM
>>> 2017-12-16 22:29:49,460-0500 DEBUG otopi.context context._executeMethod:143 
>>> method exception
>>> Traceback (most recent call last):
>>>   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in 
>>> _executeMethod
>>> method['method']()
>>>   File 
>>> "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-setup/vm/runvm.py",
>>>  line 174, in _boot_from_hd
>>> self._create_vm()
>>>   File 
>>> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/mixins.py", 
>>> line 303, in _create_vm
>>> raise RuntimeError(str(e))
>>> RuntimeError: Command VM.getStats with args {'vmID': 
>>> '9b8a91ea-d5aa-44e6-8890-684173be7ef5'} failed:
>>> (code=100, message=General Exception: ("VM 
>>> '9b8a91ea-d5aa-44e6-8890-684173be7ef5' was not defined yet or was 
>>> undefined",))
>>> 2017-12-16 22:29:49,512-0500 ERROR otopi.context context._executeMethod:152 
>>> Failed to execute stage 'Closing up': Command VM.getStats with args 
>>> {'vmID': '9b8a91ea-d5aa-44e6-8890-684173be7ef5'} failed:
>>> (code=100, message=General Exception: ("VM 
>>> '9b8a91ea-d5aa-44e6-8890-684173be7ef5' was not defined yet or was 
>>> undefined",))
>>> 2017-12-16 22:29:49,513-0500 DEBUG otopi.plugins.otopi.dialog.human 
>>> human.format:69 newline sent to logger
>>> 2017-12-16 22:29:49,514-0500 DEBUG otopi.context 
>>> context.dumpEnvironment:821 ENVIRONMENT DUMP - BEGIN
>>> 2017-12-16 22:29:49,514-0500 DEBUG otopi.context 
>>> context.dumpEnvironment:831 ENV BASE/error=bool:'True'
>>> 2017-12-16 22:29:49,514-0500 DEBUG otopi.context 
>>> context.dumpEnvironment:831 ENV BASE/exceptionInfo=list:'[(>> 'exceptions.RuntimeError'>, RuntimeError('Command VM.getStats with args 
>>> {\'vmID\': \'9b8a91ea-d5aa-44e6-8890-684173be7ef5\'} failed:\n(code=100, 
>>> message=General Exception: ("VM \'9b8a91ea-d5aa-44e6-8890-684173be7ef5\' 
>>> was not defined yet or was undefined",))',), >> 0x40ba5a8>)]'
>>> 2017-12-16 22:29:49,515-0500 DEBUG otopi.context 
>>> context.dumpEnvironment:831 ENV OVEHOSTED_VM/subst=dict:'{'@VIDEO_DEVICE@': 
>>> 'vga', '@SD_UUID@': 'c3c3e51a-b086-4bc1-97af-89109971b819',

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Hosted-Engine ] [ 17-12-2017 ] [ post-002_bootstrap.py ]

2017-12-17 Thread Yaniv Kaul
On Sun, Dec 17, 2017 at 3:12 PM, Eyal Edri  wrote:

> I believe it correlates with the known bug(s)s we already know about
> hosted-engine, I think Simone already opened a bug on it, but not sure.
> Also, all the HE blocker bugs seem to be already on MODIFIED or ON_QA [1],
> so not sure if its the same issue or not.
>
>
> [1]
> https://bugzilla.redhat.com/show_bug.cgi?id=1522641
> https://bugzilla.redhat.com/show_bug.cgi?id=1512534
>

I thought it is actually https://bugzilla.redhat.com/show_bug.cgi?id=1502768

Anyway, I'm more concerned that:
1. It is still failing (and this is the old method of provisioning Hosted
Engine, which is supposed to be our fallback plan!)
2. We are not testing the new Ansible based flows in CI.
Y.


>
>
>
>
> On Sun, Dec 17, 2017 at 2:02 PM, Dafna Ron  wrote:
>
>> Hi,
>>
>> We are failing OST for HE deployment.
>>
>> The failures started on 12/12 and latest failure is today, 17/12.
>>
>> *Link and headline of suspected patches: *
>>
>>
>> * OST: avoid excluding ovirt-engine-appliance from base repo-
>> https://gerrit.ovirt.org/#/c/85439/ 
>> Link to Job:
>> http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/139/
>> *
>>
>>
>> *Link to all logs:*
>> *
>> http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/139/artifact/
>> *
>>
>>
>> *(Relevant) error snippet from the log: *
>>
>>
>> *  *
>>
>> 2017-12-16 22:29:49,179-0500 INFO otopi.plugins.gr_he_setup.vm.runvm 
>> mixins._create_vm:151 Creating VM
>> 2017-12-16 22:29:49,460-0500 DEBUG otopi.context context._executeMethod:143 
>> method exception
>> Traceback (most recent call last):
>>   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in 
>> _executeMethod
>> method['method']()
>>   File 
>> "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-setup/vm/runvm.py",
>>  line 174, in _boot_from_hd
>> self._create_vm()
>>   File 
>> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/mixins.py", line 
>> 303, in _create_vm
>> raise RuntimeError(str(e))
>> RuntimeError: Command VM.getStats with args {'vmID': 
>> '9b8a91ea-d5aa-44e6-8890-684173be7ef5'} failed:
>> (code=100, message=General Exception: ("VM 
>> '9b8a91ea-d5aa-44e6-8890-684173be7ef5' was not defined yet or was 
>> undefined",))
>> 2017-12-16 22:29:49,512-0500 ERROR otopi.context context._executeMethod:152 
>> Failed to execute stage 'Closing up': Command VM.getStats with args {'vmID': 
>> '9b8a91ea-d5aa-44e6-8890-684173be7ef5'} failed:
>> (code=100, message=General Exception: ("VM 
>> '9b8a91ea-d5aa-44e6-8890-684173be7ef5' was not defined yet or was 
>> undefined",))
>> 2017-12-16 22:29:49,513-0500 DEBUG otopi.plugins.otopi.dialog.human 
>> human.format:69 newline sent to logger
>> 2017-12-16 22:29:49,514-0500 DEBUG otopi.context context.dumpEnvironment:821 
>> ENVIRONMENT DUMP - BEGIN
>> 2017-12-16 22:29:49,514-0500 DEBUG otopi.context context.dumpEnvironment:831 
>> ENV BASE/error=bool:'True'
>> 2017-12-16 22:29:49,514-0500 DEBUG otopi.context context.dumpEnvironment:831 
>> ENV BASE/exceptionInfo=list:'[(, 
>> RuntimeError('Command VM.getStats with args {\'vmID\': 
>> \'9b8a91ea-d5aa-44e6-8890-684173be7ef5\'} failed:\n(code=100, 
>> message=General Exception: ("VM \'9b8a91ea-d5aa-44e6-8890-684173be7ef5\' was 
>> not defined yet or was undefined",))',), )]'
>> 2017-12-16 22:29:49,515-0500 DEBUG otopi.context context.dumpEnvironment:831 
>> ENV OVEHOSTED_VM/subst=dict:'{'@VIDEO_DEVICE@': 'vga', '@SD_UUID@': 
>> 'c3c3e51a-b086-4bc1-97af-89109971b819', '@CONSOLE_UUID@': 
>> 'b11ce8af-9633-4680-beba-9f2d4e48102f', '@NAME@': 'HostedEngine', 
>> '@BRIDGE@': 'ovirtmgmt', '@CDROM_UUID@': 
>> '856afaa0-dc01-403b-9b92-457b6cec081f', '@MEM_SIZE@': 3171, '@NIC_UUID@': 
>> '8a6fa9b0-4f60-42e2-8cc8-bb45bd124ffd', '@VCPUS@': '2', '@CPU_TYPE@': 
>> 'SandyBridge', '@VM_UUID@': '9b8a91ea-d5aa-44e6-8890-684173be7ef5', 
>> '@CDROM@': '/tmp/tmpwDR4Ng/seed.iso', '@EMULATED_MACHINE@': 'pc', 
>> '@GRAPHICS_DEVICE@': 'vnc', '@MAXVCPUS@': u'2', '@IMG_UUID@': 
>> '9bae6dab-8697-4bf0-a6da-c4c07d66c57d', '@CONSOLE_TYPE@': 'vnc', 
>> '@MAC_ADDR@': '54:52:c0:a8:c8:63', '@SP_UUID@': 
>> '----', '@VOL_UUID@': 
>> '8a639d80-96eb-47d2-a922-f0ae847137cc'}'
>> 2017-12-16 22:29:49,517-0500 DEBUG otopi.context context.dumpEnvironment:835 
>> ENVIRONMENT DUMP - END
>> 2017-12-16 22:29:49,518-0500 INFO otopi.context context.runSequence:735 
>> Stage: Clean up
>>
>> **
>>
>>
>> ___
>> Infra mailing list
>> in...@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/infra
>>
>>
>
>
> --
>
> Eyal edri
>
>
> MANAGER
>
> RHV DevOps
>
> EMEA VIRTUALIZATION R
>
>
> Red Hat EMEA 
>  

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 12-12-2017 ] [ 098_ovirt_provider_ovn.use_ovn_provider ]

2017-12-12 Thread Yaniv Kaul
On Tue, Dec 12, 2017 at 10:11 PM, Yaniv Kaul <yk...@redhat.com> wrote:

> I suspect you've missed the relevant exception:
> {"jsonrpc": "2.0", "id": "05aaf760-b3b7-47b5-8bd7-034e899b7400", "error":
> {"message": "General Exception: (\"'portMirroring'\",)", "code": 100}}
> 2017-12-12 13:27:48,142-05 DEBUG 
> [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker]
> (ResponseWorker) [] Message received: {"jsonrpc": "2.0", "id":
> "05aaf760-b3b7-47b5-8bd7-034e899b7400", "error": {"message": "General
> Exception: (\"'portMirroring'\",)", "code": 100}}
> 2017-12-12 13:27:48,142-05 ERROR [org.ovirt.engine.core.
> vdsbroker.vdsbroker.HotUnplugNicVDSCommand] (default task-12) [4c4b50dc]
> Failed in 'HotUnplugNicVDS' method
> 2017-12-12 13:27:48,150-05 ERROR [org.ovirt.engine.core.dal.
> dbbroker.auditloghandling.AuditLogDirector] (default task-12) [4c4b50dc]
> EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM
> lago-basic-suite-master-host-0 command HotUnplugNicVDS failed: General
> Exception: ("'portMirroring'",)
> 2017-12-12 13:27:48,150-05 INFO  [org.ovirt.engine.core.
> vdsbroker.vdsbroker.HotUnplugNicVDSCommand] (default task-12) [4c4b50dc]
> Command 'org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnplugNicVDSCommand'
> return value 'StatusOnlyReturn [status=Status [code=100, message=General
> Exception: ("'portMirroring'",)]]'
> 2017-12-12 13:27:48,150-05 INFO  [org.ovirt.engine.core.
> vdsbroker.vdsbroker.HotUnplugNicVDSCommand] (default task-12) [4c4b50dc]
> HostName = lago-basic-suite-master-host-0
> 2017-12-12 13:27:48,150-05 ERROR [org.ovirt.engine.core.
> vdsbroker.vdsbroker.HotUnplugNicVDSCommand] (default task-12) [4c4b50dc]
> Command 'HotUnplugNicVDSCommand(HostName = lago-basic-suite-master-host-0,
> VmNicDeviceVDSParameters:{hostId='cf1d46b2-4594-4076-9d24-6f75220f0c97',
> vm.vm_name='vm0', nic='VmNic:{id='4487559e-7e47-4ab7-a6fe-1ae32fb34336',
> vnicProfileId='623c894e-234c-4f1b-899e-275be6864f75', speed='1000',
> type='3', macAddress='00:1a:4a:16:01:03', linked='true',
> vmId='90124564-04f7-4607-a222-503c42262844', vmTemplateId='null'}',
> vmDevice='VmDevice:{id='VmDeviceId:{deviceId='4487559e-7e47-4ab7-a6fe-1ae32fb34336',
> vmId='90124564-04f7-4607-a222-503c42262844'}', device='bridge',
> type='INTERFACE', specParams='[inbound={}, outbound={}]',
> address='{slot=0x0a, bus=0x00, domain=0x, type=pci, function=0x0}',
> managed='true', plugged='true', readOnly='false', deviceAlias='net2',
> customProperties='[]', snapshotId='null', logicalName='null',
> hostDevice='null'}'})' execution failed: VDSGenericException:
> VDSErrorException: Failed to HotUnplugNicVDS, error = General Exception:
> ("'portMirroring'",), code = 100
> 2017-12-12 13:27:48,150-05 DEBUG [org.ovirt.engine.core.
> vdsbroker.vdsbroker.HotUnplugNicVDSCommand] (default task-12) [4c4b50dc]
> Exception: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to HotUnplugNicVDS, error =
> General Exception: ("'portMirroring'",), code = 100
> at org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.
> createDefaultConcreteException(VdsBrokerCommand.java:81) [vdsbroker.jar:]
> at org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.
> createException(BrokerCommandBase.java:224) [vdsbroker.jar:]
> at org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.
> proceedProxyReturnValue(BrokerCommandBase.java:194) [vdsbroker.jar:]
> at org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnplugNicVDSCommand.
> executeVdsBrokerCommand(HotUnplugNicVDSCommand.java:14) [vdsbroker.jar:]
> at org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.
> executeVDSCommand(VdsBrokerCommand.java:112) [vdsbroker.jar:]
> at org.ovirt.engine.core.vdsbroker.VDSCommandBase.
> executeCommand(VDSCommandBase.java:73) [vdsbroker.jar:]
> ...
>
>
> (I did not look into the VDSM log to see what failed there)
>

And now that I did[1]:
2017-12-12 13:27:48,137-0500 INFO  (jsonrpc/0) [api.virt] START
hotunplugNic(params={u'nic': {u'nicModel': u'pv', u'macAddr':
u'00:1a:4a:16:01:03', u'linkActive': u'true', u'network': u'network_1',
u'filterParameters': [], u'filter': u'vdsm-no-mac-spoofing', u'specParams':
{u'inbound': {}, u'outbound': {}}, u'deviceId':
u'4487559e-7e47-4ab7-a6fe-1ae32fb34336', u'address': {u'function': u'0x0',
u'bus': u'0x00', u'domain': u'0x', u'type': u'pci', u'slot': u'0x0a'},
u'device': u'bridge', u'type': u'interface'}, u'vmId':
u'90124564-04f7-4607-a222-503c42262844'}) from=:::192.168.202.4,47014,
flow_id=4c4b50dc (api:46)
2017-12-12 13:27:48,137-0500 ERROR (json

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 12-12-2017 ] [ 098_ovirt_provider_ovn.use_ovn_provider ]

2017-12-12 Thread Yaniv Kaul
I suspect you've missed the relevant exception:
{"jsonrpc": "2.0", "id": "05aaf760-b3b7-47b5-8bd7-034e899b7400", "error":
{"message": "General Exception: (\"'portMirroring'\",)", "code": 100}}
2017-12-12 13:27:48,142-05 DEBUG
[org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker] (ResponseWorker) []
Message received: {"jsonrpc": "2.0", "id":
"05aaf760-b3b7-47b5-8bd7-034e899b7400", "error": {"message": "General
Exception: (\"'portMirroring'\",)", "code": 100}}
2017-12-12 13:27:48,142-05 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnplugNicVDSCommand] (default
task-12) [4c4b50dc] Failed in 'HotUnplugNicVDS' method
2017-12-12 13:27:48,150-05 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-12) [4c4b50dc] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802),
VDSM lago-basic-suite-master-host-0 command HotUnplugNicVDS failed: General
Exception: ("'portMirroring'",)
2017-12-12 13:27:48,150-05 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnplugNicVDSCommand] (default
task-12) [4c4b50dc] Command
'org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnplugNicVDSCommand' return
value 'StatusOnlyReturn [status=Status [code=100, message=General
Exception: ("'portMirroring'",)]]'
2017-12-12 13:27:48,150-05 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnplugNicVDSCommand] (default
task-12) [4c4b50dc] HostName = lago-basic-suite-master-host-0
2017-12-12 13:27:48,150-05 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnplugNicVDSCommand] (default
task-12) [4c4b50dc] Command 'HotUnplugNicVDSCommand(HostName =
lago-basic-suite-master-host-0,
VmNicDeviceVDSParameters:{hostId='cf1d46b2-4594-4076-9d24-6f75220f0c97',
vm.vm_name='vm0', nic='VmNic:{id='4487559e-7e47-4ab7-a6fe-1ae32fb34336',
vnicProfileId='623c894e-234c-4f1b-899e-275be6864f75', speed='1000',
type='3', macAddress='00:1a:4a:16:01:03', linked='true',
vmId='90124564-04f7-4607-a222-503c42262844', vmTemplateId='null'}',
vmDevice='VmDevice:{id='VmDeviceId:{deviceId='4487559e-7e47-4ab7-a6fe-1ae32fb34336',
vmId='90124564-04f7-4607-a222-503c42262844'}', device='bridge',
type='INTERFACE', specParams='[inbound={}, outbound={}]',
address='{slot=0x0a, bus=0x00, domain=0x, type=pci, function=0x0}',
managed='true', plugged='true', readOnly='false', deviceAlias='net2',
customProperties='[]', snapshotId='null', logicalName='null',
hostDevice='null'}'})' execution failed: VDSGenericException:
VDSErrorException: Failed to HotUnplugNicVDS, error = General Exception:
("'portMirroring'",), code = 100
2017-12-12 13:27:48,150-05 DEBUG
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnplugNicVDSCommand] (default
task-12) [4c4b50dc] Exception:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to HotUnplugNicVDS, error =
General Exception: ("'portMirroring'",), code = 100
at
org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.createDefaultConcreteException(VdsBrokerCommand.java:81)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.createException(BrokerCommandBase.java:224)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.proceedProxyReturnValue(BrokerCommandBase.java:194)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.vdsbroker.HotUnplugNicVDSCommand.executeVdsBrokerCommand(HotUnplugNicVDSCommand.java:14)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:112)
[vdsbroker.jar:]
at
org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:73)
[vdsbroker.jar:]
...


(I did not look into the VDSM log to see what failed there)
Y.
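
For context, that error string is simply what a Python KeyError looks like
once it is wrapped into the generic error tuple - a tiny illustration (plain
Python, not vdsm code; the wrapping detail is my reading of the message, not
taken from the vdsm source):

# The NIC parameters arrive without a 'portMirroring' key, so indexing it
# raises KeyError; str() of a KeyError is the quoted key name.
nic = {'nicModel': 'pv', 'macAddr': '00:1a:4a:16:01:03'}  # no 'portMirroring'
try:
    mirrored_networks = nic['portMirroring']
except KeyError as e:
    print(str(e))     # 'portMirroring'
    print((str(e),))  # ("'portMirroring'",) - the text in "General Exception" above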


On Tue, Dec 12, 2017 at 9:51 PM, Dafna Ron  wrote:

> Hi,
>
> We have a failure on test 098_ovirt_provider_ovn.use_ovn_provider but I
> am not sure how the reported patch is related to the failure.
>
> *Link and headline of suspected patches: *
> * https://gerrit.ovirt.org/#/c/85147/
>  - virt: Fix flipped condition in
> hotunplugNic*
>
> *Link to Job:*
> *http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4371
> *
>
>
> *Link to all logs:*
> *
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4371/artifact
> *
>
>
> *(Relevant) error snippet from the log: *
>
>
>
> *  from junit: *
>
>
> File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
> testMethod()
>   File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
> self.test(*self.arg)
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 129, in 
> wrapped_test
> test()
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 59, in 
> wrapper
> return func(get_test_prefix(), *args, **kwargs)
>   File 

Re: [ovirt-devel] [ OST Failure Report ] [ oVirtMaster ] [ 07-11-2017 ] [007_sd_reattach.deactivate_storage_domain ]

2017-12-08 Thread Yaniv Kaul
On Fri, Dec 8, 2017 at 10:39 PM, Yaniv Kaul <yk...@redhat.com> wrote:

>
>
> On Fri, Dec 8, 2017 at 9:31 PM, Dafna Ron <d...@redhat.com> wrote:
>
>> I opened a bug: https://bugzilla.redhat.com/show_bug.cgi?id=1523813
>>
>
> I'm optimistically hoping https://gerrit.ovirt.org/#/c/85195/ will fix it.
> Not sure.
>

Keeps failing with:
Operation Failed: [Cannot deactivate Storage. The relevant Storage Domain's
status is Maintenance.]

Which is strange:
1. I do check if the SD is in Maint. mode before trying to deactivate, and the
test is supposed to be skipped if it is.
2. Why can't I deactivate an SD when in Maint. mode?

Probably missing something here.
Y.
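
A minimal sketch of the guarded deactivation I have in mind (ovirtsdk4; the
connection/dc/sd variables and the testlib helper are assumed to be wired up
as usual in OST):

import ovirtsdk4 as sdk4
import ovirtsdk4.types as types

dc_service = connection.system_service().data_centers_service() \
    .data_center_service(dc.id)
attached_sd_service = dc_service.storage_domains_service() \
    .storage_domain_service(sd.id)

def _deactivated():
    current = attached_sd_service.get()
    if current.status == types.StorageDomainStatus.MAINTENANCE:
        return True   # already (or finally) in maintenance - nothing to do
    try:
        attached_sd_service.deactivate()
    except sdk4.Error:
        pass          # e.g. a background task (OVF update) still running - retry
    return False      # re-check the status on the next iteration

testlib.assert_true_within_short(_deactivated)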

Y.
>
>
>>
>> Allon, can you please assign someone to help fix this test?
>> Please let me know if you need help from me.
>>
>> Thanks!
>> Dafna
>>
>>
>>
>> On 12/07/2017 11:59 AM, Yaniv Kaul wrote:
>>
>>
>>
>> On Thu, Dec 7, 2017 at 1:30 PM, Eyal Shenitzky <eshen...@redhat.com>
>> wrote:
>>
>>> I think that maybe the QE can share their methods on how to avoid
>>> those issues.
>>> From what I remember, before deactivating storage domain they make sure
>>> that there are no running tasks related to
>>> the storage domain.
>>>
>>
>> Looks like an easy fix is to wrap it with try, except sdk4.Error and let
>> it sit within the testlib.assert_true_within_short() loop.
>> Y.
>>
>>
>>>
>>> On Thu, Dec 7, 2017 at 1:22 PM, Yaniv Kaul <yk...@redhat.com> wrote:
>>>
>>>>
>>>>
>>>> On Thu, Dec 7, 2017 at 1:12 PM, Dafna Ron <d...@redhat.com> wrote:
>>>>
>>>>> Maor, I either need to get new glasses or a magnifier glass to read
>>>>> what you wrote :-P
>>>>> when you say running tasks - these are actually running tasks that may
>>>>> be running because of other tests in ost - correct? wouldn't killing or
>>>>> blocking those cause other tests to fail?
>>>>>
>>>>
>>>> It might well be the OVF update. How can we, from the API, wait for
>>>> those tasks to complete? Or should we catch exception and retry?
>>>> Y.
>>>>
>>>>
>>>>>
>>>>> On 12/07/2017 11:06 AM, Maor Lipchuk wrote:
>>>>>
>>>>> CANNOT_DEACTIVATE_DOMAIN_WITH_TASKS is a known issue, the problem is
>>>>> that we might have tasks which will start running internally using
>>>>> scheduling (like OVF_UPDATE) and we can't really know how much time every
>>>>> task will take until it will end.
>>>>>
>>>>> Even if we check that there are no running tasks it will not guarantee
>>>>> that no task will start until you deactivate the storage domain.
>>>>>
>>>>> I think that the best solution for it is an engine support to cancel
>>>>> running tasks or block tasks from running.
>>>>>
>>>>> On Thu, Dec 7, 2017 at 12:14 PM, Dafna Ron <d...@redhat.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> We had a failure on master basic suite for test
>>>>>> 007_sd_reattach.deactivate_storage_domain.
>>>>>>
>>>>>> The failure was that we failed to deactivate domain due to running
>>>>>> tasks.
>>>>>>
>>>>>> It does not seem to be related to the patch it was testing and I
>>>>>> think that the test itself needs to be modified to check there are no
>>>>>> running tasks.
>>>>>>
>>>>>> Is there perhaps a way to query if there are running tasks before
>>>>>> running the command? can you please take a look at the test on OST?
>>>>>>
>>>>>> *Link and headline of suspected patches: Not related to error*
>>>>>>
>>>>>> *Link to Job:
>>>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4319/
>>>>>> <http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4319/>*
>>>>>>
>>>>>>
>>>>>> * Link to all logs:
>>>>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4319/artifact/
>>>>>> <http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4319/artifact/>
>>>>>> (Relevant) error snippet from the log:  2017-12-06 20:

Re: [ovirt-devel] Testing

2017-12-07 Thread Yaniv Kaul
On Thu, Dec 7, 2017 at 8:38 AM, Bernhard Seidl 
wrote:

> Hi,
>
> I can put some effort in testing. In the last few days I used to test
> master branch snapshots and reported bugs. Are you ok with that, or
> should I focus on a different branch?
>

Any branch is good for testing. Note that we've just released the first 4.2.0
release candidate, so that's an important branch to test as well.
Thanks for your contribution!
Y.


>
> Bernhard
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] oVirt 4.2.0 blockers review - Day 3

2017-11-30 Thread Yaniv Kaul
On Thu, Nov 30, 2017 at 11:38 AM, Arik Hadas  wrote:

>
>
> On Thu, Nov 30, 2017 at 11:10 AM, Martin Sivak  wrote:
>
>>

 Bug ID Product Assignee Status Summary
 1518887 
 ovirt-hosted-engine-ha b...@ovirt.org NEW ovirt-ha-agent fails parsing
 the OVF_STORE due to a change in OVF namespace URI

>>>
>>> I'm in favor of reverting the virt change personally.
>>>
>>
>> Unless something else depends on it, the commit message said vdsm needs
>> this.
>>
>
> vdsm needs it because we would like vdsm to be able to parse OVA files
> that are generated by us and OVA files that  are generated by others (and
> are VMware-compatible) with the existing code in vdsm. The existing code is
> tailored to VMware-compatible OVA files that are generated by others, in
> which the uri doesn't include that slash at the end.
>
> It would be best to adjust ovirt-ha-agent to parse the right uri.
> However, if that's too complicated, an alternative solution is to keep
> writing the previous uri to ovirt's OVFs, i.e., those in OVF_STORE and in
> snapshot's configuration. That would be a pity since we want to minimize
> the differences between the OVFs we generate, but it would be better than
> reverting the change..
>

I think a reasonable alternative is to revert the change, adjust HE to use
both formats, and then introduce the new format. The first step (revert)
should be done for 4.2.0 (today?) and the rest for 4.2.1.
Y.
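
If adjusting ovirt-ha-agent is feasible, the tolerant parsing could be as
small as normalizing the namespace before comparing - a rough sketch (plain
ElementTree; the URI is a placeholder, not the real OVF namespace):

import xml.etree.ElementTree as ET

OVF_NS = 'http://example.org/ovf/envelope/1'  # placeholder namespace

def split_tag(tag):
    # ElementTree tags look like '{namespace}LocalName'; strip a trailing
    # slash from the namespace so both old and new OVFs compare equal.
    if tag.startswith('{'):
        ns, local = tag[1:].split('}', 1)
        return ns.rstrip('/'), local
    return '', tag

def find_by_local_name(root, local_name):
    want = (OVF_NS.rstrip('/'), local_name)
    return [el for el in root.iter() if split_tag(el.tag) == want]

# Usage: root = ET.fromstring(ovf_text); disks = find_by_local_name(root, 'Disk')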


>
>>
>>
>>>
>>> 1516113 
 cockpit-ovirt phbai...@redhat.com POST Deploy the HostedEngine failed
 with the default CPU type

>>>
>>> Would be happy if the remaining patch could get reviewed quickly.
>>>
>>
>>
>>
>>
>>>
>>> 1518693 
 ovirt-engine akrej...@redhat.com POST Quota is needed to copy template
 disk

>>>
>>> This is only via REST and the default quota can be used as a workaround
>>> - why is this a blocker?
>>>
>>
>> Automation added it because Raz marked it as Regression. But the change
>> was intentional.
>>
>>
>>
>>> 1517810 
 ovirt-engine stira...@redhat.com Adding additional ha-host fails.

>>>
>> This one has a verified engine fix now, we just need to merge it and
>> update Node 0 setup (also verified).
>>
>>
>> Martin
>>
>>
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [OST Failure] [oVirt Master] [HC] Hosted engine fails to install

2017-11-30 Thread Yaniv Kaul
On Thu, Nov 30, 2017 at 8:58 AM, Sahina Bose  wrote:

> Hi,
>
> The error with HE install is :  Starting vdsmd", "[ ERROR ] Failed to
> execute stage 'Misc configuration': Couldn't  connect to VDSM within 15
> seconds".
>
> Is there a configuration parameter that needs to be set to change the
> timeout, or is this a bug?
>

From the log it doesn't seem it even waited 15 secs:
2017-11-29 21:45:03,031-0500 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:813 execute: ('/usr/bin/systemctl', 'start',
'vdsmd.service'), executable='None', cwd='None', env=None
2017-11-29 21:45:05,881-0500 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:863 execute-result: ('/usr/bin/systemctl', 'start',
'vdsmd.service'), rc=0
2017-11-29 21:45:05,882-0500 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:921 execute-output: ('/usr/bin/systemctl', 'start',
'vdsmd.service') stdout:


2017-11-29 21:45:05,882-0500 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:926 execute-output: ('/usr/bin/systemctl', 'start',
'vdsmd.service') stderr:


2017-11-29 21:45:07,001-0500 DEBUG otopi.plugins.gr_he_setup.system.vdsmenv
util.__log_debug:374 VDSM jsonrpc connection is not ready
2017-11-29 21:45:07,002-0500 DEBUG otopi.plugins.gr_he_setup.system.vdsmenv
util.__log_debug:374 Creating a new json-rpc connection to VDSM
2017-11-29 21:45:07,202-0500 DEBUG otopi.plugins.gr_he_setup.system.vdsmenv
util.__log_debug:374 VDSM jsonrpc connection is not ready
2017-11-29 21:45:07,203-0500 DEBUG otopi.context context._executeMethod:143
method exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in
_executeMethod
method['method']()
  File
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-setup/system/vdsmenv.py",
line 158, in _misc
timeout=ohostedcons.Const.VDSCLI_SSL_TIMEOUT,
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/util.py", line
442, in connect_vdsm_json_rpc
__vdsm_json_rpc_connect(logger, timeout)
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/util.py", line
398, in __vdsm_json_rpc_connect
timeout=VDSM_MAX_RETRY * VDSM_DELAY
RuntimeError: Couldn't  connect to VDSM within 15 seconds
2017-11-29 21:45:07,204-0500 ERROR otopi.context context._executeMethod:152
Failed to execute stage 'Misc configuration': Couldn't  connect to VDSM
within 15 seconds
2017-11-29 21:45:07,205-0500 DEBUG otopi.transaction transaction.abort:119
aborting 'Yum Transaction'
2017-11-29 21:45:07,205-0500 INFO otopi.plugins.otopi.packagers.yumpackager
yumpackager.info:80 Yum Performing yum transaction rollback



> Logs at : http://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-master/107/artifact/exported-artifacts/test_logs/hc-basic-suite-master/post-002_bootstrap.py/lago-hc-basic-suite-master-host0/_var_log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20171129214218-7vns3t.log
>
> thanks
> sahina
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] oVirt 4.2.0 blockers review - Day 3

2017-11-30 Thread Yaniv Kaul
On Thu, Nov 30, 2017 at 9:49 AM, Sandro Bonazzola 
wrote:

> Hi,
> we still have 4 open acknowledged blockers according to
> https://bugzilla.redhat.com/buglist.cgi?quicksearch=
> flag%3Ablocker%2B%20target_milestone%3Aovirt-4.2.0%
> 20status%3Anew%2Cassigned%2Cpost
>
> Bug ID Product Assignee Status Summary
> 1518887 
> ovirt-hosted-engine-ha b...@ovirt.org NEW ovirt-ha-agent fails parsing
> the OVF_STORE due to a change in OVF namespace URI
>

I'm in favor of reverting the virt change personally.

1516113  cockpit-ovirt
> phbai...@redhat.com POST Deploy the HostedEngine failed with the default
> CPU type
>

Would be happy if the remaining patch could get reviewed quickly.

1518693  ovirt-engine
> akrej...@redhat.com POST Quota is needed to copy template disk
>

This is only via REST and the default quota can be used as a workaround -
why is this a blocker?

1507277  ovirt-engine
> era...@redhat.com POST [RFE][DR] - Vnic Profiles mapping in VMs register
> from data storage domain should be supported also for templates
>

I'm not sure why it's a blocker for 4.2.0.


> There are also 3 proposed blockers that need either to be acknowledged or
> rejected: https://bugzilla.redhat.com/buglist.cgi?
> quicksearch=flag%3Ablocker%3F%20target_milestone%3Aovirt-4.
> 2.0%20status%3Anew%2Cassigned%2Cpost
>
> Bug ID Product Assignee Summary
> 1450061  Red Hat
> Enterprise Virtualization Manager rh-spice-b...@redhat.com Copy-paste:
> filename encoding in Win guest
>

Moved to 4.2.1. The Spice team did not touch it for quite some time since
it was opened.

1517810  ovirt-engine
> stira...@redhat.com Adding additional ha-host fails.
> 1502920  Red Hat
> Enterprise Virtualization Manager rba...@redhat.com File missing after
> upgrade of RHVH node from version RHVH-4.1-20170925.0 to latest.
>

Stuck on POST since 2017-10-20 00:47:07 IDT?


>
> Given current status I would reschedule oVirt 4.2.0 RC to next week,
> tentatively on Monday.
>

I'd review the status tomorrow as well - perhaps we can build the RC
tomorrow. I don't see too many blockers above that cannot be completed
today.
Y.

>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R
>
> Red Hat EMEA 
> 
> TRIED. TESTED. TRUSTED. 
>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] oVirt 4.2.0 blockers review - Day 2

2017-11-29 Thread Yaniv Kaul
On Wed, Nov 29, 2017 at 10:30 AM, Ala Hino <ah...@redhat.com> wrote:

>
>
> On Wed, Nov 29, 2017 at 10:09 AM, Yaniv Kaul <yk...@redhat.com> wrote:
>
>>
>>
>> On Wed, Nov 29, 2017 at 9:40 AM, Sandro Bonazzola <sbona...@redhat.com>
>> wrote:
>>
>>> Hi,
>>> we had 7 blockers yesterday and the list is now down to 4:
>>>
>>
>> (It'll be easier to review with hyperlinks)
>>
>>
>>> Bug ID Product Assignee Status Summary Changed
>>> 1516113 cockpit-ovirt phbai...@redhat.com POST Deploy the HostedEngine
>>> failed with the default CPU type 2017-11-27 20:52:27
>>>
>>
>> Unsure why it's not in MODIFIED state, if all patches were merged?
>>
>> 1509629 ovirt-engine ah...@redhat.com POST Cold merge failed to remove
>>> all volumes
>>>
>>
>> https://gerrit.ovirt.org/#/c/84821/ is waiting for Ala to verify - I
>> hope it gets in today.
>>
>
> Manual tests passed. 10 OSTs runs successfully passed. Waiting for Raz to
> ack automation run.
>

Please check Engine.log in OST - for some reason, the OST tests did not
fail on that (we need to check why our tests didn't).
Y.


>
>> 2017-11-28 11:33:16
>>> 1507277 ovirt-engine era...@redhat.com POST [RFE][DR] - Vnic Profiles
>>> mapping in VMs register from data storage domain should be supported also
>>> for templates
>>>
>>
>> Doesn't strike me as a blocker. Could be moved to 4.2.1.
>>
>>
>>> 2017-11-28 06:38:34
>>> 1496719 vdsm edwa...@redhat.com POST Port mirroring is not set after VM
>>> migration 2017-11-28 11:54:20
>>>
>>>
>> Same.
>> Y.
>>
>>
>>> Looking at bug status and at yesterday mail thread all of the above
>>> should go in modified today.
>>> Please ping me when you move above to modified, if possible we'll build
>>> RC today, if not we'll have to delay another day.
>>>
>>>
>>> --
>>>
>>> SANDRO BONAZZOLA
>>>
>>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R
>>>
>>> Red Hat EMEA <https://www.redhat.com/>
>>> <https://red.ht/sig>
>>> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
>>> <http://www.teraplan.it/redhat-osd-2017/>
>>>
>>> ___
>>> Devel mailing list
>>> Devel@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/devel
>>>
>>
>>
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] oVirt 4.2.0 blockers review - Day 2

2017-11-29 Thread Yaniv Kaul
On Wed, Nov 29, 2017 at 9:40 AM, Sandro Bonazzola 
wrote:

> Hi,
> we had 7 blockers yesterday and the list is now down to 4:
>

(It'll be easier to review with hyperlinks)


> Bug ID Product Assignee Status Summary Changed
> 1516113 cockpit-ovirt phbai...@redhat.com POST Deploy the HostedEngine
> failed with the default CPU type 2017-11-27 20:52:27
>

Unsure why it's not in MODIFIED state, if all patches were merged?

1509629 ovirt-engine ah...@redhat.com POST Cold merge failed to remove all
> volumes
>

https://gerrit.ovirt.org/#/c/84821/ is waiting for Ala to verify - I hope
it gets in today.

2017-11-28 11:33:16
> 1507277 ovirt-engine era...@redhat.com POST [RFE][DR] - Vnic Profiles
> mapping in VMs register from data storage domain should be supported also
> for templates
>

Doesn't strike me as a blocker. Could be moved to 4.2.1.


> 2017-11-28 06:38:34
> 1496719 vdsm edwa...@redhat.com POST Port mirroring is not set after VM
> migration 2017-11-28 11:54:20
>
>
Same.
Y.


> Looking at bug status and at yesterday mail thread all of the above should
> go in modified today.
> Please ping me when you move above to modified, if possible we'll build RC
> today, if not we'll have to delay another day.
>
>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R
>
> Red Hat EMEA 
> 
> TRIED. TESTED. TRUSTED. 
> 
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] oVirt 4.2.0 blockers review

2017-11-28 Thread Yaniv Kaul
On Wed, Nov 29, 2017 at 12:10 AM, Nir Soffer  wrote:

> We merge the last multipath alerts patches today.
>

But engine side is not ready, so the feature is not yet usable.


>
> We have 2 pending patches for lvm filter:
> https://gerrit.ovirt.org/#/q/topic:lvm-filter+is:open
>

No one has reviewed them yet - I doubt we can.
Y.


>
> I would like to get these in the beta if we can.
>
> Nir
>
> On Tue, Nov 28, 2017 at 11:27 PM Martin Perina  wrote:
>
>> On Tue, Nov 28, 2017 at 11:58 AM, Sandro Bonazzola 
>> wrote:
>>
>>> Hi,
>>> I'm waiting for last blockers to be fixed for starting a 4.2.0 RC build.
>>> Assignee are in the TO list of this email.
>>> So far we are down to 7 bugs: https://bugzilla.redhat.
>>> com/buglist.cgi?quicksearch=flag%3Ablocker%2B%20target_
>>> milestone%3Aovirt-4.2.0%20status%3Anew%2Cassigned%2Cpost
>>>
>>> Please review them and provide an ETA for the fix. If the bug is marked
>>> as blocker by mistake, please remove the blocker flag and / or postpone the
>>> bug to a later release.
>>>
>>> Bug ID Product Assignee Status Summary
>>> 1516113 cockpit-ovirt phbai...@redhat.com POST Deploy the HostedEngine
>>> failed with the default CPU type
>>> 1509629 ovirt-engine ah...@redhat.com ASSIGNED Cold merge failed to
>>> remove all volumes
>>> 1507277 ovirt-engine era...@redhat.com POST [RFE][DR] - Vnic Profiles
>>> mapping in VMs register from data storage domain should be supported also
>>> for templates
>>> 1506677 ovirt-engine dchap...@redhat.com POST Hotplug fail when
>>> attaching a disk with cow format on glusterfs
>>> 1488338 ovirt-engine mlipc...@redhat.com NEW SPM host is not moving to
>>> Non-Operational status when blocking its access to storage domain.
>>>
>>
>> I've been able to reproduce that, attached new logs with description to
>> the bug. At the moment it doesn't seem to me like something that could be
>> fixed easily and quickly, we will continue investigation tomorrow
>>
>>
>>> 1512534 ovirt-hosted-engine-ha pklic...@redhat.com ASSIGNED SHE
>>> deployment takes too much time and looks like stuck.
>>>
>>
>> Fix posted and merged, bug is back at MODIFIED
>>
>>
>>> 1496719 vdsm edwa...@redhat.com POST Port mirroring is not set after VM
>>> migration
>>>
>>> --
>>>
>>> SANDRO BONAZZOLA
>>>
>>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R
>>>
>>> Red Hat EMEA 
>>> 
>>> TRIED. TESTED. TRUSTED. 
>>> 
>>>
>>> ___
>>> Devel mailing list
>>> Devel@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/devel
>>>
>>
>>
>>
>> --
>> Martin Perina
>> Associate Manager, Software Engineering
>> Red Hat Czech s.r.o.
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] oVirt 4.2.0 blockers review

2017-11-28 Thread Yaniv Kaul
On Tue, Nov 28, 2017 at 6:35 PM, Ala Hino  wrote:

> Issue analyzed and fix verified by the automation test case.
> Now, running all merge related automation test cases to make sure no
> regression/new issues introduced.
> Patch in gerrit: https://gerrit.ovirt.org/#/c/84821/
>

Thanks - I think it's one of the last blockers for oVirt GA.
Y.


>
> On Tue, Nov 28, 2017 at 4:20 PM, Ala Hino  wrote:
>
>> 1509629 ovirt-engine ah...@redhat.com ASSIGNED Cold merge failed to
>> remove all volumes
>> This one only seen in automation and I am not able to reproduce it
>> locally.
>>
>>
>> On Tue, Nov 28, 2017 at 12:58 PM, Sandro Bonazzola 
>> wrote:
>>
>>> Hi,
>>> I'm waiting for last blockers to be fixed for starting a 4.2.0 RC build.
>>> Assignee are in the TO list of this email.
>>> So far we are down to 7 bugs: https://bugzilla.redhat.
>>> com/buglist.cgi?quicksearch=flag%3Ablocker%2B%20target_miles
>>> tone%3Aovirt-4.2.0%20status%3Anew%2Cassigned%2Cpost
>>>
>>> Please review them and provide an ETA for the fix. If the bug is marked
>>> as blocker by mistake, please remove the blocker flag and / or postpone the
>>> bug to a later release.
>>>
>>> Bug ID Product Assignee Status Summary
>>> 1516113 cockpit-ovirt phbai...@redhat.com POST Deploy the HostedEngine
>>> failed with the default CPU type
>>> 1509629 ovirt-engine ah...@redhat.com ASSIGNED Cold merge failed to
>>> remove all volumes
>>> 1507277 ovirt-engine era...@redhat.com POST [RFE][DR] - Vnic Profiles
>>> mapping in VMs register from data storage domain should be supported also
>>> for templates
>>> 1506677 ovirt-engine dchap...@redhat.com POST Hotplug fail when
>>> attaching a disk with cow format on glusterfs
>>> 1488338 ovirt-engine mlipc...@redhat.com NEW SPM host is not moving to
>>> Non-Operational status when blocking its access to storage domain.
>>> 1512534 ovirt-hosted-engine-ha pklic...@redhat.com ASSIGNED SHE
>>> deployment takes too much time and looks like stuck.
>>> 1496719 vdsm edwa...@redhat.com POST Port mirroring is not set after VM
>>> migration
>>>
>>> --
>>>
>>> SANDRO BONAZZOLA
>>>
>>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R
>>>
>>> Red Hat EMEA 
>>> 
>>> TRIED. TESTED. TRUSTED. 
>>> 
>>>
>>
>>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 23-11-2017 ] [ 001_initialize_engine.test_initialize_engine ]

2017-11-26 Thread Yaniv Kaul
On Sun, Nov 26, 2017 at 4:53 PM, Gal Ben Haim  wrote:

> We still see this issue on the upgrade suite from latest release to master
> [1].
> I don't see any evidence in "/var/log/messages" [2] that
> "ovirt-imageio-proxy" was started twice.
>

Since it's not a registered port, and it's a high port, could it be used by
something else? (What are the odds, though?)
Is it consistent?
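
For reference, a quick way to check who is holding the port on such a slave
(assuming the proxy still defaults to 54323 - adjust if the config says otherwise):

    ss -tlnp | grep 54323
    # or: lsof -i :54323

An empty result right before the failure would point at a transient bind/race
rather than a real conflict.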
Y.


>
> [1] http://jenkins.ovirt.org/blue/rest/organizations/
> jenkins/pipelines/ovirt-master_change-queue-tester/
> runs/4153/nodes/123/steps/241/log/?start=0
>
> [2] http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/
> ovirt-master_change-queue-tester/4153/artifact/exported-
> artifacts/upgrade-from-release-suit-master-el7/test_
> logs/upgrade-from-release-suite-master/post-001_initialize_engine.py/lago-
> upgrade-from-release-suite-master-engine/_var_log/messages/*view*/
>
> On Fri, Nov 24, 2017 at 8:16 PM, Dafna Ron  wrote:
>
>> there were two different patches reported as failing cq today with the
>> ovirt-imageio-proxy service failing to start.
>>
>> Here is the latest failure: http://jenkins.ovirt.org/job/o
>> virt-master_change-queue-tester/4130/artifact
>>
>>
>>
>>
>> On 11/23/2017 03:39 PM, Allon Mureinik wrote:
>>
>> Daniel/Nir?
>>
>> On Thu, Nov 23, 2017 at 5:29 PM, Dafna Ron  wrote:
>>
>>> Hi,
>>>
>>> We have a failing on test 001_initialize_engine.test_initialize_engine.
>>>
>>> This is failing with error Failed to start service 'ovirt-imageio-proxy
>>>
>>>
>>> *Link and headline of suspected patches: *
>>>
>>>
>>> * build: Make resulting RPMs architecture-specific -
>>> https://gerrit.ovirt.org/#/c/84534/ 
>>> Link to Job:
>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4055
>>> *
>>>
>>>
>>> *Link to all logs:*
>>>
>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-teste
>>> r/4055/artifact/
>>>
>>> *http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4055/artifact/exported-artifacts/upgrade-from-release-suit-master-el7/test_logs/upgrade-from-release-suite-master/post-001_initialize_engine.py/lago-upgrade-from-release-suite-master-engine/_var_log/messages/*view*/
>>> *
>>>
>>>
>>>
>>> * (Relevant) error snippet from the log:  *from lago log:
>>>
>>> Failed to start service 'ovirt-imageio-proxy
>>>
>>> messages logs:
>>>
>>> Nov 23 07:30:47 lago-upgrade-from-release-suite-master-engine systemd: 
>>> Starting Session 8 of user root.
>>> Nov 23 07:30:47 lago-upgrade-from-release-suite-master-engine 
>>> ovirt-imageio-proxy: Traceback (most recent call last):
>>> Nov 23 07:30:47 lago-upgrade-from-release-suite-master-engine 
>>> ovirt-imageio-proxy: File "/usr/bin/ovirt-imageio-proxy", line 85, in 
>>> 
>>> Nov 23 07:30:47 lago-upgrade-from-release-suite-master-engine 
>>> ovirt-imageio-proxy: status = image_proxy.main(args, config)
>>> Nov 23 07:30:47 lago-upgrade-from-release-suite-master-engine 
>>> ovirt-imageio-proxy: File 
>>> "/usr/lib/python2.7/site-packages/ovirt_imageio_proxy/image_proxy.py", line 
>>> 21, in main
>>> Nov 23 07:30:47 lago-upgrade-from-release-suite-master-engine 
>>> ovirt-imageio-proxy: image_server.start(config)
>>> Nov 23 07:30:47 lago-upgrade-from-release-suite-master-engine 
>>> ovirt-imageio-proxy: File 
>>> "/usr/lib/python2.7/site-packages/ovirt_imageio_proxy/server.py", line 45, 
>>> in start
>>> Nov 23 07:30:47 lago-upgrade-from-release-suite-master-engine 
>>> ovirt-imageio-proxy: WSGIRequestHandler)
>>> Nov 23 07:30:47 lago-upgrade-from-release-suite-master-engine 
>>> ovirt-imageio-proxy: File "/usr/lib64/python2.7/SocketServer.py", line 419, 
>>> in __init__
>>> Nov 23 07:30:47 lago-upgrade-from-release-suite-master-engine 
>>> ovirt-imageio-proxy: self.server_bind()
>>> Nov 23 07:30:47 lago-upgrade-from-release-suite-master-engine 
>>> ovirt-imageio-proxy: File "/usr/lib64/python2.7/wsgiref/simple_server.py", 
>>> line 48, in server_bind
>>> Nov 23 07:30:47 lago-upgrade-from-release-suite-master-engine 
>>> ovirt-imageio-proxy: HTTPServer.server_bind(self)
>>> Nov 23 07:30:47 lago-upgrade-from-release-suite-master-engine 
>>> ovirt-imageio-proxy: File "/usr/lib64/python2.7/BaseHTTPServer.py", line 
>>> 108, in server_bind
>>> Nov 23 07:30:47 lago-upgrade-from-release-suite-master-engine 
>>> ovirt-imageio-proxy: SocketServer.TCPServer.server_bind(self)
>>> Nov 23 07:30:47 lago-upgrade-from-release-suite-master-engine 
>>> ovirt-imageio-proxy: File "/usr/lib64/python2.7/SocketServer.py", line 430, 
>>> in server_bind
>>> Nov 23 07:30:47 lago-upgrade-from-release-suite-master-engine 
>>> ovirt-imageio-proxy: 

[ovirt-devel] ovirt.org presentation template

2017-11-23 Thread Yaniv Kaul
Hi,

Do we have such a template for presentation?
TIA,
Y.
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 22-11-2017 ] [ 004_basic_sanity.verify_add_vm_template ]

2017-11-22 Thread Yaniv Kaul
On Wed, Nov 22, 2017 at 7:55 PM, Nir Soffer  wrote:

>
>
> On Wed, Nov 22, 2017 at 7:15 PM Dafna Ron  wrote:
>
>> Hi,
>>
>> we have a failure in 004_basic_sanity.verify_add_vm_template.
>>
>> The error seems to be a failure from the api request since although I am
>> seeing errors in the logs I am not sure they are the cause.
>>
>>
>> *Link and headline of suspected patches: *
>>
>> * core: Cleanup BaseImagesCommand code -
>> https://gerrit.ovirt.org/#/c/83812/ *
>>
>>
>> *Link to Job:*
>> * http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3989
>> *
>>
>>
>> *Link to all logs:*
>> *
>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3989/artifact
>> *
>>
>

Relevant logs start here -
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3989/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-004_basic_sanity.py/

Specifically, server.log (
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3989/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/server.log
):

2017-11-22 10:40:14,334-05 ERROR [org.jboss.as.ejb3.invocation]
(EE-ManagedThreadFactory-engineScheduled-Thread-29) WFLYEJB0034: EJB
Invocation failed on component Backend for method public abstract
org.ovirt.engine.core.common.action.ActionReturnValue
org.ovirt.engine.core.bll.interfaces.BackendInternal.endAction(org.ovirt.engine.core.common.action.ActionType,org.ovirt.engine.core.common.action.ActionParametersBase,org.ovirt.engine.core.bll.context.CommandContext):
javax.ejb.EJBTransactionRolledbackException
at
org.jboss.as.ejb3.tx.CMTTxInterceptor.handleInCallerTx(CMTTxInterceptor.java:160)
[wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final]
at
org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInCallerTx(CMTTxInterceptor.java:257)
[wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final]
at
org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:381)
[wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final]
at
org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:244)
[wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at
org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509)
at
org.jboss.weld.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:73)
[weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
at
org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:89)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at
org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41)
[wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at
org.jboss.as.ejb3.component.invocationmetrics.WaitTimeInterceptor.processInvocation(WaitTimeInterceptor.java:47)
[wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at
org.jboss.as.ejb3.security.SecurityContextInterceptor.processInvocation(SecurityContextInterceptor.java:100)
[wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at
org.jboss.as.ejb3.deployment.processors.StartupAwaitInterceptor.processInvocation(StartupAwaitInterceptor.java:22)
[wildfly-ejb3-11.0.0.Final.jar:11.0.0.Final]
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)

...

Caused by: java.lang.NullPointerException
at java.util.Objects.requireNonNull(Objects.java:203) [rt.jar:1.8.0_151]
at java.util.Optional.(Optional.java:96) [rt.jar:1.8.0_151]
at java.util.Optional.of(Optional.java:108) [rt.jar:1.8.0_151]
at java.util.stream.FindOps$FindSink$OfRef.get(FindOps.java:193)
[rt.jar:1.8.0_151]
at java.util.stream.FindOps$FindSink$OfRef.get(FindOps.java:190)
[rt.jar:1.8.0_151]
at java.util.stream.FindOps$FindOp.evaluateSequential(FindOps.java:152)
[rt.jar:1.8.0_151]
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
[rt.jar:1.8.0_151]
at 
java.util.stream.ReferencePipeline.findFirst(ReferencePipeline.java:464)
[rt.jar:1.8.0_151]
at 
org.ovirt.engine.core.bll.storage.disk.image.BaseImagesCommand.setQcowCompatByQemuImageInfo(BaseImagesCommand.java:432)
[bll.jar:]
at 
org.ovirt.engine.core.bll.storage.disk.image.BaseImagesCommand.endSuccessfully(BaseImagesCommand.java:393)
[bll.jar:]
at 

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 20-11-1017 ] [ 002_bootstrap.verify_add_all_hosts ]

2017-11-20 Thread Yaniv Kaul
On Mon, Nov 20, 2017 at 3:10 PM, Dafna Ron  wrote:

> Hi,
>
> We had a failure in OST for test 002_bootstrap.verify_add_all_hosts.
>
> From the logs I can see that vdsm on host0 was reporting that it cannot
> find the physical volume but eventually the storage was created and is
> reported as responsive.
>
> However, Host1 is reported to became non-operational with storage domain
> does not exist error and I think that there is a race.
>

I've opened https://bugzilla.redhat.com/show_bug.cgi?id=1514906 on this.


> I think that we create the storage domain while host1 is being installed
> and if the domain is not created and reported as activated in time, host1
> will become nonOperational.
>

And based on the above description, this is exactly the issue I've
described in the BZ.
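
In the meantime, a minimal sketch of the kind of wait the suite could do before
adding the second host (engine address, credentials and IDs are placeholders, and
this assumes the v4 REST API reports the attached domain's status as
<status>active</status>):

    until curl -ks -u admin@internal:PASSWORD \
        "https://ENGINE-FQDN/ovirt-engine/api/datacenters/DC-ID/storagedomains/SD-ID" \
        | grep -q '<status>active</status>'; do
        sleep 5
    done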
Y.


> are we starting installation of host1 before host0 and storage are active?
>
> *Link to suspected patches: I do not think that the patch reported is
> related to the error*
>
>
> * https://gerrit.ovirt.org/#/c/84133/
>  Link to Job: *
>
>
>
> * http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3902/
>  Link
> to all logs: *
>
>
> *http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3902/artifact/
> 
> *
>
>
> *(Relevant) error snippet from the log: *
>
>
>
> *  Lago log: *
>
> 2017-11-18 11:15:25,472::log_utils.py::end_log_task::670::nose::INFO::  #
> add_master_storage_domain: ESC[32mSuccessESC[0m (in 0:01:09)
> 2017-11-18 11:15:25,472::log_utils.py::start_log_task::655::nose::INFO::
> # add_secondary_storage_domains: ESC[0mESC[0m
> 2017-11-18 11:16:47,455::log_utils.py::end_log_task::670::nose::INFO::  #
> add_secondary_storage_domains: ESC[32mSuccessESC[0m (in 0:01:21)
> 2017-11-18 11:16:47,456::log_utils.py::start_log_task::655::nose::INFO::
> # import_templates: ESC[0mESC[0m
> 2017-11-18 11:16:47,513::testlib.py::stopTest::198::nose::INFO::*
> SKIPPED: Exported domain generation not supported yet
> 2017-11-18 11:16:47,514::log_utils.py::end_log_task::670::nose::INFO::  #
> import_templates: ESC[32mSuccessESC[0m (in 0:00:00)
> 2017-11-18 11:16:47,514::log_utils.py::start_log_task::655::nose::INFO::
> # verify_add_all_hosts: ESC[0mESC[0m
> 2017-11-18 
> 11:16:47,719::testlib.py::assert_equals_within::227::ovirtlago.testlib::ERROR::
> * Unhandled exception in  at 0x2909230>
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 219,
> in assert_equals_within
> res = func()
>   File "/home/jenkins/workspace/ovirt-master_change-queue-
> tester/ovirt-system-tests/basic-suite-master/test-scenarios/002_bootstrap.py",
> line 430, in 
> lambda: _all_hosts_up(hosts_service, total_hosts)
>   File "/home/jenkins/workspace/ovirt-master_change-queue-
> tester/ovirt-system-tests/basic-suite-master/test-scenarios/002_bootstrap.py",
> line 129, in _all_hosts_up
> _check_problematic_hosts(hosts_service)
>   File "/home/jenkins/workspace/ovirt-master_change-queue-
> tester/ovirt-system-tests/basic-suite-master/test-scenarios/002_bootstrap.py",
> line 149, in _check_problematic_hosts
> raise RuntimeError(dump_hosts)
> RuntimeError: 1 hosts failed installation:
> lago-basic-suite-master-host-1: non_operational
>
> 2017-11-18 11:16:47,722::utils.py::wrapper::480::lago.utils::DEBUG::Looking
> for a workdir
> 2017-11-18 
> 11:16:47,722::workdir.py::resolve_workdir_path::361::lago.workdir::DEBUG::Checking
> if /dev/shm/ost/deployment-basic-suite-master is a workdir
> 2017-11-18 11:16:47,724::log_utils.py::__enter__::600::lago.prefix::INFO::
> * Collect artifacts: ESC[0mESC[0m
> 2017-11-18 11:16:47,724::log_utils.py::__enter__::600::lago.prefix::INFO::
> * Collect artifacts: ESC[0mESC[0m
>
> vdsm host0:
>
> 2017-11-18 06:14:23,980-0500 INFO  (jsonrpc/0) [vdsm.api] START
> getDeviceList(storageType=3, guids=[u'360014059618895272774e97a2aaf5dd6'],
> checkStatus=False, options={}) from=:::192.168.201.4,45636,
> flow_id=ed8310a1-a7af-4a67-b351-8ff
> 364766b8a, task_id=6ced0092-34cd-49f0-aa0f-6aae498af37f (api:46)
> 2017-11-18 06:14:24,353-0500 WARN  (jsonrpc/0) [storage.LVM] lvm pvs
> failed: 5 [] ['  Failed to find physical volume "/dev/mapper/
> 360014059618895272774e97a2aaf5dd6".'] (lvm:322)
> 2017-11-18 06:14:24,353-0500 WARN  (jsonrpc/0) [storage.HSM] getPV failed
> for guid: 360014059618895272774e97a2aaf5dd6 (hsm:1973)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1970,
> in _getDeviceList
> pv = lvm.getPV(guid)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/lvm.py", line 852,
> in getPV
> raise se.InaccessiblePhysDev((pvName,))
> InaccessiblePhysDev: Multipath cannot access physical device(s):
> "devices=(u'360014059618895272774e97a2aaf5dd6',)"
> 2017-11-18 

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 14-11-2017 ] [ 002_bootstrap.add_hosts ]

2017-11-15 Thread Yaniv Kaul
On Wed, Nov 15, 2017 at 4:00 PM, Yedidyah Bar David  wrote:

> On Wed, Nov 15, 2017 at 1:35 PM, Dafna Ron  wrote:
> > Didi,
> >
> > Thank you for your detailed explanation and for taking the time to debug
> > this issue.
> >
> > I opened the following Jira's:
> >
> > 1. for increasing entropy in the hosts:
> > https://ovirt-jira.atlassian.net/browse/OVIRT-1763
>
> I no longer think this is the main reason for the slowness, although
> it might still be useful to verify/improve.
>
> yuvalt pointed out in a private discussion that openssl lib does nothing
> related to random numbers in its pre/post install scripts.
>
> It does call ldconfig, as do several other packages, which can take
> quite a lot of time. Some took less. 3 minutes is definitely not
> reasonable.
>
> Also see this, from engine log:
>
> 2017-11-13 11:07:17,026-05 INFO
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (VdsDeploy) [45082420] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509),
> Installing Host lago-upgrade-from-release-suite-master-host0. Yum
> update: 398/570: 1:NetworkManager-team-1.8.0-11.el7_4.x86_64.
> 2017-11-13 11:07:30,573-05 INFO
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (VdsDeploy) [45082420] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509),
> Installing Host lago-upgrade-from-release-suite-master-host0. Yum
> obsoleting: 399/570: 1:NetworkManager-ppp-1.8.0-11.el7_4.x86_64.
> 2017-11-13 11:07:45,137-05 INFO
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (VdsDeploy) [45082420] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509),
> Installing Host lago-upgrade-from-release-suite-master-host0. Yum
> update: 400/570: 1:NetworkManager-tui-1.8.0-11.el7_4.x86_64.
> 2017-11-13 11:07:57,842-05 INFO
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (VdsDeploy) [45082420] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509),
> Installing Host lago-upgrade-from-release-suite-master-host0. Yum
> update: 401/570: audit-2.7.6-3.el7.x86_64.
>
> That's ~ 15 seconds per package, and the first 3 have no scripts at all.
>
> I'd say there is some serious storage issue there - bad disk, loaded
> storage network/hardware, something like this.
>

That was my guess as well. Sometimes selinux relabeling takes time (I
believe ovirt console does that).
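
If anyone has access to such a slave while it is slow, a quick storage sanity check
(direct I/O, so the page cache doesn't hide the problem):

    dd if=/dev/zero of=/var/tmp/ddtest bs=1M count=256 oflag=direct conv=fsync
    rm -f /var/tmp/ddtest

A crawling result here would point at the storage rather than at the packages'
scriptlets.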
Y.


>
> >
> > 2. I added a comment to the not all logs are downloaded Jira regarded a
> > workaround which would save the logs on a different location:
> > https://ovirt-jira.atlassian.net/browse/OVIRT-1583
> >
> > 3. adding /tmp logs to the job logs:
> > https://ovirt-jira.atlassian.net/browse/OVIRT-1764
> >
> > Again, thank you for your help Didi.
> >
> > Dafna
> >
> >
> >
> > On 11/15/2017 09:01 AM, Yedidyah Bar David wrote:
> >> On Tue, Nov 14, 2017 at 5:48 PM, Dafna Ron  wrote:
> >>> Hi,
> >>>
> >>> We had a failure in upgrade suite for 002_bootstrap.add_hosts. I am not
> >>> seeing any error that can suggest an issue in the engine.
> >>>
> >>> I can see in the host's messages log that we have stopped writing to
> the log
> >>> for 15 minutes and it may suggest that there is something that is
> keeping
> >>> the host from starting which causes us to fail the test on timeout.
> >>>
> >>> However, I can use some help in determining the cause for this
> failure and
> >>> whether it's connected to the bootstrap_add_host test in upgrade.
> >>>
> >>> Link to suspected patches: As I said, I do not think it's related, but
> this
> >>> is the patch that was reported.
> >>>
> >>> https://gerrit.ovirt.org/#/c/83854/
> >>>
> >>>
> >>> Link to Job:
> >>>
> >>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3795/
> >>>
> >>>
> >>> Link to all logs:
> >>>
> >>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-
> tester/3795/artifact/
> >>>
> >>>
> >>> (Relevant) error snippet from the log:
> >>>
> >>> 
> >>>
> >>> Test error:
> >>>
> >>> Error Message
> >>>
> >>> False != True after 900 seconds
> >>>
> >>> Stacktrace
> >>>
> >>> Traceback (most recent call last):
> >>>   File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
> >>> testMethod()
> >>>   File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in
> runTest
> >>> self.test(*self.arg)
> >>>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
> 129, in
> >>> wrapped_test
> >>> test()
> >>>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
> 59, in
> >>> wrapper
> >>> return func(get_test_prefix(), *args, **kwargs)
> >>>   File
> >>> "/home/jenkins/workspace/ovirt-master_change-queue-
> tester/ovirt-system-tests/upgrade-from-release-suite-
> master/test-scenarios-after-upgrade/002_bootstrap.py",
> >>> line 187, in add_hosts
> >>> testlib.assert_true_within(_host_is_up_4, timeout=15*60)
> >>>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
> 263, in
> >>> assert_true_within
> >>> assert_equals_within(func, True, timeout, 

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master ] [ 06-11-2017 ] [ 002_bootstrap.verify_add_hosts ]

2017-11-06 Thread Yaniv Kaul
On Mon, Nov 6, 2017 at 1:39 PM, Dafna Ron  wrote:

> Hi,
>
> We failed test 002_bootstrap.verify_add_hosts
>
> I can see we only tried to install one of the hosts (host-0) and failed.
> The second host has no log, which means we did not try to deploy it.
>
> The error suggests that the ovirt-imageio-daemon failed to start. However,
> there is another message that I think should be addressed about conflicting
> vdsm and libvirt configurations.
>
> *Link to suspected patches: https://gerrit.ovirt.org/#/c/83612/
> *
>
>
> * Link to Job:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3626/
>  Link
> to all logs:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3626/artifact/
> 
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3626/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/host-deploy/ovirt-host-deploy-20171106025647-lago-basic-suite-master-host-0-5530ab1f.log
> *
>
>
> *(Relevant) error snippet from the log: *
>
> *  \*
>
> 2017-11-06 02:56:46,526-0500 DEBUG 
> otopi.plugins.ovirt_host_deploy.vdsm.packages plugin.execute:921 
> execute-output: ('/usr/bin/vdsm-tool', 'configure', '--force') stdout:
>
> Checking configuration status...
>
> abrt is not configured for vdsm
> WARNING: LVM local configuration: /etc/lvm/lvmlocal.conf is not based on vdsm 
> configuration
> lvm requires configuration
> libvirt is not configured for vdsm yet
> FAILED: conflicting vdsm and libvirt-qemu tls configuration.
> vdsm.conf with ssl=True requires the following changes:
> libvirtd.conf: listen_tcp=0, auth_tcp="sasl", listen_tls=1
> qemu.conf: spice_tls=1.
> multipath requires configuration
>
>
> 2017-11-06 02:56:47,551-0500 DEBUG otopi.plugins.otopi.services.systemd 
> plugin.execute:926 execute-output: ('/usr/bin/systemctl', 'start', 
> 'ovirt-imageio-daemon.service') stderr:
> Job for ovirt-imageio-daemon.service failed because the control process 
> exited with error code. See "systemctl status ovirt-imageio-daemon.service" 
> and "journalctl -xe" for details.
>
> 2017-11-06 02:56:47,552-0500 DEBUG otopi.context context._executeMethod:143 
> method exception
> Traceback (most recent call last):
>   File "/tmp/ovirt-R4R8gZhaQI/pythonlib/otopi/context.py", line 133, in 
> _executeMethod
> method['method']()
>   File 
> "/tmp/ovirt-R4R8gZhaQI/otopi-plugins/ovirt-host-deploy/vdsm/packages.py", 
> line 179, in _start
> self.services.state('ovirt-imageio-daemon', True)
>   File "/tmp/ovirt-R4R8gZhaQI/otopi-plugins/otopi/services/systemd.py", line 
> 141, in state
> service=name,
> RuntimeError: Failed to start service 'ovirt-imageio-daemon'
> 2017-11-06 02:56:47,553-0500 ERROR otopi.context context._executeMethod:152 
> Failed to execute stage 'Closing up': Failed to start service 
> 'ovirt-imageio-daemon'
>
> **
>

The problem:
Nov  6 02:56:47 lago-basic-suite-master-host-0 systemd: Starting oVirt
ImageIO Daemon...
Nov  6 02:56:47 lago-basic-suite-master-host-0 python: detected unhandled
Python exception in '/usr/bin/ovirt-imageio-daemon'
Nov  6 02:56:47 lago-basic-suite-master-host-0 python: can't communicate
with ABRT daemon, is it running? [Errno 2] No such file or directory
Nov  6 02:56:47 lago-basic-suite-master-host-0 ovirt-imageio-daemon:
Traceback (most recent call last):
Nov  6 02:56:47 lago-basic-suite-master-host-0 ovirt-imageio-daemon: File
"/usr/bin/ovirt-imageio-daemon", line 14, in 
Nov  6 02:56:47 lago-basic-suite-master-host-0 ovirt-imageio-daemon:
server.main(sys.argv)
Nov  6 02:56:47 lago-basic-suite-master-host-0 ovirt-imageio-daemon: File
"/usr/lib/python2.7/site-packages/ovirt_imageio_daemon/server.py", line 57,
in main
Nov  6 02:56:47 lago-basic-suite-master-host-0 ovirt-imageio-daemon:
start(config)
Nov  6 02:56:47 lago-basic-suite-master-host-0 ovirt-imageio-daemon: File
"/usr/lib/python2.7/site-packages/ovirt_imageio_daemon/server.py", line 85,
in start
Nov  6 02:56:47 lago-basic-suite-master-host-0 ovirt-imageio-daemon:
WSGIRequestHandler)
Nov  6 02:56:47 lago-basic-suite-master-host-0 ovirt-imageio-daemon: File
"/usr/lib64/python2.7/SocketServer.py", line 419, in __init__
Nov  6 02:56:47 lago-basic-suite-master-host-0 ovirt-imageio-daemon:
self.server_bind()
Nov  6 02:56:47 lago-basic-suite-master-host-0 ovirt-imageio-daemon: File
"/usr/lib64/python2.7/wsgiref/simple_server.py", line 48, in server_bind
Nov  6 02:56:47 

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 01 Nov 2017 ] [ 098_ovirt_provider_ovn.test_ovn_provider_rest ]

2017-11-05 Thread Yaniv Kaul
On Nov 5, 2017 5:38 PM, "Eyal Edri"  wrote:

Dafna,

Can you verify it fixes the problem? (i.e. we'll need to revert the
disabling of the test)


My patch also reverts it.
Y.


On Fri, Nov 3, 2017 at 10:12 PM, Dan Kenigsberg  wrote:

> Kaul suggested a fix https://gerrit.ovirt.org/#/c/83569/ ; please
> consider taking it in.
>
> On Wed, Nov 1, 2017 at 3:24 PM, Eyal Edri  wrote:
>
>> Adding Marcin.
>>
>> On Wed, Nov 1, 2017 at 12:01 PM, Dafna Ron  wrote:
>>
>>> Hi,
>>>
>>> 098_ovirt_provider_ovn.test_ovn_provider_rest failed on removing the
>>> interface from a running vm.
>>>
>>> I have seen this before, do we perhaps have a race in OST where the vm
>>> is still running at times?
>>>
>>> *Link to suspected patches: Patch reported is below but I am suspecting
>>> its a race and not related*
>>>
>>>
>>> *https://gerrit.ovirt.org/#/c/83414/
>>>  *
>>>
>>> *Link to Job:*
>>>
>>> *http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3558/
>>> *
>>>
>>>
>>> *Link to all logs:*
>>>
>>>
>>>
>>>
>>>
>>> *
>>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3558/artifact/
>>> 
>>> (Relevant) error snippet from the log:  2017-10-31 10:58:43,516-04
>>> ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource]
>>> (default task-32) [] Operation Failed: [Cannot remove Interface. The VM
>>> Network Interface is plugged to a running VM.]  *
>>>
>>>
>>> ___
>>> Devel mailing list
>>> Devel@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/devel
>>>
>>
>>
>>
>> --
>>
>> Eyal edri
>>
>>
>> MANAGER
>>
>> RHV DevOps
>>
>> EMEA VIRTUALIZATION R
>>
>>
>> Red Hat EMEA 
>>  TRIED. TESTED. TRUSTED. 
>> phone: +972-9-7692018 <+972%209-769-2018>
>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>
>> ___
>> Infra mailing list
>> in...@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/infra
>>
>>
>


-- 

Eyal edri


MANAGER

RHV DevOps

EMEA VIRTUALIZATION R


Red Hat EMEA 
 TRIED. TESTED. TRUSTED. 
phone: +972-9-7692018 <+972%209-769-2018>
irc: eedri (on #tlv #rhev-dev #rhev-integ)

___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt 4.1] [ 31 Oct 2017 ] [ 002_bootstrap.add_dc ]

2017-10-31 Thread Yaniv Kaul
We shouldn't be installing ovirt-host (at the moment) in 4.1.
If we are, it's probably because I messed up somewhere...

And it should not be installing ovirt-host-4.2.0 anyway (that's not my
fault, I hope!)
Y.


On Tue, Oct 31, 2017 at 1:57 PM, Dafna Ron  wrote:

> Hi,
>
> we had a failure in 002_bootstrap.add_dc in ovirt 4.1.
>
> I think that it's a dependency packaging issue for ruby-gem with the
> ovirt-host packages.
>
> *Link to suspected patches: https://gerrit.ovirt.org/#/c/83403/1
> *
>
> * Link to Job: *
>
> http://jenkins.ovirt.org/job/ovirt-4.1_change-queue-tester/1227
>
>
> *Link to all logs:*
>
> *
> http://jenkins.ovirt.org/job/ovirt-4.1_change-queue-tester/1227/testReport/junit/(root)/002_bootstrap/add_dc/
> 
> *
>
> *
> http://jenkins.ovirt.org/job/ovirt-4.1_change-queue-tester/1227/artifact/exported-artifacts/basic-suit-4.1-el7/test_logs/basic-suite-4.1/post-002_bootstrap.py/lago_logs/lago.log
> 
> http://jenkins.ovirt.org/job/ovirt-4.1_change-queue-tester/1227/artifact/
> 
> *
>
> * (Relevant) error snippet from the log: *
>
>
> *  *from the test:
>
> Error: The response content type 'text/html; charset=UTF-8' isn't the 
> expected JSON
>
> looking at the logs:
>
> ---> Package ruby-irb.noarch 0:2.0.0.648-30.el7 will be installed
> ---> Package rubygem-json.x86_64 0:1.7.7-30.el7 will be installed
> --> Finished Dependency Resolution
>  You could try using --skip-broken to work around the problem
>  You could try running: rpm -Va --nofiles --nodigest
>
> 2017-10-31 11:18:16,122::ssh.py::ssh::96::lago.ssh::DEBUG::Command 2fed0ea0 
> on lago-basic-suite-4-1-host-0  errors:
>  Error: Package: 
> ovirt-host-4.2.0-0.0.master.20171019135233.git1921fc6.el7.centos.noarch 
> (alocalsync)
>Requires: vdsm-client >= 4.20.0
>Available: vdsm-client-4.19.35-2.gitc1d5a55.el7.centos.noarch 
> (alocalsync)
>vdsm-client = 4.19.35-2.gitc1d5a55.el7.centos
> Error: Package: 
> ovirt-host-4.2.0-0.0.master.20171019135233.git1921fc6.el7.centos.noarch 
> (alocalsync)
>Requires: vdsm >= 4.20.0
>Available: vdsm-4.19.35-2.gitc1d5a55.el7.centos.x86_64 (alocalsync)
>vdsm = 4.19.35-2.gitc1d5a55.el7.centos
>
> **
>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 26-10-2017 ] [ 004_basic_sanity.add_nic ]

2017-10-26 Thread Yaniv Kaul
On Thu, Oct 26, 2017 at 3:38 PM, Shlomo Ben David 
wrote:

> Hi,
>
> Link to suspected patches: https://gerrit.ovirt.org/#/c/71622/6
>
> Link to Job: http://jenkins.ovirt.org/job/ovirt-master_change-queue-
> tester/3476/console
>
> Link to all logs: http://jenkins.ovirt.org/job/ovirt-master_change-
> queue-tester/3476/artifact/exported-artifacts/basic-suit-master-el7/
>
> (Relevant) error snippet from the log:
> 
>
> Traceback (most recent call last): File 
> "/usr/lib64/python2.7/unittest/case.py",
> line 369, in run testMethod() File 
> "/usr/lib/python2.7/site-packages/nose/case.py",
> line 197, in runTest self.test(*self.arg) File "/usr/lib/python2.7/site-
> packages/ovirtlago/testlib.py", line 129, in wrapped_test test() File
> "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 59, in
> wrapper return func(get_test_prefix(), *args, **kwargs) File
> "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 68, in
> wrapper return func(prefix.virt_env.engine_vm().get_api(), *args,
> **kwargs) File "/home/jenkins/workspace/ovirt-master_change-queue-
> tester/ovirt-system-tests/basic-suite-master/test-
> scenarios/004_basic_sanity.py", line 147, in add_nic
> api.vms.get(VM2_NAME).nics.add(nic_params) File "/usr/lib/python2.7/site-
> packages/ovirtsdk/infrastructure/brokers.py", line 33398, in add
> headers={"Correlation-Id":correlation_id, "Expect":expect} File
> "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line
> 79, in add return self.request('POST', url, body, headers, cls=cls) File
> "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/proxy.py", line
> 122, in request persistent_auth=self.__persistent_auth File
> "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py",
> line 79, in do_request persistent_auth) File "/usr/lib/python2.7/site-
> packages/ovirtsdk/infrastructure/connectionspool.py", line 162, in
> __do_request raise errors.RequestError(response_code, response_reason,
> response_body) RequestError: status: 400 reason: Bad Request detail:
>
> 
>
>
Engine.log:
2017-10-26 07:27:10,257-04 WARN
[org.ovirt.engine.core.bll.network.vm.AddVmInterfaceCommand] (default
task-2) [a850a769-38cb-4f1c-bb1f-dec679f48bb6] Validation of action
'AddVmInterface' failed for user admin@internal-authz. Reasons:
VAR__TYPE__INTERFACE,VAR__ACTION__ADD
2017-10-26 07:27:10,258-04 DEBUG
[org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
(default task-2) [a850a769-38cb-4f1c-bb1f-dec679f48bb6] method: runAction,
params: [AddVmInterface,
AddVmInterfaceParameters:{commandId='ef40e8fd-5c6f-4d55-8cc1-d18bfd9f17ee',
user='null', commandType='Unknown',
vmId='63f3e505-9c94-4f37-a7c3-89a4fb5dc449',
nic='VmNetworkInterface:{id='null', name='eth0', networkName='null',
vnicProfileName='null',
vnicProfileId='4c5e1f01-129c-4922-b299-30c57e42b7ee', speed='null',
type='2', macAddress='0a:1a:4a:16:01:51', active='true', linked='true',
portMirroring='false', vmId='null', vmName='null', vmTemplateId='null',
QoSName='null', remoteNetworkName='null'}', network='null'}], timeElapsed:
16ms
2017-10-26 07:27:10,259-04 ERROR
[org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default
task-2) [] Operation Failed: []



> Best Regards,
>
> Shlomi Ben-David | Software Engineer | Red Hat ISRAEL
> RHCSA  | RHCE | RHCVA
> IRC: shlomibendavid (on #rhev-integ, #rhev-dev, #rhev-ci)
>
> OPEN SOURCE - 1 4 011 && 011 4 1
>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] taskset (in supervdsm) for all CPUs?

2017-10-26 Thread Yaniv Kaul
I'm looking at some supervdsm log, and I'm seeing these commands:
/usr/bin/taskset --cpu-list 0-55 /usr/libexec/vdsm/fc-scan

and:
/usr/bin/taskset --cpu-list 0-55 /usr/bin/mount -t nfs -o
soft,nosharecache,timeo=600,retrans=6 10.10.10.1:/NFS/images

And so on (also for iscsiadm).
Why do we need to taskset commands if we allow them to run on all CPUs? Why
would we allow them to run on all CPUs anyway?
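
For context, a quick way to see the inheritance behaviour such a wrapper presumably
works around: children inherit the parent's CPU affinity, so if vdsm/supervdsm were
ever pinned to a subset of CPUs, an explicit taskset on every child is what widens it
again (the 0-55 range assumes the 56-CPU host from the log above):

    # the child inherits the restricted mask...
    taskset --cpu-list 0 bash -c 'grep Cpus_allowed_list /proc/self/status'
    # ...unless it is explicitly widened again:
    taskset --cpu-list 0 bash -c 'taskset --cpu-list 0-55 grep Cpus_allowed_list /proc/self/status'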
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Host installation failure - master

2017-10-24 Thread Yaniv Kaul
OK, it seems to fail when I'm using jumbo frames everywhere.
Works well with MTU 1500.
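
If it helps anyone hitting the same thing, a quick way to check whether jumbo frames
actually survive the whole path (assuming MTU 9000, i.e. an 8972-byte ICMP payload
once the 28 bytes of IP/ICMP headers are added):

    ping -M do -s 8972 -c 3 HOST-OR-ENGINE-ADDRESS
    # "message too long" / fragmentation-needed errors mean some hop is still at 1500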
Y.

On Mon, Oct 23, 2017 at 10:55 PM, Martin Perina <mper...@redhat.com> wrote:

>
>
> On Mon, Oct 23, 2017 at 9:38 PM, Roy Golan <rgo...@redhat.com> wrote:
>
>>
>>
>> On Mon, 23 Oct 2017 at 21:51 Martin Perina <mper...@redhat.com> wrote:
>>
>>> On Mon, Oct 23, 2017 at 6:21 PM, Yaniv Kaul <yk...@redhat.com> wrote:
>>>
>>>> I'm failing to install hosts on o-s-t on Master.
>>>> What worries me is not that I'm failing (though it is a bit of a
>>>> surprise, perhaps something I've done?), but that there are no logs around
>>>> it.
>>>>
>>>
>>> Please see my response below, but which logs are you
>>> missing?
>>>
>>>
>>>
>>>>
>>>> /var/log/ovirt-engine/host-deploy is empty and so is
>>>> /var/log/ovirt-engine/ansible.
>>>>
>>>
>>> Logs for both parts of host installation (host-deploy and ansible) are
>>>
>>> in /var/log/ovirt-engine/host-deploy, but they are created once each
>>> part has successfully started.
>>>
>>>
>>>>
>>>>
>>>> All I'm seeing:
>>>> Host lago-basic-suite-master-host-0 installation failed. Unexpected
>>>> connection termination.
>>>>
>>>> Server.log:
>>>> 2017-10-23 12:16:33,041-04 WARN  
>>>> [org.apache.sshd.client.session.ClientSessionImpl]
>>>> (sshd-SshClient[346b54f3]-nio2-thread-2) Exception caught:
>>>> java.io.IOException: Connection timed out
>>>> at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>>>> [rt.jar:1.8.0_151]
>>>> at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
>>>> [rt.jar:1.8.0_151]
>>>> at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
>>>> [rt.jar:1.8.0_151]
>>>> at sun.nio.ch.IOUtil.write(IOUtil.java:65) [rt.jar:1.8.0_151]
>>>> at sun.nio.ch.UnixAsynchronousSocketChannelImpl.finishWrite(Uni
>>>> xAsynchronousSocketChannelImpl.java:582) [rt.jar:1.8.0_151]
>>>> at sun.nio.ch.UnixAsynchronousSocketChannelImpl.finish(UnixAsyn
>>>> chronousSocketChannelImpl.java:190) [rt.jar:1.8.0_151]
>>>> at sun.nio.ch.UnixAsynchronousSocketChannelImpl.onEvent(UnixAsy
>>>> nchronousSocketChannelImpl.java:213) [rt.jar:1.8.0_151]
>>>> at sun.nio.ch.EPollPort$EventHandlerTask.run(EPollPort.java:293)
>>>> [rt.jar:1.8.0_151]
>>>> at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_151]
>>>>
>>>>
>>>> Engine.log:
>>>>
>>>> 2017-10-23 12:16:33,046-04 DEBUG 
>>>> [org.ovirt.engine.core.uutils.ssh.SSHClient]
>>>> (EE-ManagedThreadFactory-engine-Thread-1) [83a00e1] Executed: 'umask
>>>> 0077; MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp -d -t ovirt-X
>>>> X)"; trap "chmod -R u+rwX \"${MYTMP}\" > /dev/null 2>&1; rm -fr
>>>> \"${MYTMP}\" > /dev/null 2>&1" 0; tar --warning=no-timestamp -C "${MYTMP}"
>>>> -x &&  "${MYTMP}"/ovirt-host-deploy DIALOG/dialect=str:machine DIALO
>>>> G/customization=bool:True'
>>>> 2017-10-23 12:16:33,056-04 ERROR 
>>>> [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase]
>>>> (VdsDeploy) [83a00e1] Error during deploy dialog
>>>>
>>>
>>> So this means that the SSH connection to the host, over which host
>>> deploy should be started, failed. The reason is above in server.log: the
>>> SSH connection timed out. This error appears even before host-deploy is
>>> executed; that's why we don't have any host-deploy log created.
>>>
>>>
>>>
>>>> 2017-10-23 12:16:33,057-04 DEBUG 
>>>> [org.ovirt.engine.core.uutils.ssh.SSHDialog]
>>>> (EE-ManagedThreadFactory-engine-Thread-1) [83a00e1] execute leave
>>>> 2017-10-23 12:16:33,057-04 ERROR 
>>>> [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase]
>>>> (EE-ManagedThreadFactory-engine-Thread-1) [83a00e1] Error during host
>>>> lago-basic-suite-master-host-0 install
>>>> 2017-10-23 12:16:33,065-04 ERROR [org.ovirt.engine.core.dal.dbb
>>>> roker.auditloghandling.AuditLogDirector] 
>>>> (EE-ManagedThreadFactory-engine-Thread-1)
>>>> [83

[ovirt-devel] Host installation failure - master

2017-10-23 Thread Yaniv Kaul
I'm failing to install hosts on o-s-t on Master.
What worries me is not that I'm failing (though it is a bit of a surprise,
perhaps something I've done?), but that there are no logs around it.

/var/log/ovirt-engine/host-deploy is empty and so is
/var/log/ovirt-engine/ansible.


All I'm seeing:
Host lago-basic-suite-master-host-0 installation failed. Unexpected
connection termination.

Server.log:
2017-10-23 12:16:33,041-04 WARN
[org.apache.sshd.client.session.ClientSessionImpl]
(sshd-SshClient[346b54f3]-nio2-thread-2) Exception caught:
java.io.IOException: Connection timed out
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
[rt.jar:1.8.0_151]
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
[rt.jar:1.8.0_151]
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
[rt.jar:1.8.0_151]
at sun.nio.ch.IOUtil.write(IOUtil.java:65) [rt.jar:1.8.0_151]
at
sun.nio.ch.UnixAsynchronousSocketChannelImpl.finishWrite(UnixAsynchronousSocketChannelImpl.java:582)
[rt.jar:1.8.0_151]
at
sun.nio.ch.UnixAsynchronousSocketChannelImpl.finish(UnixAsynchronousSocketChannelImpl.java:190)
[rt.jar:1.8.0_151]
at
sun.nio.ch.UnixAsynchronousSocketChannelImpl.onEvent(UnixAsynchronousSocketChannelImpl.java:213)
[rt.jar:1.8.0_151]
at sun.nio.ch.EPollPort$EventHandlerTask.run(EPollPort.java:293)
[rt.jar:1.8.0_151]
at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_151]


Engine.log:

2017-10-23 12:16:33,046-04 DEBUG
[org.ovirt.engine.core.uutils.ssh.SSHClient]
(EE-ManagedThreadFactory-engine-Thread-1) [83a00e1] Executed: 'umask 0077;
MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp -d -t ovirt-X
X)"; trap "chmod -R u+rwX \"${MYTMP}\" > /dev/null 2>&1; rm -fr
\"${MYTMP}\" > /dev/null 2>&1" 0; tar --warning=no-timestamp -C "${MYTMP}"
-x &&  "${MYTMP}"/ovirt-host-deploy DIALOG/dialect=str:machine DIALO
G/customization=bool:True'
2017-10-23 12:16:33,056-04 ERROR
[org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase] (VdsDeploy) [83a00e1]
Error during deploy dialog
2017-10-23 12:16:33,057-04 DEBUG
[org.ovirt.engine.core.uutils.ssh.SSHDialog]
(EE-ManagedThreadFactory-engine-Thread-1) [83a00e1] execute leave
2017-10-23 12:16:33,057-04 ERROR
[org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase]
(EE-ManagedThreadFactory-engine-Thread-1) [83a00e1] Error during host
lago-basic-suite-master-host-0 install
2017-10-23 12:16:33,065-04 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engine-Thread-1) [83a00e1] EVENT_ID:
VDS_INSTALL_IN_PROGRESS_ERROR(511), An error has occurred during
installation of Host lago-basic-suite-master-host-0: Unexpected connection
termination.
2017-10-23 12:16:33,065-04 ERROR
[org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase]
(EE-ManagedThreadFactory-engine-Thread-1) [83a00e1] Error during host
lago-basic-suite-master-host-0 install, preferring first exception:
Unexpected connection termination
2017-10-23 12:16:33,065-04 ERROR
[org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand]
(EE-ManagedThreadFactory-engine-Thread-1) [83a00e1] Host installation
failed for host 'c4138375-aa53-4c36-8907-306803ae4282',
'lago-basic-suite-master-host-0': Unexpected connection termination
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] vdsm spec file rpmlint

2017-10-20 Thread Yaniv Kaul
On Fri, Oct 20, 2017 at 7:56 PM, Nir Soffer  wrote:

> On Fri, Oct 20, 2017 at 11:37 AM Sandro Bonazzola 
> wrote:
>
>> Just an heads up we have improvement margin on vdsm spec file quality.
>>
>> [sbonazzo@sbonazzo SPECS]$ rpmlint ./vdsm.spec
>> /var/lib/mock/epel-7-x86_64/result/*rpm
>> vdsm.x86_64: E: explicit-lib-dependency libnl3
>>
>
> What is wrong with this?
>
>
>> vdsm.x86_64: E: explicit-lib-dependency libvirt-client
>> vdsm.x86_64: E: explicit-lib-dependency libvirt-daemon-config-nwfilter
>> vdsm.x86_64: E: explicit-lib-dependency libvirt-lock-sanlock
>> vdsm.x86_64: W: obsolete-not-provided vdsm-infra
>> vdsm.x86_64: E: no-binary
>>
>
> Ha?
>
>
>> vdsm.x86_64: W: conffile-without-noreplace-flag /var/log/vdsm/mom.log
>>
>
> This is not a conf file, we should probably change this
>
>
>> vdsm.x86_64: W: conffile-without-noreplace-flag
>> /var/log/vdsm/supervdsm.log
>> vdsm.x86_64: W: conffile-without-noreplace-flag /var/log/vdsm/vdsm.log
>> vdsm.x86_64: W: non-conffile-in-etc /etc/NetworkManager/conf.d/vdsm.conf
>>
>
> Ha?
>

Perhaps:
%{_sysconfdir}/NetworkManager/conf.d/vdsm.conf

should be:
%config(noreplace) %{_sysconfdir}/NetworkManager/conf.d/vdsm.conf



>
>> vdsm.x86_64: W: non-conffile-in-etc /etc/modprobe.d/vdsm-bonding-
>> modprobe.conf
>> vdsm.x86_64: E: non-readable /etc/pki/vdsm/keys/libvirt_password 600
>> vdsm.x86_64: W: non-conffile-in-etc /etc/security/limits.d/99-vdsm.conf
>>
>
> Ha?
>

Same?


>
>
>> vdsm.x86_64: W: non-conffile-in-etc /etc/sudoers.d/50_vdsm
>> vdsm.x86_64: W: systemd-unit-in-etc /etc/systemd/system/libvirtd.
>> service.d/unlimited-core.conf
>> vdsm.x86_64: W: non-conffile-in-etc /etc/systemd/system/libvirtd.
>> service.d/unlimited-core.conf
>> vdsm.x86_64: E: zero-length /etc/vdsm/mom.d/01-parameters.policy
>> vdsm.x86_64: E: wrong-script-interpreter /usr/libexec/vdsm/kvm2ovirt
>> /usr/bin/env python
>>
>
> This used to be the recommended way to write scripts, but it is easy to
> replace with /usr/bin/python2.
>
>
>> vdsm.x86_64: E: wrong-script-interpreter /usr/libexec/vdsm/vm_migrate_hook.py
>> /usr/bin/env python
>> vdsm.x86_64: E: wrong-script-interpreter 
>> /usr/share/vdsm/virt/vm_migrate_hook.py
>> /usr/bin/env python
>> vdsm.x86_64: E: non-executable-script /usr/share/vdsm/virt/vm_migrate_hook.py
>> 644 /usr/bin/env python
>> vdsm.x86_64: E: non-standard-dir-perm /var/lib/libvirt/qemu/channels 775
>> vdsm.x86_64: E: non-standard-dir-perm /var/log/core 1777
>> vdsm.x86_64: E: dir-or-file-in-var-run /var/run/vdsm
>>
>
> What is wrong with this?
>

"/var/run may be a temporary filesystem, so any directories or files needed
there must be created dynamically at boot time."
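
The usual fix (just a sketch - ownership and mode are guesses, not necessarily what
vdsm should use) is to create the directory at boot or service start instead of
owning it in the RPM:

    install -d -m 755 -o vdsm -g kvm /run/vdsm /run/vdsm/payload

typically wired up via a tmpfiles.d entry or an ExecStartPre= in the unit, since
/run is a tmpfs and starts empty after every boot.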

Y.

>
>
>> vdsm.x86_64: E: dir-or-file-in-var-run /var/run/vdsm/payload
>> vdsm.x86_64: E: dir-or-file-in-var-run /var/run/vdsm/sourceRoutes
>> vdsm.x86_64: E: dir-or-file-in-var-run /var/run/vdsm/trackedInterfaces
>> vdsm.x86_64: E: dir-or-file-in-var-run /var/run/vdsm/v2v
>> vdsm.x86_64: W: log-files-without-logrotate ['/var/log/core',
>> '/var/log/vdsm']
>>
>
> We have logrotate configuration for vdsm, but we don't use the standard
> configuration since we need more frequent rotation.
>
>
>> vdsm.x86_64: W: dangerous-command-in-%pre rpm
>> vdsm.x86_64: W: dangerous-command-in-%post chmod
>> vdsm-api.noarch: E: wrong-script-interpreter 
>> /usr/lib/python2.7/site-packages/vdsm/api/vdsmapi.py
>> /usr/bin/env python
>> vdsm-api.noarch: E: non-executable-script 
>> /usr/lib/python2.7/site-packages/vdsm/api/vdsmapi.py
>> 644 /usr/bin/env python
>> vdsm-cli.noarch: W: non-conffile-in-etc /etc/bash_completion.d/vdsClient
>> vdsm-gluster.noarch: W: spelling-error %description -l en_US
>> functionalities -> functionalists, functionality, functionalist
>> vdsm-gluster.noarch: W: no-documentation
>> vdsm-hook-allocate_net.noarch: W: summary-not-capitalized C
>> random_network allocation hook for VDSM
>> vdsm-hook-allocate_net.noarch: W: spelling-error %description -l en_US
>> vms -> vs, ms, ems
>> vdsm-hook-allocate_net.noarch: W: no-documentation
>> vdsm-hook-allocate_net.noarch: E: wrong-script-interpreter
>> /usr/libexec/vdsm/hooks/before_device_create/10_allocate_net
>> /usr/bin/env python
>> vdsm-hook-checkimages.noarch: W: no-documentation
>> vdsm-hook-checkips.x86_64: W: no-documentation
>> vdsm-hook-checkips.x86_64: E: wrong-script-interpreter
>> /usr/libexec/vdsm/hooks/after_get_stats/10_checkips /usr/bin/env python
>> vdsm-hook-checkips.x86_64: E: non-executable-script
>> /usr/libexec/vdsm/hooks/after_get_stats/checkips_utils.py 644
>> /usr/bin/python2
>> vdsm-hook-diskunmap.noarch: W: spelling-error Summary(en_US) lun -> loon,
>> lung, sun
>> vdsm-hook-diskunmap.noarch: W: no-documentation
>> vdsm-hook-diskunmap.noarch: E: wrong-script-interpreter
>> /usr/libexec/vdsm/hooks/before_vm_start/50_diskunmap /usr/bin/env python2
>> vdsm-hook-ethtool-options.noarch: W: spelling-error Summary(en_US) nics
>> -> incs, mics, nicks
>> 

Re: [ovirt-devel] [ovirt-users] Cockpit oVirt support

2017-10-19 Thread Yaniv Kaul
On Thu, Oct 19, 2017 at 3:06 PM, Roy Golan  wrote:

>
>
> On Thu, 19 Oct 2017 at 14:02 Michal Skrivanek 
> wrote:
>
>>
>> > On 18 Oct 2017, at 11:42, Roy Golan  wrote:
>> >
>> >
>> >
>> > On Wed, 18 Oct 2017 at 10:25 Michal Skrivanek <
>> michal.skriva...@redhat.com> wrote:
>> > Hi all,
>> > I’m happy to announce that we finally finished initial contribution of
>> oVirt specific support into the Cockpit management platform
>> > See below for more details
>> >
>> > There are only limited amount of operations you can do at the moment,
>> but it may already be interesting for troubleshooting and simple admin
>> actions where you don’t want to launch the full blown webadmin UI
>> >
>> > Worth noting that if you were ever intimidated by the complexity of the
>> GWT UI of oVirt portals and it held you back from contributing, please take
>> another look!
>> >
>> > Thanks,
>> > michal
>> >
>> >
>> > Congrats Michal, Marek and team, this is very nice! The unified look &
>> feel is such a powerful thing (I didn't realize for a while that you left
>> webadmin).
>>
>> and thanks to this[1] it’s going to be even more seamless when you click
>> in Host view on Host Console button
>>
>>
> +1
> So why won't we integrate that as an optional tab using a ui plugin?
>

I don't think Cockpit looks so good crammed into a tab.
We used to have it in a subtab, which was unusable.
Y.


>
> [1] https://github.com/mareklibra/ovirt-cockpit-sso
>>
>> >> Begin forwarded message:
>> >>
>> >> From: Marek Libra 
>> >> Subject: Re: Cockpit 153 released
>> >> Date: 17 October 2017 at 16:02:59 GMT+2
>> >> To: Development discussion for the Cockpit Project <
>> cockpit-de...@lists.fedorahosted.org>
>> >> Reply-To: Development discussion for the Cockpit Project <
>> cockpit-de...@lists.fedorahosted.org>
>> >>
>> >> Walk-through video for the new "oVirt Machines" page can be found
>> here: https://youtu.be/5i-kshT6c5A
>> >>
>> >> On Tue, Oct 17, 2017 at 12:08 PM, Martin Pitt 
>> wrote:
>> >> http://cockpit-project.org/blog/cockpit-153.html
>> >>
>> >> Cockpit is the modern Linux admin interface. We release regularly. Here
>> >> are the release notes from version 153.
>> >>
>> >>
>> >> Add oVirt package
>> >> -
>> >>
>> >> This version introduces the "oVirt Machines" page on Fedora for
>> controlling
>> >> oVirt virtual machine clusters.  This code was moved into Cockpit as
>> it shares
>> >> a lot of code with the existing "Machines" page, which manages virtual
>> machines
>> >> through libvirt.
>> >>
>> >> This feature is packaged in cockpit-ovirt and when installed it will
>> replace
>> >> the "Machines" page.
>> >>
>> >> Thanks to Marek Libra for working on this!
>> >>
>> >> Screenshot:
>> >>
>> >> http://cockpit-project.org/images/ovirt-overview.png
>> >>
>> >> Change: https://github.com/cockpit-project/cockpit/pull/7139
>> >>
>> >>
>> >> Packaging cleanup
>> >> -
>> >>
>> >> This release fixes a lot of small packaging issues that were spotted by
>> >> rpmlint/lintian.
>> >>
>> >> Get it
>> >> --
>> >>
>> >> You can get Cockpit here:
>> >>
>> >> http://cockpit-project.org/running.html
>> >>
>> >> Cockpit 153 is available in Fedora 27:
>> >>
>> >> https://bodhi.fedoraproject.org/updates/cockpit-153-1.fc27
>> >>
>> >> Or download the tarball here:
>> >>
>> >> https://github.com/cockpit-project/cockpit/releases/tag/153
>> >>
>> >>
>> >> Take care,
>> >>
>> >> Martin Pitt
>> >>
>> >> ___
>> >> cockpit-devel mailing list -- cockpit-de...@lists.fedorahosted.org
>> >> To unsubscribe send an email to cockpit-devel-leave@lists.
>> fedorahosted.org
>> >>
>> >>
>> >>
>> >>
>> >> --
>> >> MAREK LIBRA
>> >> SENIOR SOFTWARE ENGINEER
>> >> Red Hat Czech
>> >>
>> >> ___
>> >> cockpit-devel mailing list -- cockpit-de...@lists.fedorahosted.org
>> >> To unsubscribe send an email to cockpit-devel-leave@lists.
>> fedorahosted.org
>> >
>> > ___
>> > Devel mailing list
>> > Devel@ovirt.org
>> > http://lists.ovirt.org/mailman/listinfo/devel
>>
>>
> ___
> Users mailing list
> us...@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] qemu-kvm-tools: oVirt Node 4.2 can't be added to oVirt Engine 4.1

2017-10-16 Thread Yaniv Kaul
On Mon, Oct 16, 2017 at 5:18 PM, Sandro Bonazzola 
wrote:

> Hi,
> just had a look at
> Bug 1501761  - Add
> the additional host to the HostedEngine failed due to miss the package
> "qemu-kvm-tools"
>
> The issue is that, since qemu-kvm-tools has been removed in recent qemu-kvm, we
> dropped it as a requirement in 4.2, so it's not included anymore in oVirt Node
> 4.2.
>

Oy :(


> - Is oVirt Node 4.2 supposed to be backward compatible with oVirt Engine
> 4.1?
>

Definite yes.


> If so, we need to change ovirt-host-deploy in the upcoming 4.1.7 release
> in order to not fail when adding a 4.2 node.
>

Yes :(
Y.


>
>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R
>
> Red Hat EMEA 
> 
> TRIED. TESTED. TRUSTED. 
> 
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master ] [ upgrade suites failed! ] [ 15/10/17 ]

2017-10-16 Thread Yaniv Kaul
On Mon, Oct 16, 2017 at 10:24 AM, Yedidyah Bar David 
wrote:

> On Mon, Oct 16, 2017 at 10:21 AM, Yedidyah Bar David 
> wrote:
>
>> On Mon, Oct 16, 2017 at 9:28 AM, Daniel Belenky 
>> wrote:
>>
>>> Can someone address this issue? Every patch to *ovirt-engine* that is
>>> based on top of this patch is failing OST and *won't deploy to the
>>> tested repo*.
>>>
>>> On Sun, Oct 15, 2017 at 9:33 AM, Daniel Belenky 
>>> wrote:
>>>
 Hi all,
 The following tests are failing both of the upgrade suites in OST
 (upgrade_from_release and upgrade_from_prevrelease).

 *Link to console:* ovirt-master_change-queue-tester/3146/console
 
 *Link to test logs:*
 - upgrade-from-release-suit-master-el7
 
 - upgrade-from-prevrelease-suit-master-el7
 
 *Suspected patch:* https://gerrit.ovirt.org/#/c/82615/5
 *Please note that every patch that is based on top of the patch above
 was not deployed to the tested repo.*

 *Error snippet from engine setup log:*

>>>
>> Please add a direct link next time, if possible. This is it:
>>
>> http://jenkins.ovirt.org/job/ovirt-master_change-queue-teste
>> r/3146/artifact/exported-artifacts/upgrade-from-release-
>> suit-master-el7/test_logs/upgrade-from-release-suite-
>> master/post-001_upgrade_engine.py/lago-upgrade-from-release-
>> suite-master-engine/_var_log/ovirt-engine/setup/ovirt-
>> engine-setup-20171013222617-73f0df.log
>>
>> And a bit above the snippet below, there is:
>>
>> 2017-10-13 22:26:24,274-0400 DEBUG otopi.plugins.ovirt_engine_set
>> up.ovirt_engine.upgrade.asynctasks plugin.execute:926 execute-output:
>> ('/usr/share/ovirt-engine/setup/dbutils/taskcleaner.sh', '-l',
>> '/var/log/ovirt-engine/setup/ovirt-engine-setup-20171013222617-73f0df.log',
>> '-u', 'engine', '-s', 'localhost', '-p', '5432', '-d', 'engine', '-q',
>> '-r', '-Z') stderr:
>>
>> /usr/share/ovirt-engine/bin/generate-pgpass.sh: line 3: 
>> /usr/share/ovirt-engine/setup/dbutils/engine-prolog.sh: No such file or 
>> directory
>>
>>
>> 2017-10-13 22:26:24,274-0400 DEBUG otopi.context context._executeMethod:143 
>> method exception
 Traceback (most recent call last):
   File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in 
 _executeMethod
 method['method']()
   File 
 "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/upgrade/asynctasks.py",
  line 470, in _validateZombies
 self._clearZombies()
   File 
 "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/upgrade/asynctasks.py",
  line 135, in _clearZombies
 'Failed to clear zombie commands. '
 RuntimeError: Failed to clear zombie commands. Please access support in 
 attempt to resolve the problem
 2017-10-13 22:26:24,275-0400 ERROR otopi.context 
 context._executeMethod:152 Failed to execute stage 'Setup validation': 
 Failed to clear zombie commands. Please access support in attempt to 
 resolve the problem


>> With [1], taskcleaner.sh sources generate-pgpass.sh .
>>
>> generate-pgpass.sh is in ovirt-engine-tools, which in upgrade flows, is
>> not
>> yet upgraded (at the point of above failure).
>>
>> generate-pgpass.sh in 4.1 used to source engine-prolog.sh , using a path
>> relative to "$0". In master it does not, but we now upgrade and it does.
>>
>> This, in principle, is the core of the bug:
>>
>> A file, such as generate-pgpass.sh, that's supposed to be sourced
>> from some other files, should not by itself source other files
>> that are relative to "$0", because it can't know what "$0" is - it's
>> the path of the script sourcing it, not of itself.
>>
>> It seems like luckily we were not affected by this in 4.1, because
>> all of the files that sourced generate-pgpass.sh were together with
>> it in the same directory. But with [1], taskcleaner does too now,
>> and is in a different directory.
>>
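To make the pitfall concrete, here is a minimal sketch (hypothetical file
names, not the actual oVirt scripts): inside a sourced file, "$0" still
points at the script that did the sourcing, while BASH_SOURCE points at the
sourced file itself.

    # /usr/share/pkg/bin/lib.sh -- a library meant to be sourced by others
    # Fragile: "$0" is the path of the *calling* script, so this breaks as
    # soon as a caller lives in a different directory:
    . "$(dirname "$0")/prolog.sh"

    # Safer: "${BASH_SOURCE[0]}" is the path of this sourced file itself:
    . "$(dirname "${BASH_SOURCE[0]}")/prolog.sh"
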
>> Not sure what's the best solution:
>>
>> - revert [1] (and introduce it later on, in 4.3)
>>
>> - patch 4.1's generate-pgpass.sh and require the fixed 4.1 version
>> in 4.2 setup
>>
>> - Somehow trick everything to work together? Not sure. Seems like
>> you can't set $0.
>>
>> [1] https://gerrit.ovirt.org/82511
>>
>
> The same bug exists with unlock_entity:
>
> https://gerrit.ovirt.org/82615
>
> So we should probably revert both.
>

I tend to agree.
Doesn't mean we cannot fix this for 4.2, but let's revert for the meantime.
Y.


>
>
>>
>>
>> Regards,
>>
>>
>>> --

 DANIEL BELENKY

 RHV DEVOPS

Re: [ovirt-devel] O-S-T failing on repository issues?

2017-10-15 Thread Yaniv Kaul
On Sun, Oct 15, 2017 at 11:47 AM, Barak Korren <bkor...@redhat.com> wrote:

>
>
> On 15 October 2017 at 11:29, Yaniv Kaul <yk...@redhat.com> wrote:
>
>>
>> On Oct 15, 2017 11:27 AM, "Eyal Edri" <ee...@redhat.com> wrote:
>>
>>
>>
>> On Fri, Oct 6, 2017 at 11:49 AM, Barak Korren <bkor...@redhat.com> wrote:
>>
>>> 'base' is the CentOS base repo that is preconfigured by the distro
>>> installer.
>>>
>>> This is because we never finished good old:
>>> https://ovirt-jira.atlassian.net/browse/OVIRT-1280
>>>
>>
>> The good news is that Daniel was able to make some progress on fixing
>> this during last week, so hopefully we'll be able to disable external repos
>> soon and get rid of errors from external repos completely.
>>
>>
>> But packages get updated here and there. How do we keep the internal repo
>> up-to-date?
>>
>
> Not sure I understand the question, the internal OST repo gets recreated
> every time it runs.
>

It is created from the external resources (defined in reposync), so if
there's a new dep or anything (a packages moved from 'base' to 'updates' -
or the other way around), it'll fail (this is why we've re-enabled the
external repos - since we've failed to keep up with the changes).
Y.
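For readers not familiar with the mechanism, a rough sketch of how such an
internal repo gets rebuilt from the external ones (repo ids, paths and the
config file name here are illustrative, not the actual OST settings). The
whitelist lives as includepkgs= lines in the reposync config, which is why a
package moving between 'base' and 'updates', or growing a new dependency,
breaks the sync until the config is updated:

    # pull the whitelisted packages (includepkgs= in the .repo file) locally
    reposync -c ost-reposync.repo -r base -r updates -p /var/local/ost-repo -n
    # rebuild metadata so the suite can install from the local copy only
    createrepo /var/local/ost-repo/base
    createrepo /var/local/ost-repo/updates
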


>
>
>
> --
> Barak Korren
> RHV DevOps team , RHCE, RHCi
> Red Hat EMEA
> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] O-S-T failing on repository issues?

2017-10-15 Thread Yaniv Kaul
On Oct 15, 2017 11:27 AM, "Eyal Edri" <ee...@redhat.com> wrote:



On Fri, Oct 6, 2017 at 11:49 AM, Barak Korren <bkor...@redhat.com> wrote:

> 'base' is the CentOS base repo that is preconfigured by the distro
> installer.
>
> This is because we never finished good old:
> https://ovirt-jira.atlassian.net/browse/OVIRT-1280
>

The good news is that Daniel was able to make some progress on fixing this
during last week, so hopefully we'll be able to disable external repos
soon and get rid of errors from external repos completely.


But packages get updated here and there. How do we keep the internal repo
up-to-date?
Y.



>
>
> This kind of failure is to be expected once or twice on a weekly basis.
>
> On 5 October 2017 at 15:24, Yaniv Kaul <yk...@redhat.com> wrote:
>
>> See[1]:
>> Cannot find a valid baseurl for repo: base/7/x86_64\
>>
>>
>>
>> Y.
>>
>> [1] http://jenkins.ovirt.org/job/ovirt-system-tests_master_c
>> heck-patch-el7-x86_64/1930/testReport/junit/(root)/003_00_me
>> trics_bootstrap/configure_metrics/
>>
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>
>
>
> --
> Barak Korren
> RHV DevOps team , RHCE, RHCi
> Red Hat EMEA
> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>



-- 

Eyal edri


ASSOCIATE MANAGER

RHV DevOps

EMEA VIRTUALIZATION R&D


Red Hat EMEA <https://www.redhat.com/>
<https://red.ht/sig> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master ] [ 12-10-2017 ] [003_00_metrics_bootstrap.configure_metrics ]

2017-10-15 Thread Yaniv Kaul
On Sun, Oct 15, 2017 at 10:15 AM, Barak Korren  wrote:

>
>
> On 13 October 2017 at 14:31, Sandro Bonazzola  wrote:
>
>>
>> 2017-10-13 11:29 GMT+02:00 Dafna Ron :
>>
>>> Adding Eyal, Barak and Sandro and removing Victor.
>>>
>>> Personally, I do not mind taking on this task, but I think that I would
>>> need help in creating such a task.
>>>
>>
>> I think we are already mirroring everything and refreshing the mirror
>> every few hours.
>> Issue looks like we are not using them in some jobs.
>>
>
>
> I did not look deeply into this particular issue, but just to get everyone
> on the same page.
>
> * We have mirror system that is getting synced every 8 hours and has no
> known issues
>   ATM
> * All repo issues you're seeing in OST are due to one of the following two
> reasons:
>   1. We are white-listing packages into the OST environment and the list
> needs to
>  be maintained as package dependencies change
>   2. The OST VMs are not blocked from using the upstream CentOS
> repos/mirrors. And
>  the upstream repos are not updated in an atomic fashion
>
> We have ongoing work [1] to fix issue #2 above, it takes time because it
> requires meticulous work to get all the required things into the whitelist.
>

Since I send patches here and there to do this meticulous work, I know it's
not such a big deal.
Yes, it's annoying and I have not yet come up with an automated way to do
it (I'm sure there is one!), but it takes a few hours and we can do it once
every 2 weeks or so.
It also has the nice benefit of reducing run time, sometimes dramatically.
Y.
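One way the discovery part could be semi-automated (a sketch only, with
example package names and a hypothetical config file name; this is not an
existing OST tool): let repoquery compute the dependency closure of what the
suites install and flag anything the reposync config does not mention yet.

    # names in the full runtime dependency closure of the packages we install
    repoquery --requires --resolve --recursive --qf '%{name}' \
        ovirt-engine vdsm | sort -u > closure.txt
    # anything in the closure but absent from the config is a whitelist
    # candidate (globs in includepkgs= make this a hint, not a verdict)
    while read -r pkg; do
        grep -q "$pkg" ost-reposync.repo || echo "missing: $pkg"
    done < closure.txt
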


>
> BTW when you see these issues in OST that are due to upstream CentOS repos
> not being updated atomically, it usually correlates with a similar failure
> in the mirror sync job.
>
> [1]: https://ovirt-jira.atlassian.net/browse/OVIRT-1280
>
> --
> Barak Korren
> RHV DevOps team , RHCE, RHCi
> Red Hat EMEA
> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master ] [ 12-10-2017 ] [003_00_metrics_bootstrap.configure_metrics ]

2017-10-12 Thread Yaniv Kaul
On Thu, Oct 12, 2017 at 8:51 PM, Yaniv Kaul <yk...@redhat.com> wrote:

>
>
> On Thu, Oct 12, 2017 at 2:36 PM, Dafna Ron <d...@redhat.com> wrote:
>
>> Thank you.
>> I was not sure if this is repo related since I could not see a specific
>> package that it failed on.
>> Sandro believes this might have been a repo outage so I am opening a
>> ticket to try and investigate this issue and find the root cause.
>>
>> https://ovirt-jira.atlassian.net/browse/OVIRT-1693
>
>
> We have far too many repo outages.
> I believe it could be partially solved by properly and consistently
> keeping the reposync up-to-date.
> It's far from bullet-proof, and is annoying work, but we need to just do
> it once every other week or so, to ensure we can perform offline
> installation (I don't believe it's a complete repo outage, but partial,
> which is why I think it'll help).
>

Spoke too soon:
yum.Errors.NoMoreMirrorsRepoError: failure: repodata/repomd.xml from
centos-ovirt-4.2-el7: [Errno 256] No more mirrors to try.
http://cbs.centos.org/repos/virt7-ovirt-42-testing/x86_64/os/repodata/repomd.xml:
[Errno 14] HTTP Error 404 - Not Found


Perhaps we are not using mirror links properly.
Y.



> Y.
>
>
>>
>>
>> Thanks,
>> Dafna
>>
>>
>> On 10/12/2017 11:37 AM, Viktor Mihajlovski wrote:
>> > On 12.10.2017 12:27, Yaniv Kaul wrote:
>> >> Repo issues (again?)
>> >> See log[1].
>> >> Y.
>> >>
>> >> [1]
>> >> http://jenkins.ovirt.org/job/ovirt-master_change-queue-teste
>> r/3104/artifact/exported-artifacts/basic-suit-master-el7/
>> 003_00_metrics_bootstrap.py.junit.xml
>> >>
>> >> On Thu, Oct 12, 2017 at 12:26 PM, Dafna Ron <d...@redhat.com> wrote:
>> >>
>> > I agree, seems to be unrelated to my patch.
>> >
>>
>>
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master ] [ 12-10-2017 ] [003_00_metrics_bootstrap.configure_metrics ]

2017-10-12 Thread Yaniv Kaul
On Thu, Oct 12, 2017 at 2:36 PM, Dafna Ron <d...@redhat.com> wrote:

> Thank you.
> I was not sure if this is repo related since I could not see a specific
> package that it failed on.
> Sandro believes this might have been a repo outage so I am opening a
> ticket to try and investigate this issue and find the root cause.
>
> https://ovirt-jira.atlassian.net/browse/OVIRT-1693


We have far too many repo outages.
I believe it could be partially solved by properly and consistently keeping
the reposync up-to-date.
It's far from bullet-proof, and is annoying work, but we need to just do it
once every other week or so, to ensure we can perform offline
installation (I don't believe it's a complete repo outage, but partial,
which is why I think it'll help).
Y.


>
>
> Thanks,
> Dafna
>
>
> On 10/12/2017 11:37 AM, Viktor Mihajlovski wrote:
> > On 12.10.2017 12:27, Yaniv Kaul wrote:
> >> Repo issues (again?)
> >> See log[1].
> >> Y.
> >>
> >> [1]
> >> http://jenkins.ovirt.org/job/ovirt-master_change-queue-
> tester/3104/artifact/exported-artifacts/basic-suit-master-
> el7/003_00_metrics_bootstrap.py.junit.xml
> >>
> >> On Thu, Oct 12, 2017 at 12:26 PM, Dafna Ron <d...@redhat.com> wrote:
> >>
> > I agree, seems to be unrelated to my patch.
> >
>
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt Master ] [ 12-10-2017 ] [003_00_metrics_bootstrap.configure_metrics ]

2017-10-12 Thread Yaniv Kaul
Repo issues (again?)
See log[1].
Y.

[1]
http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3104/artifact/exported-artifacts/basic-suit-master-el7/003_00_metrics_bootstrap.py.junit.xml

On Thu, Oct 12, 2017 at 12:26 PM, Dafna Ron  wrote:

> Hi,
>
> We had a failure to configure metrics in ovirt-system-tests which caused
> metrics_bootstrap to fail.
>
> The patch that was reported as the cause is below.
>
> *Link to suspected patches: https://gerrit.ovirt.org/#/c/82686/
> *
>
>
>
> * Link to Job:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3104/
>  Link
> to all logs:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3104/artifact/
> 
> (Relevant) error snippet from the log:  *
>
>  File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
> testMethod()
>   File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
> self.test(*self.arg)
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 129, in 
> wrapped_test
> test()
>   File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 59, in 
> wrapper
> return func(get_test_prefix(), *args, **kwargs)
>   File 
> "/home/jenkins/workspace/ovirt-master_change-queue-tester/ovirt-system-tests/basic-suite-master/test-scenarios/003_00_metrics_bootstrap.py",
>  line 53, in configure_metrics
> ' Exit code is %s' % result.code
>   File "/usr/lib/python2.7/site-packages/nose/tools/trivial.py", line 29, in 
> eq_
> raise AssertionError(msg or "%r != %r" % (a, b))
> 'Configuring ovirt machines for metrics failed. Exit code is 2\n--
>
> **
>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] O-S-T failing on repository issues?

2017-10-05 Thread Yaniv Kaul
See[1]:
Cannot find a valid baseurl for repo: base/7/x86_64\



Y.

[1]
http://jenkins.ovirt.org/job/ovirt-system-tests_master_check-patch-el7-x86_64/1930/testReport/junit/(root)/003_00_metrics_bootstrap/configure_metrics/
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] OST Failure he-basic

2017-10-02 Thread Yaniv Kaul
On Mon, Oct 2, 2017 at 6:05 PM, Dominik Holler  wrote:

> Hi all,
> OST he-basic is currently failing [1] during setup "hosted-engine"
> during Connecting Storage Pool with the message
> Failed to execute stage 'Misc configuration': 'functools.partial' object
> has no attribute 'getAllTasksStatuses'
>
> Since the lago-he-basic-suite-master-engine VM is inaccessible after
> the problem, I have no idea how to locate the problem.
>
> Dominik
>
>
> [1]
>   http://jenkins.ovirt.org/view/oVirt%20system%20tests/job/
> ovirt-system-tests_manual/1279/console


The installation log[1] points to [2] as the main candidate for introducing
this regression.
Y.

[1]
http://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/1279/artifact/exported-artifacts/test_logs/he-basic-suite-master/post-002_bootstrap.py/lago-he-basic-suite-master-host0/_var_log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20171002102822-mf4zhc.log
[2]
https://gerrit.ovirt.org/#/c/78082/19/ovirt_hosted_engine_ha/lib/heconflib.py
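As a side note on debuggability: when a suite VM stops answering over the
network, the lago prefix (if it still exists on the slave or on a local run)
can sometimes be entered directly. A rough sketch, assuming a local run and
the usual VM name; the serial console in particular does not depend on the
guest's networking:

    cd ovirt-system-tests/deployment-he-basic-suite-master   # hypothetical prefix path
    lago status                                       # list VMs and their state
    lago shell lago-he-basic-suite-master-engine      # ssh in, if the guest answers
    lago console lago-he-basic-suite-master-engine    # serial console, no networking needed
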


>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 01-10-2017 ] [ change-queue-tester ]

2017-10-01 Thread Yaniv Kaul
On Sun, Oct 1, 2017 at 6:03 PM, Eyal Edri <ee...@redhat.com> wrote:

>
>
> On Sun, Oct 1, 2017 at 5:17 PM, Eyal Edri <ee...@redhat.com> wrote:
>
>>
>>
>> On Sun, Oct 1, 2017 at 4:58 PM, Yaniv Kaul <yk...@redhat.com> wrote:
>>
>>>
>>>
>>> On Oct 1, 2017 2:29 PM, "Shlomo Ben David" <sbend...@redhat.com> wrote:
>>>
>>> Hi,
>>>
>>> Link to suspected patches: https://gerrit.ovirt.org/#/c/82350/2
>>> Link to Job: http://jenkins.ovirt.org/job/ovirt-master_change-queue-
>>> tester/2949/
>>>
>>> Link to all logs: http://jenkins.ovirt.org/job/ovirt-master_change-queue
>>> -tester/2949/artifact/exported-artifacts/basic-suit-master-el7/
>>>
>>>
>>> Wasn't it already reported? Is it after the last revert or not?
>>>
>>
>> It was reported but still failing, this was from 5 hours ago.
>> Can you point to the revert patch? we can check when it was merged and if
>> it passed.
>>
>
https://gerrit.ovirt.org/#/q/status:merged+project:ovirt-engine+branch:master+topic:revert-lm-retry

Y.


>
>
> And another one from 3.5 hours ago, still failing disk operations:
>
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2952/
>
>
>>
>>
>>> Y.
>>>
>>>
>>> (Relevant) error snippet from the log:
>>>
>>> 
>>>
>>> lago.utils: ERROR: Error while running thread Traceback (most recent
>>> call last): File "/usr/lib/python2.7/site-packages/lago/utils.py", line
>>> 58, in _ret_via_queue queue.put({'return': func()}) File
>>> "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 59, in
>>> wrapper return func(get_test_prefix(), *args, **kwargs) File
>>> "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 78, in
>>> wrapper prefix.virt_env.engine_vm().get_api(api_ver=4), *args, **kwargs
>>> File "/home/jenkins/workspace/ovirt-master_change-queue-tester@2/
>>> ovirt-system-tests/basic-suite-master/test-scenarios/004_basic_sanity.py",
>>> line 372, in live_storage_migration disk_service.get().status,
>>> types.DiskStatus.OK) File 
>>> "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py",
>>> line 258, in assert_equals_within_long func, value, LONG_TIMEOUT,
>>> allowed_exceptions=allowed_exceptions File
>>> "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 237, in
>>> assert_equals_within '%s != %s after %s seconds' % (res, value, timeout)
>>> AssertionError: False != ok after 600 seconds
>>>
>>> 
>>>
>>>
>>> Best Regards,
>>>
>>> Shlomi Ben-David | Software Engineer | Red Hat ISRAEL
>>> RHCSA  | RHCE | RHCVA
>>> IRC: shlomibendavid (on #rhev-integ, #rhev-dev, #rhev-ci)
>>>
>>> OPEN SOURCE - 1 4 011 && 011 4 1
>>>
>>>
>>> ___
>>> Devel mailing list
>>> Devel@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/devel
>>>
>>>
>>>
>>> ___
>>> Devel mailing list
>>> Devel@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/devel
>>>
>>
>>
>>
>> --
>>
>> Eyal edri
>>
>>
>> ASSOCIATE MANAGER
>>
>> RHV DevOps
>>
>> EMEA VIRTUALIZATION R&D
>>
>>
>> Red Hat EMEA <https://www.redhat.com/>
>> <https://red.ht/sig> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
>> phone: +972-9-7692018
>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>
>
>
>
> --
>
> Eyal edri
>
>
> ASSOCIATE MANAGER
>
> RHV DevOps
>
> EMEA VIRTUALIZATION R&D
>
>
> Red Hat EMEA <https://www.redhat.com/>
> <https://red.ht/sig> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 01-10-2017 ] [ change-queue-tester ]

2017-10-01 Thread Yaniv Kaul
On Oct 1, 2017 2:29 PM, "Shlomo Ben David" wrote:

Hi,

Link to suspected patches: https://gerrit.ovirt.org/#/c/82350/2
Link to Job: http://jenkins.ovirt.org/job/ovirt-master_change-queue-
tester/2949/

Link to all logs: http://jenkins.ovirt.org/job/ovirt-master_change-
queue-tester/2949/artifact/exported-artifacts/basic-suit-master-el7/


Wasn't it already reported? Is it after the last revert or not?
Y.


(Relevant) error snippet from the log:



lago.utils: ERROR: Error while running thread Traceback (most recent call
last): File "/usr/lib/python2.7/site-packages/lago/utils.py", line 58, in
_ret_via_queue queue.put({'return': func()}) File "/usr/lib/python2.7/site-
packages/ovirtlago/testlib.py", line 59, in wrapper return
func(get_test_prefix(), *args, **kwargs) File "/usr/lib/python2.7/site-
packages/ovirtlago/testlib.py", line 78, in wrapper
prefix.virt_env.engine_vm().get_api(api_ver=4), *args, **kwargs File
"/home/jenkins/workspace/ovirt-master_change-queue-tester@2
/ovirt-system-tests/basic-suite-master/test-scenarios/004_basic_sanity.py",
line 372, in live_storage_migration disk_service.get().status,
types.DiskStatus.OK) File "/usr/lib/python2.7/site-
packages/ovirtlago/testlib.py", line 258, in assert_equals_within_long
func, value, LONG_TIMEOUT, allowed_exceptions=allowed_exceptions File
"/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 237, in
assert_equals_within '%s != %s after %s seconds' % (res, value, timeout)
AssertionError: False != ok after 600 seconds




Best Regards,

Shlomi Ben-David | Software Engineer | Red Hat ISRAEL
RHCSA  | RHCE | RHCVA
IRC: shlomibendavid (on #rhev-integ, #rhev-dev, #rhev-ci)

OPEN SOURCE - 1 4 011 && 011 4 1


___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt master ] [ 28-09-2017 ] [ 003_00_metrics_bootstrap.configure_metrics ]

2017-09-29 Thread Yaniv Kaul
On Fri, Sep 29, 2017 at 1:19 PM, Dafna Ron  wrote:

> Hi,
>
> We had a failure in OST last night which seems to be related to two fluentd
> configuration patches.
>
> I found that the error is actually in the deployment and can be found in
> the lago log.
>
> I am not sure it's related but I am also seeing unexpected errors in the
> vdsm log for getDeviceList: http://pastebin.test.redhat.com/520514
>
> We are apparently calling getDeviceList + checkStatus=true with no guid.
>
>
> *Link to suspected patches: *
>
> * https://gerrit.ovirt.org/#/c/82329/
>  *
>
> * https://gerrit.ovirt.org/#/c/81551/
> *
>

Please revert https://gerrit.ovirt.org/#/c/82329/ and see if it helps.
You could fix it, but I don't know what is the right value for the num. of
threads needed.
Y.


>
>
> * Link to Job:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2932/
> 
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2933/
>  Link
> to all logs:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2932/artifact/
> *
>
> *http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2933/artifact/
> *
>
>
> *(Relevant) error snippet from the log: *
>
>
>
>
>
>
>
>
> *  TASK [ovirt_fluentd/client : Install non-ssl client]
> *** fatal: [lago-basic-suite-master-host-1]:
> FAILED! => {"changed": false, "failed": true, "msg":
> "AnsibleUndefinedVariable: 'fluentd_num_threads' is undefined"} fatal:
> [lago-basic-suite-master-host-0]: FAILED! => {"changed": false, "failed":
> true, "msg": "AnsibleUndefinedVariable: 'fluentd_num_threads' is
> undefined"} fatal: [localhost]: FAILED! => {"changed": false, "failed":
> true, "msg": "AnsibleUndefinedVariable: 'fluentd_num_threads' is
> undefined"}  *
>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ovirt-announce] [ovirt-users] [ANN] oVirt 4.2.0 First Alpha Release is now available for testing

2017-09-29 Thread Yaniv Kaul
On Sep 28, 2017 6:23 PM, "Gianluca Cecchi"
wrote:

On Thu, Sep 28, 2017 at 5:06 PM, Sandro Bonazzola 
wrote:

> The oVirt Project is pleased to announce the availability of the First
> Alpha Release of oVirt 4.2.0, as of September 28th, 2017
>
>
>
Good news!
Any chance of having ISO and Export domains on storage types that are not
NFS in 4.2?


Unlikely for the export domain, whose use we'd like to deprecate.
The ISO domain we will work on, not sure for 4.2.
Y.



___
Announce mailing list
annou...@ovirt.org
http://lists.ovirt.org/mailman/listinfo/announce
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Ansible Help Requested

2017-09-27 Thread Yaniv Kaul
On Wed, Sep 27, 2017 at 5:15 PM, Phillip Bailey  wrote:

> Hi all,
>
> I'm working on the new Cockpit hosted engine wizard and could use some
> input from all of you. One goal of this project is to move away from
> reliance on the existing OTOPI-based tools and towards an ansible-based
> approach.
>
> The items below are things we'd like to do using ansible, if possible. If
> any of you have existing plays or suggestions for the best way to use
> ansible to solve these problems, please let me know.
>
> Thanks!
>
>1. Verify provided storage settings for all allowable storage types.
>
>
I'm not sure I understand the requirement here.


>
>1.
>2. Verify compatibility of selected CPU type for the engine VM with
>the host's CPU
>
>
This is easy. See [1] for a simple Python example. In bash:
virsh -r capabilities |grep -m 1 ""

Y.
[1]
https://github.com/lago-project/lago/pull/323/files#diff-1b557418847a737490a95d4841c6a362R96
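(The grep above lost its pattern in the archive; it was presumably an XML
element such as "<model>".) A slightly fuller sketch of the same idea, with
a hypothetical required model; note that plain string equality ignores the
CPU-family ordering a real compatibility check would have to consult:

    # host CPU model as reported by libvirt capabilities
    host_model=$(virsh -r capabilities | grep -m1 '<model>' \
        | sed -e 's/.*<model>//' -e 's#</model>.*##')
    required_model="Westmere"   # hypothetical value derived from the chosen CPU type
    if [ "$host_model" = "$required_model" ]; then
        echo "host CPU model ($host_model) matches the selected type"
    else
        echo "host CPU model ($host_model) differs from $required_model" >&2
    fi
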

-Phillip Bailey
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] what provides cockpit-dashboard and cockpit-storaged?

2017-09-27 Thread Yaniv Kaul
On Wed, Sep 27, 2017 at 12:50 PM, Dan Kenigsberg  wrote:

> When trying to install ovirt-host-4.2 from
> ovirt-release-master-4.2.0-0.5.master.2017092721.
> git88364d1.el7.centos.noarch
> on a RHEL-7.4 host we get
>
> Error: Package:
> ovirt-host-4.2.0-0.0.master.20170913090858.git0bfa7ab.el7.centos.noarch
> (ovirt-master-snapshot)
>Requires: cockpit-dashboard
> Error: Package:
> cockpit-ovirt-dashboard-0.11.1-0.0.master.el7.centos.noarch
> (ovirt-master-snapshot)
>Requires: cockpit-storaged
>
> Where these packages are to be pulled from?
>

CentOS extras:
http://mirror.centos.org/centos/7/extras/x86_64/


(which you don't seem to have below)
Y.
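If the repo is simply not enabled on the host, something along these lines
usually brings the two packages in (a sketch; on CentOS the repo is
'extras', on RHEL the rough equivalent is the Extras channel):

    # CentOS 7: make sure 'extras' is enabled
    yum-config-manager --enable extras
    # RHEL 7: the rough equivalent channel
    subscription-manager repos --enable=rhel-7-server-extras-rpms
    yum install -y cockpit-dashboard cockpit-storaged
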


>
> # yum repolist | sed 's/ .*//'
> Loaded
> repo
> RHEL-7.4
> centos-opstools-release/x86_64
> centos-ovirt-common-testing/x86_64
> centos-ovirt42-testing/x86_64
> centos-qemu-ev-release/x86_64
> centos-sclo-rh-release/x86_64
> opstools7-common-testing/x86_64
> opstools7-fluentd-012-testing/x86_64
> opstools7-perfmon-common-testing/x86_64
> ovirt-master-centos-gluster312/x86_64
> *ovirt-master-epel/x86_64
> ovirt-master-snapshot/7Server
> ovirt-master-snapshot-static/7Server
> ovirt-master-virtio-win-latest
> rhel-7-server-optional-rpms/7Server/x86_64
> rhel-7-server-rpms/7Server/x86_64
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [vdsm] s390 draft patches submitted for review

2017-09-25 Thread Yaniv Kaul
On Sep 25, 2017 3:12 PM, "Viktor Mihajlovski"
wrote:

On 24.09.2017 09:39, Yedidyah Bar David wrote:
[...]
>
> Are you sure about "DNF version 3"? "bin/dnf-3" is "dnf using python3".
>
> For dnf-2 support, we have:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1455452
>
> Currently targetted to oVirt 4.3.
>
> See also:
>
> http://lists.ovirt.org/pipermail/devel/2017-August/030990.html
you're right ... it's v2
>
> Bottom line: If you *know what you are doing*, and want to play
> with fedora, that's fine. Otherwise, you should consider it broken
> and use EL7.
>
Yeah, Michal mentioned that. At this point in time I can find my way
through, so that's OK for development.


And a naive question - you do test it on a real s390? Because I don't
believe many have it - certainly oVirt CI do not have one (does CentOS CI
have one?).
Y.

[...]

--

Mit freundlichen Grüßen/Kind Regards
   Viktor Mihajlovski

___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt HE 4.1 ] [ 24.09.17 ] [ set_dc_quota_audit ]

2017-09-24 Thread Yaniv Kaul
On Sun, Sep 24, 2017 at 7:14 PM, Gal Ben Haim  wrote:

> Hi All,
>
> In the last days HE 4.1 suite is failing with the following error:
>
> *RequestError:*
>
> status: 409
>
> reason: Conflict
>
> detail: Cannot migrate MACs to another MAC pool, because that action would
> create duplicates in target MAC pool, which are not allowed. Problematic
> MACs are 54:52:c0:a8:c8:63
>

I've already sent an email about it last week - Dan?
Y.


>
> The MAC above belongs to the HE VM.
> HE-master works fine.
>
> Can you please take a look?
>
> *Link to Job:* http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-
> suite-4.1/
>
> *Link to all logs*: http://jenkins.ovirt.org/job/
> ovirt-system-tests_he-basic-suite-4.1/42/artifact/exported-artifacts/
>
>
> --
> *GAL bEN HAIM*
> RHV DEVOPS
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] New(?) regression in 4.1 HE ovirt-system-tests

2017-09-22 Thread Yaniv Kaul
On Fri, Sep 22, 2017 at 11:53 PM, Yaniv Kaul <yk...@redhat.com> wrote:

> Test: 'set_dc_quota_audit' fails with:
>
> detail: Cannot migrate MACs to another MAC pool, because that action would
> create duplicates in target MAC pool, which are not allowed. Problematic
> MACs are 54:52:c0:a8:c8:63
>
> I'm now testing on Master to see if the same thing happens there.
>

Master seems OK.
Y.


>
> The MAC above belongs to the HE VM. Perhaps I should pick one outside the
> range?
> Y.
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] New(?) regression in 4.1 HE ovirt-system-tests

2017-09-22 Thread Yaniv Kaul
Test: 'set_dc_quota_audit' fails with:

detail: Cannot migrate MACs to another MAC pool, because that action would
create duplicates in target MAC pool, which are not allowed. Problematic
MACs are 54:52:c0:a8:c8:63

I'm now testing on Master to see if the same thing happens there.

The MAC above belongs to the HE VM. Perhaps I should pick one outside the
range?
Y.
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] Why do we recommend to send a patch initially as draft?

2017-09-18 Thread Yaniv Kaul
Here[1]:
"Anyone can send a patch
Initially a patch should be sent as draft"

A draft is hidden from the public, why is it better to send as such?
TIA,
Y.

[1] https://www.ovirt.org/develop/dev-process/working-with-gerrit/
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Fwd: CellTable replaced

2017-09-18 Thread Yaniv Kaul
On Sun, Sep 17, 2017 at 11:48 AM, Yaniv Kaul <yk...@redhat.com> wrote:

>
>
> On Sun, Sep 17, 2017 at 11:38 AM, Shani Leviim <slev...@redhat.com> wrote:
>
>> Hi,
>> It seems that after changing the data grid on Storage > Disks > Export,
>> the table's size doesn't fit to its content.
>> Also, cells alignment seem to be moved.
>> I've attached "before" and "after" screenshots for visualizing.
>>
>> Same behavior for Storage > Storage Domains > choose an image > Import.
>>
>>
> We should also strive not to have the horizontal bar for it. It should fit
> in.
> Y.
>

Attached please find an example of a refresh issue - happens when there's a
single object in the table.
Auto-fixes itself in the next refresh.
Y.


>
> Thanks!
>>
>>
>> *Regards,*
>>
>> *Shani Leviim*
>>
>> On Fri, Sep 15, 2017 at 9:02 AM, Gobinda Das <go...@redhat.com> wrote:
>>
>>> Hi,
>>>   The 2nd issue i.e. 'For Import Domain "Data Center" field is not
>>> populating' was my setup issue.The Data Center was not Active because of no
>>> Master Domain.
>>>
>>> This one required  fix:
>>> 1- While creating Volume: Add Brick(Bricks are not showing in table
>>> after adding).
>>>
>>>
>>> Attached screenshots as well.
>>>
>>> On Fri, Sep 15, 2017 at 11:08 AM, Gobinda Das <go...@redhat.com> wrote:
>>>
>>>> Adding devel group
>>>>
>>>> On Thu, Sep 14, 2017 at 11:25 PM, Gobinda Das <go...@redhat.com> wrote:
>>>>
>>>>> Hi Sahina/Doron,
>>>>>
>>>>>  I found two issues as below:
>>>>> 1- While creating Volume: Add Brick(Bricks are not showing in table
>>>>> after adding).
>>>>> 2- For Import Domain "Data Center" field is not populating.
>>>>>
>>>>> Attached screenshots as well.
>>>>>
>>>>>
>>>>> On Wed, Sep 13, 2017 at 5:47 PM, Sahina Bose <sab...@redhat.com>
>>>>> wrote:
>>>>>
>>>>>> Adding Gobinda to take a look
>>>>>>
>>>>>> On Wed, Sep 13, 2017 at 3:25 PM, Phillip Bailey <phbai...@redhat.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Sure. I'll take a look. =)
>>>>>>>
>>>>>>>> On Sep 13, 2017 5:51 AM, "Doron Fediuck" <dfedi...@redhat.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Phillip- care to do a quick review for SLA related areas?
>>>>>>>>
>>>>>>>> Sahina- who's looking at RHGS areas?
>>>>>>>>
>>>>>>>> -- Forwarded message --
>>>>>>>> From: Alexander Wels <aw...@redhat.com>
>>>>>>>> Date: 12 September 2017 at 18:56
>>>>>>>> Subject: [ovirt-devel] CellTable replaced
>>>>>>>> To: devel@ovirt.org
>>>>>>>>
>>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> I just merged a patch [1] that replaces most uses of the GWT
>>>>>>>> CellTable with a
>>>>>>>> DataGrid. The DataGrid has many benefits compared to a CellTable
>>>>>>>> the biggest
>>>>>>>> being it has its own header vs the fake header we created to allow
>>>>>>>> the user to
>>>>>>>> scroll their data. This also solves the issue where the header
>>>>>>>> would scroll off
>>>>>>>> the top of the screen while you were scrolling. It also has one
>>>>>>>> issue, each
>>>>>>>> table needs a height specified (so it can determine where to show
>>>>>>>> the
>>>>>>>> scrolling portion of the table).
>>>>>>>>
>>>>>>>> I have worked hard to make sure all the replaced tables have their
>>>>>>>> height
>>>>>>>> specified properly, but there is a chance I missed some. I would
>>>>>>>> like everyone
>>>>>>>> to let me know if their table suddenly disappears (at least the
>>>>>>>> content), so I
>>>>>>>> can fix it for you. The highest likely places I missed are in
>>>>>>>> dialogs I don't
>>>>>>>> know about.
>>>>>>>>
>>>>>>>> Alexander
>>>>>>>> ___
>>>>>>>> Devel mailing list
>>>>>>>> Devel@ovirt.org
>>>>>>>> http://lists.ovirt.org/mailman/listinfo/devel
>>>>>>>>
>>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Thanks,
>>>>> Gobinda
>>>>> +91-9019047912
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Thanks,
>>>> Gobinda
>>>> +91-9019047912
>>>>
>>>
>>>
>>>
>>> --
>>> Thanks,
>>> Gobinda
>>> +91-9019047912
>>>
>>> ___
>>> Devel mailing list
>>> Devel@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/devel
>>>
>>
>>
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt HE master ] [ 17/09/17 ] [ engine-setup ]

2017-09-17 Thread Yaniv Kaul
On Sun, Sep 17, 2017 at 11:47 AM, Eyal Edri  wrote:

> Hi,
>
> It looks like HE suite ( both 'master' and '4.1' ) is failing constantly,
> most likely due to 7.4 updates.
> So there is no suspected patch from oVirt side that might have caused it.
>

It's the firewall. I've fixed it[1] and specifically[2] but probably not
completely.

Perhaps we should try to take[2] separately.
Y.

[1] https://gerrit.ovirt.org/#/c/81766/
[2]
https://gerrit.ovirt.org/#/c/81766/3/common/deploy-scripts/setup_storage_unified_he_extra_el7.sh
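For anyone hitting the same symptom locally: 'mount.nfs: No route to host'
against a freshly updated 7.4 machine is typically firewalld rejecting the
NFS ports on the storage host. A hedged sketch of the kind of change [2]
makes (stock firewalld service names, not necessarily the exact OST script
contents):

    # on the machine exporting the storage, open the NFS-related services
    firewall-cmd --permanent --add-service=nfs --add-service=rpc-bind --add-service=mountd
    firewall-cmd --reload
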




> It is probably also the reason why HC suites are failing, since they are
> using also HE for deployments.
>
> I think this issue should BLOCK the Alpha release tomorrow, or at the
> minimum, we need to verify it's an OST issue and not a real regression.
>
> Links to relevant failures:
> http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-
> suite-master/37/consoleFull
> http://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-
> suite-4.1/33/console
>
> Error snippet:
>
> 03:01:38
> 03:01:38   --== STORAGE CONFIGURATION ==--
> 03:01:38
> 03:02:47 [ ERROR ] Error while mounting specified storage path: mount.nfs:
> No route to host
> 03:02:58 [WARNING] Cannot unmount /tmp/tmp2gkFwJ
> 03:02:58 [ ERROR ] Failed to execute stage 'Environment customization':
> mount.nfs: No route to host
>
>
> --
>
> Eyal edri
>
>
> ASSOCIATE MANAGER
>
> RHV DevOps
>
> EMEA VIRTUALIZATION R&D
>
>
> Red Hat EMEA 
>  TRIED. TESTED. TRUSTED. 
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel
