[ovirt-devel] ovirt-system-tests

2023-02-08 Thread stephan.duehr
Hi,

about two years ago, I used OST to set up an oVirt test/dev environment.

Now I tried again, following  
https://github.com/oVirt/ovirt-system-tests/blob/master/README.md
but it fails because https://templates.ovirt.org/yum/ is unavailable.

Or isn't it meant to be available for public access any more?

Regards,
Stephan


[ovirt-devel] ovirt-system-tests hosted-engine suites and CentOS Stream - current status

2021-03-18 Thread Yedidyah Bar David
Hi all,

Right now, the oVirt appliance is built nightly and publishes two
packages (to ovirt-master-snapshot): ovirt-engine-appliance (the
"regular" one, based on CentOS Linux) and
ovirt-engine-appliance-centos-stream (based on CentOS Stream).

This can already be used by anyone who wants to try this manually.
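As a rough sketch, a manual try would be something like the following
(assuming the ovirt-master-snapshot repos are already enabled on the
machine; the two package names are the ones published above):

  # CentOS Stream based appliance
  dnf install ovirt-engine-appliance-centos-stream
  # or the "regular" CentOS Linux based one
  dnf install ovirt-engine-appliance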

For trying in CI:

1. A pending patch to ost-images [1] can be merged once it passes the
manual build I ran (link in a gerrit comment there). This should take a
few more hours. The patch changes the existing generated image, called
ost-images-el8stream-he-installed, to include a Stream appliance (from
current repos).

2. A patch for OST [2] can be used, as needed, for manual runs. I
updated it now to expect the output of [1] once it's merged, so it should
probably be rebased and/or re-run with 'ci test'/'ci build' once [1] is
merged (and the image is published).

TODO:

1. Decide how we want to continue going forward. If we aim at a
complete move to Stream in 4.4.6, perhaps now is the time to start...
An alternative is to somehow support both in parallel - obviously more
complex.

2. Handle ovirt-node

Best regards,

[1] https://gerrit.ovirt.org/c/ost-images/+/113633

[2] https://gerrit.ovirt.org/c/ovirt-system-tests/+/112977
-- 
Didi


[ovirt-devel] ovirt-system-tests: engine fqdn, dns domain name, etc

2020-12-07 Thread Yedidyah Bar David
Hi all,

Current [1] is broken - it is missing the domain name.

After spending several hours on this over the last few days, I decided to
give up for now and just fix it by appending a hard-coded 'lago.local'. I
am testing locally, and if it passes, I'll push to gerrit.

Going forward, I'd like to have as few guesses and hard-coded strings
that must match as possible:

- Engine fqdn should be hard-coded, arbitrary (but not "engine", as
explained), ideally in a single place, propagated from there as needed

- The storage server (used in HE suites) should also be hard-coded and not
guessed from the engine machine name as it is now

- Same for domain name ("lago.local")

- Since it's hard to read the lago init file and parse it to get stuff
from it without actually using lago (and even when using it, e.g. I
failed to get the DNS domain name), we should decide on these things
before starting lago and inject them into the lagoinitfile (like we do
with other stuff via env) - see the sketch after this list.

- Also, and to better (IMO) match what I think should happen in
reality (for real setups), it should all be done at the DNS level
(libvirt's dnsmasq, controlled by lago, AFAIU), and we should stop
playing with /etc/hosts.
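As a minimal sketch of the injection idea (the variable names below are
hypothetical, for illustration only - nothing in OST defines them today):

  # hypothetical variables, to be rendered into the lagoinitfile
  # before lago is started
  export OST_ENGINE_FQDN=engine.ost-test.example.com
  export OST_DNS_DOMAIN=ost-test.example.com
  export OST_STORAGE_FQDN=storage.ost-test.example.com
  ./run_suite.sh basic-suite-master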

If there is agreement, I might decide to work on this. Or we can at
least open a bug/ticket somewhere :-).

For some of my thoughts/attempts along the way, you might want to see
[2][3][4]. All are WIP, none of them works, and probably none is even
worth the time of a real review.

Thoughts/opinions/ideas are most welcome!

Thanks and best regards,

[1] https://gerrit.ovirt.org/c/ovirt-system-tests/+/112488

[2] https://github.com/didib/ovirt-system-tests/tree/ostengine

[3] https://github.com/didib/ovirt-system-tests/tree/fqdn-from-env

[4] https://github.com/didib/ovirt-system-tests/tree/engine-fqdn-env
-- 
Didi


[ovirt-devel] Ovirt System Tests "invalid syntax" for yaml import

2020-01-31 Thread raymond.francis
Seeing this when I run the setup of System Tests for oVirt. Is there
something in the setup I'm missing?


2020-01-31 10:20:53,167 021036:117 INFO:  cd ovirt-system-tests && 
./run_suite.sh basic-suite-4.3
+ source common/helpers/logger.sh
+ CLI=lago
+ DO_CLEANUP=false
+ RECOMMENDED_RAM_IN_MB=8196
+ EXTRA_SOURCES=()
+ RPMS_TO_INSTALL=()
+ COVERAGE=false
++ getopt -o ho:e:n:b:cs:r:l:i --long 
help,output:,engine:,node:,boot-iso:,cleanup,images --long 
extra-rpm-source,reposync-config:,local-rpms: --long 
only-verify-requirements,ignore-requirements --long coverage -n run_suite.sh -- 
basic-suite-4.3
+ options=' -- '\''basic-suite-4.3'\'''
+ [[ 0 != \0 ]]
+ eval set -- ' -- '\''basic-suite-4.3'\'''
++ set -- -- basic-suite-4.3
+ true
+ case $1 in
+ shift
+ break
+ [[ -z basic-suite-4.3 ]]
+ export OST_REPO_ROOT=/ovirt-system-tests
+ OST_REPO_ROOT=/ovirt-system-tests
++ realpath --no-symlinks basic-suite-4.3
+ export SUITE=/ovirt-system-tests/basic-suite-4.3
+ SUITE=/ovirt-system-tests/basic-suite-4.3
+ [[ -z '' ]]
+ PREFIX=/ovirt-system-tests/deployment-basic-suite-4.3
+ export PREFIX
+ false
+ [[ -e /ovirt-system-tests/deployment-basic-suite-4.3 ]]
+ mkdir -p /ovirt-system-tests/deployment-basic-suite-4.3
+ [[ -n '' ]]
+ verify_system_requirements /ovirt-system-tests/deployment-basic-suite-4.3
+ local prefix=/ovirt-system-tests/deployment-basic-suite-4.3
+ /ovirt-system-tests/common/scripts/verify_system_requirements.py 
--prefix-path /ovirt-system-tests/deployment-basic-suite-4.3 
/ovirt-system-tests/basic-suite-4.3/vars/main.yml
Traceback (most recent call last):
  File "/ovirt-system-tests/common/scripts/verify_system_requirements.py", line 
17, in <module>
import yaml
  File "/usr/local/lib64/python3.6/site-packages/yaml/__init__.py", line 399
class YAMLObject(metaclass=YAMLObjectMetaclass):
  ^
SyntaxError: invalid syntax
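For what it's worth: 'class YAMLObject(metaclass=YAMLObjectMetaclass)' is
Python 3 syntax, and the failing package lives under
/usr/local/lib64/python3.6/, so this SyntaxError means a Python 2
interpreter is importing a PyYAML installed for Python 3. A quick way to
check, as a sketch:

  head -1 /ovirt-system-tests/common/scripts/verify_system_requirements.py  # which shebang?
  python --version                                # what 'python' resolves to
  python3 -c 'import yaml; print(yaml.__file__)'  # PyYAML visible to python3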


[ovirt-devel] ovirt-system-tests network suite

2019-10-22 Thread Miguel Duarte de Mora Barroso
I'm trying to run OST network suite locally, and it is failing w/ the
following error [0].

The build can be found in [1].

Any guidance / clues?

[0] - http://pastebin.test.redhat.com/807736
[1] - https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5835/


[ovirt-devel] Ovirt System Tests: reposync fails during run_suite.sh

2019-10-16 Thread raymond.francis
I am currently setting up the oVirt System Tests using the guide

https://ovirt-system-tests.readthedocs.io/en/latest/general/running_tests/index.html

So we get to the run_suite.sh part. This calls a Python script called
put_host_image.py, which tries to create a symlink to the current repo but
hits an issue where the file (internal_repo/host_image) is said not to
exist at the time. Output below:

 python /Ray/ovirt-system-tests/common/scripts/put_host_image.py 
/Ray/ovirt-system-tests/deployment-basic-suite-4.3 
/Ray/ovirt-system-tests/deployment-basic-suite-4.3/current/internal_repo/host_image
Backing file path is /var/lib/lago/store/phx_repo:el7.6-base-5:v1 dest is 
/Ray/ovirt-system-tests/deployment-basic-suite-4.3/current/internal_repo/host_image
Traceback (most recent call last):
  File "/Ray/ovirt-system-tests/common/scripts/put_host_image.py", line 66, in 

main()
  File "/Ray/ovirt-system-tests/common/scripts/put_host_image.py", line 63, in 
main
add_host_image_to_dest(prefix_path=sys.argv[1], dest=sys.argv[2])
  File "/Ray/ovirt-system-tests/common/scripts/put_host_image.py", line 55, in 
add_host_image_to_dest
symlink(backing_file_path, dest)
OSError: [Errno 2] No such file or directory
+ on_exit
+ [[ 1 -ne 0 ]]
+ logger.error 'on_exit: Exiting with a non-zero status'
+ logger.log ERROR 'on_exit: Exiting with a non-zero status'
+ set +x
2019-10-16 14:50:14.980898261+0100 run_suite.sh::on_exit::ERROR:: on_exit: 
Exiting with a non-zero status
+ logger.info 'Dumping lago env status'
+ logger.log INFO 'Dumping lago env status'
+ set +x
2019-10-16 14:50:14.986356383+0100 run_suite.sh::on_exit::INFO:: Dumping lago 
env status
+ env_status
+ cd /Ray/ovirt-system-tests/deployment-basic-suite-4.3
+ lago status
[Prefix]:
Base directory: /Ray/ovirt-system-tests/deployment-basic-suite-4.3/default
[Networks]:
[lago-basic-suite-4-3-net-bonding]:
gateway: 192.168.203.1
management: False
status: down
[lago-basic-suite-4-3-net-management]:
gateway: 192.168.205.1
management: True
status: down
[lago-basic-suite-4-3-net-storage]:
gateway: 192.168.204.1
management: False
status: down
UUID: d6b67236f01b11e988fc002128a2f0d8
[VMs]:
[lago-basic-suite-4-3-engine]:
[NICs]:
[eth0]:
ip: 192.168.205.2
network: lago-basic-suite-4-3-net-management
[eth1]:
ip: 192.168.204.2
network: lago-basic-suite-4-3-net-storage
distro: el7
[metadata]:
deploy-scripts:

$LAGO_PREFIX_PATH/scripts/_Ray_ovirt-system-tests_basic-suite-4.3_deploy-scripts_add_local_repo_no_ext_access.sh

$LAGO_PREFIX_PATH/scripts/_Ray_ovirt-system-tests_basic-suite-4.3_deploy-scripts_setup_engine.sh
ovirt-engine-password: 123
root password: 123456
status: down
[lago-basic-suite-4-3-host-0]:
[NICs]:
[eth0]:
ip: 192.168.205.4
network: lago-basic-suite-4-3-net-management
[eth1]:
ip: 192.168.204.4
network: lago-basic-suite-4-3-net-storage
[eth2]:
ip: 192.168.203.4
network: lago-basic-suite-4-3-net-bonding
[eth3]:
ip: 192.168.203.5
network: lago-basic-suite-4-3-net-bonding
distro: el7
[metadata]:
deploy-scripts:

$LAGO_PREFIX_PATH/scripts/_Ray_ovirt-system-tests_basic-suite-4.3_deploy-scripts_add_local_repo_no_ext_access.sh

$LAGO_PREFIX_PATH/scripts/_Ray_ovirt-system-tests_basic-suite-4.3_deploy-scripts_setup_host_el7.sh

$LAGO_PREFIX_PATH/scripts/_Ray_ovirt-system-tests_basic-suite-4.3_deploy-scripts_setup_1st_host_el7.sh
root password: 123456
status: down
[lago-basic-suite-4-3-host-1]:
[NICs]:
[eth0]:
ip: 192.168.205.3
network: lago-basic-suite-4-3-net-management
[eth1]:
ip: 192.168.204.3
network: lago-basic-suite-4-3-net-storage
[eth2]:
ip: 192.168.203.2
network: lago-basic-suite-4-3-net-bonding
[eth3]:
ip: 192.168.203.3
network: lago-basic-suite-4-3-net-bonding
distro: el7
[metadata]:
deploy-scripts:

$LAGO_PREFIX_PATH/scripts/_Ray_ovirt-system-tests_basic-suite-4.3_deploy-scripts_add_local_repo_no_ext_access.sh


[ovirt-devel] ovirt-system-tests hackathon report

2018-03-13 Thread Sandro Bonazzola
4 people accepted calendar invite:
- Devin A. Bougie
- Francesco Romani
- Jiri Belka
- suporte, logicworks

4 people tentatively accepted calendar invite:
- Amnon Maimon
- Andreas Bleischwitz
- Arnaud Lauriou
- Stephen Pesini

2 mailing lists accepted the calendar invite: us...@ovirt.org, devel@ovirt.org
(don't ask me how), so I may have missed someone in the above list.


4 patches got merged:
- "Add check for host update to the 1st host." - Merged - Yaniv Kaul -
  ovirt-system-tests, master (add_upgrade_check) - 4:10 PM
- "basic-suite-master: add vnic_profile_mappings to register vm" - Merged -
  Eitan Raviv - ovirt-system-tests, master (register-template-vnic-mapping) - 2:50 PM
- "Revert "ovirt-4.2: Skipping 002_bootstrap.update_default_cluster"" -
  Merged - Eyal Edri - ovirt-system-tests, master - 11:36 AM
- "seperate 4.2 tests and utils from master" - Merged - Eyal Edri -
  ovirt-system-tests, master - 11:35 AM

13 patches have been pushed / reviewed / rebased:
- "Add gdeploy to ovirt-4.2.repo" - Daniel Belenky - ovirt-system-tests,
  master - 4:53 PM
- "Cleanup of test code - next() replaced with any()" - Martin Sivák -
  ovirt-system-tests, master - 4:51 PM
- "Add network queues custom property and use it in the vnic profile for
  VM0" - Yaniv Kaul - ovirt-system-tests, master (multi_queue_config) - 4:49 PM
- "new suite: he-basic-iscsi-suite-master" - Yuval Turgeman -
  ovirt-system-tests, master (he-basic-iscsi-suite-master) - 4:47 PM
- "Collect host-deploy bundle from the engine" - Yedidyah Bar David -
  ovirt-system-tests, master - 4:41 PM
- "network-suite-master: Make openstack_client_config fixture available to
  all ..." - Merge Conflict - Marcin Mirecki - ovirt-system-tests, master - 3:39 PM
- "new suite: he-basic-ng-ansible-suite-master" - Sandro Bonazzola -
  ovirt-system-tests, master (he-basic-ng-ansible-suite-master) - 3:37 PM
- "Enable and move additional tests to 002" - Yaniv Kaul - ovirt-system-tests

Re: [ovirt-devel] ovirt-system-tests run specific scenario/debugging

2017-09-19 Thread Marc Young
>
> Did you run 'lago init', and 'lago start' before running 'lago deploy'?


I missed `lago start` (it's hard to find decisive commands across a single
source of documentation, but that will very likely solve my issue). I was
running:

lago init
lago ovirt reposetup --reposync-yum-config reposync-config.repo

lago deploy
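With `lago start` added, the full manual order would presumably be:

lago init
lago ovirt reposetup --reposync-yum-config reposync-config.repo
lago start
lago deploy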


I think the 'demo tool' feature might be working for basic suite already as
> I mentioned, but it's not official/standardized yet; we're working on it
> currently and it will have a standard
> build / deploy flow which will allow consuming it in a much easier
> fashion.
>

Admittedly most of my issues have been largely due to deciphering
commands/etc. from misc docs and jenkins configs. Most things have worked
very easily after figuring out the little details, but I'm happy to try out
the demo tool.

On Tue, Sep 19, 2017 at 1:41 AM, Eyal Edri  wrote:

>
>
> On Tue, Sep 19, 2017 at 9:05 AM, Barak Korren  wrote:
>
>> On 19 September 2017 at 06:35, Marc Young <3vilpeng...@gmail.com> wrote:
>> >
>> > Is there a lago command I'm missing/some sort of configuration change
>> needed
>> > that I'm missing to run the lago commands manually?
>> > I've been trying to piece together the manual commands from
>> run_suite.sh but
> >> > I don't see it doing anything that I'm outright missing.
>> >
>> Did you run 'lago init', and 'lago start' before running 'lago deploy'?
>>
>> You'd probably see these SSH errors if you just run 'lago deploy'
>> after 'lago init' because the VMs are not up yet.
>>
>> You'll also need to run 'lago ovirt reposetup ...' to make OST deploy
>> work, and the parameters you need to pass it may not be trivial.
>>
>> Practically, just removing the test scenarios you don't need would
>> probably be much easier.
>>
>> As Eyal mentioned before, we're working on 'demotool' which will allow
>> you to download a full working oVirt Lago environment.
>>
>
> Adding Gal,
> I think the 'demo tool' feature might be working for basic suite already
> as I mentioned, but it's not official/standardized yet; we're working on it
> currently and it will have a standard
> build / deploy flow which will allow consuming it in a much easier
> fashion.
>
> You can try it and see if it helps your need, try the following:
>
> 1. Run the manual job [1] and tick the 'create images' option on job
> parameters ( also provide link to custom yum repo / jenkins job with RPMs
> you want to test if you have any )
> 2. Once the job is done, download the tar.gz file
> 3. Follow the instructions on [2]; there might be issues since it's
> not official yet, but it basically comes down to:
>
>
>- tar -xzvf $your_image.tar.gz
>- lago init ( inside the lago workdir when you opened the files )
>- lago ovirt start --with-vms
>- lago status or lago ovirt status should show if the VMs are up
>
>
> Now you have a running oVirt env, which has run all the tests in the basic
> suite, so you can try to use it or run API calls against it.
> Maybe use lago shell to connect to one of the VMs if you need, etc...
>
>
> [1] http://jenkins.ovirt.org/view/oVirt%20system%20tests/
> job/ovirt-system-tests_manual/ ( this only helps if the suite passes w/o
> errors )
> [2] https://docs.google.com/document/d/1MZdpHAjNWlpPFhOXFfMavZbg6Kh38
> 6PRBg6jG6xslao/edit
>
>
>>
>> --
>> Barak Korren
>> RHV DevOps team , RHCE, RHCi
>> Red Hat EMEA
>> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>>
>>
>
>
> --
>
> Eyal edri
>
>
> ASSOCIATE MANAGER
>
> RHV DevOps
>
> EMEA VIRTUALIZATION R&D
>
>
> Red Hat EMEA 
>  TRIED. TESTED. TRUSTED. 
> phone: +972-9-7692018 <+972%209-769-2018>
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>

Re: [ovirt-devel] ovirt-system-tests run specific scenario/debugging

2017-09-19 Thread Eyal Edri
On Tue, Sep 19, 2017 at 9:05 AM, Barak Korren  wrote:

> On 19 September 2017 at 06:35, Marc Young <3vilpeng...@gmail.com> wrote:
> >
> > Is there a lago command I'm missing/some sort of configuration change
> needed
> > that I'm missing to run the lago commands manually?
> > I've been trying to piece together the manual commands from run_suite.sh
> but
> > I don't see it doing anything that I'm outright missing.
> >
> Did you run 'lago init', and 'lago start' before running 'lago deploy'?
>
> You'd probably see these SSH errors if you just run 'lago deploy'
> after 'lago init' because the VMs are not up yet.
>
> You'll also need to run 'lago ovirt reposetup ...' to make OST deploy
> work, and the parameters you need to pass it may not be trivial.
>
> Practically, just removing the test scenarios you don't need would
> probably be much easier.
>
> As Eyal mentioned before, we're working on 'demotool' which will allow
> you to download a full working oVirt Lago environment.
>

Adding Gal,
I think the 'demo tool' feature might be working for basic suite already as
I mentioned, but it's not official/standardized yet; we're working on it
currently and it will have a standard
build / deploy flow which will allow consuming it in a much easier fashion.

You can try it and see if it helps your need; try the following:

1. Run the manual job [1] and tick the 'create images' option in the job
parameters (also provide a link to a custom yum repo / jenkins job with
RPMs you want to test, if you have any).
2. Once the job is done, download the tar.gz file.
3. Follow the instructions on [2]; there might be issues since it's
not official yet, but it basically comes down to:


   - tar -xzvf $your_image.tar.gz
   - lago init ( inside the lago workdir when you opened the files )
   - lago ovirt start --with-vms
   - lago status or lago ovirt status should show if the VMs are up


Now you have a running oVirt env, which has run all the tests in the basic
suite, so you can try to use it or run API calls against it.
Maybe use lago shell to connect to one of the VMs if you need, etc...
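e.g., as a sketch (VM names follow the lago-<suite>-engine /
lago-<suite>-host-N pattern that 'lago status' prints):

  lago shell lago-basic-suite-master-engine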


[1]
http://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/
( this only helps if the suite passes w/o errors )
[2]
https://docs.google.com/document/d/1MZdpHAjNWlpPFhOXFfMavZbg6Kh386PRBg6jG6xslao/edit


>
> --
> Barak Korren
> RHV DevOps team , RHCE, RHCi
> Red Hat EMEA
> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>
>


-- 

Eyal edri


ASSOCIATE MANAGER

RHV DevOps

EMEA VIRTUALIZATION R&D


Red Hat EMEA 
 TRIED. TESTED. TRUSTED. 
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)

Re: [ovirt-devel] ovirt-system-tests run specific scenario/debugging

2017-09-19 Thread Barak Korren
On 19 September 2017 at 06:35, Marc Young <3vilpeng...@gmail.com> wrote:
>
> Is there a lago command I'm missing/some sort of configuration change needed
> that I'm missing to run the lago commands manually?
> I've been trying to piece together the manual commands from run_suite.sh but
> I don't see it doing anything that I'm outright missing.
>
Did you run 'lago init', and 'lago start' before running 'lago deploy'?

You'd probably see these SSH errors if you just run 'lago deploy'
after 'lago init' because the VMs are not up yet.

You'll also need to run 'lago ovirt reposetup ...' to make OST deploy
work, and the parameters you need to pass it may not be trivial.

Practically, just removing the test scenarios you don't need would
probably be much easier.

As Eyal mentioned before, we're working on 'demotool' which will allow
you to download a full working oVirt Lago environment.

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted


Re: [ovirt-devel] ovirt-system-tests run specific scenario/debugging

2017-09-18 Thread Marc Young
I'm attempting to run via the lago commands per Barak's suggestion, but I'm
running into ssh timeouts (as it seems like maybe a failing deployment):

myoung at dev-vm in ~/repos/github/ovirt-system-tests/vagrant on master*
$ lago deploy
@ Deploy environment:
  # [Thread-2] Deploy VM lago-vagrant-host-0:
  # [Thread-1] Deploy VM lago-vagrant-engine:
* [Thread-2] Wait for ssh connectivity:
* [Thread-1] Wait for ssh connectivity:
* [Thread-2] Wait for ssh connectivity: ERROR (in 0:28:21)
  # [Thread-2] Deploy VM lago-vagrant-host-0: ERROR (in 0:28:21)

With no changes,  using `run_suite.sh`, however, it works fine:


...snip
/home/myoung/repos/github/ovirt-system-tests
@ Deploy oVirt environment:
  # Deploy environment:
* [Thread-2] Deploy VM lago-vagrant-engine:
* [Thread-3] Deploy VM lago-vagrant-host-0:
* [Thread-3] Deploy VM lago-vagrant-host-0: Success (in 0:00:44)
* [Thread-2] Deploy VM lago-vagrant-engine: Success (in 0:13:16)
  # Deploy environment: Success (in 0:13:16)
@ Deploy oVirt environment: Success (in 0:13:16)
/home/myoung/repos/github/ovirt-system-tests
Running test scenario 000_check_repo_closure.py
@ Run test: 000_check_repo_closure.py:
snip


and even continues to a full suite success:

@ Collect artifacts: Success (in 0:00:04)
/home/myoung/repos/github/ovirt-system-tests
run_suite.sh::main::SUCCESS::
/home/myoung/repos/github/ovirt-system-tests/vagrant - All tests passed :)

Is there a lago command I'm missing/some sort of configuration change
needed that I'm missing to run the lago commands manually?
I've been trying to piece together the manual commands from run_suite.sh
but I don't see it doing anything that I'm outright missing.

On Sat, Sep 16, 2017 at 2:27 PM, Nadav Goldin 
wrote:

> Hi,
> Rerunning a specific Python test file is possible, though it takes a few
> manual steps. Take a look at [1].
>
> With regard to the debugger, it is possible to run the tests without
> 'lago ovirt runtest' commands at all, directly with Lago as a Python
> library, though it isn't fully used in OST. Basically you would have
> to export the same environment variables as described in [1], and then
> use the same imports and decorators as found in OST(mainly the
> testlib.with_ovirt_prefix decorator), with that in hand you can call
> the python file however you'd like.
>
> Of course this is all good for debugging, but less for OST (as you need
> the suite logic: log collections, order of tests, etc).
>
>
>
> [1] http://lists.ovirt.org/pipermail/lago-devel/20170402/000650.html
>
> On Thu, Sep 14, 2017 at 8:59 PM, Marc Young <3vilpeng...@gmail.com> wrote:
> > Is it possible to run a specific scenario without having to run back
> through
> > spin up/tear down?
> >
> > I want to rapidly debug a `test-scenarios/00#_something.py` and the
> > bootstrap scripts (001,002) take a really long time.
> >
> > Also is it possible to attach to a debugger within the test-scenario with
> > pdb? I didn't have luck and it looks like it's abstracted away and not
> > executed as a regular python file in a way that I can get to an
> interactive
> > debugger
> >
>

Re: [ovirt-devel] ovirt-system-tests run specific scenario/debugging

2017-09-16 Thread Nadav Goldin
Hi,
Rerunning a specific Python test file is possible, though it takes a few
manual steps. Take a look at [1].

With regard to the debugger, it is possible to run the tests without
'lago ovirt runtest' commands at all, directly with Lago as a Python
library, though it isn't fully used in OST. Basically you would have
to export the same environment variables as described in [1], and then
use the same imports and decorators as found in OST (mainly the
testlib.with_ovirt_prefix decorator); with that in hand you can call
the Python file however you'd like.

Of course this is all good for debugging, but less for OST (as you need
the suite logic: log collection, order of tests, etc.).



[1] http://lists.ovirt.org/pipermail/lago-devel/20170402/000650.html
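As a rough sketch of the simpler route via 'lago ovirt runtest' (the suite
and scenario names below are only examples, and the exact arguments may
differ on your version - check the runtest help):

  cd deployment-basic-suite-master          # the prefix run_suite.sh created
  lago ovirt runtest ../basic-suite-master/test-scenarios/004_basic_sanity.py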

On Thu, Sep 14, 2017 at 8:59 PM, Marc Young <3vilpeng...@gmail.com> wrote:
> Is it possible to run a specific scenario without having to run back through
> spin up/tear down?
>
> I want to rapidly debug a `test-scenarios/00#_something.py` and the
> bootstrap scripts (001,002) take a really long time.
>
> Also is it possible to attach to a debugger within the test-scenario with
> pdb? I didn't have luck and it looks like it's abstracted away and not
> executed as a regular python file in a way that I can get to an interactive
> debugger
>


Re: [ovirt-devel] ovirt-system-tests run specific scenario/debugging

2017-09-15 Thread Barak Korren
On 14 September 2017 at 20:59, Marc Young <3vilpeng...@gmail.com> wrote:
> Is it possible to run a specific scenario without having to run back through
> spin up/tear down?

You can snapshot the environment after Lago finishes the bootstrap
stages with `lago snapshot` and then roll back to the snapshot when
you need to.

You just need to initialize the environment manually with 'lago init',
'lago ovirt reposetup ...', 'lago deploy' and then run the initial
tests one by one with 'lago ovirt runtest' instead of letting
`run_suite.sh` do it all for you.

Alternatively you could possibly remove some of the symlinks from the
'test-scenarios' directory to leave just the ones you need and then
use `run_suite.sh` to get to the desired state before snapshotting.
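A sketch of that flow (the scenario file name is only an example, and the
exact snapshot/revert subcommand syntax may differ between lago versions -
check 'lago --help'):

  lago init
  lago ovirt reposetup --reposync-yum-config reposync-config.repo
  lago start
  lago deploy
  lago ovirt runtest <suite>/test-scenarios/002_bootstrap.py  # slow part, once
  lago snapshot bootstrap-done   # then revert to it while iterating on a test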

> I want to rapidly debug a `test-scenarios/00#_something.py` and the
> bootstrap scripts (001,002) take a really long time.
>
> Also is it possible to attach to a debugger within the test-scenario with
> pdb? I didn't have luck and it looks like it's abstracted away and not
> executed as a regular python file in a way that I can get to an
> debugger

'lago ovirt runtest' just runs nose, but it's probably redirecting I/O
in a way that would not let you interact with a debugger. But you can
try... 'run_suite.sh' adds its own layers of redirection, so working
through it will probably be more challenging.

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted


Re: [ovirt-devel] ovirt-system-tests run specific scenario/debugging

2017-09-15 Thread Eyal Edri
AFAIK it is not possible ATM, but we were considering adding that feature
when we move from python-nose to PyTest [1].
Hmm, maybe it will be possible using the export / import feature, but it
will probably need some custom code.
So you'd run the basic suite, get your env up and running, and then export
it to a tar.gz file; the next time you want to test something, you import
it back and run only the test you want.

We've been working on a project called 'ovirt demo tool' [2], which will
officially replace ovirt-live using Lago images. It is still under
development, but should work for the basic suite if you want to try it.
The manual job [3] should also allow saving the run as an image,
by setting the 'create images' option to 'yes'.


[1] https://ovirt-jira.atlassian.net/browse/OST-8
[2]
https://docs.google.com/document/d/1MZdpHAjNWlpPFhOXFfMavZbg6Kh386PRBg6jG6xslao/edit?usp=sharing
[3]
http://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/build?delay=0sec

On Thu, Sep 14, 2017 at 8:59 PM, Marc Young <3vilpeng...@gmail.com> wrote:

> Is it possible to run a specific scenario without having to run back
> through spin up/tear down?
>
> I want to rapidly debug a `test-scenarios/00#_something.py` and the
> bootstrap scripts (001,002) take a really long time.
>
> Also is it possible to attach to a debugger within the test-scenario with
> pdb? I didn't have luck and it looks like it's abstracted away and not
> executed as a regular python file in a way that I can get to an interactive
> debugger
>



-- 

Eyal edri


ASSOCIATE MANAGER

RHV DevOps

EMEA VIRTUALIZATION R&D


Red Hat EMEA 
 TRIED. TESTED. TRUSTED. 
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)

[ovirt-devel] ovirt-system-tests run specific scenario/debugging

2017-09-14 Thread Marc Young
Is it possible to run a specific scenario without having to run back
through spin up/tear down?

I want to rapidly debug a `test-scenarios/00#_something.py` and the
bootstrap scripts (001,002) take a really long time.

Also, is it possible to attach to a debugger within the test scenario with
pdb? I didn't have luck, and it looks like it's abstracted away and not
executed as a regular Python file in a way that I can get to an interactive
debugger.

Re: [ovirt-devel] ovirt-system-tests started failing locally

2017-09-14 Thread Yedidyah Bar David
On Thu, Sep 14, 2017 at 10:38 AM, Yedidyah Bar David  wrote:
> On Thu, Sep 14, 2017 at 10:28 AM, Barak Korren  wrote:
>>
>>
>> On 14 September 2017 at 10:22, Eyal Edri  wrote:
>>>
>>>
>>>
>>> On Thu, Sep 14, 2017 at 10:19 AM, Yaniv Kaul  wrote:


 I'm almost there, I'm stuck on:
 2017-09-14 03:10:23,322-04 DEBUG
 [org.ovirt.otopi.dialog.MachineDialogParser] (VdsDeploy) [49acf013]
 nextEvent: Log ERROR Yum [u'ERROR with transaction check vs depsolve:',
 'iptables = 1.4.21-18.0.1.el7.centos is needed by
 iptables-services-1.4.21-18.0.1.el7.centos.x86_64']
>>
>>
>> Where do you get this error? From otopi?
>
> It's likely an indirect dependency. Perhaps from ovirt-hosted-engine-setup.
>
> Can you try this patch:
>
> https://gerrit.ovirt.org/81730
>
>> The issue may be related to having an older iptables version pre-installed
>> in the image

Root cause might be similar to this one:

https://bugzilla.redhat.com/show_bug.cgi?id=1404624

>>


 2. I'm not sure why we need it - or is it required still even though
 we've moved to firewalld?
>>
>>
>> Firewalld is a front-end for iptables.
>
> The error is about iptables-services, which is the package providing
> an iptables "service".
>
>>
>>

 3. I think I'll just update iptables...
>>
>>
>> You mean in the deploy script?
>>
>>
>>
>> --
>> Barak Korren
>> RHV DevOps team , RHCE, RHCi
>> Red Hat EMEA
>> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>>
>
>
>
> --
> Didi



-- 
Didi


Re: [ovirt-devel] ovirt-system-tests started failing locally

2017-09-14 Thread Yedidyah Bar David
On Thu, Sep 14, 2017 at 10:28 AM, Barak Korren  wrote:
>
>
> On 14 September 2017 at 10:22, Eyal Edri  wrote:
>>
>>
>>
>> On Thu, Sep 14, 2017 at 10:19 AM, Yaniv Kaul  wrote:
>>>
>>>
>>> I'm almost there, I'm stuck on:
>>> 2017-09-14 03:10:23,322-04 DEBUG
>>> [org.ovirt.otopi.dialog.MachineDialogParser] (VdsDeploy) [49acf013]
>>> nextEvent: Log ERROR Yum [u'ERROR with transaction check vs depsolve:',
>>> 'iptables = 1.4.21-18.0.1.el7.centos is needed by
>>> iptables-services-1.4.21-18.0.1.el7.centos.x86_64']
>
>
> Where do you get this error? From otopi?

It's likely an indirect dependency. Perhaps from ovirt-hosted-engine-setup.

Can you try this patch:

https://gerrit.ovirt.org/81730

> The issue may be related to having an older iptables version pre-installed
> in the image
>
>>>
>>>
>>> 2. I'm not sure why we need it - or is it required still even though
>>> we've moved to firewalld?
>
>
> Firewalld is a front-end for iptables.

The error is about iptables-services, which is the package providing
an iptables "service".

>
>
>>>
>>> 3. I think I'll just update iptables...
>
>
> You mean in the deploy script?
>
>
>
> --
> Barak Korren
> RHV DevOps team , RHCE, RHCi
> Red Hat EMEA
> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>



-- 
Didi


Re: [ovirt-devel] ovirt-system-tests started failing locally

2017-09-14 Thread Barak Korren
On 14 September 2017 at 10:22, Eyal Edri  wrote:

>
>
> On Thu, Sep 14, 2017 at 10:19 AM, Yaniv Kaul  wrote:
>
>>
>> I'm almost there, I'm stuck on:
>> 2017-09-14 03:10:23,322-04 DEBUG [org.ovirt.otopi.dialog.MachineDialogParser]
>> (VdsDeploy) [49acf013] nextEvent: Log ERROR Yum [u'ERROR with transaction
>> check vs depsolve:', 'iptables = 1.4.21-18.0.1.el7.centos is needed by
>> iptables-services-1.4.21-18.0.1.el7.centos.x86_64']
>>
>
Where do you get this error? From otopi?
The issue may be related to having an older iptables version pre-installed
in the image


>
>> 2. I'm not sure why we need it - or is it required still even though
>> we've moved to firewalld?
>>
>
Firewalld is a front-end for iptables.



> 3. I think I'll just update iptables...
>>
>
You mean in the deploy script?



-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

Re: [ovirt-devel] ovirt-system-tests started failing locally

2017-09-14 Thread Eyal Edri
On Thu, Sep 14, 2017 at 10:19 AM, Yaniv Kaul  wrote:

>
>
> On Thu, Sep 14, 2017 at 9:25 AM, Eyal Edri  wrote:
>
>>
>>
>> On Thu, Sep 14, 2017 at 8:53 AM, Yaniv Kaul  wrote:
>>
>>>
>>>
>>> On Sep 13, 2017 9:38 PM, "Marc Young" <3vilpeng...@gmail.com> wrote:
>>>
>>> I finally got it past the hurdle. fwiw this is the git diff to
>>> reposync-config.repo: https://paste.fedoraproject.org/paste/
>>> SX6Hgnagzzhh~UE~X0PfEg
>>>
>>>
>> Thanks for the help!
>>
>>
>>>
>>> Thanks, I've also been working on sorting this out and I hope I'm almost
>>> there wrt updated packages, etc.
>>>
>>
>> I'm guessing it's [1], thanks for help, let me know if I can assist as
>> well in it.
>>
>> [1] https://gerrit.ovirt.org/#/c/81724/
>>
>
> I'm almost there, I'm stuck on:
> 2017-09-14 03:10:23,322-04 DEBUG [org.ovirt.otopi.dialog.MachineDialogParser]
> (VdsDeploy) [49acf013] nextEvent: Log ERROR Yum [u'ERROR with transaction
> check vs depsolve:', 'iptables = 1.4.21-18.0.1.el7.centos is needed by
> iptables-services-1.4.21-18.0.1.el7.centos.x86_64']
>
>
> when installing the host, which doesn't make sense to me because:
> 1. I have both RPMs available, and can install them:
> [root@lago-basic-suite-master-host-0 ~]# yum install iptables
> iptables-services
> Loaded plugins: fastestmirror
> Loading mirror speeds from cached hostfile
> Resolving Dependencies
> --> Running transaction check
> ---> Package iptables.x86_64 0:1.4.21-17.el7 will be updated
> ---> Package iptables.x86_64 0:1.4.21-18.0.1.el7.centos will be an update
> ---> Package iptables-services.x86_64 0:1.4.21-18.0.1.el7.centos will be
> installed
> --> Finished Dependency Resolution
>
> Dependencies Resolved
>
> ==========================================================================
>  Package            Arch     Version                    Repository    Size
> ==========================================================================
> Installing:
>  iptables-services  x86_64   1.4.21-18.0.1.el7.centos   alocalsync    51 k
> Updating:
>  iptables           x86_64   1.4.21-18.0.1.el7.centos   alocalsync   428 k
>
> Transaction Summary
> ==========================================================================
> Install  1 Package
> Upgrade  1 Package
>
>
> 2. I'm not sure why we need it - or is it required still even though we've
> moved to firewalld?
> 3. I think I'll just update iptables...
>


There is another issue, visible only in CI though: the removal of
the ovirt-4.0 repo from CentOS.
I'm trying to fix it via https://gerrit.ovirt.org/#/c/81729/



>
> Y.
>
>
>>
>>
>>> Y.
>>>
>>>
>>> On Wed, Sep 13, 2017 at 10:36 AM, Eyal Edri  wrote:
>>>
 This might be related to the fact CentOS 7.4 is coming,
 Due to the way we handle yum repos in OST ( we use reposync and specify
 which pkgs to include/exclude per repo, so the run will be done locally and
 not depend on external sources ).

 It doesn't look consistent though, because some jobs are still working,
 for e.g, I just triggered the manual job and it looks OK [1]

 Unfortunately, we don't have a good alternative for this approach yet,
 so we'll need to update the list of include/exclude files in the
 reposync.repo file,
 Hopefully, this should be resolved by tomorrow once we figure out which
 pkgs need to be updated in the file; you can always remove the 'include'
 line, but then it will sync the entire CentOS repo,
 which might take hours.

 There might be an option to skip 'reposync' but I'm not sure if it's
 supported yet; adding Gal to be sure.

 [1] http://jenkins.ovirt.org/view/oVirt%20system%20tests/job
 /ovirt-system-tests_manual/1145/console


 On Wed, Sep 13, 2017 at 5:57 PM, Marc Young <3vilpeng...@gmail.com>
 wrote:

> While rerunning some local tests, I encountered a new error:
>
> + lago ovirt deploy
> @ Deploy oVirt environment:
>   # Deploy environment:
> * [Thread-2] Deploy VM lago-vagrant-engine:
> * [Thread-3] Deploy VM lago-vagrant-host-0:
> * [Thread-3] Deploy VM lago-vagrant-host-0: Success (in 0:00:45)
> STDERR
> + 

Re: [ovirt-devel] ovirt-system-tests started failing locally

2017-09-14 Thread Yaniv Kaul
On Thu, Sep 14, 2017 at 9:25 AM, Eyal Edri  wrote:

>
>
> On Thu, Sep 14, 2017 at 8:53 AM, Yaniv Kaul  wrote:
>
>>
>>
>> On Sep 13, 2017 9:38 PM, "Marc Young" <3vilpeng...@gmail.com> wrote:
>>
>> I finally got it past the hurdle. fwiw this is the git diff to
>> reposync-config.repo: https://paste.fedoraproject.org/paste/
>> SX6Hgnagzzhh~UE~X0PfEg
>>
>>
> Thanks for the help!
>
>
>>
>> Thanks, I've also been working on sorting this out and I hope I'm almost
>> there wrt updated packages, etc.
>>
>
> I'm guessing it's [1], thanks for help, let me know if I can assist as
> well in it.
>
> [1] https://gerrit.ovirt.org/#/c/81724/
>

I'm almost there, I'm stuck on:
2017-09-14 03:10:23,322-04 DEBUG
[org.ovirt.otopi.dialog.MachineDialogParser] (VdsDeploy) [49acf013]
nextEvent: Log ERROR Yum [u'ERROR with transaction check vs depsolve:',
'iptables = 1.4.21-18.0.1.el7.centos is needed by
iptables-services-1.4.21-18.0.1.el7.centos.x86_64']


when installing the host, which doesn't make sense to me because:
1. I have both RPMs available, and can install them:
[root@lago-basic-suite-master-host-0 ~]# yum install iptables
iptables-services
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package iptables.x86_64 0:1.4.21-17.el7 will be updated
---> Package iptables.x86_64 0:1.4.21-18.0.1.el7.centos will be an update
---> Package iptables-services.x86_64 0:1.4.21-18.0.1.el7.centos will be
installed
--> Finished Dependency Resolution

Dependencies Resolved

===========================================================================
 Package            Arch      Version                    Repository    Size
===========================================================================
Installing:
 iptables-services  x86_64    1.4.21-18.0.1.el7.centos   alocalsync    51 k
Updating:
 iptables           x86_64    1.4.21-18.0.1.el7.centos   alocalsync   428 k

Transaction Summary
===========================================================================
Install  1 Package
Upgrade  1 Package


2. I'm not sure why we need it - or is it required still even though we've
moved to firewalld?
3. I think I'll just update iptables...
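i.e., roughly this in the host deploy script (a sketch; it assumes the
newer 1.4.21-18.0.1 build is available from the synced repos, as shown
above):

  yum update -y iptables            # bring iptables up to the level
                                    # that iptables-services depends on
  yum install -y iptables-services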

Y.


>
>
>> Y.
>>
>>
>> On Wed, Sep 13, 2017 at 10:36 AM, Eyal Edri  wrote:
>>
>>> This might be related to the fact CentOS 7.4 is coming,
>>> Due to the way we handle yum repos in OST ( we use reposync and specify
>>> which pkgs to include/exclude per repo, so the run will be done locally and
>>> not depend on external sources ).
>>>
>>> It doesn't look consistent though, because some jobs are still working,
>>> for e.g, I just triggered the manual job and it looks OK [1]
>>>
>>> Unfortunately, we don't have a good alternative for this approach yet,
>>> so we'll need to update the list of include/exclude files in the
>>> reposync.repo file,
>>> Hopefully, this should be resolved by tomorrow once we figure out which
>>> pkgs need to be updated in the file; you can always remove the 'include'
>>> line, but then it will sync the entire CentOS repo,
>>> which might take hours.
>>>
>>> There might be an option to skip 'reposync' but I'm not sure if it's
>>> supported yet; adding Gal to be sure.
>>>
>>> [1] http://jenkins.ovirt.org/view/oVirt%20system%20tests/job
>>> /ovirt-system-tests_manual/1145/console
>>>
>>>
>>> On Wed, Sep 13, 2017 at 5:57 PM, Marc Young <3vilpeng...@gmail.com>
>>> wrote:
>>>
 While rerunning some local tests, I encountered a new error:

 + lago ovirt deploy
 @ Deploy oVirt environment:
   # Deploy environment:
 * [Thread-2] Deploy VM lago-vagrant-engine:
 * [Thread-3] Deploy VM lago-vagrant-host-0:
 * [Thread-3] Deploy VM lago-vagrant-host-0: Success (in 0:00:45)
 STDERR
 + MAIN_NFS_DEV=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_2
 + ISCSI_DEV=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_3
 + NUM_LUNS=5
 + EL7='release 7\.[0-9]'
 + main
 + install_deps
 + systemctl disable --now kdump.service
 + yum install -y --downloaddir=/dev/shm nfs-utils rpcbind lvm2
 targetcli sg3_utils iscsi-initiator-utils lsscsi policycoreutils-python
 Error: Package: gssproxy-0.7.0-4.el7.x86_64 (alocalsync)
Requires: krb5-libs >= 1.15

Re: [ovirt-devel] ovirt-system-tests started failing locally

2017-09-14 Thread Eyal Edri
On Thu, Sep 14, 2017 at 8:53 AM, Yaniv Kaul  wrote:

>
>
> On Sep 13, 2017 9:38 PM, "Marc Young" <3vilpeng...@gmail.com> wrote:
>
> I finally got it past the hurdle. fwiw this is the git diff to
> reposync-config.repo: https://paste.fedoraproject.org/paste/
> SX6Hgnagzzhh~UE~X0PfEg
>
>
Thanks for the help!


>
> Thanks, I've also been working on sorting this out and I hope I'm almost
> there wrt updated packages, etc.
>

I'm guessing it's [1], thanks for help, let me know if I can assist as well
in it.

[1] https://gerrit.ovirt.org/#/c/81724/


> Y.
>
>
> On Wed, Sep 13, 2017 at 10:36 AM, Eyal Edri  wrote:
>
>> This might be related to the fact CentOS 7.4 is coming,
>> Due to the way we handle yum repos in OST ( we use reposync and specify
>> which pkgs to include/exclude per repo, so the run will be done locally and
>> not depend on external sources ).
>>
>> It doesn't look consistent though, because some jobs are still working,
>> for e.g, I just triggered the manual job and it looks OK [1]
>>
>> Unfortunately, we don't have a good alternative for this approach yet, so
>> we'll need to update the list of include/exclude files in the reposync.repo
>> file,
>> Hopefully, this should be resolved by tomorrow once we figure out which
>> pkgs need to be updated in the file; you can always remove the 'include'
>> line, but then it will sync the entire CentOS repo,
>> which might take hours.
>>
>> There might be an option to skip 'reposync' but I'm not sure if it's
>> supported yet; adding Gal to be sure.
>>
>> [1] http://jenkins.ovirt.org/view/oVirt%20system%20tests/job
>> /ovirt-system-tests_manual/1145/console
>>
>>
>> On Wed, Sep 13, 2017 at 5:57 PM, Marc Young <3vilpeng...@gmail.com>
>> wrote:
>>
>>> While rerunning some local tests, I encountered a new error:
>>>
>>> + lago ovirt deploy
>>> @ Deploy oVirt environment:
>>>   # Deploy environment:
>>> * [Thread-2] Deploy VM lago-vagrant-engine:
>>> * [Thread-3] Deploy VM lago-vagrant-host-0:
>>> * [Thread-3] Deploy VM lago-vagrant-host-0: Success (in 0:00:45)
>>> STDERR
>>> + MAIN_NFS_DEV=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_2
>>> + ISCSI_DEV=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_3
>>> + NUM_LUNS=5
>>> + EL7='release 7\.[0-9]'
>>> + main
>>> + install_deps
>>> + systemctl disable --now kdump.service
>>> + yum install -y --downloaddir=/dev/shm nfs-utils rpcbind lvm2 targetcli
>>> sg3_utils iscsi-initiator-utils lsscsi policycoreutils-python
>>> Error: Package: gssproxy-0.7.0-4.el7.x86_64 (alocalsync)
>>>Requires: krb5-libs >= 1.15
>>>Installed: krb5-libs-1.14.1-27.el7_3.x86_64 (installed)
>>>krb5-libs = 1.14.1-27.el7_3
>>>Available: krb5-libs-1.14.1-26.el7.i686 (base)
>>>krb5-libs = 1.14.1-26.el7
>>>
>>>   - STDERR
>>> + MAIN_NFS_DEV=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_2
>>> + ISCSI_DEV=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_3
>>> + NUM_LUNS=5
>>> + EL7='release 7\.[0-9]'
>>> + main
>>> + install_deps
>>> + systemctl disable --now kdump.service
>>> + yum install -y --downloaddir=/dev/shm nfs-utils rpcbind lvm2 targetcli
>>> sg3_utils iscsi-initiator-utils lsscsi policycoreutils-python
>>> Error: Package: gssproxy-0.7.0-4.el7.x86_64 (alocalsync)
>>>Requires: krb5-libs >= 1.15
>>>Installed: krb5-libs-1.14.1-27.el7_3.x86_64 (installed)
>>>krb5-libs = 1.14.1-27.el7_3
>>>Available: krb5-libs-1.14.1-26.el7.i686 (base)
>>>krb5-libs = 1.14.1-26.el7
>>>
>>> * [Thread-2] Deploy VM lago-vagrant-engine: ERROR (in 0:01:05)
>>> Error while running thread
>>> Traceback (most recent call last):
>>>   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 58, in
>>> _ret_via_queue
>>> queue.put({'return': func()})
>>>   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1610, in
>>> _deploy_host
>>> (script, ret, host.name(), ),
>>> RuntimeError: /home/myoung/repos/github/ovir
>>> t-system-tests/deployment-vagrant/default/scripts/_home_myou
>>> ng_repos_github_ovirt-system-tests_vagrant_.._common_deploy-
>>> scripts_setup_storage_unified_el7.sh failed with status 1 on
>>> lago-vagrant-engine
>>> Error while running thread
>>> Traceback (most recent call last):
>>>   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 58, in
>>> _ret_via_queue
>>> queue.put({'return': func()})
>>>   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1610, in
>>> _deploy_host
>>> (script, ret, host.name(), ),
>>> RuntimeError: /home/myoung/repos/github/ovir
>>> t-system-tests/deployment-vagrant/default/scripts/_home_myou
>>> ng_repos_github_ovirt-system-tests_vagrant_.._common_deploy-
>>> scripts_setup_storage_unified_el7.sh failed with status 1 on
>>> lago-vagrant-engine
>>>   # Deploy environment: ERROR (in 0:01:05)
>>> @ Deploy oVirt environment: ERROR (in 0:01:05)
>>>
>>> Its the same error as here: http://jenkins.ovirt.org
>>> 

Re: [ovirt-devel] ovirt-system-tests started failing locally

2017-09-13 Thread Yaniv Kaul
On Sep 13, 2017 9:38 PM, "Marc Young" <3vilpeng...@gmail.com> wrote:

I finally got it past the hurdle. fwiw this is the git diff to
reposync-config.repo: https://paste.fedoraproject.org/paste/
SX6Hgnagzzhh~UE~X0PfEg


Thanks, I've also been working on sorting this out and I hope I'm almost
there wrt updated packages, etc.
Y.


On Wed, Sep 13, 2017 at 10:36 AM, Eyal Edri  wrote:

> This might be related to the fact CentOS 7.4 is coming,
> Due to the way we handle yum repos in OST ( we use reposync and specify
> which pkgs to include/exclude per repo, so the run will be done locally and
> not depend on external sources ).
>
> It doesn't look consistent though, because some jobs are still working,
> for e.g, I just triggered the manual job and it looks OK [1]
>
> Unfortunately, we don't have a good alternative for this approach yet, so
> we'll need to update the list of include/exclude files in the reposync.repo
> file,
> Hopefully, this should be resolved by tomorrow once we figure out which
> pkgs need to be updated in the file; you can always remove the 'include'
> line, but then it will sync the entire CentOS repo,
> which might take hours.
>
> There might be an option to skip 'reposync' but I'm not sure if it's
> supported yet; adding Gal to be sure.
>
> [1] http://jenkins.ovirt.org/view/oVirt%20system%20tests/job
> /ovirt-system-tests_manual/1145/console
>
>
> On Wed, Sep 13, 2017 at 5:57 PM, Marc Young <3vilpeng...@gmail.com> wrote:
>
>> While rerunning some local tests, I encountered a new error:
>>
>> + lago ovirt deploy
>> @ Deploy oVirt environment:
>>   # Deploy environment:
>> * [Thread-2] Deploy VM lago-vagrant-engine:
>> * [Thread-3] Deploy VM lago-vagrant-host-0:
>> * [Thread-3] Deploy VM lago-vagrant-host-0: Success (in 0:00:45)
>> STDERR
>> + MAIN_NFS_DEV=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_2
>> + ISCSI_DEV=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_3
>> + NUM_LUNS=5
>> + EL7='release 7\.[0-9]'
>> + main
>> + install_deps
>> + systemctl disable --now kdump.service
>> + yum install -y --downloaddir=/dev/shm nfs-utils rpcbind lvm2 targetcli
>> sg3_utils iscsi-initiator-utils lsscsi policycoreutils-python
>> Error: Package: gssproxy-0.7.0-4.el7.x86_64 (alocalsync)
>>Requires: krb5-libs >= 1.15
>>Installed: krb5-libs-1.14.1-27.el7_3.x86_64 (installed)
>>krb5-libs = 1.14.1-27.el7_3
>>Available: krb5-libs-1.14.1-26.el7.i686 (base)
>>krb5-libs = 1.14.1-26.el7
>>
>>   - STDERR
>> + MAIN_NFS_DEV=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_2
>> + ISCSI_DEV=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_3
>> + NUM_LUNS=5
>> + EL7='release 7\.[0-9]'
>> + main
>> + install_deps
>> + systemctl disable --now kdump.service
>> + yum install -y --downloaddir=/dev/shm nfs-utils rpcbind lvm2 targetcli
>> sg3_utils iscsi-initiator-utils lsscsi policycoreutils-python
>> Error: Package: gssproxy-0.7.0-4.el7.x86_64 (alocalsync)
>>Requires: krb5-libs >= 1.15
>>Installed: krb5-libs-1.14.1-27.el7_3.x86_64 (installed)
>>krb5-libs = 1.14.1-27.el7_3
>>Available: krb5-libs-1.14.1-26.el7.i686 (base)
>>krb5-libs = 1.14.1-26.el7
>>
>> * [Thread-2] Deploy VM lago-vagrant-engine: ERROR (in 0:01:05)
>> Error while running thread
>> Traceback (most recent call last):
>>   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 58, in
>> _ret_via_queue
>> queue.put({'return': func()})
>>   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1610, in
>> _deploy_host
>> (script, ret, host.name(), ),
>> RuntimeError: /home/myoung/repos/github/ovir
>> t-system-tests/deployment-vagrant/default/scripts/_home_myou
>> ng_repos_github_ovirt-system-tests_vagrant_.._common_deploy-
>> scripts_setup_storage_unified_el7.sh failed with status 1 on
>> lago-vagrant-engine
>> Error while running thread
>> Traceback (most recent call last):
>>   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 58, in
>> _ret_via_queue
>> queue.put({'return': func()})
>>   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1610, in
>> _deploy_host
>> (script, ret, host.name(), ),
>> RuntimeError: /home/myoung/repos/github/ovir
>> t-system-tests/deployment-vagrant/default/scripts/_home_myou
>> ng_repos_github_ovirt-system-tests_vagrant_.._common_deploy-
>> scripts_setup_storage_unified_el7.sh failed with status 1 on
>> lago-vagrant-engine
>>   # Deploy environment: ERROR (in 0:01:05)
>> @ Deploy oVirt environment: ERROR (in 0:01:05)
>>
>> Its the same error as here: http://jenkins.ovirt.org
>> /view/oVirt%20system%20tests/job/ovirt-system-tests_master_c
>> heck-patch-el7-x86_64/1683/console
>>
>>
>>
>
>
>
> --
>
> Eyal edri
>
>
> ASSOCIATE MANAGER
>
> RHV DevOps
>
> EMEA VIRTUALIZATION R&D
>
>
> Red Hat EMEA 

Re: [ovirt-devel] ovirt-system-tests started failing locally

2017-09-13 Thread Marc Young
I finally got it past the hurdle. fwiw this is the git diff to
reposync-config.repo:
https://paste.fedoraproject.org/paste/SX6Hgnagzzhh~UE~X0PfEg

On Wed, Sep 13, 2017 at 10:36 AM, Eyal Edri  wrote:

> This might be related to the fact CentOS 7.4 is coming,
> Due to the way we handle yum repos in OST ( we use reposync and specify
> which pkgs to include/exclude per repo, so the run will be done locally and
> not depend on external sources ).
>
> It doesn't look consistent though, because some jobs are still working,
> for e.g, I just triggered the manual job and it looks OK [1]
>
> Unfortunately, we don't have a good alternative for this approach yet, so
> we'll need to update the list of include/exclude files in the reposync.repo
> file,
> Hopefully, this should be resolved by tomorrow once we figure out which
> pkgs need to be updated in the file; you can always remove the 'include'
> line, but then it will sync the entire CentOS repo,
> which might take hours.
>
> There might be an option to skip 'reposync' but I'm not sure if it's
> supported yet; adding Gal to be sure.
>
> [1] http://jenkins.ovirt.org/view/oVirt%20system%20tests/
> job/ovirt-system-tests_manual/1145/console
>
>
> On Wed, Sep 13, 2017 at 5:57 PM, Marc Young <3vilpeng...@gmail.com> wrote:
>
>> While rerunning some local tests, I encountered a new error:
>>
>> + lago ovirt deploy
>> @ Deploy oVirt environment:
>>   # Deploy environment:
>> * [Thread-2] Deploy VM lago-vagrant-engine:
>> * [Thread-3] Deploy VM lago-vagrant-host-0:
>> * [Thread-3] Deploy VM lago-vagrant-host-0: Success (in 0:00:45)
>> STDERR
>> + MAIN_NFS_DEV=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_2
>> + ISCSI_DEV=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_3
>> + NUM_LUNS=5
>> + EL7='release 7\.[0-9]'
>> + main
>> + install_deps
>> + systemctl disable --now kdump.service
>> + yum install -y --downloaddir=/dev/shm nfs-utils rpcbind lvm2 targetcli
>> sg3_utils iscsi-initiator-utils lsscsi policycoreutils-python
>> Error: Package: gssproxy-0.7.0-4.el7.x86_64 (alocalsync)
>>Requires: krb5-libs >= 1.15
>>Installed: krb5-libs-1.14.1-27.el7_3.x86_64 (installed)
>>krb5-libs = 1.14.1-27.el7_3
>>Available: krb5-libs-1.14.1-26.el7.i686 (base)
>>krb5-libs = 1.14.1-26.el7
>>
>>   - STDERR
>> + MAIN_NFS_DEV=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_2
>> + ISCSI_DEV=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_3
>> + NUM_LUNS=5
>> + EL7='release 7\.[0-9]'
>> + main
>> + install_deps
>> + systemctl disable --now kdump.service
>> + yum install -y --downloaddir=/dev/shm nfs-utils rpcbind lvm2 targetcli
>> sg3_utils iscsi-initiator-utils lsscsi policycoreutils-python
>> Error: Package: gssproxy-0.7.0-4.el7.x86_64 (alocalsync)
>>Requires: krb5-libs >= 1.15
>>Installed: krb5-libs-1.14.1-27.el7_3.x86_64 (installed)
>>krb5-libs = 1.14.1-27.el7_3
>>Available: krb5-libs-1.14.1-26.el7.i686 (base)
>>krb5-libs = 1.14.1-26.el7
>>
>> * [Thread-2] Deploy VM lago-vagrant-engine: ERROR (in 0:01:05)
>> Error while running thread
>> Traceback (most recent call last):
>>   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 58, in
>> _ret_via_queue
>> queue.put({'return': func()})
>>   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1610, in
>> _deploy_host
>> (script, ret, host.name(), ),
>> RuntimeError: /home/myoung/repos/github/ovir
>> t-system-tests/deployment-vagrant/default/scripts/_home_myou
>> ng_repos_github_ovirt-system-tests_vagrant_.._common_
>> deploy-scripts_setup_storage_unified_el7.sh failed with status 1 on
>> lago-vagrant-engine
>> Error while running thread
>> Traceback (most recent call last):
>>   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 58, in
>> _ret_via_queue
>> queue.put({'return': func()})
>>   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1610, in
>> _deploy_host
>> (script, ret, host.name(), ),
>> RuntimeError: /home/myoung/repos/github/ovirt-system-tests/deployment-vagrant/default/scripts/_home_myoung_repos_github_ovirt-system-tests_vagrant_.._common_deploy-scripts_setup_storage_unified_el7.sh
>> failed with status 1 on lago-vagrant-engine
>>   # Deploy environment: ERROR (in 0:01:05)
>> @ Deploy oVirt environment: ERROR (in 0:01:05)
>>
>> It's the same error as here: http://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_master_check-patch-el7-x86_64/1683/console
>>
>>
>>
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>
>
>
> --
>
> Eyal edri
>
>
> ASSOCIATE MANAGER
>
> RHV DevOps
>
> EMEA VIRTUALIZATION R&D
>
>
> Red Hat EMEA
> TRIED. TESTED. TRUSTED.
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>

Re: [ovirt-devel] ovirt-system-tests started failing locally

2017-09-13 Thread Eyal Edri
This might be related to the fact that CentOS 7.4 is coming, due to the way
we handle yum repos in OST (we use reposync and specify which pkgs to
include/exclude per repo, so the run is done locally and does not depend on
external sources).

It doesn't look consistent though, because some jobs are still working;
for example, I just triggered the manual job and it looks OK [1]

Unfortunately, we don't have a good alternative for this approach yet, so
we'll need to update the list of include/exclude files in the reposync.repo
file.
Hopefully, this should be resolved by tomorrow once we figure out which
pkgs need to be updated in the file. You can always remove the 'include'
line, but then it will sync the entire CentOS repo,
which might take hours.
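
For illustration, the include/exclude knobs are ordinary yum repo options
in the suite's reposync config; a stanza might look roughly like this (the
repo id, URL and package list are made up, not copied from the real file):

  [centos-base-el7]
  name=CentOS 7 base
  baseurl=http://mirror.centos.org/centos/7/os/x86_64/
  enabled=1
  includepkgs=krb5-libs gssproxy nfs-utils rpcbind lvm2
  exclude=*-debuginfo

Dropping the includepkgs line is what makes reposync mirror the entire
repo.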

There might be an option to skip 'reposync' but I'm not sure if it's
supported yet; adding Gal to be sure.

[1] http://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/1145/console


On Wed, Sep 13, 2017 at 5:57 PM, Marc Young <3vilpeng...@gmail.com> wrote:

> While rerunning some local tests, I encountered a new error:
>
> + lago ovirt deploy
> @ Deploy oVirt environment:
>   # Deploy environment:
> * [Thread-2] Deploy VM lago-vagrant-engine:
> * [Thread-3] Deploy VM lago-vagrant-host-0:
> * [Thread-3] Deploy VM lago-vagrant-host-0: Success (in 0:00:45)
> STDERR
> + MAIN_NFS_DEV=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_2
> + ISCSI_DEV=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_3
> + NUM_LUNS=5
> + EL7='release 7\.[0-9]'
> + main
> + install_deps
> + systemctl disable --now kdump.service
> + yum install -y --downloaddir=/dev/shm nfs-utils rpcbind lvm2 targetcli
> sg3_utils iscsi-initiator-utils lsscsi policycoreutils-python
> Error: Package: gssproxy-0.7.0-4.el7.x86_64 (alocalsync)
>Requires: krb5-libs >= 1.15
>Installed: krb5-libs-1.14.1-27.el7_3.x86_64 (installed)
>krb5-libs = 1.14.1-27.el7_3
>Available: krb5-libs-1.14.1-26.el7.i686 (base)
>krb5-libs = 1.14.1-26.el7
>
>   - STDERR
> + MAIN_NFS_DEV=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_2
> + ISCSI_DEV=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_3
> + NUM_LUNS=5
> + EL7='release 7\.[0-9]'
> + main
> + install_deps
> + systemctl disable --now kdump.service
> + yum install -y --downloaddir=/dev/shm nfs-utils rpcbind lvm2 targetcli
> sg3_utils iscsi-initiator-utils lsscsi policycoreutils-python
> Error: Package: gssproxy-0.7.0-4.el7.x86_64 (alocalsync)
>Requires: krb5-libs >= 1.15
>Installed: krb5-libs-1.14.1-27.el7_3.x86_64 (installed)
>krb5-libs = 1.14.1-27.el7_3
>Available: krb5-libs-1.14.1-26.el7.i686 (base)
>krb5-libs = 1.14.1-26.el7
>
> * [Thread-2] Deploy VM lago-vagrant-engine: ERROR (in 0:01:05)
> Error while running thread
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 58, in
> _ret_via_queue
> queue.put({'return': func()})
>   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1610, in
> _deploy_host
> (script, ret, host.name(), ),
> RuntimeError: /home/myoung/repos/github/ovirt-system-tests/deployment-vagrant/default/scripts/_home_myoung_repos_github_ovirt-system-tests_vagrant_.._common_deploy-scripts_setup_storage_unified_el7.sh
> failed with status 1 on lago-vagrant-engine
> Error while running thread
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 58, in
> _ret_via_queue
> queue.put({'return': func()})
>   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1610, in
> _deploy_host
> (script, ret, host.name(), ),
> RuntimeError: /home/myoung/repos/github/ovirt-system-tests/deployment-vagrant/default/scripts/_home_myoung_repos_github_ovirt-system-tests_vagrant_.._common_deploy-scripts_setup_storage_unified_el7.sh
> failed with status 1 on lago-vagrant-engine
>   # Deploy environment: ERROR (in 0:01:05)
> @ Deploy oVirt environment: ERROR (in 0:01:05)
>
> It's the same error as here: http://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_master_check-patch-el7-x86_64/1683/console
>
>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>



-- 

Eyal edri


ASSOCIATE MANAGER

RHV DevOps

EMEA VIRTUALIZATION R&D


Red Hat EMEA
TRIED. TESTED. TRUSTED.
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] ovirt-system-tests started failing locally

2017-09-13 Thread Marc Young
While rerunning some local tests, I encountered a new error:

+ lago ovirt deploy
@ Deploy oVirt environment:
  # Deploy environment:
* [Thread-2] Deploy VM lago-vagrant-engine:
* [Thread-3] Deploy VM lago-vagrant-host-0:
* [Thread-3] Deploy VM lago-vagrant-host-0: Success (in 0:00:45)
STDERR
+ MAIN_NFS_DEV=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_2
+ ISCSI_DEV=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_3
+ NUM_LUNS=5
+ EL7='release 7\.[0-9]'
+ main
+ install_deps
+ systemctl disable --now kdump.service
+ yum install -y --downloaddir=/dev/shm nfs-utils rpcbind lvm2 targetcli
sg3_utils iscsi-initiator-utils lsscsi policycoreutils-python
Error: Package: gssproxy-0.7.0-4.el7.x86_64 (alocalsync)
   Requires: krb5-libs >= 1.15
   Installed: krb5-libs-1.14.1-27.el7_3.x86_64 (installed)
   krb5-libs = 1.14.1-27.el7_3
   Available: krb5-libs-1.14.1-26.el7.i686 (base)
   krb5-libs = 1.14.1-26.el7

  - STDERR
+ MAIN_NFS_DEV=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_2
+ ISCSI_DEV=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_3
+ NUM_LUNS=5
+ EL7='release 7\.[0-9]'
+ main
+ install_deps
+ systemctl disable --now kdump.service
+ yum install -y --downloaddir=/dev/shm nfs-utils rpcbind lvm2 targetcli
sg3_utils iscsi-initiator-utils lsscsi policycoreutils-python
Error: Package: gssproxy-0.7.0-4.el7.x86_64 (alocalsync)
   Requires: krb5-libs >= 1.15
   Installed: krb5-libs-1.14.1-27.el7_3.x86_64 (installed)
   krb5-libs = 1.14.1-27.el7_3
   Available: krb5-libs-1.14.1-26.el7.i686 (base)
   krb5-libs = 1.14.1-26.el7

* [Thread-2] Deploy VM lago-vagrant-engine: ERROR (in 0:01:05)
Error while running thread
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/lago/utils.py", line 58, in
_ret_via_queue
queue.put({'return': func()})
  File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1610, in
_deploy_host
(script, ret, host.name(), ),
RuntimeError:
/home/myoung/repos/github/ovirt-system-tests/deployment-vagrant/default/scripts/_home_myoung_repos_github_ovirt-system-tests_vagrant_.._common_deploy-scripts_setup_storage_unified_el7.sh
failed with status 1 on lago-vagrant-engine
Error while running thread
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/lago/utils.py", line 58, in
_ret_via_queue
queue.put({'return': func()})
  File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1610, in
_deploy_host
(script, ret, host.name(), ),
RuntimeError:
/home/myoung/repos/github/ovirt-system-tests/deployment-vagrant/default/scripts/_home_myoung_repos_github_ovirt-system-tests_vagrant_.._common_deploy-scripts_setup_storage_unified_el7.sh
failed with status 1 on lago-vagrant-engine
  # Deploy environment: ERROR (in 0:01:05)
@ Deploy oVirt environment: ERROR (in 0:01:05)

It's the same error as here: http://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_master_check-patch-el7-x86_64/1683/console
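
One way to see the mismatch from the failing VM, assuming yum-utils is
installed (these commands are a sketch, not part of the original report):

  repoquery --requires gssproxy | grep krb5
  yum deplist gssproxy | grep krb5-libs

Both should show the krb5-libs >= 1.15 requirement that the locally
synced 7.3 packages cannot satisfy.
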
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] ovirt-system-tests failure for basic-suite-4.1

2017-07-16 Thread Marc Young
That got it. The issue stems from this originally being an oVirt box and
repurposing it into a Jenkins box.
I also found an issue with /etc/libvirt/qemu-sanlock.conf but these are all
artifact issues. Thanks!

On Sun, Jul 16, 2017 at 3:03 AM, Roy Golan  wrote:

> AFAIR lago expects libvirt to run with no sasl (I guess this machine was
> vdsm before that?)
>
> I think if you just comment out everything in /etc/libvirt/libvirtd.conf
> and restart libvirtd service it should work for you.
>
> On Sun, Jul 16, 2017 at 5:18 AM Marc Young <3vilpeng...@gmail.com> wrote:
>
>> Sorry it took a while to get back.
>>
>> libvirtd is running, but with a log line that may or may not be related:
>>
>> [myoung@server ovirt-system-tests]$ sudo service libvirtd status
>> Redirecting to /bin/systemctl status  libvirtd.service
>> ● libvirtd.service - Virtualization daemon
>>Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled;
>> vendor preset: enabled)
>>Active: active (running) since Sat 2017-07-15 21:14:05 CDT; 2min 24s
>> ago
>>  Docs: man:libvirtd(8)
>>http://libvirt.org
>>  Main PID: 3139 (libvirtd)
>>CGroup: /system.slice/libvirtd.service
>>├─2757 /sbin/dnsmasq 
>> --conf-file=/var/lib/libvirt/dnsmasq/default.conf
>> --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
>>├─2758 /sbin/dnsmasq 
>> --conf-file=/var/lib/libvirt/dnsmasq/default.conf
>> --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
>>└─3139 /usr/sbin/libvirtd --listen
>>
>> Jul 15 21:14:04 server.blindrage.local systemd[1]: Starting
>> Virtualization daemon...
>> Jul 15 21:14:05 server.blindrage.local systemd[1]: Started Virtualization
>> daemon.
>> Jul 15 21:14:05 server.blindrage.local dnsmasq[2757]: read /etc/hosts - 4
>> addresses
>> Jul 15 21:14:05 server.blindrage.local dnsmasq[2757]: read
>> /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
>> Jul 15 21:14:05 server.blindrage.local dnsmasq-dhcp[2757]: read
>> /var/lib/libvirt/dnsmasq/default.hostsfile
>> Jul 15 21:15:43 server.blindrage.local libvirtd[3139]: libvirt version:
>> 2.0.0, package: 10.el7_3.9 (CentOS BuildSystem ,
>> 2017-05-25-20:52:28, c1bm.rdu2.centos.org)
>> Jul 15 21:15:43 server.blindrage.local libvirtd[3139]: hostname:
>> server.blindrage.local
>> Jul 15 21:15:43 server.blindrage.local libvirtd[3139]: End of file while
>> reading data: Input/output error
>>
>>
>> system tests still fail:
>>
>>   # Copying any deploy scripts: Success (in 0:00:00)
>> libvirt: XML-RPC error : authentication failed: Failed to start SASL
>> negotiation: -7 (SASL(-7): invalid parameter supplied: Unexpectedly missing
>> a prompt result)
>>   # Missing current link, setting it to default
>>
>> @ Initialize and populate prefix: ERROR (in 0:00:01)
>>
>> Error occured, aborting
>> Traceback (most recent call last):
>>   File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 954, in main
>> cli_plugins[args.verb].do_run(args)
>>   File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line 184,
>> in do_run
>> self._do_run(**vars(args))
>>   File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 190, in
>> do_init
>> do_build=not skip_build,
>>   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1089, in
>> virt_conf_from_stream
>> do_build=do_build
>>   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1214, in
>> virt_conf
>> net_specs=conf['nets'],
>>   File "/usr/lib/python2.7/site-packages/lago/virt.py", line 90, in
>> __init__
>> libvirt_url=libvirt_url,
>>   File "/usr/lib/python2.7/site-packages/lago/providers/libvirt/utils.py",
>> line 87, in get_libvirt_connection
>> LIBVIRT_CONNECTIONS[name] = libvirt.openAuth(libvirt_url, auth)
>>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 105, in
>> openAuth
>> if ret is None:raise libvirtError('virConnectOpenAuth() failed')
>> libvirtError: authentication failed: Failed to start SASL negotiation: -7
>> (SASL(-7): invalid parameter supplied: Unexpectedly missing a prompt result)
>>
>>
>> On Fri, Jul 14, 2017 at 6:43 AM, Marc Young <3vilpeng...@gmail.com>
>> wrote:
>>
>>> I did set up libvirtd per docs but I'll double check to make sure human
>>> error isn't the problem
>>>
>>> On Fri, Jul 14, 2017, 6:33 AM Eyal Edri  wrote:
>>>
 Hey Marc,

 Thanks for trying out OST!
 From a first look, it looks like libvirtd isn't running or configured
 properly.
 oVirt system tests rely on Lago to work; have you gone through the
 install steps to make sure Lago is installed properly?


 You can check out recent documentation at [1] or [2]. Once you have
 Lago up and running, you can run OST.

 [1] http://lago.readthedocs.io/en/stable/
 [2] http://ovirt-system-tests.readthedocs.io/en/latest/docs/
 general/installation.html


 On Fri, Jul 14, 2017 at 1:56 PM, Marc 

Re: [ovirt-devel] ovirt-system-tests failure for basic-suite-4.1

2017-07-16 Thread Roy Golan
AFAIR lago expects libvirt to run with no sasl (I guess this machine was
vdsm before that?)

I think if you just comment out everything in /etc/libvirt/libvirtd.conf
and restart libvirtd service it should work for you.
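
A minimal sketch of that, assuming a stock libvirtd.conf (back it up
first):

  cp /etc/libvirt/libvirtd.conf /etc/libvirt/libvirtd.conf.bak
  sed -i 's/^[^#]/#&/' /etc/libvirt/libvirtd.conf  # comment out every active setting
  systemctl restart libvirtd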

On Sun, Jul 16, 2017 at 5:18 AM Marc Young <3vilpeng...@gmail.com> wrote:

> Sorry it took a while to get back.
>
> libvirtd is running, but with a log line that may or may not be related:
>
> [myoung@server ovirt-system-tests]$ sudo service libvirtd status
> Redirecting to /bin/systemctl status  libvirtd.service
> ● libvirtd.service - Virtualization daemon
>Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled;
> vendor preset: enabled)
>Active: active (running) since Sat 2017-07-15 21:14:05 CDT; 2min 24s ago
>  Docs: man:libvirtd(8)
>http://libvirt.org
>  Main PID: 3139 (libvirtd)
>CGroup: /system.slice/libvirtd.service
>├─2757 /sbin/dnsmasq
> --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro
> --dhcp-script=/usr/libexec/libvirt_leaseshelper
>├─2758 /sbin/dnsmasq
> --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro
> --dhcp-script=/usr/libexec/libvirt_leaseshelper
>└─3139 /usr/sbin/libvirtd --listen
>
> Jul 15 21:14:04 server.blindrage.local systemd[1]: Starting Virtualization
> daemon...
> Jul 15 21:14:05 server.blindrage.local systemd[1]: Started Virtualization
> daemon.
> Jul 15 21:14:05 server.blindrage.local dnsmasq[2757]: read /etc/hosts - 4
> addresses
> Jul 15 21:14:05 server.blindrage.local dnsmasq[2757]: read
> /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
> Jul 15 21:14:05 server.blindrage.local dnsmasq-dhcp[2757]: read
> /var/lib/libvirt/dnsmasq/default.hostsfile
> Jul 15 21:15:43 server.blindrage.local libvirtd[3139]: libvirt version:
> 2.0.0, package: 10.el7_3.9 (CentOS BuildSystem ,
> 2017-05-25-20:52:28, c1bm.rdu2.centos.org)
> Jul 15 21:15:43 server.blindrage.local libvirtd[3139]: hostname:
> server.blindrage.local
> Jul 15 21:15:43 server.blindrage.local libvirtd[3139]: End of file while
> reading data: Input/output error
>
>
> system tests still fail:
>
>   # Copying any deploy scripts: Success (in 0:00:00)
> libvirt: XML-RPC error : authentication failed: Failed to start SASL
> negotiation: -7 (SASL(-7): invalid parameter supplied: Unexpectedly missing
> a prompt result)
>   # Missing current link, setting it to default
>
> @ Initialize and populate prefix: ERROR (in 0:00:01)
>
> Error occured, aborting
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 954, in main
> cli_plugins[args.verb].do_run(args)
>   File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line 184,
> in do_run
> self._do_run(**vars(args))
>   File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 190, in do_init
> do_build=not skip_build,
>   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1089, in
> virt_conf_from_stream
> do_build=do_build
>   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1214, in
> virt_conf
> net_specs=conf['nets'],
>   File "/usr/lib/python2.7/site-packages/lago/virt.py", line 90, in
> __init__
> libvirt_url=libvirt_url,
>   File "/usr/lib/python2.7/site-packages/lago/providers/libvirt/utils.py",
> line 87, in get_libvirt_connection
> LIBVIRT_CONNECTIONS[name] = libvirt.openAuth(libvirt_url, auth)
>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 105, in
> openAuth
> if ret is None:raise libvirtError('virConnectOpenAuth() failed')
> libvirtError: authentication failed: Failed to start SASL negotiation: -7
> (SASL(-7): invalid parameter supplied: Unexpectedly missing a prompt result)
>
>
> On Fri, Jul 14, 2017 at 6:43 AM, Marc Young <3vilpeng...@gmail.com> wrote:
>
>> I did set up libvirtd per docs but I'll double check to make sure human
>> error isn't the problem
>>
>> On Fri, Jul 14, 2017, 6:33 AM Eyal Edri  wrote:
>>
>>> Hey Marc,
>>>
>>> Thanks for trying out OST!
>>> From a first look, it looks like libvirtd isn't running or configured
>>> properly.
>>> oVirt system tests rely on Lago to work; have you gone through the
>>> install steps to make sure Lago is installed properly?
>>>
>>>
>>> You can check out recent documentation at [1] or [2]. Once you have Lago
>>> up and running, you can run OST.
>>>
>>> [1] http://lago.readthedocs.io/en/stable/
>>> [2]
>>> http://ovirt-system-tests.readthedocs.io/en/latest/docs/general/installation.html
>>>
>>>
>>> On Fri, Jul 14, 2017 at 1:56 PM, Marc Young <3vilpeng...@gmail.com>
>>> wrote:
>>>
 I'm following the introduction docs[1] to get familiar with the
 ovirt-system-tests and encountered a failure on first run. I dug around but
 I'm not familiar enough to know if it's something I set up incorrectly, a
 real failure, etc. Git ref is a3b1753

 Shortened error below [2]
 Full output far below [3]

 [1]
 

Re: [ovirt-devel] ovirt-system-tests failure for basic-suite-4.1

2017-07-15 Thread Marc Young
Sorry it took a while to get back.

libvirtd is running, but with a log line that may or may not be related:

[myoung@server ovirt-system-tests]$ sudo service libvirtd status
Redirecting to /bin/systemctl status  libvirtd.service
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled;
vendor preset: enabled)
   Active: active (running) since Sat 2017-07-15 21:14:05 CDT; 2min 24s ago
 Docs: man:libvirtd(8)
   http://libvirt.org
 Main PID: 3139 (libvirtd)
   CGroup: /system.slice/libvirtd.service
   ├─2757 /sbin/dnsmasq
--conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro
--dhcp-script=/usr/libexec/libvirt_leaseshelper
   ├─2758 /sbin/dnsmasq
--conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro
--dhcp-script=/usr/libexec/libvirt_leaseshelper
   └─3139 /usr/sbin/libvirtd --listen

Jul 15 21:14:04 server.blindrage.local systemd[1]: Starting Virtualization
daemon...
Jul 15 21:14:05 server.blindrage.local systemd[1]: Started Virtualization
daemon.
Jul 15 21:14:05 server.blindrage.local dnsmasq[2757]: read /etc/hosts - 4
addresses
Jul 15 21:14:05 server.blindrage.local dnsmasq[2757]: read
/var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Jul 15 21:14:05 server.blindrage.local dnsmasq-dhcp[2757]: read
/var/lib/libvirt/dnsmasq/default.hostsfile
Jul 15 21:15:43 server.blindrage.local libvirtd[3139]: libvirt version:
2.0.0, package: 10.el7_3.9 (CentOS BuildSystem ,
2017-05-25-20:52:28, c1bm.rdu2.centos.org)
Jul 15 21:15:43 server.blindrage.local libvirtd[3139]: hostname:
server.blindrage.local
Jul 15 21:15:43 server.blindrage.local libvirtd[3139]: End of file while
reading data: Input/output error


system tests still fail:

  # Copying any deploy scripts: Success (in 0:00:00)
libvirt: XML-RPC error : authentication failed: Failed to start SASL
negotiation: -7 (SASL(-7): invalid parameter supplied: Unexpectedly missing
a prompt result)
  # Missing current link, setting it to default
@ Initialize and populate prefix: ERROR (in 0:00:01)
Error occured, aborting
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 954, in main
cli_plugins[args.verb].do_run(args)
  File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line 184, in
do_run
self._do_run(**vars(args))
  File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 190, in do_init
do_build=not skip_build,
  File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1089, in
virt_conf_from_stream
do_build=do_build
  File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1214, in
virt_conf
net_specs=conf['nets'],
  File "/usr/lib/python2.7/site-packages/lago/virt.py", line 90, in __init__
libvirt_url=libvirt_url,
  File "/usr/lib/python2.7/site-packages/lago/providers/libvirt/utils.py",
line 87, in get_libvirt_connection
LIBVIRT_CONNECTIONS[name] = libvirt.openAuth(libvirt_url, auth)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 105, in
openAuth
if ret is None:raise libvirtError('virConnectOpenAuth() failed')
libvirtError: authentication failed: Failed to start SASL negotiation: -7
(SASL(-7): invalid parameter supplied: Unexpectedly missing a prompt result)
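
To confirm that SASL is what libvirtd is enforcing here, something like
this can help (paths assume stock libvirt packaging):

  grep -E '^(auth_unix_rw|auth_tcp|listen_tcp)' /etc/libvirt/libvirtd.conf
  sasldblistusers2 -f /etc/libvirt/passwd.db  # lists the SASL users libvirt knows about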


On Fri, Jul 14, 2017 at 6:43 AM, Marc Young <3vilpeng...@gmail.com> wrote:

> I did set up libvirtd per docs but I'll double check to make sure human
> error isn't the problem
>
> On Fri, Jul 14, 2017, 6:33 AM Eyal Edri  wrote:
>
>> Hey Marc,
>>
>> Thanks for trying out OST!
>> From a first look, it looks like libvirtd isn't running or configured
>> properly.
>> oVirt system tests rely on Lago to work; have you gone through the
>> install steps to make sure Lago is installed properly?
>>
>>
>> You can check out recent documentation at [1] or [2]. Once you have Lago
>> up and running, you can run OST.
>>
>> [1] http://lago.readthedocs.io/en/stable/
>> [2] http://ovirt-system-tests.readthedocs.io/en/latest/docs/
>> general/installation.html
>>
>>
>> On Fri, Jul 14, 2017 at 1:56 PM, Marc Young <3vilpeng...@gmail.com>
>> wrote:
>>
>>> I'm following the introduction docs[1] to get familiar with the
>>> ovirt-system-tests and encountered a failure on first run. I dug around but
>>> I'm not familiar enough to know if it's something I set up incorrectly, a
>>> real failure, etc. Git ref is a3b1753
>>>
>>> Shortened error below [2]
>>> Full output far below [3]
>>>
>>> [1] http://ovirt-system-tests.readthedocs.io/en/latest/docs/
>>> general/running_tests.html
>>>
>>>
>>> [2]
>>>
>>> libvirt: XML-RPC error : authentication failed: Failed to start SASL
>>> negotiation: -7 (SASL(-7): invalid parameter supplied: Unexpectedly missing
>>> a prompt result)
>>>   # Missing current link, setting it to default
>>> @ Initialize and populate prefix: ERROR (in 0:02:12)
>>> Error occured, aborting
>>> Traceback (most recent call last):
>>>   File 

Re: [ovirt-devel] ovirt-system-tests failure for basic-suite-4.1

2017-07-14 Thread Marc Young
I did set up libvirtd per docs but I'll double check to make sure human
error isn't the problem

On Fri, Jul 14, 2017, 6:33 AM Eyal Edri  wrote:

> Hey Marc,
>
> Thanks for trying out OST!
> From a first look, it looks like libvirtd isn't running or configured
> properly.
> oVirt system tests rely on Lago to work; have you gone through the install
> steps to make sure Lago is installed properly?
>
>
> You can check out recent documentation at [1] or [2]. Once you have Lago
> up and running, you can run OST.
>
> [1] http://lago.readthedocs.io/en/stable/
> [2]
> http://ovirt-system-tests.readthedocs.io/en/latest/docs/general/installation.html
>
>
> On Fri, Jul 14, 2017 at 1:56 PM, Marc Young <3vilpeng...@gmail.com> wrote:
>
>> I'm following the introduction docs[1] to get familiar with the
>> ovirt-system-tests and encountered a failure on first run. I dug around but
>> I'm not familiar enough to know if it's something I set up incorrectly, a
>> real failure, etc. Git ref is a3b1753
>>
>> Shortened error below [2]
>> Full output far below [3]
>>
>> [1]
>> http://ovirt-system-tests.readthedocs.io/en/latest/docs/general/running_tests.html
>>
>>
>> [2]
>>
>> libvirt: XML-RPC error : authentication failed: Failed to start SASL
>> negotiation: -7 (SASL(-7): invalid parameter supplied: Unexpectedly missing
>> a prompt result)
>>   # Missing current link, setting it to default
>> @ Initialize and populate prefix: ERROR (in 0:02:12)
>> Error occured, aborting
>> Traceback (most recent call last):
>>   File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 954, in main
>> cli_plugins[args.verb].do_run(args)
>>   File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line 184,
>> in do_run
>> self._do_run(**vars(args))
>>   File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 190, in
>> do_init
>> do_build=not skip_build,
>>   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1089, in
>> virt_conf_from_stream
>> do_build=do_build
>>   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1214, in
>> virt_conf
>> net_specs=conf['nets'],
>>   File "/usr/lib/python2.7/site-packages/lago/virt.py", line 90, in
>> __init__
>> libvirt_url=libvirt_url,
>>   File
>> "/usr/lib/python2.7/site-packages/lago/providers/libvirt/utils.py", line
>> 87, in get_libvirt_connection
>> LIBVIRT_CONNECTIONS[name] = libvirt.openAuth(libvirt_url, auth)
>>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 105, in
>> openAuth
>> if ret is None:raise libvirtError('virConnectOpenAuth() failed')
>> libvirtError: authentication failed: Failed to start SASL negotiation: -7
>> (SASL(-7): invalid parameter supplied: Unexpectedly missing a prompt result)
>>
>>
>> [3]
>>
>> [myoung@server ovirt-system-tests]$ ./run_suite.sh basic-suite-4.1
>> + CLI=lago
>> + DO_CLEANUP=false
>> + RECOMMENDED_RAM_IN_MB=8196
>> + EXTRA_SOURCES=()
>> + RPMS_TO_INSTALL=()
>> ++ getopt -o ho:e:n:b:cs:r:l:i --long
>> help,output:,engine:,node:,boot-iso:,cleanup,images --long
>> extra-rpm-source,reposync-config:,local-rpms: -n run_suite.sh --
>> basic-suite-4.1
>>  + options=' -- '\''basic-suite-4.1'\'''
>> + [[ 0 != \0 ]]
>> + eval set -- ' -- '\''basic-suite-4.1'\'''
>> ++ set -- -- basic-suite-4.1
>> + true
>> + case $1 in
>> + shift
>> + break
>> + [[ -z basic-suite-4.1 ]]
>> + export OST_REPO_ROOT=/home/myoung/ovirt-system-tests
>> + OST_REPO_ROOT=/home/myoung/ovirt-system-tests
>> ++ realpath basic-suite-4.1
>> + export SUITE=/home/myoung/ovirt-system-tests/basic-suite-4.1
>> + SUITE=/home/myoung/ovirt-system-tests/basic-suite-4.1
>> + '[' -z '' ']'
>> + export PREFIX=/home/myoung/ovirt-system-tests/deployment-basic-suite-4.1
>> + PREFIX=/home/myoung/ovirt-system-tests/deployment-basic-suite-4.1
>> + false
>> + [[ -d /home/myoung/ovirt-system-tests/basic-suite-4.1 ]]
>> + echo '# lago version'
>> # lago version
>>
>>
>> + lago --version
>>
>>
>>   lago 0.40.0
>> + echo '#'
>> #
>> + check_ram 8196
>> + local recommended=8196
>> ++ free -m
>> ++ grep Mem
>>
>>
>>++ awk '{print $2}'
>> + local cur_ram=7812
>> + [[ 7812 -lt 8196 ]]
>> + echo 'It'\''s recommended to have at least 8196MB of RAM' 'installed on
>> the system to run the system tests, if you find' 'issues while running
>> them, consider upgrading your system.' '(only detected 7812MB installed)'
>> It's recommended to have at least 8196MB of RAM installed on the system
>> to run the system tests, if you find issues while running them, consider
>> upgrading your system. (only detected 7812MB installed)
>> + echo 'Running suite found in
>> /home/myoung/ovirt-system-tests/basic-suite-4.1'
>> Running suite found in /home/myoung/ovirt-system-tests/basic-suite-4.1
>> + echo 'Environment will be deployed at
>> /home/myoung/ovirt-system-tests/deployment-basic-suite-4.1'
>> Environment will be deployed at
>> 

Re: [ovirt-devel] ovirt-system-tests failure for basic-suite-4.1

2017-07-14 Thread Eyal Edri
Hey Marc,

Thanks for trying out OST!
From a first look, it looks like libvirtd isn't running or configured
properly.
oVirt system tests rely on Lago to work; have you gone through the install
steps to make sure Lago is installed properly?


You can check out recent documentation at [1] or [2]. Once you have Lago up
and running, you can run OST.

[1] http://lago.readthedocs.io/en/stable/
[2]
http://ovirt-system-tests.readthedocs.io/en/latest/docs/general/installation.html
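
A couple of quick sanity checks before running OST (just a sketch, not
taken from the docs above):

  lago --version
  virsh -c qemu:///system list --all  # should connect without prompting for credentials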


On Fri, Jul 14, 2017 at 1:56 PM, Marc Young <3vilpeng...@gmail.com> wrote:

> I'm following the introduction docs[1] to get familiar with the
> ovirt-system-tests and encountered a failure on first run. I dug around but
> I'm not familiar enough to know if it's something I set up incorrectly, a
> real failure, etc. Git ref is a3b1753
>
> Shortened error below [2]
> Full output far below [3]
>
> [1] http://ovirt-system-tests.readthedocs.io/en/latest/docs/
> general/running_tests.html
>
>
> [2]
>
> libvirt: XML-RPC error : authentication failed: Failed to start SASL
> negotiation: -7 (SASL(-7): invalid parameter supplied: Unexpectedly missing
> a prompt result)
>   # Missing current link, setting it to default
> @ Initialize and populate prefix: ERROR (in 0:02:12)
> Error occured, aborting
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 954, in main
> cli_plugins[args.verb].do_run(args)
>   File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line 184,
> in do_run
> self._do_run(**vars(args))
>   File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 190, in
> do_init
> do_build=not skip_build,
>   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1089, in
> virt_conf_from_stream
> do_build=do_build
>   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1214, in
> virt_conf
> net_specs=conf['nets'],
>   File "/usr/lib/python2.7/site-packages/lago/virt.py", line 90, in
> __init__
> libvirt_url=libvirt_url,
>   File "/usr/lib/python2.7/site-packages/lago/providers/libvirt/utils.py",
> line 87, in get_libvirt_connection
> LIBVIRT_CONNECTIONS[name] = libvirt.openAuth(libvirt_url, auth)
>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 105, in
> openAuth
> if ret is None:raise libvirtError('virConnectOpenAuth() failed')
> libvirtError: authentication failed: Failed to start SASL negotiation: -7
> (SASL(-7): invalid parameter supplied: Unexpectedly missing a prompt result)
>
>
> [3]
>
> [myoung@server ovirt-system-tests]$ ./run_suite.sh basic-suite-4.1
> + CLI=lago
> + DO_CLEANUP=false
> + RECOMMENDED_RAM_IN_MB=8196
> + EXTRA_SOURCES=()
> + RPMS_TO_INSTALL=()
> ++ getopt -o ho:e:n:b:cs:r:l:i --long 
> help,output:,engine:,node:,boot-iso:,cleanup,images
> --long extra-rpm-source,reposync-config:,local-rpms: -n run_suite.sh --
> basic-suite-4.1
>  + options=' -- '\''basic-suite-4.1'\'''
> + [[ 0 != \0 ]]
> + eval set -- ' -- '\''basic-suite-4.1'\'''
> ++ set -- -- basic-suite-4.1
> + true
> + case $1 in
> + shift
> + break
> + [[ -z basic-suite-4.1 ]]
> + export OST_REPO_ROOT=/home/myoung/ovirt-system-tests
> + OST_REPO_ROOT=/home/myoung/ovirt-system-tests
> ++ realpath basic-suite-4.1
> + export SUITE=/home/myoung/ovirt-system-tests/basic-suite-4.1
> + SUITE=/home/myoung/ovirt-system-tests/basic-suite-4.1
> + '[' -z '' ']'
> + export PREFIX=/home/myoung/ovirt-system-tests/deployment-basic-suite-4.1
> + PREFIX=/home/myoung/ovirt-system-tests/deployment-basic-suite-4.1
> + false
> + [[ -d /home/myoung/ovirt-system-tests/basic-suite-4.1 ]]
> + echo '# lago version'
> # lago version
>
>
>   + lago --version
>
>
> lago 0.40.0
> + echo '#'
> #
> + check_ram 8196
> + local recommended=8196
> ++ free -m
> ++ grep Mem
>
>
>++ awk '{print $2}'
> + local cur_ram=7812
> + [[ 7812 -lt 8196 ]]
> + echo 'It'\''s recommended to have at least 8196MB of RAM' 'installed on
> the system to run the system tests, if you find' 'issues while running
> them, consider upgrading your system.' '(only detected 7812MB installed)'
> It's recommended to have at least 8196MB of RAM installed on the system to
> run the system tests, if you find issues while running them, consider
> upgrading your system. (only detected 7812MB installed)
> + echo 'Running suite found in /home/myoung/ovirt-system-
> tests/basic-suite-4.1'
> Running suite found in /home/myoung/ovirt-system-tests/basic-suite-4.1
> + echo 'Environment will be deployed at /home/myoung/ovirt-system-
> tests/deployment-basic-suite-4.1'
> Environment will be deployed at /home/myoung/ovirt-system-
> tests/deployment-basic-suite-4.1
> + rm -rf /home/myoung/ovirt-system-tests/deployment-basic-suite-4.1
> + export PYTHONPATH=:/home/myoung/ovirt-system-tests/basic-suite-4.1
> + PYTHONPATH=:/home/myoung/ovirt-system-tests/basic-suite-4.1
> + source 

[ovirt-devel] ovirt-system-tests failure for basic-suite-4.1

2017-07-14 Thread Marc Young
I'm following the introduction docs[1] to get familiar with the
ovirt-system-tests and encountered a failure on first run. I dug around but
I'm not familiar enough to know if it's something I set up incorrectly, a
real failure, etc. Git ref is a3b1753

Shortened error below [2]
Full output far below [3]

[1]
http://ovirt-system-tests.readthedocs.io/en/latest/docs/general/running_tests.html


[2]

libvirt: XML-RPC error : authentication failed: Failed to start SASL
negotiation: -7 (SASL(-7): invalid parameter supplied: Unexpectedly missing
a prompt result)
  # Missing current link, setting it to default
@ Initialize and populate prefix: ERROR (in 0:02:12)
Error occured, aborting
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 954, in main
cli_plugins[args.verb].do_run(args)
  File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line 184, in
do_run
self._do_run(**vars(args))
  File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 190, in do_init
do_build=not skip_build,
  File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1089, in
virt_conf_from_stream
do_build=do_build
  File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1214, in
virt_conf
net_specs=conf['nets'],
  File "/usr/lib/python2.7/site-packages/lago/virt.py", line 90, in __init__
libvirt_url=libvirt_url,
  File "/usr/lib/python2.7/site-packages/lago/providers/libvirt/utils.py",
line 87, in get_libvirt_connection
LIBVIRT_CONNECTIONS[name] = libvirt.openAuth(libvirt_url, auth)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 105, in
openAuth
if ret is None:raise libvirtError('virConnectOpenAuth() failed')
libvirtError: authentication failed: Failed to start SASL negotiation: -7
(SASL(-7): invalid parameter supplied: Unexpectedly missing a prompt result)


[3]

[myoung@server ovirt-system-tests]$ ./run_suite.sh basic-suite-4.1
+ CLI=lago
+ DO_CLEANUP=false
+ RECOMMENDED_RAM_IN_MB=8196
+ EXTRA_SOURCES=()
+ RPMS_TO_INSTALL=()
++ getopt -o ho:e:n:b:cs:r:l:i --long
help,output:,engine:,node:,boot-iso:,cleanup,images --long
extra-rpm-source,reposync-config:,local-rpms: -n run_suite.sh --
basic-suite-4.1
 + options=' -- '\''basic-suite-4.1'\'''
+ [[ 0 != \0 ]]
+ eval set -- ' -- '\''basic-suite-4.1'\'''
++ set -- -- basic-suite-4.1
+ true
+ case $1 in
+ shift
+ break
+ [[ -z basic-suite-4.1 ]]
+ export OST_REPO_ROOT=/home/myoung/ovirt-system-tests
+ OST_REPO_ROOT=/home/myoung/ovirt-system-tests
++ realpath basic-suite-4.1
+ export SUITE=/home/myoung/ovirt-system-tests/basic-suite-4.1
+ SUITE=/home/myoung/ovirt-system-tests/basic-suite-4.1
+ '[' -z '' ']'
+ export PREFIX=/home/myoung/ovirt-system-tests/deployment-basic-suite-4.1
+ PREFIX=/home/myoung/ovirt-system-tests/deployment-basic-suite-4.1
+ false
+ [[ -d /home/myoung/ovirt-system-tests/basic-suite-4.1 ]]
+ echo '# lago version'
# lago version


  + lago --version


lago 0.40.0
+ echo '#'
#
+ check_ram 8196
+ local recommended=8196
++ free -m
++ grep Mem


 ++ awk '{print $2}'
+ local cur_ram=7812
+ [[ 7812 -lt 8196 ]]
+ echo 'It'\''s recommended to have at least 8196MB of RAM' 'installed on
the system to run the system tests, if you find' 'issues while running
them, consider upgrading your system.' '(only detected 7812MB installed)'
It's recommended to have at least 8196MB of RAM installed on the system to
run the system tests, if you find issues while running them, consider
upgrading your system. (only detected 7812MB installed)
+ echo 'Running suite found in
/home/myoung/ovirt-system-tests/basic-suite-4.1'
Running suite found in /home/myoung/ovirt-system-tests/basic-suite-4.1
+ echo 'Environment will be deployed at
/home/myoung/ovirt-system-tests/deployment-basic-suite-4.1'
Environment will be deployed at
/home/myoung/ovirt-system-tests/deployment-basic-suite-4.1
+ rm -rf /home/myoung/ovirt-system-tests/deployment-basic-suite-4.1
+ export PYTHONPATH=:/home/myoung/ovirt-system-tests/basic-suite-4.1
+ PYTHONPATH=:/home/myoung/ovirt-system-tests/basic-suite-4.1
+ source /home/myoung/ovirt-system-tests/basic-suite-4.1/control.sh
+ prep_suite '' '' ''
+ local suite_name=basic-suite-4.1
+ suite_name=basic-suite-4-1
+ local engine hosts
+ source /home/myoung/ovirt-system-tests/basic-suite-4.1/templates
++ engine=el7.3-base
++ hosts=el7.3-base
+ sed -r -e s,__ENGINE__,lago-basic-suite-4-1-engine,g -e
's,__HOST([0-9]+)__,lago-basic-suite-4-1-host\1,g' -e
's,__LAGO_NET_([A-Za-z0-9]*)__,lago-basic-suite-4-1-net-\L\1,g' -e
s,__STORAGE__,lago-basic-suite-4-1-storage,g -e
s,__ENGINE_TEMPLATE__,el7.3-base,g -e s,__HOSTS_TEMPLATE__,el7.3-base,g


   + run_suite
+ env_init '' /home/myoung/ovirt-system-tests/basic-suite-4.1/LagoInitFile
+ ci_msg_if_fails env_init
+ msg_if_fails 'Failed to prepare environment on step env_init, please
contact the CI team.'
++ dirname 

Re: [ovirt-devel] [ovirt-system-tests] Failed after fluentd rpm update

2017-06-16 Thread Sandro Bonazzola
On Fri, Jun 16, 2017 at 12:38 PM, Eyal Edri  wrote:

>
>
> On Thu, Jun 15, 2017 at 11:56 AM, Sandro Bonazzola 
> wrote:
>
>>
>>
>> On Thu, Jun 15, 2017 at 10:39 AM, Sandro Bonazzola 
>> wrote:
>>
>>>
>>>
>>> On Wed, Jun 14, 2017 at 7:16 PM, Rich Megginson 
>>> wrote:
>>>
 I see this:

 1. Jun 14 12:26:01 lago-basic-suite-master-engine fluentd:
    /usr/share/rubygems/rubygems/dependency.rb:296:in `to_specs': Could
    not find 'thread_safe' (~> 0.1) among 20 total gem(s) (Gem::LoadError)

 Missing rubygem fluentd packages or a problem with the dependencies?
>>>
>>>
>>> I think it's an issue in the dependencies:
>>> fluentd.spec:
>>>31 : BuildRequires: rubygem(thread_safe)
>>>
>>> Missing it in Requires.
>>> Pushing a fix.
>>>
>>
>> https://review.rdoproject.org/r/7081
>>
>
> When can we expect to see it propagate in oVirt CI? I see the test is
> still failing.
> Anything we need to do on our end?
>

It's built and tagged for testing, should be arriving on oVirt CI now.
http://cbs.centos.org/koji/buildinfo?buildID=17380
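
Once the repos regenerate, one way to verify from a test host that the
fixed build is visible (a sketch; the package name is an assumption):

  yum clean metadata
  yum deplist fluentd | grep thread_safe  # should list the rubygem(thread_safe) requirement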




>
>
>>
>>
>>
>>
>>>
>>>
>>>



 On 06/14/2017 01:07 PM, Dafna Ron wrote:

> We have var logs:
>
> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/7191/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-003_00_metrics_bootstrap.py/lago-basic-suite-master-engine/
>
> I egrepped fluentd from the engine host's messages files:
>
> http://pastebin.test.redhat.com/493959
>
> On 06/14/2017 02:54 PM, Sandro Bonazzola wrote:
>
>>
>>
>> On Wed, Jun 14, 2017 at 3:50 PM, Yedidyah Bar David > > wrote:
>>
>> On Wed, Jun 14, 2017 at 4:44 PM, Gil Shinar > > wrote:
> > This patch fixes the chgrp but now we see:
>> >
>> > Unable to start service fluentd: Job for fluentd.service failed
>> because
>> > start of the service was attempted too often
>>
>> Adding Shirly.
>>
>> Do we collect syslog (/var/log/messages or journalctl)? If so, we
>> can try
>> to see why it fails.
>>
>>
>> Adding also Richard who may be interested on this.
>>
>>
>> >
>> >
>> > On Wed, Jun 14, 2017 at 12:29 PM, Yedidyah Bar David
>> >
>> > wrote:
>> >>
>> >> On Wed, Jun 14, 2017 at 11:34 AM, Valentina Makarova
>> >> >
>> wrote:
>> >> > Hello!
>> >> >
> >> > Fluentd packages were modified yesterday there
>> >> >
>> >> >
> http://resources.ovirt.org/pub/ovirt-master-snapshot-static/rpm/el7/noarch/
>> >> > This repository is referenced in reposync-config.
>> >> >
> >> > And now run_suite.sh failed in 003_00_metrics_bootstrap test
>> with error:
>> >> >  TASK [fluentd : Ensure fluentd configuration directory
>> exists]
>> >> > * fatal: [localhost]: FAILED! => {"changed":
>> false,
>> >> > "failed": true, "gid": 0, "group": "root", "mode": "0755",
>> "msg": "chgrp
>> >> > failed: failed to look up group fluentd"
>> >> >
> >> > And the same error on host0 and host1.
>> >> >
>> >> > Does anyone know how to fix it?
>> >>
>> >> Should be fixed by [1]. Either wait for the repos to be
>> updated or install
>> >> the update from jenkins (link to it inside [1]).
>> >>
>> >> [1] https://gerrit.ovirt.org/#/c/78140/
>> 
>> >>
>> >> >
>> >> > Sincerely, Valentina Makarova
>> >> >
>> >> >
>> >> > ___
>> >> > Devel mailing list
>> >> > Devel@ovirt.org 
>> >> > http://lists.ovirt.org/mailman/listinfo/devel
>> 
>> >>
>> >>
>> >>
>> >> --
>> >> Didi
>> >> ___
>> >> Devel mailing list
>> >> Devel@ovirt.org 
>> >> http://lists.ovirt.org/mailman/listinfo/devel
>> 
>> >
>> >
>>
>>
>>
>> --
>> Didi
>> ___
>> Devel mailing list
>> Devel@ovirt.org 

Re: [ovirt-devel] [ovirt-system-tests] Failed after fluentd rpm update

2017-06-16 Thread Eyal Edri
On Thu, Jun 15, 2017 at 11:56 AM, Sandro Bonazzola 
wrote:

>
>
> On Thu, Jun 15, 2017 at 10:39 AM, Sandro Bonazzola 
> wrote:
>
>>
>>
>> On Wed, Jun 14, 2017 at 7:16 PM, Rich Megginson 
>> wrote:
>>
>>> I see this:
>>>
>>> 1. Jun 14 12:26:01 lago-basic-suite-master-engine fluentd:
>>>    /usr/share/rubygems/rubygems/dependency.rb:296:in `to_specs': Could
>>>    not find 'thread_safe' (~> 0.1) among 20 total gem(s) (Gem::LoadError)
>>>
>>> Missing rubygem fluentd packages or a problem with the dependencies?
>>
>>
>> I think it's an issue in the dependencies:
>> fluentd.spec:
>>31 : BuildRequires: rubygem(thread_safe)
>>
>> Missing it in Requires.
>> Pushing a fix.
>>
>
> https://review.rdoproject.org/r/7081
>

When can we expect to see it propagate in oVirt CI? I see the test is still
failing.
Anything we need to do on our end?


>
>
>
>
>>
>>
>>
>>>
>>>
>>>
>>> On 06/14/2017 01:07 PM, Dafna Ron wrote:
>>>
 We have var logs:

 http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/7191/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-003_00_metrics_bootstrap.py/lago-basic-suite-master-engine/

 I egrepped fluentd from the engine host's messages files:

 http://pastebin.test.redhat.com/493959

 On 06/14/2017 02:54 PM, Sandro Bonazzola wrote:

>
>
> On Wed, Jun 14, 2017 at 3:50 PM, Yedidyah Bar David  > wrote:
>
> On Wed, Jun 14, 2017 at 4:44 PM, Gil Shinar  > wrote:
> > This patch fixes the chgrp but now we see:
> >
> > Unable to start service fluentd: Job for fluentd.service failed
> because
> > start of the service was attempted too often
>
> Adding Shirly.
>
> Do we collect syslog (/var/log/messages or journalctl)? If so, we
> can try
> to see why it fails.
>
>
> Adding also Richard who may be interested on this.
>
>
> >
> >
> > On Wed, Jun 14, 2017 at 12:29 PM, Yedidyah Bar David
> >
> > wrote:
> >>
> >> On Wed, Jun 14, 2017 at 11:34 AM, Valentina Makarova
> >> > wrote:
> >> > Hello!
> >> >
> >> > Fluentd packages were modified yesterday there
> >> >
> >> >
> http://resources.ovirt.org/pub/ovirt-master-snapshot-static/rpm/el7/noarch/
> >> > This repository is referenced in reposync-config.
> >> >
> >> > And now run_suite.sh failed in 003_00_metrics_bootstrap test
> with error:
> >> >  TASK [fluentd : Ensure fluentd configuration directory
> exists]
> >> > * fatal: [localhost]: FAILED! => {"changed":
> false,
> >> > "failed": true, "gid": 0, "group": "root", "mode": "0755",
> "msg": "chgrp
> >> > failed: failed to look up group fluentd"
> >> >
> >> > And the same error on host0 and host1.
> >> >
> >> > Does anyone know how to fix it?
> >>
> >> Should be fixed by [1]. Either wait for the repos to be
> updated or install
> >> the update from jenkins (link to it inside [1]).
> >>
> >> [1] https://gerrit.ovirt.org/#/c/78140/
> 
> >>
> >> >
> >> > Sincerely, Valentina Makarova
> >> >
> >> >
> >> > ___
> >> > Devel mailing list
> >> > Devel@ovirt.org 
> >> > http://lists.ovirt.org/mailman/listinfo/devel
> 
> >>
> >>
> >>
> >> --
> >> Didi
> >> ___
> >> Devel mailing list
> >> Devel@ovirt.org 
> >> http://lists.ovirt.org/mailman/listinfo/devel
> 
> >
> >
>
>
>
> --
> Didi
> ___
> Devel mailing list
> Devel@ovirt.org 
> http://lists.ovirt.org/mailman/listinfo/devel
> 
>
>
>
>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>
> Red Hat EMEA
>
> TRIED. TESTED. TRUSTED.

Re: [ovirt-devel] [ovirt-system-tests] Failed after fluentd rpm update

2017-06-15 Thread Sandro Bonazzola
On Thu, Jun 15, 2017 at 10:39 AM, Sandro Bonazzola 
wrote:

>
>
> On Wed, Jun 14, 2017 at 7:16 PM, Rich Megginson 
> wrote:
>
>> I see this:
>>
>> 1. Jun 14 12:26:01 lago-basic-suite-master-engine fluentd:
>>    /usr/share/rubygems/rubygems/dependency.rb:296:in `to_specs': Could
>>    not find 'thread_safe' (~> 0.1) among 20 total gem(s) (Gem::LoadError)
>>
>> Missing rubygem fluentd packages or a problem with the dependencies?
>
>
> I think it's an issue in the dependencies:
> fluentd.spec:
>31 : BuildRequires: rubygem(thread_safe)
>
> Missing it in Requires.
> Pushing a fix.
>

https://review.rdoproject.org/r/7081



>
>
>
>>
>>
>>
>> On 06/14/2017 01:07 PM, Dafna Ron wrote:
>>
>>> We have var logs:
>>>
>>> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/7191/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-003_00_metrics_bootstrap.py/lago-basic-suite-master-engine/
>>>
>>> I egrepped fluentd from the engine host's messages files:
>>>
>>> http://pastebin.test.redhat.com/493959
>>>
>>> On 06/14/2017 02:54 PM, Sandro Bonazzola wrote:
>>>


 On Wed, Jun 14, 2017 at 3:50 PM, Yedidyah Bar David > wrote:

 On Wed, Jun 14, 2017 at 4:44 PM, Gil Shinar > wrote:
 > This patch fixes the chgrp but now we see:
 >
 > Unable to start service fluentd: Job for fluentd.service failed
 because
 > start of the service was attempted too often

 Adding Shirly.

 Do we collect syslog (/var/log/messages or journalctl)? If so, we
 can try
 to see why it fails.


 Adding also Richard who may be interested on this.


 >
 >
 > On Wed, Jun 14, 2017 at 12:29 PM, Yedidyah Bar David
 >
 > wrote:
 >>
 >> On Wed, Jun 14, 2017 at 11:34 AM, Valentina Makarova
 >> > wrote:
 >> > Hello!
 >> >
>> > Fluentd packages were modified yesterday there
 >> >
 >> >
 http://resources.ovirt.org/pub/ovirt-master-snapshot-static/rpm/el7/noarch/
 >> > This repository is referenced in reposync-config.
 >> >
 >> > And now run_suite.sh failed in 003_00_metrics_bootstrap test
 with error:
 >> >  TASK [fluentd : Ensure fluentd configuration directory exists]
 >> > * fatal: [localhost]: FAILED! => {"changed":
 false,
 >> > "failed": true, "gid": 0, "group": "root", "mode": "0755",
 "msg": "chgrp
 >> > failed: failed to look up group fluentd"
 >> >
 >> > And the same error on host0 and host1.
 >> >
 >> > Does anyone know how to fix it?
 >>
 >> Should be fixed by [1]. Either wait for the repos to be
 updated or install
 >> the update from jenkins (link to it inside [1]).
 >>
 >> [1] https://gerrit.ovirt.org/#/c/78140/
 
 >>
 >> >
 >> > Sincerely, Valentina Makarova
 >> >
 >> >
 >> > ___
 >> > Devel mailing list
 >> > Devel@ovirt.org 
 >> > http://lists.ovirt.org/mailman/listinfo/devel
 
 >>
 >>
 >>
 >> --
 >> Didi
 >> ___
 >> Devel mailing list
 >> Devel@ovirt.org 
 >> http://lists.ovirt.org/mailman/listinfo/devel
 
 >
 >



 --
 Didi
 ___
 Devel mailing list
 Devel@ovirt.org 
 http://lists.ovirt.org/mailman/listinfo/devel
 




 --

 SANDRO BONAZZOLA

 ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

 Red Hat EMEA

 TRIED. TESTED. TRUSTED.



 ___
 Devel mailing list
 Devel@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/devel

>>>
>>>
>>>
>>
>
>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>
> Red Hat EMEA

Re: [ovirt-devel] [ovirt-system-tests] Failed after fluentd rpm update

2017-06-15 Thread Sandro Bonazzola
On Wed, Jun 14, 2017 at 7:16 PM, Rich Megginson  wrote:

> I see this:
>
> 1. Jun 14 12:26:01 lago-basic-suite-master-engine fluentd:
>    /usr/share/rubygems/rubygems/dependency.rb:296:in `to_specs': Could
>    not find 'thread_safe' (~> 0.1) among 20 total gem(s) (Gem::LoadError)
>
> Missing rubygem fluentd packages or a problem with the dependencies?


I think it's an issue in the dependencies:
fluentd.spec:
   31 : BuildRequires: rubygem(thread_safe)

Missing it in Requires.
Pushing a fix.
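
The change is presumably just mirroring the build-time dependency at
runtime, i.e. something like this in the spec (a sketch of the fix, not
the actual patch):

  BuildRequires: rubygem(thread_safe)
  Requires: rubygem(thread_safe)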



>
>
>
> On 06/14/2017 01:07 PM, Dafna Ron wrote:
>
>> We have var logs:
>>
>> http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/7191/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-003_00_metrics_bootstrap.py/lago-basic-suite-master-engine/
>>
>> I egrepped fluentd from the engine host's messages files:
>>
>> http://pastebin.test.redhat.com/493959
>>
>> On 06/14/2017 02:54 PM, Sandro Bonazzola wrote:
>>
>>>
>>>
>>> On Wed, Jun 14, 2017 at 3:50 PM, Yedidyah Bar David >> > wrote:
>>>
>>> On Wed, Jun 14, 2017 at 4:44 PM, Gil Shinar >> > wrote:
>>> > This patch fixes the chgrp but now we see:
>>> >
>>> > Unable to start service fluentd: Job for fluentd.service failed
>>> because
>>> > start of the service was attempted too often
>>>
>>> Adding Shirly.
>>>
>>> Do we collect syslog (/var/log/messages or journalctl)? If so, we
>>> can try
>>> to see why it fails.
>>>
>>>
>>> Adding also Richard who may be interested on this.
>>>
>>>
>>> >
>>> >
>>> > On Wed, Jun 14, 2017 at 12:29 PM, Yedidyah Bar David
>>> >
>>> > wrote:
>>> >>
>>> >> On Wed, Jun 14, 2017 at 11:34 AM, Valentina Makarova
>>> >> > wrote:
>>> >> > Hello!
>>> >> >
>>> >> > Fluentd packages were modified yesterday there
>>> >> >
>>> >> >
>>> http://resources.ovirt.org/pub/ovirt-master-snapshot-static/rpm/el7/noarch/
>>> >> > This repository is referenced in reposync-config.
>>> >> >
>>> >> > And now run_suite.sh failed in 003_00_metrics_bootstrap test
>>> with error:
>>> >> >  TASK [fluentd : Ensure fluentd configuration directory exists]
>>> >> > * fatal: [localhost]: FAILED! => {"changed":
>>> false,
>>> >> > "failed": true, "gid": 0, "group": "root", "mode": "0755",
>>> "msg": "chgrp
>>> >> > failed: failed to look up group fluentd"
>>> >> >
>>> >> > And the same error on host0 and host1.
>>> >> >
>>> >> > Does anyone know how to fix it?
>>> >>
>>> >> Should be fixed by [1]. Either wait for the repos to be
>>> updated or install
>>> >> the update from jenkins (link to it inside [1]).
>>> >>
>>> >> [1] https://gerrit.ovirt.org/#/c/78140/
>>> 
>>> >>
>>> >> >
>>> >> > Sincerely, Valentina Makarova
>>> >> >
>>> >> >
>>> >> > ___
>>> >> > Devel mailing list
>>> >> > Devel@ovirt.org 
>>> >> > http://lists.ovirt.org/mailman/listinfo/devel
>>> 
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >> Didi
>>> >> ___
>>> >> Devel mailing list
>>> >> Devel@ovirt.org 
>>> >> http://lists.ovirt.org/mailman/listinfo/devel
>>> 
>>> >
>>> >
>>>
>>>
>>>
>>> --
>>> Didi
>>> ___
>>> Devel mailing list
>>> Devel@ovirt.org 
>>> http://lists.ovirt.org/mailman/listinfo/devel
>>> 
>>>
>>>
>>>
>>>
>>> --
>>>
>>> SANDRO BONAZZOLA
>>>
>>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>>>
>>> Red Hat EMEA
>>>
>>> TRIED. TESTED. TRUSTED.
>>>
>>>
>>>
>>> ___
>>> Devel mailing list
>>> Devel@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/devel
>>>
>>
>>
>>
>


-- 

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

Red Hat EMEA

TRIED. TESTED. TRUSTED.
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ovirt-system-tests] Failed after fluentd rpm update

2017-06-15 Thread Rich Megginson

On 06/14/2017 11:42 AM, Eyal Edri wrote:

Adding infra as well.
Currently, master OST is still failing on this.

http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/7189/testReport/

On Wed, Jun 14, 2017 at 4:54 PM, Sandro Bonazzola > wrote:




On Wed, Jun 14, 2017 at 3:50 PM, Yedidyah Bar David
> wrote:

On Wed, Jun 14, 2017 at 4:44 PM, Gil Shinar
> wrote:
> This patch fixes the chgrp but now we see:
>
> Unable to start service fluentd: Job for fluentd.service
failed because
> start of the service was attempted too often

Adding Shirly.

Do we collect syslog (/var/log/messages or journalctl)? If so,
we can try
to see why it fails.


Adding also Richard who may be interested on this.



Yes. If you run fluentd as a systemd service (fluentd.service), then you
can use journalctl -u fluentd to see the logs.
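
For example (the time window and the reset-failed step are suggestions,
not something from the original setup):

  journalctl -u fluentd --since '1 hour ago' --no-pager
  systemctl reset-failed fluentd  # clears the "attempted too often" start limit
  systemctl start fluentd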




>
>
> On Wed, Jun 14, 2017 at 12:29 PM, Yedidyah Bar David
>
> wrote:
>>
>> On Wed, Jun 14, 2017 at 11:34 AM, Valentina Makarova
>> > wrote:
>> > Hello!
>> >
>> > Fluentd packages were modified yesterday there
>> >
>> >

http://resources.ovirt.org/pub/ovirt-master-snapshot-static/rpm/el7/noarch/


>> > This repository is referenced in reposync-config.
>> >
>> > And now run_suite.sh failed in 003_00_metrics_bootstrap
test with error:
>> >  TASK [fluentd : Ensure fluentd configuration directory
exists]
>> > * fatal: [localhost]: FAILED! =>
{"changed": false,
>> > "failed": true, "gid": 0, "group": "root", "mode":
"0755", "msg": "chgrp
>> > failed: failed to look up group fluentd"
>> >
>> > And the same error on host0 and host1.
>> >
>> > Does anyone know how to fix it?
>>
>> Should be fixed by [1]. Either wait for the repos to be
updated or install
>> the update from jenkins (link to it inside [1]).
>>
>> [1] https://gerrit.ovirt.org/#/c/78140/

>>
>> >
>> > Sincerely, Valentina Makarova
>> >
>> >
>> > ___
>> > Devel mailing list
>> > Devel@ovirt.org 
>> > http://lists.ovirt.org/mailman/listinfo/devel

>>
>>
>>
>> --
>> Didi
>> ___
>> Devel mailing list
>> Devel@ovirt.org 
>> http://lists.ovirt.org/mailman/listinfo/devel

>
>



--
Didi
___
Devel mailing list
Devel@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/devel





-- 


SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

Red Hat EMEA

TRIED. TESTED. TRUSTED.


___
Devel mailing list
Devel@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/devel





--

Eyal edri


ASSOCIATE MANAGER

RHV DevOps

EMEA VIRTUALIZATION R&D


Red Hat EMEA

TRIED. TESTED. TRUSTED.



phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)



___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] [ovirt-system-tests] Failed after fluentd rpm update

2017-06-15 Thread Rich Megginson

I see this:

1. Jun 14 12:26:01 lago-basic-suite-master-engine fluentd:
   /usr/share/rubygems/rubygems/dependency.rb:296:in `to_specs': Could
   not find 'thread_safe' (~> 0.1) among 20 total gem(s) (Gem::LoadError)

Missing rubygem fluentd packages or a problem with the dependencies?
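
A quick way to check what is actually installed on the engine VM (a
sketch):

  gem list thread_safe
  rpm -q --whatprovides 'rubygem(thread_safe)'  # names the rpm if the dependency is present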


On 06/14/2017 01:07 PM, Dafna Ron wrote:

We have var logs:

http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/7191/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-003_00_metrics_bootstrap.py/lago-basic-suite-master-engine/

I egrepped fluentd from the engine host's messages files:

http://pastebin.test.redhat.com/493959

On 06/14/2017 02:54 PM, Sandro Bonazzola wrote:



On Wed, Jun 14, 2017 at 3:50 PM, Yedidyah Bar David > wrote:


On Wed, Jun 14, 2017 at 4:44 PM, Gil Shinar > wrote:
> This patch fixes the chgrp but now we see:
>
> Unable to start service fluentd: Job for fluentd.service failed
because
> start of the service was attempted too often

Adding Shirly.

Do we collect syslog (/var/log/messages or journalctl)? If so, we
can try
to see why it fails.


Adding also Richard who may be interested on this.


>
>
> On Wed, Jun 14, 2017 at 12:29 PM, Yedidyah Bar David
>
> wrote:
>>
>> On Wed, Jun 14, 2017 at 11:34 AM, Valentina Makarova
>> > wrote:
>> > Hello!
>> >
>> > Fluentd packages were modified yesterday there
>> >
>> >
http://resources.ovirt.org/pub/ovirt-master-snapshot-static/rpm/el7/noarch/


>> > This repository is referenced in reposync-config.
>> >
>> > And now run_suite.sh failed in 003_00_metrics_bootstrap test
with error:
>> >  TASK [fluentd : Ensure fluentd configuration directory exists]
>> > * fatal: [localhost]: FAILED! => {"changed":
false,
>> > "failed": true, "gid": 0, "group": "root", "mode": "0755",
"msg": "chgrp
>> > failed: failed to look up group fluentd"
>> >
>> > And the same error on host0 and host1.
>> >
>> > Does anyone know how to fix it?
>>
>> Should be fixed by [1]. Either wait for the repos to be
updated or install
>> the update from jenkins (link to it inside [1]).
>>
>> [1] https://gerrit.ovirt.org/#/c/78140/

>>
>> >
>> > Sincerely, Valentina Makarova
>> >
>> >
>> > ___
>> > Devel mailing list
>> > Devel@ovirt.org 
>> > http://lists.ovirt.org/mailman/listinfo/devel

>>
>>
>>
>> --
>> Didi
>> ___
>> Devel mailing list
>> Devel@ovirt.org 
>> http://lists.ovirt.org/mailman/listinfo/devel

>
>



--
Didi
___
Devel mailing list
Devel@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/devel





--

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

Red Hat EMEA
TRIED. TESTED. TRUSTED.



___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel





___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] [ovirt-system-tests] Failed after fluentd rpm update

2017-06-14 Thread Dafna Ron
We have var logs:

http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/7191/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-003_00_metrics_bootstrap.py/lago-basic-suite-master-engine/

I egrepped fluentd from the engine host's messages files:

http://pastebin.test.redhat.com/493959

On 06/14/2017 02:54 PM, Sandro Bonazzola wrote:
>
>
> On Wed, Jun 14, 2017 at 3:50 PM, Yedidyah Bar David wrote:
>
> On Wed, Jun 14, 2017 at 4:44 PM, Gil Shinar wrote:
> > This patch fixes the chgrp, but now we see:
> >
> > Unable to start service fluentd: Job for fluentd.service failed
> because
> > start of the service was attempted too often
>
> Adding Shirly.
>
> Do we collect syslog (/var/log/messages or journalctl)? If so, we
> can try
> to see why it fails.
>
>
> Adding also Richard, who may be interested in this.
>  
>
>
> >
> >
> > On Wed, Jun 14, 2017 at 12:29 PM, Yedidyah Bar David
> > wrote:
> >>
> >> On Wed, Jun 14, 2017 at 11:34 AM, Valentina Makarova
> >> wrote:
> >> > Hello!
> >> >
> >> > Fluentd packages were modified yesterday at
> >> >
> >> >
> 
> http://resources.ovirt.org/pub/ovirt-master-snapshot-static/rpm/el7/noarch/
> 
> 
> >> > This repository is referenced in reposync-config.
> >> >
> >> > And now run_suite.sh failed in the 003_00_metrics_bootstrap test
> with error:
> >> >  TASK [fluentd : Ensure fluentd configuration directory exists]
> >> > * fatal: [localhost]: FAILED! => {"changed":
> false,
> >> > "failed": true, "gid": 0, "group": "root", "mode": "0755",
> "msg": "chgrp
> >> > failed: failed to look up group fluentd"
> >> >
> >> > And the same error on host0 and host1.
> >> >
> >> > Does anyone know how to fix it?
> >>
> >> Should be fixed by [1]. Either wait for the repos to be updated
> or install
> >> the update from jenkins (link to it inside [1]).
> >>
> >> [1] https://gerrit.ovirt.org/#/c/78140/
> 
> >>
> >> >
> >> > Sincerely, Valentina Makarova
> >> >
> >> >
> >> > ___
> >> > Devel mailing list
> >> > Devel@ovirt.org 
> >> > http://lists.ovirt.org/mailman/listinfo/devel
> 
> >>
> >>
> >>
> >> --
> >> Didi
> >> ___
> >> Devel mailing list
> >> Devel@ovirt.org 
> >> http://lists.ovirt.org/mailman/listinfo/devel
> 
> >
> >
>
>
>
> --
> Didi
> ___
> Devel mailing list
> Devel@ovirt.org 
> http://lists.ovirt.org/mailman/listinfo/devel
> 
>
>
>
>
> -- 
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>
> Red Hat EMEA
> TRIED. TESTED. TRUSTED.
>
>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel


___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ovirt-system-tests] Failed after fluentd rpm update

2017-06-14 Thread Eyal Edri
Adding infra as well.
Currently, master OST is still failing on this.

http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/7189/testReport/

On Wed, Jun 14, 2017 at 4:54 PM, Sandro Bonazzola 
wrote:

>
>
> On Wed, Jun 14, 2017 at 3:50 PM, Yedidyah Bar David 
> wrote:
>
>> On Wed, Jun 14, 2017 at 4:44 PM, Gil Shinar  wrote:
>> > This patch fixes the chgrp, but now we see:
>> >
>> > Unable to start service fluentd: Job for fluentd.service failed because
>> > start of the service was attempted too often
>>
>> Adding Shirly.
>>
>> Do we collect syslog (/var/log/messages or journalctl)? If so, we can try
>> to see why it fails.
>>
>
> Adding also Richard, who may be interested in this.
>
>
>>
>> >
>> >
>> > On Wed, Jun 14, 2017 at 12:29 PM, Yedidyah Bar David 
>> > wrote:
>> >>
>> >> On Wed, Jun 14, 2017 at 11:34 AM, Valentina Makarova
>> >>  wrote:
>> >> > Hello!
>> >> >
>> >> > Fluentd packages were modified yesterday at
>> >> >
>> >> > http://resources.ovirt.org/pub/ovirt-master-snapshot-static/
>> rpm/el7/noarch/
>> >> > This repository is referenced in reposync-config.
>> >> >
>> >> > And now run_suite.sh failed in the 003_00_metrics_bootstrap test with
>> error:
>> >> >  TASK [fluentd : Ensure fluentd configuration directory exists]
>> >> > * fatal: [localhost]: FAILED! => {"changed": false,
>> >> > "failed": true, "gid": 0, "group": "root", "mode": "0755", "msg":
>> "chgrp
>> >> > failed: failed to look up group fluentd"
>> >> >
>> >> > And the same error on host0 and host1.
>> >> >
>> >> > Does anyone know how to fix it?
>> >>
>> >> Should be fixed by [1]. Either wait for the repos to be updated or
>> install
>> >> the update from jenkins (link to it inside [1]).
>> >>
>> >> [1] https://gerrit.ovirt.org/#/c/78140/
>> >>
>> >> >
>> >> > Sincerely, Valentina Makarova
>> >> >
>> >> >
>> >> > ___
>> >> > Devel mailing list
>> >> > Devel@ovirt.org
>> >> > http://lists.ovirt.org/mailman/listinfo/devel
>> >>
>> >>
>> >>
>> >> --
>> >> Didi
>> >> ___
>> >> Devel mailing list
>> >> Devel@ovirt.org
>> >> http://lists.ovirt.org/mailman/listinfo/devel
>> >
>> >
>>
>>
>>
>> --
>> Didi
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>
>
>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>
> Red Hat EMEA
> TRIED. TESTED. TRUSTED.
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>



-- 

Eyal Edri

ASSOCIATE MANAGER
RHV DevOps
EMEA VIRTUALIZATION R&D

Red Hat EMEA
TRIED. TESTED. TRUSTED.
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ovirt-system-tests] Failed after fluentd rpm update

2017-06-14 Thread Sandro Bonazzola
On Wed, Jun 14, 2017 at 3:50 PM, Yedidyah Bar David  wrote:

> On Wed, Jun 14, 2017 at 4:44 PM, Gil Shinar  wrote:
> > This patch fixes the chgrp, but now we see:
> >
> > Unable to start service fluentd: Job for fluentd.service failed because
> > start of the service was attempted too often
>
> Adding Shirly.
>
> Do we collect syslog (/var/log/messages or journalctl)? If so, we can try
> to see why it fails.
>

Adding also Richard, who may be interested in this.


>
> >
> >
> > On Wed, Jun 14, 2017 at 12:29 PM, Yedidyah Bar David 
> > wrote:
> >>
> >> On Wed, Jun 14, 2017 at 11:34 AM, Valentina Makarova
> >>  wrote:
> >> > Hello!
> >> >
> >> > Fluentd packages were modified yesterday at
> >> >
> >> > http://resources.ovirt.org/pub/ovirt-master-snapshot-
> static/rpm/el7/noarch/
> >> > This repository is referenced in reposync-config.
> >> >
> >> > And now run_suite.sh failed in the 003_00_metrics_bootstrap test
> error:
> >> >  TASK [fluentd : Ensure fluentd configuration directory exists]
> >> > * fatal: [localhost]: FAILED! => {"changed": false,
> >> > "failed": true, "gid": 0, "group": "root", "mode": "0755", "msg":
> "chgrp
> >> > failed: failed to look up group fluentd"
> >> >
> >> > And the same error on host0 and host1.
> >> >
> >> > Does anyone know how to fix it?
> >>
> >> Should be fixed by [1]. Either wait for the repos to be updated or
> install
> >> the update from jenkins (link to it inside [1]).
> >>
> >> [1] https://gerrit.ovirt.org/#/c/78140/
> >>
> >> >
> >> > Sincerely, Valentina Makarova
> >> >
> >> >
> >> > ___
> >> > Devel mailing list
> >> > Devel@ovirt.org
> >> > http://lists.ovirt.org/mailman/listinfo/devel
> >>
> >>
> >>
> >> --
> >> Didi
> >> ___
> >> Devel mailing list
> >> Devel@ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/devel
> >
> >
>
>
>
> --
> Didi
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>



-- 

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

Red Hat EMEA 

TRIED. TESTED. TRUSTED. 
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ovirt-system-tests] Failed after fluentd rpm update

2017-06-14 Thread Yedidyah Bar David
On Wed, Jun 14, 2017 at 4:44 PM, Gil Shinar  wrote:
> This patch fixes the chgrp, but now we see:
>
> Unable to start service fluentd: Job for fluentd.service failed because
> start of the service was attempted too often

Adding Shirly.

Do we collect syslog (/var/log/messages or journalctl)? If so, we can try
to see why it fails.

>
>
> On Wed, Jun 14, 2017 at 12:29 PM, Yedidyah Bar David 
> wrote:
>>
>> On Wed, Jun 14, 2017 at 11:34 AM, Valentina Makarova
>>  wrote:
>> > Hello!
>> >
>> > Fluentd packages were modified yesterday at
>> >
>> > http://resources.ovirt.org/pub/ovirt-master-snapshot-static/rpm/el7/noarch/
>> > This repository is referenced in reposync-config.
>> >
>> > And now run_suite.sh failed in the 003_00_metrics_bootstrap test with error:
>> >  TASK [fluentd : Ensure fluentd configuration directory exists]
>> > * fatal: [localhost]: FAILED! => {"changed": false,
>> > "failed": true, "gid": 0, "group": "root", "mode": "0755", "msg": "chgrp
>> > failed: failed to look up group fluentd"
>> >
>> > And the same error on host0 and host1.
>> >
>> > Does anyone know how to fix it?
>>
>> Should be fixed by [1]. Either wait for the repos to be updated or install
>> the update from jenkins (link to it inside [1]).
>>
>> [1] https://gerrit.ovirt.org/#/c/78140/
>>
>> >
>> > Sincerely, Valentina Makarova
>> >
>> >
>> > ___
>> > Devel mailing list
>> > Devel@ovirt.org
>> > http://lists.ovirt.org/mailman/listinfo/devel
>>
>>
>>
>> --
>> Didi
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>
>



-- 
Didi
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] [ovirt-system-tests] Failed after fluentd rpm update

2017-06-14 Thread Gil Shinar
This patch fixes the chgrp, but now we see:

Unable to start service fluentd: Job for fluentd.service failed
because start of the service was attempted too often
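
That second message is systemd's start rate limiting kicking in. A minimal
sketch of how one might clear it and see why the unit keeps dying (standard
systemctl/journalctl usage; unit name taken from the error above):

  systemctl reset-failed fluentd.service   # clear the "attempted too often" state
  systemctl start fluentd.service
  journalctl -u fluentd.service -n 50 --no-pager   # look at why it kept dying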


On Wed, Jun 14, 2017 at 12:29 PM, Yedidyah Bar David 
wrote:

> On Wed, Jun 14, 2017 at 11:34 AM, Valentina Makarova
>  wrote:
> > Hello!
> >
> > Fluentd packages were modified yesterday at
> > http://resources.ovirt.org/pub/ovirt-master-snapshot-
> static/rpm/el7/noarch/
> > This repository is referenced in reposync-config.
> >
> > And now run_suite.sh failed in the 003_00_metrics_bootstrap test with error:
> >  TASK [fluentd : Ensure fluentd configuration directory exists]
> > * fatal: [localhost]: FAILED! => {"changed": false,
> > "failed": true, "gid": 0, "group": "root", "mode": "0755", "msg": "chgrp
> > failed: failed to look up group fluentd"
> >
> > And the same error on host0 and host1.
> >
> > Does anyone know how to fix it?
>
> Should be fixed by [1]. Either wait for the repos to be updated or install
> the update from jenkins (link to it inside [1]).
>
> [1] https://gerrit.ovirt.org/#/c/78140/
>
> >
> > Sincerely, Valentina Makarova
> >
> >
> > ___
> > Devel mailing list
> > Devel@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/devel
>
>
>
> --
> Didi
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ovirt-system-tests] Failed after fluentd rpm update

2017-06-14 Thread Yedidyah Bar David
On Wed, Jun 14, 2017 at 11:34 AM, Valentina Makarova
 wrote:
> Hello!
>
> Fluentd packages were modified yesterday at
> http://resources.ovirt.org/pub/ovirt-master-snapshot-static/rpm/el7/noarch/
> This repository is referenced in reposync-config.
>
> And now run_suite.sh failed in the 003_00_metrics_bootstrap test with error:
>  TASK [fluentd : Ensure fluentd configuration directory exists]
> * fatal: [localhost]: FAILED! => {"changed": false,
> "failed": true, "gid": 0, "group": "root", "mode": "0755", "msg": "chgrp
> failed: failed to look up group fluentd"
>
> And the same error on host0 and host1.
>
> Does anyone know how to fix it?

Should be fixed by [1]. Either wait for the repos to be updated or install
the update from jenkins (link to it inside [1]).

[1] https://gerrit.ovirt.org/#/c/78140/

>
> Sincerely, Valentina Makarova
>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel



-- 
Didi
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


[ovirt-devel] [ovirt-system-tests] Failed after fluentd rpm update

2017-06-14 Thread Valentina Makarova
Hello!

Fluentd packages were modified yesterday at
http://resources.ovirt.org/pub/ovirt-master-snapshot-static/rpm/el7/noarch/
This repository is referenced in reposync-config.

And now run_suite.sh failed in the 003_00_metrics_bootstrap test with error:
 TASK [fluentd : Ensure fluentd configuration directory exists]
* fatal: [localhost]: FAILED! => {"changed": false,
"failed": true, "gid": 0, "group": "root", "mode": "0755", "msg": "chgrp
failed: failed to look up group fluentd"

And the same error on host0 and host1.

Does anyone know how to fix it?
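
For context, a minimal sketch of the kind of Ansible task that produces this
error - the path and exact fields here are assumptions for illustration, not
the actual metrics role:

  # chgrp fails at this step when the package did not create a 'fluentd' group:
  - name: Ensure fluentd configuration directory exists
    file:
      path: /etc/fluentd        # assumed path
      state: directory
      owner: fluentd
      group: fluentd
      mode: '0755'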

Sincerely, Valentina Makarova
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ovirt-system-tests] ssh to an oVirt VM in Lago

2017-06-06 Thread Yaniv Kaul
On Tue, Jun 6, 2017 at 11:26 AM, Milan Zamazal  wrote:

> Valentina Makarova  writes:
>
> > I may use paramiko only, like this:
> > https://github.com/vmakarova/ovirt-system-tests/commit/
> > 3e9e5ce697da7e0567aaad8397fa886469bc5ae3
> > Lago also uses paramiko, so there are no new dependencies.
> >
> > Is this a good way?
>
> If we don't have higher level means to access VMs via ssh in
> Lago/Ovirt-System-Tests (do we?) then I'd say using paramiko is fine.
>
> > And a few more questions about tests and the ovirt network.
> >
> > 1) This address of vm0, 192.168.201.213 - what is this address? Why is it
> > the ip of 'vm0'?  What assigned it to vm0, and how can the user know
> > that the address will be that one?  This ip does not appear in virsh
> > net-list or net-dumpxml.
>
> It's probably obtained from DHCP, it's in the DHCP range specified in
> LagoInitFile (*.100-*.254).
>
> > 2) In the vm_run test (004_basic_sanity) we configure interface eth0 on
> > vm0 with ip 192.168.200.200.  But this interface is unreachable from
> > the engine host.  And when I ask vm0 'ip address' via ssh, there is no
> > interface eth1 there. What does the test add it for, if it does not work?
> > Should it work?  And why did the test finish successfully if eth0 was not
> > configured according to start_params?
>
> Good questions.  The complex setup in vm_run looks somewhat mysterious.
> But we should be able to use Yaniv's suggestion:
>
> > 2017-05-31 23:06 GMT+03:00 Yaniv Kaul :
>
> [...]
>
> >> We can add a fake entry in Lago init file just as we do for
> hosted-engine.
> >> Most importantly, it'll create a MAC to IP address mapping in libvirt's
> >> DHCP.
> >> Of course, then we need to use this MAC.
>
> If I understand it correctly:
>
> - We can create a fake VM in LagoInitFile.  How do we specify it's fake,
>   is it the vm-provider entry?
>

Not sure, but here's the example of hosted-engine VM:
  __ENGINE__:
vm-provider: ssh
vm-type: ovirt-engine
distro: el7
service_provider: systemd
ssh-password: 123456
nics:
  - net: __LAGO_NET__
ip: 192.168.200.99
metadata:
  ovirt-engine-password: 123



>
> - If we assign IP to that VM, the mapping for DHCP is automatically
>   created, and the MAC address is determined by lago.utils.ipv4_to_mac.
>

Or use static IP mapping - see above.


>
> - Then we can specify `nics' parameter to params.VM constructor or to
>   add a NIC to the VM as in add_nic test (but I'm not sure sshd would be
>   available on it in that case).
>

I'd hope so...
Y.


>
> Thanks,
> Milan
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ovirt-system-tests] ssh to an oVirt VM in Lago

2017-06-06 Thread Milan Zamazal
Valentina Makarova  writes:

> I may use paramiko only, like this:
> https://github.com/vmakarova/ovirt-system-tests/commit/
> 3e9e5ce697da7e0567aaad8397fa886469bc5ae3
> Lago also uses paramiko, so there are no new dependencies.
>
> Is this a good way?

If we don't have higher level means to access VMs via ssh in
Lago/Ovirt-System-Tests (do we?) then I'd say using paramiko is fine.

> And a few more questions about tests and the ovirt network.
>
> 1) This address of vm0, 192.168.201.213 - what is this address? Why is it
> the ip of 'vm0'?  What assigned it to vm0, and how can the user know
> that the address will be that one?  This ip does not appear in virsh
> net-list or net-dumpxml.

It's probably obtained from DHCP, it's in the DHCP range specified in
LagoInitFile (*.100-*.254).

> 2) In the vm_run test (004_basic_sanity) we configure interface eth0 on
> vm0 with ip 192.168.200.200.  But this interface is unreachable from
> the engine host.  And when I ask vm0 'ip address' via ssh, there is no
> interface eth1 there. What does the test add it for, if it does not work?
> Should it work?  And why did the test finish successfully if eth0 was not
> configured according to start_params?

Good questions.  The complex setup in vm_run looks somewhat mysterious.
But we should be able to use Yaniv's suggestion:

> 2017-05-31 23:06 GMT+03:00 Yaniv Kaul :

[...]

>> We can add a fake entry in Lago init file just as we do for hosted-engine.
>> Most importantly, it'll create a MAC to IP address mapping in libvirt's
>> DHCP.
>> Of course, then we need to use this MAC.

If I understand it correctly:

- We can create a fake VM in LagoInitFile.  How do we specify it's fake,
  is it the vm-provider entry?

- If we assign an IP to that VM, the mapping for DHCP is automatically
  created, and the MAC address is determined by lago.utils.ipv4_to_mac
  (see the sketch after this list).

- Then we can specify `nics' parameter to params.VM constructor or to
  add a NIC to the VM as in add_nic test (but I'm not sure sshd would be
  available on it in that case).
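
For illustration, a guess at how an IP-to-MAC mapping of the
lago.utils.ipv4_to_mac kind could work - this is an assumption, not the
verified lago code:

  # Illustrative sketch only (the '54:52:' prefix is assumed): embed the
  # four IPv4 octets in the MAC so libvirt's DHCP can hand the same IP back.
  def ipv4_to_mac(ip):
      octets = [int(o) for o in ip.split('.')]
      return '54:52:' + ':'.join('%02x' % o for o in octets)

  # ipv4_to_mac('192.168.200.200') -> '54:52:c0:a8:c8:c8'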

Thanks,
Milan
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] [ovirt-system-tests] ssh to an oVirt VM in Lago

2017-06-06 Thread Valentina Makarova
Thank you all for your advice!

Now I can ping 192.168.201.213 successfully, and I can get a SPICE connection.
And I want to know your opinion about the way to get a connection from a test.

I may use paramiko only, like this:
https://github.com/vmakarova/ovirt-system-tests/commit/
3e9e5ce697da7e0567aaad8397fa886469bc5ae3
Lago also uses paramiko, so there are no new dependencies.

Is this a good way?

Or might something using lago be better?
For example, using lago.ssh.get_ssh_client() as a wrapper, like this:
https://github.com/vmakarova/ovirt-system-tests/commit/
855dbdf379730d55daf51851052d71daa45d9a09

Or is there a better way to do this?
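
For reference, a minimal self-contained sketch of the plain-paramiko approach
(the IP, username and password are the values mentioned in this thread; this
is not the actual OST helper):

  import paramiko

  client = paramiko.SSHClient()
  client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
  # CirrOS default credentials, per Milan's note elsewhere in this thread:
  client.connect('192.168.201.213', username='cirros', password='cubswin:)')
  _, stdout, _ = client.exec_command('ip address')
  print(stdout.read().decode())
  client.close()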


And a few more questions about tests and the ovirt network.

1) This address of vm0, 192.168.201.213 - what is this address? Why is it the ip
of 'vm0'?
What assigned it to vm0, and how can the user know that the address
will be that one?
This ip does not appear in virsh net-list or net-dumpxml.

2) In the vm_run test (004_basic_sanity) we configure interface eth0 on vm0
with ip 192.168.200.200.
But this interface is unreachable from the engine host. And when I ask vm0 'ip
address' via ssh,
there is no interface eth1 there. What does the test add it for, if it does
not work? Should it work?
And why did the test finish successfully if eth0 was not configured according to
start_params?


Sincerely,
Valentina Makarova

2017-05-31 23:06 GMT+03:00 Yaniv Kaul :

>
>
> On Wed, May 31, 2017 at 11:10 AM, Milan Zamazal 
> wrote:
>
>> Nadav Goldin  writes:
>>
>> > I would start first by making sure the VM is booting properly using
>> > SPICE from the GUI, when the tests ends, you should be able to log
>> > into the Engine GUI(run 'lago ovirt status' inside your deployment
>> > directory to get the link, the directory should be something like
>> > ovirt-system-tests/deployment-SUITE-NAME). Then in the GUI, start the
>> > VM and click on 'console', the username/password should be
>> > root/123456.
>>
>> It's actually "cirros"/"cubswin:)".
>>
>> > If it is booting properly - first would be to check if it even gets an
>> > IP(my guess is not - I'm not even sure if dhcp is running in that
>> > layer, maybe we should setup one..).
>>
>> Yes, it should get everything, it's possible to log in there via ssh on
>> my setup.
>>
>> Valentina Makarova  writes:
>>
>> > And a second question is about ssh to this vm connection from my
>> laptop's
>> > console.
>> > According to the run_vm test from 004_basic_sanity.py (
>> > https://github.com/oVirt/ovirt-system-tests/blob/master/
>> basic-suite-master/test-scenarios/004_basic_sanity.py#L385
>> > ) vm0 should contain an interface with ip 192.168.200.200, but pinging this
>> from
>> > the engine (192.168.200.4) gives Destination Host Unreachable. And 'ping vm0'
>> resolved
>> > vm0 as 192.168.201.213 and also couldn't reach vm0.
>>
>> 192.168.201.213 should be working, I can both ping and ssh the
>> corresponding IP on my computer.  Do you run Lago directly on your
>> computer or in a VM?  If you run it on your computer, can't it conflict
>> with another network in your environment?  Maybe checking from inside
>> the VM as described by Nadav can help.
>>
>
> We can add a fake entry in Lago init file just as we do for hosted-engine.
> Most importantly, it'll create a MAC to IP address mapping in libvirt's
> DHCP.
> Of course, then we need to use this MAC.
> I somehow remember doing this at some point but never completing the task
> :(
> Y.
>
>
>>
>> > And in webadmin (https://engine/ovirt-engine/webadmin/#vms) IP Address
>> > field is empty.
>>
>> This is OK, I think ovirt-guest-agent must be installed and running in
>> the VM to report the IP address.  If it is not then the information is
>> missing in the UI but that doesn't mean the network is not working.
>>
>> Regards,
>> Milan
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ovirt-system-tests] ssh to an oVirt VM in Lago

2017-05-31 Thread Yaniv Kaul
On Wed, May 31, 2017 at 11:10 AM, Milan Zamazal  wrote:

> Nadav Goldin  writes:
>
> > I would start first by making sure the VM is booting properly using
> > SPICE from the GUI, when the tests ends, you should be able to log
> > into the Engine GUI(run 'lago ovirt status' inside your deployment
> > directory to get the link, the directory should be something like
> > ovirt-system-tests/deployment-SUITE-NAME). Then in the GUI, start the
> > VM and click on 'console', the username/password should be
> > root/123456.
>
> It's actually "cirros"/"cubswin:)".
>
> > If it is booting properly - first would be to check if it even gets an
> > IP(my guess is not - I'm not even sure if dhcp is running in that
> > layer, maybe we should setup one..).
>
> Yes, it should get everything, it's possible to log in there via ssh on
> my setup.
>
> Valentina Makarova  writes:
>
> > And a second question is about ssh to this vm connection from my laptop's
> > console.
> > According to the run_vm test from 004_basic_sanity.py (
> > https://github.com/oVirt/ovirt-system-tests/blob/
> master/basic-suite-master/test-scenarios/004_basic_sanity.py#L385
> > ) vm0 should contain an interface with ip 192.168.200.200, but pinging this
> from
> > the engine (192.168.200.4) gives Destination Host Unreachable. And 'ping vm0'
> resolved
> > vm0 as 192.168.201.213 and also couldn't reach vm0.
>
> 192.168.201.213 should be working, I can both ping and ssh the
> corresponding IP on my computer.  Do you run Lago directly on your
> computer or in a VM?  If you run it on your computer, can't it conflict
> with another network in your environment?  Maybe checking from inside
> the VM as described by Nadav can help.
>

We can add a fake entry in Lago init file just as we do for hosted-engine.
Most importantly, it'll create a MAC to IP address mapping in libvirt's
DHCP.
Of course, then we need to use this MAC.
I somehow remember doing this at some point but never completing the task :(
Y.


>
> > And in webadmin (https://engine/ovirt-engine/webadmin/#vms) IP Address
> > field is empty.
>
> This is OK, I think ovirt-guest-agent must be installed and running in
> the VM to report the IP address.  If it is not then the information is
> missing in the UI but that doesn't mean the network is not working.
>
> Regards,
> Milan
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ovirt-system-tests] ssh to an oVirt VM in Lago

2017-05-31 Thread Milan Zamazal
Nadav Goldin  writes:

> I would start first by making sure the VM is booting properly using
> SPICE from the GUI, when the tests ends, you should be able to log
> into the Engine GUI(run 'lago ovirt status' inside your deployment
> directory to get the link, the directory should be something like
> ovirt-system-tests/deployment-SUITE-NAME). Then in the GUI, start the
> VM and click on 'console', the username/password should be
> root/123456.

It's actually "cirros"/"cubswin:)".

> If it is booting properly - first would be to check if it even gets an
> IP(my guess is not - I'm not even sure if dhcp is running in that
> layer, maybe we should setup one..).

Yes, it should get everything, it's possible to log in there via ssh on
my setup.

Valentina Makarova  writes:

> And a second question is about ssh to this vm connection from my laptop's
> console.
> According to the run_vm test from 004_basic_sanity.py (
> https://github.com/oVirt/ovirt-system-tests/blob/master/basic-suite-master/test-scenarios/004_basic_sanity.py#L385
> ) vm0 should contain an interface with ip 192.168.200.200, but pinging this from
> the engine (192.168.200.4) gives Destination Host Unreachable. And 'ping vm0' resolved
> vm0 as 192.168.201.213 and also couldn't reach vm0.

192.168.201.213 should be working, I can both ping and ssh the
corresponding IP on my computer.  Do you run Lago directly on your
computer or in a VM?  If you run it on your computer, can't it conflict
with another network in your environment?  Maybe checking from inside
the VM as described by Nadav can help.

> And in webadmin (https://engine/ovirt-engine/webadmin/#vms) IP Address
> field is empty.

This is OK, I think ovirt-guest-agent must be installed and running in
the VM to report the IP address.  If it is not then the information is
missing in the UI but that doesn't mean the network is not working.

Regards,
Milan
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] [ovirt-system-tests] ssh to an oVirt VM in Lago

2017-05-31 Thread Barak Korren
On 31 May 2017 at 09:25, Nadav Goldin  wrote:
> Hi,
>
>
> On Wed, May 31, 2017 at 12:53 AM, Valentina Makarova
>  wrote:
>>
>> Is it possible to get an ssh connection to a non-host VM in Lago?
>
> Not at the moment, Lago is not really aware of the nested VMs, just
> the first layer (engine + hosts).
>

Nadav, didn't we put something like this in place for the HE tests?


-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] [ovirt-system-tests] ssh to an oVirt VM in Lago

2017-05-31 Thread Nadav Goldin
Hi,


On Wed, May 31, 2017 at 12:53 AM, Valentina Makarova
 wrote:
>
> Is it possible to get an ssh connection to a non-host VM in Lago?

Not at the moment, Lago is not really aware of the nested VMs, just
the first layer (engine + hosts).

> It is easy for a host VM and the engine - there is an 'ssh' method in
> ovirt-engine-api-model.

This is actually a Lago method, not an 'ovirt-engine-sdk' one.

> Please give me advice - can I get a connection to vm0 in a similar way?

I would start by first making sure the VM is booting properly using
SPICE from the GUI. When the tests end, you should be able to log
into the Engine GUI (run 'lago ovirt status' inside your deployment
directory to get the link; the directory should be something like
ovirt-system-tests/deployment-SUITE-NAME).
Then in the GUI, start the VM and click on 'console'; the
username/password should be root/123456. The image used is CirrOS.

If it is booting properly - the first thing would be to check if it even gets
an IP (my guess is not - I'm not even sure if DHCP is running in that
layer; maybe we should set up one...).
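
A quick way to check that from inside the CirrOS console (cirros-dhcpc is the
DHCP helper CirrOS ships; the path and interface name are assumptions):

  ip addr show eth0                 # does the nic have an address at all?
  sudo /sbin/cirros-dhcpc up eth0   # try to obtain one via DHCP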
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] ovirt-system-tests 4.1 is failling due to VmDisksResource class

2017-01-12 Thread Eyal Edri
On Thu, Jan 12, 2017 at 10:35 AM, Yaniv Kaul  wrote:

>
>
> On Thu, Jan 12, 2017 at 10:25 AM, Daniel Belenky 
> wrote:
>
>> Hi all,
>>
>> ovirt-system-tests are failing with the following error:
>>
>> ERROR [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-1) [] 
>> Can't find relative path for class 
>> "org.ovirt.engine.api.resource.VmDisksResource", will return null
>>
>>
> This is a known issue ( https://bugzilla.redhat.com/
> show_bug.cgi?id=1410038 ) and is not a cause for failure.
>
>
>> The error began on 4/1.
>>
>
> This is really outdated. How did it work yesterday?
>


The system tests are running on a different repo, not the latest experimental
one, so they are testing ovirt-4.1-pre; if there are known issues there, they
won't get fixed until we refresh oVirt 4.1.

Daniel - I think we should ignore this job until we refresh oVirt; let's
focus on the experimental flows, which run on the latest code.


>
>
>> can someone take a look please?
>>
>
> What is the test that is failing?
>
>
>> Attached all the logs under /var/log/ovirt-engine , the error I've
>> mentioned above is seen in *engine.log*.
>>
>> Can someone please take a look?
>>
>
> Job link?
> Y.
>
>
>> Thanks,
>> --
>>
>> *Daniel Belenky*
>>
>> *RHV DevOps*
>>
>> *Red Hat Israel*
>>
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>



-- 
Eyal Edri
Associate Manager
RHV DevOps
EMEA ENG Virtualization R&D
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] ovirt-system-tests 4.1 is failling due to VmDisksResource class

2017-01-12 Thread Yaniv Kaul
On Thu, Jan 12, 2017 at 10:25 AM, Daniel Belenky 
wrote:

> Hi all,
>
> ovirt-system-tests are failing with the following error:
>
> ERROR [org.ovirt.engine.api.restapi.util.LinkHelper] (default task-1) [] 
> Can't find relative path for class 
> "org.ovirt.engine.api.resource.VmDisksResource", will return null
>
>
This is a known issue ( https://bugzilla.redhat.com/show_bug.cgi?id=1410038
) and is not a cause for failure.


> The error began on 4/1.
>

This is really outdated. How did it work yesterday?


> can someone take a look please?
>

What is the test that is failing?


> Attached all the logs under /var/log/ovirt-engine , the error I've
> mentioned above is seen in *engine.log*.
>
> Can someone please take a look?
>

Job link?
Y.


> Thanks,
> --
>
> *Daniel Belenky*
>
> *RHV DevOps*
>
> *Red Hat Israel*
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Ovirt system tests fail during ldap tests

2017-01-10 Thread Yaniv Kaul
On Tue, Jan 10, 2017 at 12:32 PM, Denis Chaplygin 
wrote:

> Hello!
>
>
> I tried that patch; the error is still here:
>
>   # add_ldap_provider:
> * Copy /tmp/dchaplyg/tmpbFDk1Q to lago-basic-suite-master-
> engine:/tmp/dchaplyg/tmpbFDk1Q:
> * Copy /tmp/dchaplyg/tmpbFDk1Q to lago-basic-suite-master-
> engine:/tmp/dchaplyg/tmpbFDk1Q: ERROR (in 0:00:00)
>

I wonder why it is not just copying it to /tmp/ - I'm sure
it's related to the fact it uses the username as well (and I doubt that
directory exists on the engine!)

I think it's partially my fault - it was an overkill ask I did once (from
Ondra) to make sure the filename is unique. We don't do it anywhere else -
we just copy it to (for example) /root/ and are done with it - which
should be OK on a clean Engine host.

If I can get o-s-t running, I'll send a patch changing it to something sane.
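
A minimal sketch of that saner behaviour - the engine_vm handle and copy_to
call here are illustrative assumptions, not the actual Lago API:

  import os

  answer_file = '/tmp/dchaplyg/tmpbFDk1Q'   # whatever tempfile produced locally
  # Copy to a fixed directory that exists on a clean engine, instead of
  # mirroring the local per-user tmp path on the remote side:
  engine_vm.copy_to(answer_file, '/root/' + os.path.basename(answer_file))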
Y.

>
>
> On Mon, Jan 9, 2017 at 2:43 PM, Eyal Edri  wrote:
>
>> We also have an open patch to add 'accept defaults' for ldap setup, it
>> might help avoid missing answer file options?
>> Roy - can you move the draft to patch mode?
>>
>> [1] https://gerrit.ovirt.org/#/c/69164/
>>
>> On Mon, Jan 9, 2017 at 3:37 PM, Ondra Machacek 
>> wrote:
>>
>>> For some reason copying of the aaa-ldap-setup answer file failed.
>>> It looks like some infra issue, is it reproducable?
>>>
>>> On Mon, Jan 9, 2017 at 11:19 AM, Denis Chaplygin 
>>> wrote:
>>> > Hi Martin,
>>> >
>>> > Sure!
>>> >
>>> > Here is console output:
>>> > @ Run test: 099_aaa-ldap.py:
>>> > nose.config: INFO: Ignoring files matching ['^\\.', '^_',
>>> '^setup\\.py$']
>>> >   # add_ldap_provider:
>>> > * Copy /tmp/dchaplyg/tmp2BINqr to
>>> > lago-basic-suite-master-engine:/tmp/dchaplyg/tmp2BINqr:
>>> > * Copy /tmp/dchaplyg/tmp2BINqr to
>>> > lago-basic-suite-master-engine:/tmp/dchaplyg/tmp2BINqr: ERROR (in
>>> 0:00:00)
>>> > * Collect artifacts:
>>> > * Collect artifacts: ERROR (in 0:00:01)
>>> >   # add_ldap_provider: ERROR (in 0:00:02)
>>> >   # add_ldap_user:
>>> > * Collect artifacts:
>>> > * Collect artifacts: ERROR (in 0:00:01)
>>> >   # add_ldap_user: ERROR (in 0:00:02)
>>> >   # Results located at
>>> > /home/dchaplyg/ovirt-system-tests/deployment-basic-suite-mas
>>> ter/default/nosetests-099_aaa-ldap.py.xml
>>> > @ Run test: 099_aaa-ldap.py: ERROR (in 0:00:04)
>>> > Error occured, aborting
>>> > Traceback (most recent call last):
>>> >   File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 281,
>>> in
>>> > do_run
>>> > self.cli_plugins[args.ovirtverb].do_run(args)
>>> >   File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line
>>> 184, in
>>> > do_run
>>> > self._do_run(**vars(args))
>>> >   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 489, in
>>> > wrapper
>>> > return func(*args, **kwargs)
>>> >   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 500, in
>>> > wrapper
>>> > return func(*args, prefix=prefix, **kwargs)
>>> >   File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 105,
>>> in
>>> > do_ovirt_runtest
>>> > raise RuntimeError('Some tests failed')
>>> > RuntimeError: Some tests failed
>>> >
>>> > Other logs are in the attachment.
>>> >
>>> >
>>> > On Mon, Jan 9, 2017 at 10:47 AM, Martin Perina 
>>> wrote:
>>> >>
>>> >> Hi Denis,
>>> >>
>>> >> could you please share logs?
>>> >>
>>> >> Thanks
>>> >>
>>> >> Martin
>>> >>
>>> >>
>>> >> On Mon, Jan 9, 2017 at 10:36 AM, Denis Chaplygin
>>> >> wrote:
>>> >>>
>>> >>> Hello!
>>> >>>
>>> >>> I tried to play with the system tests and discovered that some
>>> suites are
>>> >>> always failing on my side, and the failure seems to be related to the test
>>> >>> preparation procedure:
>>> >>>
>>> >>>   # add_ldap_provider:
>>> >>> * Copy /tmp/dchaplyg/tmpVIWROd to
>>> >>> lago-basic-suite-master-engine:/tmp/dchaplyg/tmpVIWROd:
>>> >>> * Copy /tmp/dchaplyg/tmpVIWROd to
>>> >>> lago-basic-suite-master-engine:/tmp/dchaplyg/tmpVIWROd: ERROR (in
>>> 0:00:00)
>>> >>>
>>> >>>
>>> >>> I got that error with basic-suite-4.0 and basic-suite-master
>>> >>>
>>> >>> What could be wrong?
>>> >>>
>>> >>> ___
>>> >>> Devel mailing list
>>> >>> Devel@ovirt.org
>>> >>> http://lists.ovirt.org/mailman/listinfo/devel
>>> >>
>>> >>
>>> >
>>> ___
>>> Devel mailing list
>>> Devel@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/devel
>>>
>>
>>
>>
>> --
>> Eyal Edri
>> Associate Manager
>> RHV DevOps
>> EMEA ENG Virtualization R&D
>> Red Hat Israel
>>
>> phone: +972-9-7692018
>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>
>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org

Re: [ovirt-devel] Ovirt system tests fail during ldap tests

2017-01-10 Thread Denis Chaplygin
Hello!


I tried that patch; the error is still here:

  # add_ldap_provider:
* Copy /tmp/dchaplyg/tmpbFDk1Q to
lago-basic-suite-master-engine:/tmp/dchaplyg/tmpbFDk1Q:
* Copy /tmp/dchaplyg/tmpbFDk1Q to
lago-basic-suite-master-engine:/tmp/dchaplyg/tmpbFDk1Q: ERROR (in 0:00:00)


On Mon, Jan 9, 2017 at 2:43 PM, Eyal Edri  wrote:

> We also have an open patch to add 'accept defaults' for ldap setup, it
> might help avoid missing answer file options?
> Roy - can you move the draft to patch mode?
>
> [1] https://gerrit.ovirt.org/#/c/69164/
>
> On Mon, Jan 9, 2017 at 3:37 PM, Ondra Machacek 
> wrote:
>
>> For some reason copying of the aaa-ldap-setup answer file failed.
>> It looks like some infra issue, is it reproducible?
>>
>> On Mon, Jan 9, 2017 at 11:19 AM, Denis Chaplygin 
>> wrote:
>> > Hi Martin,
>> >
>> > Sure!
>> >
>> > Here is console output:
>> > @ Run test: 099_aaa-ldap.py:
>> > nose.config: INFO: Ignoring files matching ['^\\.', '^_',
>> '^setup\\.py$']
>> >   # add_ldap_provider:
>> > * Copy /tmp/dchaplyg/tmp2BINqr to
>> > lago-basic-suite-master-engine:/tmp/dchaplyg/tmp2BINqr:
>> > * Copy /tmp/dchaplyg/tmp2BINqr to
>> > lago-basic-suite-master-engine:/tmp/dchaplyg/tmp2BINqr: ERROR (in
>> 0:00:00)
>> > * Collect artifacts:
>> > * Collect artifacts: ERROR (in 0:00:01)
>> >   # add_ldap_provider: ERROR (in 0:00:02)
>> >   # add_ldap_user:
>> > * Collect artifacts:
>> > * Collect artifacts: ERROR (in 0:00:01)
>> >   # add_ldap_user: ERROR (in 0:00:02)
>> >   # Results located at
>> > /home/dchaplyg/ovirt-system-tests/deployment-basic-suite-mas
>> ter/default/nosetests-099_aaa-ldap.py.xml
>> > @ Run test: 099_aaa-ldap.py: ERROR (in 0:00:04)
>> > Error occured, aborting
>> > Traceback (most recent call last):
>> >   File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 281,
>> in
>> > do_run
>> > self.cli_plugins[args.ovirtverb].do_run(args)
>> >   File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line
>> 184, in
>> > do_run
>> > self._do_run(**vars(args))
>> >   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 489, in
>> > wrapper
>> > return func(*args, **kwargs)
>> >   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 500, in
>> > wrapper
>> > return func(*args, prefix=prefix, **kwargs)
>> >   File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 105,
>> in
>> > do_ovirt_runtest
>> > raise RuntimeError('Some tests failed')
>> > RuntimeError: Some tests failed
>> >
>> > Other logs are in the attachment.
>> >
>> >
>> > On Mon, Jan 9, 2017 at 10:47 AM, Martin Perina 
>> wrote:
>> >>
>> >> Hi Denis,
>> >>
>> >> could you please share logs?
>> >>
>> >> Thanks
>> >>
>> >> Martin
>> >>
>> >>
>> >> On Mon, Jan 9, 2017 at 10:36 AM, Denis Chaplygin 
>> >> wrote:
>> >>>
>> >>> Hello!
>> >>>
>> >>> I tried to play with the system tests and discovered that some suites
>> are
>> >>> always failing on my side, and the failure seems to be related to the test
>> >>> preparation procedure:
>> >>>
>> >>>   # add_ldap_provider:
>> >>> * Copy /tmp/dchaplyg/tmpVIWROd to
>> >>> lago-basic-suite-master-engine:/tmp/dchaplyg/tmpVIWROd:
>> >>> * Copy /tmp/dchaplyg/tmpVIWROd to
>> >>> lago-basic-suite-master-engine:/tmp/dchaplyg/tmpVIWROd: ERROR (in
>> 0:00:00)
>> >>>
>> >>>
>> >>> I got that error with basic-suite-4.0 and basic-suite-master
>> >>>
>> >>> What could be wrong?
>> >>>
>> >>> ___
>> >>> Devel mailing list
>> >>> Devel@ovirt.org
>> >>> http://lists.ovirt.org/mailman/listinfo/devel
>> >>
>> >>
>> >
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>
>
>
> --
> Eyal Edri
> Associate Manager
> RHV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Ovirt system tests fail during ldap tests

2017-01-09 Thread Eyal Edri
We also have an open patch to add 'accept defaults' for ldap setup, it
might help avoid missing answer file options?
Roy - can you move the draft to patch mode?

[1] https://gerrit.ovirt.org/#/c/69164/

On Mon, Jan 9, 2017 at 3:37 PM, Ondra Machacek  wrote:

> For some reason copying of the aaa-ldap-setup answer file failed.
> It looks like some infra issue, is it reproducible?
>
> On Mon, Jan 9, 2017 at 11:19 AM, Denis Chaplygin 
> wrote:
> > Hi Martin,
> >
> > Sure!
> >
> > Here is console output:
> > @ Run test: 099_aaa-ldap.py:
> > nose.config: INFO: Ignoring files matching ['^\\.', '^_', '^setup\\.py$']
> >   # add_ldap_provider:
> > * Copy /tmp/dchaplyg/tmp2BINqr to
> > lago-basic-suite-master-engine:/tmp/dchaplyg/tmp2BINqr:
> > * Copy /tmp/dchaplyg/tmp2BINqr to
> > lago-basic-suite-master-engine:/tmp/dchaplyg/tmp2BINqr: ERROR (in
> 0:00:00)
> > * Collect artifacts:
> > * Collect artifacts: ERROR (in 0:00:01)
> >   # add_ldap_provider: ERROR (in 0:00:02)
> >   # add_ldap_user:
> > * Collect artifacts:
> > * Collect artifacts: ERROR (in 0:00:01)
> >   # add_ldap_user: ERROR (in 0:00:02)
> >   # Results located at
> > /home/dchaplyg/ovirt-system-tests/deployment-basic-suite-
> master/default/nosetests-099_aaa-ldap.py.xml
> > @ Run test: 099_aaa-ldap.py: ERROR (in 0:00:04)
> > Error occured, aborting
> > Traceback (most recent call last):
> >   File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 281, in
> > do_run
> > self.cli_plugins[args.ovirtverb].do_run(args)
> >   File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line
> 184, in
> > do_run
> > self._do_run(**vars(args))
> >   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 489, in
> > wrapper
> > return func(*args, **kwargs)
> >   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 500, in
> > wrapper
> > return func(*args, prefix=prefix, **kwargs)
> >   File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 105, in
> > do_ovirt_runtest
> > raise RuntimeError('Some tests failed')
> > RuntimeError: Some tests failed
> >
> > Other logs are in the attachment.
> >
> >
> > On Mon, Jan 9, 2017 at 10:47 AM, Martin Perina 
> wrote:
> >>
> >> Hi Denis,
> >>
> >> could you please share logs?
> >>
> >> Thanks
> >>
> >> Martin
> >>
> >>
> >> On Mon, Jan 9, 2017 at 10:36 AM, Denis Chaplygin 
> >> wrote:
> >>>
> >>> Hello!
> >>>
> >>> I tried to play with the system tests and discovered that some suites
> are
> >>> always failing on my side, and the failure seems to be related to the test
> >>> preparation procedure:
> >>>
> >>>   # add_ldap_provider:
> >>> * Copy /tmp/dchaplyg/tmpVIWROd to
> >>> lago-basic-suite-master-engine:/tmp/dchaplyg/tmpVIWROd:
> >>> * Copy /tmp/dchaplyg/tmpVIWROd to
> >>> lago-basic-suite-master-engine:/tmp/dchaplyg/tmpVIWROd: ERROR (in
> 0:00:00)
> >>>
> >>>
> >>> I got that error with basic-suite-4.0 and basic-suite-master
> >>>
> >>> What could be wrong?
> >>>
> >>> ___
> >>> Devel mailing list
> >>> Devel@ovirt.org
> >>> http://lists.ovirt.org/mailman/listinfo/devel
> >>
> >>
> >
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>



-- 
Eyal Edri
Associate Manager
RHV DevOps
EMEA ENG Virtualization R&D
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] Ovirt system tests fail during ldap tests

2017-01-09 Thread Ondra Machacek
For some reason copying of the aaa-ldap-setup answer file failed.
It looks like some infra issue, is it reproducible?

On Mon, Jan 9, 2017 at 11:19 AM, Denis Chaplygin  wrote:
> Hi Martin,
>
> Sure!
>
> Here is console output:
> @ Run test: 099_aaa-ldap.py:
> nose.config: INFO: Ignoring files matching ['^\\.', '^_', '^setup\\.py$']
>   # add_ldap_provider:
> * Copy /tmp/dchaplyg/tmp2BINqr to
> lago-basic-suite-master-engine:/tmp/dchaplyg/tmp2BINqr:
> * Copy /tmp/dchaplyg/tmp2BINqr to
> lago-basic-suite-master-engine:/tmp/dchaplyg/tmp2BINqr: ERROR (in 0:00:00)
> * Collect artifacts:
> * Collect artifacts: ERROR (in 0:00:01)
>   # add_ldap_provider: ERROR (in 0:00:02)
>   # add_ldap_user:
> * Collect artifacts:
> * Collect artifacts: ERROR (in 0:00:01)
>   # add_ldap_user: ERROR (in 0:00:02)
>   # Results located at
> /home/dchaplyg/ovirt-system-tests/deployment-basic-suite-master/default/nosetests-099_aaa-ldap.py.xml
> @ Run test: 099_aaa-ldap.py: ERROR (in 0:00:04)
> Error occured, aborting
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 281, in
> do_run
> self.cli_plugins[args.ovirtverb].do_run(args)
>   File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line 184, in
> do_run
> self._do_run(**vars(args))
>   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 489, in
> wrapper
> return func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 500, in
> wrapper
> return func(*args, prefix=prefix, **kwargs)
>   File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 105, in
> do_ovirt_runtest
> raise RuntimeError('Some tests failed')
> RuntimeError: Some tests failed
>
> Other logs are in the attachment.
>
>
> On Mon, Jan 9, 2017 at 10:47 AM, Martin Perina  wrote:
>>
>> Hi Denis,
>>
>> could you please share logs?
>>
>> Thanks
>>
>> Martin
>>
>>
>> On Mon, Jan 9, 2017 at 10:36 AM, Denis Chaplygin 
>> wrote:
>>>
>>> Hello!
>>>
>>> I tried to play with the system tests and discovered that some suites are
>>> always failing on my side, and the failure seems to be related to the test
>>> preparation procedure:
>>>
>>>   # add_ldap_provider:
>>> * Copy /tmp/dchaplyg/tmpVIWROd to
>>> lago-basic-suite-master-engine:/tmp/dchaplyg/tmpVIWROd:
>>> * Copy /tmp/dchaplyg/tmpVIWROd to
>>> lago-basic-suite-master-engine:/tmp/dchaplyg/tmpVIWROd: ERROR (in 0:00:00)
>>>
>>>
>>> I got that error with basic-suite-4.0 and basic-suite-master
>>>
>>> What could be wrong?
>>>
>>> ___
>>> Devel mailing list
>>> Devel@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>>
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] Ovirt system tests fail during ldap tests

2017-01-09 Thread Martin Perina
Hi Denis,

could you please share logs?

Thanks

Martin


On Mon, Jan 9, 2017 at 10:36 AM, Denis Chaplygin 
wrote:

> Hello!
>
> I tried to play with the system tests and discovered that some suites are
> always failing on my side, and the failure seems to be related to the test
> preparation procedure:
>
>   # add_ldap_provider:
> * Copy /tmp/dchaplyg/tmpVIWROd to 
> lago-basic-suite-master-engine:/tmp/dchaplyg/tmpVIWROd:
>
> * Copy /tmp/dchaplyg/tmpVIWROd to 
> lago-basic-suite-master-engine:/tmp/dchaplyg/tmpVIWROd:
> ERROR (in 0:00:00)
>
>
> I got that error with basic-suite-4.0 and basic-suite-master
>
> What could be wrong?
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] Ovirt system tests fail during ldap tests

2017-01-09 Thread Denis Chaplygin
Hello!

I tried to play with the system tests and discovered that some suites are
always failing on my side, and the failure seems to be related to the test
preparation procedure:

  # add_ldap_provider:
* Copy /tmp/dchaplyg/tmpVIWROd to
lago-basic-suite-master-engine:/tmp/dchaplyg/tmpVIWROd:

* Copy /tmp/dchaplyg/tmpVIWROd to
lago-basic-suite-master-engine:/tmp/dchaplyg/tmpVIWROd:
ERROR (in 0:00:00)


I got that error with basic-suite-4.0 and basic-suite-master

What could be wrong?
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] oVirt system tests currently failing to AddHost on master

2016-12-25 Thread Dan Kenigsberg
On Sun, Dec 25, 2016 at 4:53 PM, Eyal Edri  wrote:
> Why do we need to wait for a review and not revert the offending patch in
> the meantime?

I want to verify that a revert helps, but I'm getting:

14:41:29  > git -c core.askpass=true fetch --tags --progress
git://gerrit.ovirt.org/vdsm.git refs/changes/93/69093/1  --prune
14:41:29 ERROR: Error fetching remote repo 'origin'
14:41:29 hudson.plugins.git.GitException: Failed to fetch from
git://gerrit.ovirt.org/vdsm.git

Review of a forward-looking fix can be done in parallel.
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] oVirt system tests currently failing to AddHost on master

2016-12-25 Thread Eyal Edri
Why do we need to wait for a review and not revert the offending patch in
the meantime?

On Sun, Dec 25, 2016 at 4:26 PM, Dan Kenigsberg  wrote:

> On Sun, Dec 25, 2016 at 3:59 PM, Dan Kenigsberg  wrote:
> > On Sun, Dec 25, 2016 at 3:40 PM, Barak Korren 
> wrote:
> >>>
> >>> Can you tell if /var/run/vdsm/svdsm.sock exists?
> >>
> >> # ls -l /var/run/vdsm/svdsm.sock
> >> ls: cannot access /var/run/vdsm/svdsm.sock: No such file or directory
> >>
> >>> What's `ls -lZ /var/run/vdsm` ?
> >>
> >> # ls -lZ /var/run/vdsm
> >> ls: cannot access /var/run/vdsm: No such file or directory
> >>
> >>> Can you manually run supervdsmServer --sockfile=/var/run/vdsm/
> svdsm.sock
> >>> on that host?
> >>
> >> I get this:
> >>
> >> [2016-12-25 08:39:01,224 pyinotify ERROR] add_watch: cannot watch
> >> /var/run/vdsm/sourceRoutes WD=-1, Errno=Not a directory (ENOTDIR)
> >> Exception in thread sourceRoute:
> >> Traceback (most recent call last):
> >>   File "/usr/lib64/python2.7/threading.py", line 811, in
> __bootstrap_inner
> >> self.run()
> >>   File "/usr/lib64/python2.7/threading.py", line 764, in run
> >> self.__target(*self.__args, **self.__kwargs)
> >>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 368, in
> wrapper
> >> return f(*a, **kw)
> >>   File "/usr/lib/python2.7/site-packages/vdsm/network/
> sourceroutethread.py",
> >> line 90, in _subscribeToInotifyLoop
> >> for filePath in sorted(os.listdir(SOURCE_ROUTES_FOLDER)):
> >> OSError: [Errno 20] Not a directory: '/var/run/vdsm/sourceRoutes'
> >
> > Now that's completely my fault.
> >
> > I probably did not verify https://gerrit.ovirt.org/#/c/68662/ on a
> > clean installation.
> >
> > Barak, do you know if there's a standard way of triggering
> > systemd-tmpfiles in %post ?
> >
> > https://gerrit.ovirt.org/#/c/69092/ would revert my offending patch.
>
> But https://gerrit.ovirt.org/#/c/69093/1/vdsm.spec.in may be a proper
> fix. Would you care to review?
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
>
>


-- 
Eyal Edri
Associate Manager
RHV DevOps
EMEA ENG Virtualization R&D
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] oVirt system tests currently failing to AddHost on master

2016-12-25 Thread Dan Kenigsberg
On Sun, Dec 25, 2016 at 3:59 PM, Dan Kenigsberg  wrote:
> On Sun, Dec 25, 2016 at 3:40 PM, Barak Korren  wrote:
>>>
>>> Can you tell if /var/run/vdsm/svdsm.sock exists?
>>
>> # ls -l /var/run/vdsm/svdsm.sock
>> ls: cannot access /var/run/vdsm/svdsm.sock: No such file or directory
>>
>>> What's `ls -lZ /var/run/vdsm` ?
>>
>> # ls -lZ /var/run/vdsm
>> ls: cannot access /var/run/vdsm: No such file or directory
>>
>>> Can you manually run supervdsmServer --sockfile=/var/run/vdsm/svdsm.sock
>>> on that host?
>>
>> I get this:
>>
>> [2016-12-25 08:39:01,224 pyinotify ERROR] add_watch: cannot watch
>> /var/run/vdsm/sourceRoutes WD=-1, Errno=Not a directory (ENOTDIR)
>> Exception in thread sourceRoute:
>> Traceback (most recent call last):
>>   File "/usr/lib64/python2.7/threading.py", line 811, in __bootstrap_inner
>> self.run()
>>   File "/usr/lib64/python2.7/threading.py", line 764, in run
>> self.__target(*self.__args, **self.__kwargs)
>>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 368, in wrapper
>> return f(*a, **kw)
>>   File "/usr/lib/python2.7/site-packages/vdsm/network/sourceroutethread.py",
>> line 90, in _subscribeToInotifyLoop
>> for filePath in sorted(os.listdir(SOURCE_ROUTES_FOLDER)):
>> OSError: [Errno 20] Not a directory: '/var/run/vdsm/sourceRoutes'
>
> Now that's completely my fault.
>
> I probably did not verify https://gerrit.ovirt.org/#/c/68662/ on a
> clean installation.
>
> Barak, do you know if there's a standard way of triggering
> systemd-tmpfiles in %post ?
>
> https://gerrit.ovirt.org/#/c/69092/ would revert my offending patch.

But https://gerrit.ovirt.org/#/c/69093/1/vdsm.spec.in may be a proper
fix. Would you care to review?
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] oVirt system tests currently failing to AddHost on master

2016-12-25 Thread Dan Kenigsberg
On Sun, Dec 25, 2016 at 3:40 PM, Barak Korren  wrote:
>>
>> Can you tell if /var/run/vdsm/svdsm.sock exists?
>
> # ls -l /var/run/vdsm/svdsm.sock
> ls: cannot access /var/run/vdsm/svdsm.sock: No such file or directory
>
>> What's `ls -lZ /var/run/vdsm` ?
>
> # ls -lZ /var/run/vdsm
> ls: cannot access /var/run/vdsm: No such file or directory
>
>> Can you manually run supervdsmServer --sockfile=/var/run/vdsm/svdsm.sock
>> on that host?
>
> I get this:
>
> [2016-12-25 08:39:01,224 pyinotify ERROR] add_watch: cannot watch
> /var/run/vdsm/sourceRoutes WD=-1, Errno=Not a directory (ENOTDIR)
> Exception in thread sourceRoute:
> Traceback (most recent call last):
>   File "/usr/lib64/python2.7/threading.py", line 811, in __bootstrap_inner
> self.run()
>   File "/usr/lib64/python2.7/threading.py", line 764, in run
> self.__target(*self.__args, **self.__kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 368, in wrapper
> return f(*a, **kw)
>   File "/usr/lib/python2.7/site-packages/vdsm/network/sourceroutethread.py",
> line 90, in _subscribeToInotifyLoop
> for filePath in sorted(os.listdir(SOURCE_ROUTES_FOLDER)):
> OSError: [Errno 20] Not a directory: '/var/run/vdsm/sourceRoutes'

Now that's completely my fault.

I probably did not verify https://gerrit.ovirt.org/#/c/68662/ on a
clean installation.

Barak, do you know if there's a standard way of triggering
systemd-tmpfiles in %post ?

https://gerrit.ovirt.org/#/c/69092/ would revert my offending patch.
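
For reference, one common pattern, sketched under assumptions (the file name
and ownership here are illustrative, not the merged vdsm fix): ship a
tmpfiles.d entry so the directory is recreated on every boot, and apply it
once from %post so it also exists right after installation:

  # /usr/lib/tmpfiles.d/vdsm.conf (illustrative):
  d /run/vdsm 0755 vdsm kvm -

  # vdsm.spec %post scriptlet (illustrative):
  %post
  systemd-tmpfiles --create %{_tmpfilesdir}/vdsm.conf >/dev/null 2>&1 || :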
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] oVirt system tests currently failing to AddHost on master

2016-12-25 Thread Barak Korren
>
> Can you tell if /var/run/vdsm/svdsm.sock exists?

# ls -l /var/run/vdsm/svdsm.sock
ls: cannot access /var/run/vdsm/svdsm.sock: No such file or directory

> What's `ls -lZ /var/run/vdsm` ?

# ls -lZ /var/run/vdsm
ls: cannot access /var/run/vdsm: No such file or directory

> Can you manually run supervdsmServer --sockfile=/var/run/vdsm/svdsm.sock
> on that host?

I get this:

[2016-12-25 08:39:01,224 pyinotify ERROR] add_watch: cannot watch
/var/run/vdsm/sourceRoutes WD=-1, Errno=Not a directory (ENOTDIR)
Exception in thread sourceRoute:
Traceback (most recent call last):
  File "/usr/lib64/python2.7/threading.py", line 811, in __bootstrap_inner
self.run()
  File "/usr/lib64/python2.7/threading.py", line 764, in run
self.__target(*self.__args, **self.__kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 368, in wrapper
return f(*a, **kw)
  File "/usr/lib/python2.7/site-packages/vdsm/network/sourceroutethread.py",
line 90, in _subscribeToInotifyLoop
for filePath in sorted(os.listdir(SOURCE_ROUTES_FOLDER)):
OSError: [Errno 20] Not a directory: '/var/run/vdsm/sourceRoutes'



-- 
Barak Korren
bkor...@redhat.com
RHCE, RHCi, RHV-DevOps Team
https://ifireball.wordpress.com/
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] oVirt system tests currently failing to AddHost on master

2016-12-25 Thread Dan Kenigsberg
On Sun, Dec 25, 2016 at 3:01 PM, Barak Korren  wrote:
> On 23 December 2016 at 21:43, Nir Soffer  wrote:
>>
>> The fix seems to work, do you want to review or test it?
>> https://gerrit.ovirt.org/69052
>>
>
>
> Fix merged, and looking at hosts it seems to do what it was supposed to:
>
> # ls -ld /var/log/ovirt-imageio-daemon/
> drwxr-xr-x. 2 vdsm kvm 4096 Dec 25 07:45 /var/log/ovirt-imageio-daemon/
>
>
> But now we have a new failure, SuperVDSM not starting with this in the logs:
>
> MainThread::DEBUG::2016-12-25
> 07:45:36,018::supervdsmServer::275::SuperVdsm.Server::(main) Making
> sure I'm root - SuperVdsm
> MainThread::DEBUG::2016-12-25
> 07:45:36,018::supervdsmServer::284::SuperVdsm.Server::(main) Parsing
> cmd args
> MainThread::DEBUG::2016-12-25
> 07:45:36,018::supervdsmServer::287::SuperVdsm.Server::(main) Cleaning
> old socket /var/run/vdsm/svdsm.sock
> MainThread::DEBUG::2016-12-25
> 07:45:36,018::supervdsmServer::291::SuperVdsm.Server::(main) Setting
> up keep alive thread
> MainThread::DEBUG::2016-12-25
> 07:45:36,018::supervdsmServer::297::SuperVdsm.Server::(main) Creating
> remote object manager
> MainThread::ERROR::2016-12-25
> 07:45:36,018::supervdsmServer::321::SuperVdsm.Server::(main) Could not
> start Super Vdsm
> Traceback (most recent call last):
>   File "/usr/share/vdsm/supervdsmServer", line 301, in main
> server = manager.get_server()
>   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 493,
> in get_server
> self._authkey, self._serializer)
>   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 162, in __init__
> self.listener = Listener(address=address, backlog=16)
>   File "/usr/lib64/python2.7/multiprocessing/connection.py", line 136,
> in __init__
> self._listener = SocketListener(address, family, backlog)
>   File "/usr/lib64/python2.7/multiprocessing/connection.py", line 260,
> in __init__
> self._socket.bind(address)
>   File "/usr/lib64/python2.7/socket.py", line 224, in meth
> return getattr(self._sock,name)(*args)
> error: [Errno 2] No such file or directory
> MainThread::DEBUG::2016-12-25
> 07:45:36,338::supervdsmServer::275::SuperVdsm.Server::(main) Making
> sure I'm root - SuperVdsm

Can you tell if /var/run/vdsm/svdsm.sock exists?

What's `ls -lZ /var/run/vdsm` ?

Can you manually run supervdsmServer --sockfile=/var/run/vdsm/svdsm.sock
on that host?
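
As a side note, a quick manual sketch for confirming and working around the
missing directory while debugging (the supervdsmd.service unit name and the
vdsm:kvm ownership are assumptions; restorecon fixes the SELinux label of
the hand-made tree):

ls -ldZ /var/run/vdsm                 # expect "No such file or directory"
mkdir -p /var/run/vdsm/sourceRoutes   # recreate the runtime tree by hand
chown -R vdsm:kvm /var/run/vdsm       # ownership vdsm is assumed to expect
restorecon -Rv /var/run/vdsm          # restore the SELinux context
systemctl restart supervdsmd.service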
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] oVirt system tests currently failing to AddHost on master

2016-12-25 Thread Barak Korren
On 23 December 2016 at 21:43, Nir Soffer  wrote:
>
> The fix seems to work, do you want to review or test it?
> https://gerrit.ovirt.org/69052
>


Fix merged, and looking at hosts it seems to do what it was supposed to:

# ls -ld /var/log/ovirt-imageio-daemon/
drwxr-xr-x. 2 vdsm kvm 4096 Dec 25 07:45 /var/log/ovirt-imageio-daemon/


But now we have a new failure, SuperVDSM not starting with this in the logs:

MainThread::DEBUG::2016-12-25
07:45:36,018::supervdsmServer::275::SuperVdsm.Server::(main) Making
sure I'm root - SuperVdsm
MainThread::DEBUG::2016-12-25
07:45:36,018::supervdsmServer::284::SuperVdsm.Server::(main) Parsing
cmd args
MainThread::DEBUG::2016-12-25
07:45:36,018::supervdsmServer::287::SuperVdsm.Server::(main) Cleaning
old socket /var/run/vdsm/svdsm.sock
MainThread::DEBUG::2016-12-25
07:45:36,018::supervdsmServer::291::SuperVdsm.Server::(main) Setting
up keep alive thread
MainThread::DEBUG::2016-12-25
07:45:36,018::supervdsmServer::297::SuperVdsm.Server::(main) Creating
remote object manager
MainThread::ERROR::2016-12-25
07:45:36,018::supervdsmServer::321::SuperVdsm.Server::(main) Could not
start Super Vdsm
Traceback (most recent call last):
  File "/usr/share/vdsm/supervdsmServer", line 301, in main
server = manager.get_server()
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 493,
in get_server
self._authkey, self._serializer)
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 162, in __init__
self.listener = Listener(address=address, backlog=16)
  File "/usr/lib64/python2.7/multiprocessing/connection.py", line 136,
in __init__
self._listener = SocketListener(address, family, backlog)
  File "/usr/lib64/python2.7/multiprocessing/connection.py", line 260,
in __init__
self._socket.bind(address)
  File "/usr/lib64/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
error: [Errno 2] No such file or directory
MainThread::DEBUG::2016-12-25
07:45:36,338::supervdsmServer::275::SuperVdsm.Server::(main) Making
sure I'm root - SuperVdsm


... Repeat ...



-- 
Barak Korren
bkor...@redhat.com
RHCE, RHCi, RHV-DevOps Team
https://ifireball.wordpress.com/
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] oVirt system tests currently failing to AddHost on master

2016-12-23 Thread Nir Soffer
On Fri, Dec 23, 2016 at 9:32 PM, Nir Soffer  wrote:
> On Fri, Dec 23, 2016 at 8:59 PM, Barak Korren  wrote:
>> On 23 December 2016 at 20:25, Nir Soffer  wrote:
>>>
>>> This smells like https://bugzilla.redhat.com/1401901
>>>
>>> Can you share the output of:
>>>
>>> ls -ld /var/log/ovirt-imageio-daemon
>>
>> drwxr-xr-x. 2 root root 4096 Dec 21 08:45 /var/log/ovirt-imageio-daemon
>>
>>> ls -l /var/log/ovirt-imageio-daemon/
>>
>> total 0
>
> Thanks, so this is 1401901.
>
> I'm testing a fix.

The fix seems to work, do you want to review or test it?
https://gerrit.ovirt.org/69052

Nir
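
For the archive, the general shape of a %files stanza that hands a log
directory to the daemon's user at install time, so it no longer ends up
root-owned (a sketch of the technique, not the literal content of the fix
above):

%files
# own the log directory and create it with the daemon's user/group
%dir %attr(0755, vdsm, kvm) %{_localstatedir}/log/ovirt-imageio-daemon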
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] oVirt system tests currently failing to AddHost on master

2016-12-23 Thread Nir Soffer
On Fri, Dec 23, 2016 at 8:59 PM, Barak Korren  wrote:
> On 23 December 2016 at 20:25, Nir Soffer  wrote:
>>
>> This smells like https://bugzilla.redhat.com/1401901
>>
>> Can you share the output of:
>>
>> ls -ld /var/log/ovirt-imageio-daemon
>
> drwxr-xr-x. 2 root root 4096 Dec 21 08:45 /var/log/ovirt-imageio-daemon
>
>> ls -l /var/log/ovirt-imageio-daemon/
>
> total 0

Thanks, so this is 1401901.

I'm testing a fix.

Nir
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] oVirt system tests currently failing to AddHost on master

2016-12-23 Thread Barak Korren
On 23 December 2016 at 20:25, Nir Soffer  wrote:
>
> This smells like https://bugzilla.redhat.com/1401901
>
> Can you share the output of:
>
> ls -ld /var/log/ovirt-imageio-daemon

drwxr-xr-x. 2 root root 4096 Dec 21 08:45 /var/log/ovirt-imageio-daemon

> ls -l /var/log/ovirt-imageio-daemon/

total 0


HTH,

-- 
Barak Korren
bkor...@redhat.com
RHCE, RHCi, RHV-DevOps Team
https://ifireball.wordpress.com/
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] oVirt system tests currently failing to AddHost on master

2016-12-23 Thread Nir Soffer
On Fri, Dec 23, 2016 at 6:20 PM, Barak Korren  wrote:
> On 22 December 2016 at 21:56, Nir Soffer  wrote:
>> On Thu, Dec 22, 2016 at 9:12 PM, Fred Rolland  wrote:
>>> SuperVdsm fails to start:
>>>
>>> MainThread::ERROR::2016-12-22
>>> 12:42:08,699::supervdsmServer::317::SuperVdsm.Server::(main) Could not start
>>> Super Vdsm
>>> Traceback (most recent call last):
>>>   File "/usr/share/vdsm/supervdsmServer", line 297, in main
>>> server = manager.get_server()
>>>   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 493, in
>>> get_server
>>> self._authkey, self._serializer)
>>>   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 162, in
>>> __init__
>>> self.listener = Listener(address=address, backlog=16)
>>>   File "/usr/lib64/python2.7/multiprocessing/connection.py", line 136, in
>>> __init__
>>> self._listener = SocketListener(address, family, backlog)
>>>   File "/usr/lib64/python2.7/multiprocessing/connection.py", line 260, in
>>> __init__
>>> self._socket.bind(address)
>>>   File "/usr/lib64/python2.7/socket.py", line 224, in meth
>>> return getattr(self._sock,name)(*args)
>>> error: [Errno 2] No such file or directory
>>>
>>>
>>> On Thu, Dec 22, 2016 at 7:54 PM, Barak Korren  wrote:

 It's hard to tell currently when this started b/c we had so many package
 issues that made the tests fail before reaching that point most of the
 day.

 Since we currently have an issue in Lago with collecting AddHost logs
 (hopefully we'll resolve this in the next release early next week),
 I've run the tests locally and attached the bundle of generated logs
 to this message.

 Included in the attached file are engine logs, host-deploy logs and
 VDSM logs for both test hosts.

 From a quick look inside it seems the issue is with VDSM failing to start.
>>
>> From host-deploy/ovirt-host-deploy-20161222124209-192.168.203.4-604a4799.log:
>>
>> 2016-12-22 12:42:05 DEBUG otopi.plugins.otopi.services.systemd
>> plugin.executeRaw:813 execute: ('/bin/systemctl', 'start',
>> 'vdsmd.service'), executable='None', cwd='None', env=None
>> 2016-12-22 12:42:09 DEBUG otopi.plugins.otopi.services.systemd
>> plugin.executeRaw:863 execute-result: ('/bin/systemctl', 'start',
>> 'vdsmd.service'), rc=1
>> 2016-12-22 12:42:09 DEBUG otopi.plugins.otopi.services.systemd
>> plugin.execute:921 execute-output: ('/bin/systemctl', 'start',
>> 'vdsmd.service') stdout:
>>
>>
>> 2016-12-22 12:42:09 DEBUG otopi.plugins.otopi.services.systemd
>> plugin.execute:926 execute-output: ('/bin/systemctl', 'start',
>> 'vdsmd.service') stderr:
>> A dependency job for vdsmd.service failed. See 'journalctl -xe' for details.
>>
>> This means that one of the services vdsm depends on could not start.
>>
>> 2016-12-22 12:42:09 DEBUG otopi.context context._executeMethod:142
>> method exception
>> Traceback (most recent call last):
>>   File "/tmp/ovirt-bUCuRxXXzU/pythonlib/otopi/context.py", line 132,
>> in _executeMethod
>> method['method']()
>>   File 
>> "/tmp/ovirt-bUCuRxXXzU/otopi-plugins/ovirt-host-deploy/vdsm/packages.py",
>> line 209, in _start
>> self.services.state('vdsmd', True)
>>   File "/tmp/ovirt-bUCuRxXXzU/otopi-plugins/otopi/services/systemd.py",
>> line 141, in state
>> service=name,
>> RuntimeError: Failed to start service 'vdsmd'
>>
>> This error is not very useful for anyone. What we need in the otopi log is
>> the output of journalctl -xe (as suggested by systemctl).
>>
>> Didi, can we collect this info when starting a service fails?
>>
>> Barak, can you log in to the host with this error and collect the output?
>>
> By the time I logged in to the host, all IP addresses are gone (I'm
> guessing the setup process killed dhclient), so I'm having to work via
> the serial console.
>
> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
>valid_lft forever preferred_lft forever
> inet6 ::1/128 scope host
>valid_lft forever preferred_lft forever
> 2: eth0:  mtu 1500 qdisc pfifo_fast
> state UP qlen 1000
> link/ether 54:52:c0:a8:cb:02 brd ff:ff:ff:ff:ff:ff
> inet6 fe80::5652:c0ff:fea8:cb02/64 scope link
>valid_lft forever preferred_lft forever
> 3: eth1:  mtu 1500 qdisc pfifo_fast
> state UP qlen 1000
> link/ether 54:52:c0:a8:cc:02 brd ff:ff:ff:ff:ff:ff
> inet6 fe80::5652:c0ff:fea8:cc02/64 scope link
>valid_lft forever preferred_lft forever
> 4: eth2:  mtu 1500 qdisc pfifo_fast
> state UP qlen 1000
> link/ether 54:52:c0:a8:cc:03 brd ff:ff:ff:ff:ff:ff
> inet6 fe80::5652:c0ff:fea8:cc03/64 scope link
>valid_lft forever preferred_lft forever
> 5: eth3:  mtu 1500 

Re: [ovirt-devel] oVirt system tests currently failing to AddHost on master

2016-12-23 Thread Dan Kenigsberg
On Fri, Dec 23, 2016 at 6:20 PM, Barak Korren  wrote:
> On 22 December 2016 at 21:56, Nir Soffer  wrote:
>> On Thu, Dec 22, 2016 at 9:12 PM, Fred Rolland  wrote:
>>> SuperVdsm fails to start:
>>>
>>> MainThread::ERROR::2016-12-22
>>> 12:42:08,699::supervdsmServer::317::SuperVdsm.Server::(main) Could not start
>>> Super Vdsm
>>> Traceback (most recent call last):
>>>   File "/usr/share/vdsm/supervdsmServer", line 297, in main
>>> server = manager.get_server()
>>>   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 493, in
>>> get_server
>>> self._authkey, self._serializer)
>>>   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 162, in
>>> __init__
>>> self.listener = Listener(address=address, backlog=16)
>>>   File "/usr/lib64/python2.7/multiprocessing/connection.py", line 136, in
>>> __init__
>>> self._listener = SocketListener(address, family, backlog)
>>>   File "/usr/lib64/python2.7/multiprocessing/connection.py", line 260, in
>>> __init__
>>> self._socket.bind(address)
>>>   File "/usr/lib64/python2.7/socket.py", line 224, in meth
>>> return getattr(self._sock,name)(*args)
>>> error: [Errno 2] No such file or directory
>>>
>>>
>>> On Thu, Dec 22, 2016 at 7:54 PM, Barak Korren  wrote:

 It's hard to tell currently when this started b/c we had so many package
 issues that made the tests fail before reaching that point most of the
 day.

 Since we currently have an issue in Lago with collecting AddHost logs
 (hopefully we'll resolve this in the next release early next week),
 I've run the tests locally and attached the bundle of generated logs
 to this message.

 Included in the attached file are engine logs, host-deploy logs and
 VDSM logs for both test hosts.

 From a quick look inside it seems the issue is with VDSM failing to start.
>>
>> From host-deploy/ovirt-host-deploy-20161222124209-192.168.203.4-604a4799.log:
>>
>> 2016-12-22 12:42:05 DEBUG otopi.plugins.otopi.services.systemd
>> plugin.executeRaw:813 execute: ('/bin/systemctl', 'start',
>> 'vdsmd.service'), executable='None', cwd='None', env=None
>> 2016-12-22 12:42:09 DEBUG otopi.plugins.otopi.services.systemd
>> plugin.executeRaw:863 execute-result: ('/bin/systemctl', 'start',
>> 'vdsmd.service'), rc=1
>> 2016-12-22 12:42:09 DEBUG otopi.plugins.otopi.services.systemd
>> plugin.execute:921 execute-output: ('/bin/systemctl', 'start',
>> 'vdsmd.service') stdout:
>>
>>
>> 2016-12-22 12:42:09 DEBUG otopi.plugins.otopi.services.systemd
>> plugin.execute:926 execute-output: ('/bin/systemctl', 'start',
>> 'vdsmd.service') stderr:
>> A dependency job for vdsmd.service failed. See 'journalctl -xe' for details.
>>
>> This means that one of the services vdsm depends on could not start.
>>
>> 2016-12-22 12:42:09 DEBUG otopi.context context._executeMethod:142
>> method exception
>> Traceback (most recent call last):
>>   File "/tmp/ovirt-bUCuRxXXzU/pythonlib/otopi/context.py", line 132,
>> in _executeMethod
>> method['method']()
>>   File 
>> "/tmp/ovirt-bUCuRxXXzU/otopi-plugins/ovirt-host-deploy/vdsm/packages.py",
>> line 209, in _start
>> self.services.state('vdsmd', True)
>>   File "/tmp/ovirt-bUCuRxXXzU/otopi-plugins/otopi/services/systemd.py",
>> line 141, in state
>> service=name,
>> RuntimeError: Failed to start service 'vdsmd'
>>
>> This error is not very useful for anyone. What we need in the otopi log is
>> the output of journalctl -xe (as suggested by systemctl).
>>
>> Didi, can we collect this info when starting a service fails?
>>
>> Barak, can you log in to the host with this error and collect the output?
>>
> By the time I logged in to the host, all IP addresses are gone (I'm
> guessing the setup process killed dhclient), so I'm having to work via
> the serial console.
>
> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
>valid_lft forever preferred_lft forever
> inet6 ::1/128 scope host
>valid_lft forever preferred_lft forever
> 2: eth0:  mtu 1500 qdisc pfifo_fast
> state UP qlen 1000
> link/ether 54:52:c0:a8:cb:02 brd ff:ff:ff:ff:ff:ff
> inet6 fe80::5652:c0ff:fea8:cb02/64 scope link
>valid_lft forever preferred_lft forever
> 3: eth1:  mtu 1500 qdisc pfifo_fast
> state UP qlen 1000
> link/ether 54:52:c0:a8:cc:02 brd ff:ff:ff:ff:ff:ff
> inet6 fe80::5652:c0ff:fea8:cc02/64 scope link
>valid_lft forever preferred_lft forever
> 4: eth2:  mtu 1500 qdisc pfifo_fast
> state UP qlen 1000
> link/ether 54:52:c0:a8:cc:03 brd ff:ff:ff:ff:ff:ff
> inet6 fe80::5652:c0ff:fea8:cc03/64 scope link
>valid_lft forever preferred_lft forever
> 5: eth3:  mtu 1500 

Re: [ovirt-devel] oVirt system tests currently failing to AddHost on master

2016-12-22 Thread Nir Soffer
On Thu, Dec 22, 2016 at 9:12 PM, Fred Rolland  wrote:
> SuperVdsm fails to start:
>
> MainThread::ERROR::2016-12-22
> 12:42:08,699::supervdsmServer::317::SuperVdsm.Server::(main) Could not start
> Super Vdsm
> Traceback (most recent call last):
>   File "/usr/share/vdsm/supervdsmServer", line 297, in main
> server = manager.get_server()
>   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 493, in
> get_server
> self._authkey, self._serializer)
>   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 162, in
> __init__
> self.listener = Listener(address=address, backlog=16)
>   File "/usr/lib64/python2.7/multiprocessing/connection.py", line 136, in
> __init__
> self._listener = SocketListener(address, family, backlog)
>   File "/usr/lib64/python2.7/multiprocessing/connection.py", line 260, in
> __init__
> self._socket.bind(address)
>   File "/usr/lib64/python2.7/socket.py", line 224, in meth
> return getattr(self._sock,name)(*args)
> error: [Errno 2] No such file or directory
>
>
> On Thu, Dec 22, 2016 at 7:54 PM, Barak Korren  wrote:
>>
>> It's hard to tell currently when this started b/c we had so many package
>> issues that made the tests fail before reaching that point most of the
>> day.
>>
>> Since we currently have an issue in Lago with collecting AddHost logs
>> (hopefully we'll resolve this in the next release early next week),
>> I've run the tests locally and attached the bundle of generated logs
>> to this message.
>>
>> Included in the attached file are engine logs, host-deploy logs and
>> VDSM logs for both test hosts.
>>
>> From a quick look inside it seems the issue is with VDSM failing to start.

From host-deploy/ovirt-host-deploy-20161222124209-192.168.203.4-604a4799.log:

2016-12-22 12:42:05 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:813 execute: ('/bin/systemctl', 'start',
'vdsmd.service'), executable='None', cwd='None', env=None
2016-12-22 12:42:09 DEBUG otopi.plugins.otopi.services.systemd
plugin.executeRaw:863 execute-result: ('/bin/systemctl', 'start',
'vdsmd.service'), rc=1
2016-12-22 12:42:09 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:921 execute-output: ('/bin/systemctl', 'start',
'vdsmd.service') stdout:


2016-12-22 12:42:09 DEBUG otopi.plugins.otopi.services.systemd
plugin.execute:926 execute-output: ('/bin/systemctl', 'start',
'vdsmd.service') stderr:
A dependency job for vdsmd.service failed. See 'journalctl -xe' for details.

This means that one of the services vdsm depends on could not start.

2016-12-22 12:42:09 DEBUG otopi.context context._executeMethod:142
method exception
Traceback (most recent call last):
  File "/tmp/ovirt-bUCuRxXXzU/pythonlib/otopi/context.py", line 132,
in _executeMethod
method['method']()
  File "/tmp/ovirt-bUCuRxXXzU/otopi-plugins/ovirt-host-deploy/vdsm/packages.py",
line 209, in _start
self.services.state('vdsmd', True)
  File "/tmp/ovirt-bUCuRxXXzU/otopi-plugins/otopi/services/systemd.py",
line 141, in state
service=name,
RuntimeError: Failed to start service 'vdsmd'

This error is not very useful for anyone. What we need in the otopi log is
the output of journalctl -xe (as suggested by systemctl).

Didi, can we collect this info when starting a service fails?

Barak, can you log in to the host with this error and collect the output?
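
A sketch of the kind of collection step host-deploy could run when a unit
fails to start; journalctl -xe/--no-pager and systemctl --failed are
standard systemd options, and vdsmd.service is just the unit from this
thread:

systemctl start vdsmd.service || {
    # capture the context systemctl points at while it is still fresh
    journalctl -xe --no-pager | tail -n 100
    systemctl --failed --no-legend
}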

>>
>> Thanks,
>>
>> --
>> Barak Korren
>> bkor...@redhat.com
>> RHCE, RHCi, RHV-DevOps Team
>> https://ifireball.wordpress.com/
>>
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>
>
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel


Re: [ovirt-devel] oVirt system tests currently failing to AddHost on master

2016-12-22 Thread Fred Rolland
SuperVdsm fails to start:

MainThread::ERROR::2016-12-22
12:42:08,699::supervdsmServer::317::SuperVdsm.Server::(main) Could not
start Super Vdsm
Traceback (most recent call last):
  File "/usr/share/vdsm/supervdsmServer", line 297, in main
server = manager.get_server()
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 493, in
get_server
self._authkey, self._serializer)
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 162, in
__init__
self.listener = Listener(address=address, backlog=16)
  File "/usr/lib64/python2.7/multiprocessing/connection.py", line 136, in
__init__
self._listener = SocketListener(address, family, backlog)
  File "/usr/lib64/python2.7/multiprocessing/connection.py", line 260, in
__init__
self._socket.bind(address)
  File "/usr/lib64/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
error: [Errno 2] No such file or directory


On Thu, Dec 22, 2016 at 7:54 PM, Barak Korren  wrote:

> It's hard to tell currently when this started b/c we had so many package
> issues that made the tests fail before reaching that point most of the
> day.
>
> Since we currently have an issue in Lago with collecting AddHost logs
> (hopefully we'll resolve this in the next release early next week),
> I've run the tests locally and attached the bundle of generated logs
> to this message.
>
> Included in the attached file are engine logs, host-deploy logs and
> VDSM logs for both test hosts.
>
> From a quick look inside it seems the issue is with VDSM failing to start.
>
> Thanks,
>
> --
> Barak Korren
> bkor...@redhat.com
> RHCE, RHCi, RHV-DevOps Team
> https://ifireball.wordpress.com/
>
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] ovirt-system tests 4.0 using 3.6 code

2016-06-22 Thread Eyal Edri
New patch [1] fixed 4.0.
Let me know if you want to merge it or try to fix the repos for 4.0 instead.


[1] https://gerrit.ovirt.org/#/c/59603/

On Wed, Jun 22, 2016 at 3:03 PM, Eyal Edri  wrote:

>
>
> On Wed, Jun 22, 2016 at 2:55 PM, Yaniv Kaul  wrote:
>
>>
>>
>> On Wed, Jun 22, 2016 at 2:49 PM, Eyal Edri  wrote:
>>
>>>
>>>
>>> On Wed, Jun 22, 2016 at 2:27 PM, Yaniv Kaul  wrote:
>>>


 On Wed, Jun 22, 2016 at 12:28 PM, Eyal Edri  wrote:

> After the recent merge of the test that installs ovirt-cockpit-dashboard
> into 3.6,
> the 4.0 tests failed as well. [1]
>

 They should not have failed. The only reason they could have failed is
 if there were added deps - and there were - to the dashboard.
 Specifically, hosted-engine-setup was added as a dep, bringing with it
 a huge amount of other packages (virt-viewer, which although I requested
 was not removed, which brings spice, which brings GTK...) - overall, ~500
 RPMs (!).

>>>
>>>
>>> They failed because 4.0 was linked to 3.6, I've posted this fix:
>>> https://gerrit.ovirt.org/#/c/59603/
>>> When we'll make it work for 4.0, we'll restore the link.
>>>
>>
>> So 4.0 was broken? Because I've tested on BOTH 3.6 and master - see my
>> comment @ https://gerrit.ovirt.org/#/c/58775/
>>
>
> Yes, it was missing packages:
>
>
> http://jenkins.ovirt.org/job/ovirt_4.0_system-tests/72/testReport/junit/(root)/002_bootstrap/install_cockpit_ovirt/
>
> --> Processing Dependency: virt-viewer for package: 
> ovirt-hosted-engine-setup-2.0.0.3-0.0.master.20160616133444.gitd50de9a.el7.centos.noarch
> --> Processing Dependency: socat for package: 
> ovirt-hosted-engine-setup-2.0.0.3-0.0.master.20160616133444.gitd50de9a.el7.centos.noarch
> --> Processing Dependency: python-libguestfs for package: 
> ovirt-hosted-engine-setup-2.0.0.3-0.0.master.20160616133444.gitd50de9a.el7.centos.noarch
> --> Finished Dependency Resolution
>  You could try using --skip-broken to work around the problem
>  You could try running: rpm -Va --nofiles --nodigest
>
> lago.ssh: DEBUG: Command a953e492 on lago_basic_suite_4_0_host0  errors:
>  Error: Package: 
> ovirt-hosted-engine-setup-2.0.0.3-0.0.master.20160616133444.gitd50de9a.el7.centos.noarch
>  (alocalsync)
>Requires: virt-viewer
> Error: Package: cockpit-ovirt-dashboard-0.10.5-1.0.0.el7.centos.noarch 
> (alocalsync)
>Requires: cockpit
> Error: Package: 
> ovirt-hosted-engine-setup-2.0.0.3-0.0.master.20160616133444.gitd50de9a.el7.centos.noarch
>  (alocalsync)
>Requires: socat
> Error: Package: 
> ovirt-hosted-engine-setup-2.0.0.3-0.0.master.20160616133444.gitd50de9a.el7.centos.noarch
>  (alocalsync)
>Requires: python-libguestfs
>
> lago.utils: ERROR: Error while running thread
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 53, in 
> _ret_via_queue
> queue.put({'return': func()})
>   File 
> "/home/jenkins/workspace/ovirt_4.0_system-tests/ovirt-system-tests/basic_suite_4.0/test-scenarios/002_bootstrap.py",
>  line 163, in _install_cockpit_ovirt_on_host
> nt.assert_equals(ret.code, 0, '_install_cockpit_ovirt_on_host(): failed 
> to install cockpit-ovirt-dashboard on host %s' % host)
>   File "/usr/lib64/python2.7/unittest/case.py", line 551, in assertEqual
> assertion_func(first, second, msg=msg)
>   File "/usr/lib64/python2.7/unittest/case.py", line 544, in _baseAssertEqual
>
>
>
>> Y.
>>
>>
>>>

 This is the reason I thought of abandoning this patch - which I think
 I've commented on in the patch itself.


> And then I found out that some of the 4.0 tests are linked to 3.6
> still, is this intentional?
>

 Yes, for three reasons:
 1. It allows less code duplication.
 2. It allows us to test 4.0 with v3 API.
 3. It allows us to compare 4.0 to 3.6.x.

>>>
>>> I'm fully aware of these, but the fact that we might have different tests /
>>> deps for different versions will require us at some point to split it.
>>> And later on do some refactoring to make sure we're sharing what we can
>>> with all tests.
>>>
>>>


> Should we now create a new separate 4.0 test or change the link to
> master?
>

 We need at some point to add v4 API tests to 4.0.
 Y.


>
> lrwxrwxrwx. 1 eedri eedri 64 Jun 22 00:12 001_initialize_engine.py ->
> ../../basic_suite_master/test-scenarios/001_initialize_engine.py
> lrwxrwxrwx. 1 eedri eedri 53 Jun 22 00:12 002_bootstrap.py ->
> ../../basic_suite_3.6/test-scenarios/002_bootstrap.py
> lrwxrwxrwx. 1 eedri eedri 56 Jun 22 00:12 004_basic_sanity.py ->
> ../../basic_suite_3.6/test-scenarios/004_basic_sanity.py
>
>
>
> [1] http://jenkins.ovirt.org/job/ovirt_4.0_system-tests/72/console
>
> --
> Eyal Edri
> Associate Manager
> RHEV DevOps

Re: [ovirt-devel] ovirt-system tests 4.0 using 3.6 code

2016-06-22 Thread Eyal Edri
On Wed, Jun 22, 2016 at 2:55 PM, Yaniv Kaul  wrote:

>
>
> On Wed, Jun 22, 2016 at 2:49 PM, Eyal Edri  wrote:
>
>>
>>
>> On Wed, Jun 22, 2016 at 2:27 PM, Yaniv Kaul  wrote:
>>
>>>
>>>
>>> On Wed, Jun 22, 2016 at 12:28 PM, Eyal Edri  wrote:
>>>
 After the recent merge of the test that installs ovirt-cockpit-dashboard
 into 3.6,
 the 4.0 tests failed as well. [1]

>>>
>>> They should not have failed. The only reason they could have failed is
>>> if there were added deps - and there were - to the dashboard.
>>> Specifically, hosted-engine-setup was added as a dep, bringing with it a
>>> huge amount of other packages (virt-viewer, which although I requested was
>>> not removed, which brings spice, which brings GTK...) - overall, ~500 RPMs
>>> (!).
>>>
>>
>>
>> They failed because 4.0 was linked to 3.6, I've posted this fix:
>> https://gerrit.ovirt.org/#/c/59603/
>> When we'll make it work for 4.0, we'll restore the link.
>>
>
> So 4.0 was broken? Because I've tested on BOTH 3.6 and master - see my
> comment @ https://gerrit.ovirt.org/#/c/58775/
>

Yes, it was missing packages:

http://jenkins.ovirt.org/job/ovirt_4.0_system-tests/72/testReport/junit/(root)/002_bootstrap/install_cockpit_ovirt/

--> Processing Dependency: virt-viewer for package:
ovirt-hosted-engine-setup-2.0.0.3-0.0.master.20160616133444.gitd50de9a.el7.centos.noarch
--> Processing Dependency: socat for package:
ovirt-hosted-engine-setup-2.0.0.3-0.0.master.20160616133444.gitd50de9a.el7.centos.noarch
--> Processing Dependency: python-libguestfs for package:
ovirt-hosted-engine-setup-2.0.0.3-0.0.master.20160616133444.gitd50de9a.el7.centos.noarch
--> Finished Dependency Resolution
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

lago.ssh: DEBUG: Command a953e492 on lago_basic_suite_4_0_host0  errors:
 Error: Package:
ovirt-hosted-engine-setup-2.0.0.3-0.0.master.20160616133444.gitd50de9a.el7.centos.noarch
(alocalsync)
   Requires: virt-viewer
Error: Package: cockpit-ovirt-dashboard-0.10.5-1.0.0.el7.centos.noarch
(alocalsync)
   Requires: cockpit
Error: Package:
ovirt-hosted-engine-setup-2.0.0.3-0.0.master.20160616133444.gitd50de9a.el7.centos.noarch
(alocalsync)
   Requires: socat
Error: Package:
ovirt-hosted-engine-setup-2.0.0.3-0.0.master.20160616133444.gitd50de9a.el7.centos.noarch
(alocalsync)
   Requires: python-libguestfs

lago.utils: ERROR: Error while running thread
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/lago/utils.py", line 53, in
_ret_via_queue
queue.put({'return': func()})
  File 
"/home/jenkins/workspace/ovirt_4.0_system-tests/ovirt-system-tests/basic_suite_4.0/test-scenarios/002_bootstrap.py",
line 163, in _install_cockpit_ovirt_on_host
nt.assert_equals(ret.code, 0, '_install_cockpit_ovirt_on_host():
failed to install cockpit-ovirt-dashboard on host %s' % host)
  File "/usr/lib64/python2.7/unittest/case.py", line 551, in assertEqual
assertion_func(first, second, msg=msg)
  File "/usr/lib64/python2.7/unittest/case.py", line 544, in _baseAssertEqual



> Y.
>
>
>>
>>>
>>> This is the reason I thought of abandoning this patch - which I think
>>> I've commented on in the patch itself.
>>>
>>>
 And then I found out that some of the 4.0 tests are linked to 3.6
 still, is this intentional?

>>>
>>> Yes, for three reasons:
>>> 1. It allows less code duplication.
>>> 2. It allows us to test 4.0 with v3 API.
>>> 3. It allows us to compare 4.0 to 3.6.x.
>>>
>>
>> I'm fully aware of these, but the fact that we might have different tests /
>> deps for different versions will require us at some point to split it.
>> And later on do some refactoring to make sure we're sharing what we can
>> with all tests.
>>
>>
>>>
>>>
 Should we now create a new separate 4.0 test or change the link to
 master?

>>>
>>> We need at some point to add v4 API tests to 4.0.
>>> Y.
>>>
>>>

 lrwxrwxrwx. 1 eedri eedri 64 Jun 22 00:12 001_initialize_engine.py ->
 ../../basic_suite_master/test-scenarios/001_initialize_engine.py
 lrwxrwxrwx. 1 eedri eedri 53 Jun 22 00:12 002_bootstrap.py ->
 ../../basic_suite_3.6/test-scenarios/002_bootstrap.py
 lrwxrwxrwx. 1 eedri eedri 56 Jun 22 00:12 004_basic_sanity.py ->
 ../../basic_suite_3.6/test-scenarios/004_basic_sanity.py



 [1] http://jenkins.ovirt.org/job/ovirt_4.0_system-tests/72/console

 --
 Eyal Edri
 Associate Manager
 RHEV DevOps
 EMEA ENG Virtualization R&D
 Red Hat Israel

 phone: +972-9-7692018
 irc: eedri (on #tlv #rhev-dev #rhev-integ)

>>>
>>>
>>
>>
>> --
>> Eyal Edri
>> Associate Manager
>> RHEV DevOps
>> EMEA ENG Virtualization R&D
>> Red Hat Israel
>>
>> phone: +972-9-7692018
>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>
>
>


-- 
Eyal Edri
Associate Manager
RHEV DevOps
EMEA ENG Virtualization R&D
Red Hat Israel

Re: [ovirt-devel] ovirt-system tests 4.0 using 3.6 code

2016-06-22 Thread Yaniv Kaul
On Wed, Jun 22, 2016 at 2:49 PM, Eyal Edri  wrote:

>
>
> On Wed, Jun 22, 2016 at 2:27 PM, Yaniv Kaul  wrote:
>
>>
>>
>> On Wed, Jun 22, 2016 at 12:28 PM, Eyal Edri  wrote:
>>
>>> After the recent merge of the test that installs ovirt-cockpit-dashboard
>>> into 3.6,
>>> the 4.0 tests failed as well. [1]
>>>
>>
>> They should not have failed. The only reason they could have failed is if
>> there were added deps - and there were - to the dashboard.
>> Specifically, hosted-engine-setup was added as a dep, bringing with it a
>> huge amount of other packages (virt-viewer, which although I requested was
>> not removed, which brings spice, which brings GTK...) - overall, ~500 RPMs
>> (!).
>>
>
>
> They failed because 4.0 was linked to 3.6, I've posted this fix:
> https://gerrit.ovirt.org/#/c/59603/
> When we'll make it work for 4.0, we'll restore the link.
>

So 4.0 was broken? Because I've tested on BOTH 3.6 and master - see my
comment @ https://gerrit.ovirt.org/#/c/58775/
Y.


>
>>
>> This is the reason I thought of abandoning this patch - which I think
>> I've commented on in the patch itself.
>>
>>
>>> And then I found out that some of the 4.0 tests are linked to 3.6 still,
>>> is this intentional?
>>>
>>
>> Yes, for three reasons:
>> 1. It allows less code duplication.
>> 2. It allows us to test 4.0 with v3 API.
>> 3. It allows us to compare 4.0 to 3.6.x.
>>
>
> I'm fully aware of these, but the fact that we might have different tests /
> deps for different versions will require us at some point to split it.
> And later on do some refactoring to make sure we're sharing what we can
> with all tests.
>
>
>>
>>
>>> Should we now create a new separate 4.0 test or change the link to
>>> master?
>>>
>>
>> We need at some point to add v4 API tests to 4.0.
>> Y.
>>
>>
>>>
>>> lrwxrwxrwx. 1 eedri eedri 64 Jun 22 00:12 001_initialize_engine.py ->
>>> ../../basic_suite_master/test-scenarios/001_initialize_engine.py
>>> lrwxrwxrwx. 1 eedri eedri 53 Jun 22 00:12 002_bootstrap.py ->
>>> ../../basic_suite_3.6/test-scenarios/002_bootstrap.py
>>> lrwxrwxrwx. 1 eedri eedri 56 Jun 22 00:12 004_basic_sanity.py ->
>>> ../../basic_suite_3.6/test-scenarios/004_basic_sanity.py
>>>
>>>
>>>
>>> [1] http://jenkins.ovirt.org/job/ovirt_4.0_system-tests/72/console
>>>
>>> --
>>> Eyal Edri
>>> Associate Manager
>>> RHEV DevOps
>>> EMEA ENG Virtualization R&D
>>> Red Hat Israel
>>>
>>> phone: +972-9-7692018
>>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>>
>>
>>
>
>
> --
> Eyal Edri
> Associate Manager
> RHEV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] ovirt-system tests 4.0 using 3.6 code

2016-06-22 Thread Eyal Edri
On Wed, Jun 22, 2016 at 2:27 PM, Yaniv Kaul  wrote:

>
>
> On Wed, Jun 22, 2016 at 12:28 PM, Eyal Edri  wrote:
>
>> After the recent merge of the test that installs ovirt-cockpit-dashboard
>> into 3.6,
>> the 4.0 tests failed as well. [1]
>>
>
> They should not have failed. The only reason they could have failed is if
> there were added deps - and there were - to the dashboard.
> Specifically, hosted-engine-setup was added as a dep, bringing with it a
> huge amount of other packages (virt-viewer, which although I requested was
> not removed, which brings spice, which brings GTK...) - overall, ~500 RPMs
> (!).
>


They failed because 4.0 was linked to 3.6, I've posted this fix:
https://gerrit.ovirt.org/#/c/59603/
When we'll make it work for 4.0, we'll restore the link.


>
> This is the reason I thought of abandoning this patch - which I think I've
> commented on in the patch itself.
>
>
>> And then I found out that some of the 4.0 tests are linked to 3.6 still,
>> is this intentional?
>>
>
> Yes, for three reasons:
> 1. It allows less code duplication.
> 2. It allows us to test 4.0 with v3 API.
> 3. It allows us to compare 4.0 to 3.6.x.
>

I'm fully aware of these, but the fact that we might have different tests / deps
for different versions will require us at some point to split it.
And later on do some refactoring to make sure we're sharing what we can with
all tests.


>
>
>> Should we now create a new separate 4.0 test or change the link to master?
>>
>
> We need at some point to add v4 API tests to 4.0.
> Y.
>
>
>>
>> lrwxrwxrwx. 1 eedri eedri 64 Jun 22 00:12 001_initialize_engine.py ->
>> ../../basic_suite_master/test-scenarios/001_initialize_engine.py
>> lrwxrwxrwx. 1 eedri eedri 53 Jun 22 00:12 002_bootstrap.py ->
>> ../../basic_suite_3.6/test-scenarios/002_bootstrap.py
>> lrwxrwxrwx. 1 eedri eedri 56 Jun 22 00:12 004_basic_sanity.py ->
>> ../../basic_suite_3.6/test-scenarios/004_basic_sanity.py
>>
>>
>>
>> [1] http://jenkins.ovirt.org/job/ovirt_4.0_system-tests/72/console
>>
>> --
>> Eyal Edri
>> Associate Manager
>> RHEV DevOps
>> EMEA ENG Virtualization R&D
>> Red Hat Israel
>>
>> phone: +972-9-7692018
>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>
>
>


-- 
Eyal Edri
Associate Manager
RHEV DevOps
EMEA ENG Virtualization R&D
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] ovirt-system tests 4.0 using 3.6 code

2016-06-22 Thread Yaniv Kaul
On Wed, Jun 22, 2016 at 12:28 PM, Eyal Edri  wrote:

> After the recent merge of the test that installs ovirt-cockpit-dashboard
> into 3.6,
> the 4.0 tests failed as well. [1]
>

They should not have failed. The only reason they could have failed is if
there were added deps - and there were - to the dashboard.
Specifically, hosted-engine-setup was added as a dep, bringing with it a
huge amount of other packages (virt-viewer, which although I requested was
not removed, which brings spice, which brings GTK...) - overall, ~500 RPMs
(!).

This is the reason I thought of abandoning this patch - which I think I've
commented on in the patch itself.


> And then I found out that some of the 4.0 tests are linked to 3.6 still,
> is this intentional?
>

Yes, for three reasons:
1. It allows less code duplication.
2. It allows us to test 4.0 with v3 API.
3. It allows us to compare 4.0 to 3.6.x.


> Should we now create a new separate 4.0 test or change the link to master?
>

We need at some point to add v4 API tests to 4.0.
Y.


>
> lrwxrwxrwx. 1 eedri eedri 64 Jun 22 00:12 001_initialize_engine.py ->
> ../../basic_suite_master/test-scenarios/001_initialize_engine.py
> lrwxrwxrwx. 1 eedri eedri 53 Jun 22 00:12 002_bootstrap.py ->
> ../../basic_suite_3.6/test-scenarios/002_bootstrap.py
> lrwxrwxrwx. 1 eedri eedri 56 Jun 22 00:12 004_basic_sanity.py ->
> ../../basic_suite_3.6/test-scenarios/004_basic_sanity.py
>
>
>
> [1] http://jenkins.ovirt.org/job/ovirt_4.0_system-tests/72/console
>
> --
> Eyal Edri
> Associate Manager
> RHEV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] ovirt-system tests 4.0 using 3.6 code

2016-06-22 Thread Eyal Edri
After the recent merge of the test that installs ovirt-cockpit-dashboard into 3.6,
the 4.0 tests failed as well. [1]

And then I found out that some of the 4.0 tests are linked to 3.6 still, is
this intentional?
Should we now create a new separate 4.0 test or change the link to master?

lrwxrwxrwx. 1 eedri eedri 64 Jun 22 00:12 001_initialize_engine.py ->
../../basic_suite_master/test-scenarios/001_initialize_engine.py
lrwxrwxrwx. 1 eedri eedri 53 Jun 22 00:12 002_bootstrap.py ->
../../basic_suite_3.6/test-scenarios/002_bootstrap.py
lrwxrwxrwx. 1 eedri eedri 56 Jun 22 00:12 004_basic_sanity.py ->
../../basic_suite_3.6/test-scenarios/004_basic_sanity.py



[1] http://jenkins.ovirt.org/job/ovirt_4.0_system-tests/72/console
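
If the decision is to re-point the 4.0 suite at master rather than 3.6, the
mechanical part is just replacing the symlinks; the paths below are taken
from the listing above, and ln -sfn replaces an existing link in place
instead of descending into it:

cd basic_suite_4.0/test-scenarios
ln -sfn ../../basic_suite_master/test-scenarios/002_bootstrap.py 002_bootstrap.py
ln -sfn ../../basic_suite_master/test-scenarios/004_basic_sanity.py 004_basic_sanity.py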

-- 
Eyal Edri
Associate Manager
RHEV DevOps
EMEA ENG Virtualization R&D
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

[ovirt-devel] Ovirt system tests review recording

2016-05-18 Thread David Caro

Here you have the recording of the ovirt system tests review meeting:

https://bluejeans.com/s/9FaJ/


There will be another one with a hands-on focus, in which you'll end up running
the tests on your laptop yourself.

Enjoy!

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel