[JIRA] (OVIRT-621) Upgrade of the fc21 vms required to run fc24 chroots

2016-07-05 Thread David Caro (oVirt JIRA)
David Caro created OVIRT-621:


 Summary: Upgrade of the fc21 vms required to run fc24 chroots
 Key: OVIRT-621
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-621
 Project: oVirt - virtualization made easy
  Issue Type: By-EMAIL
Reporter: David Caro
Assignee: infra
 Attachments: signature.asc

The issue is that in order to run the fc24 chroots, mock needs the correct GPG
key for the fc24 repos, and that key is only included in a newer mock version
that will not be released for fc21 (as it has reached its end of life).

So the possibilities are:

* Manually deploy the keys (puppet?)
* Build a custom mock package with the newer version and install that
* Upgrade the slaves (something we should do anyhow, sooner rather than later)
* Remove all the fc24 jobs we added (Lago specifically)


In the meantime, some fc24 builds can't be done (for example, Lago is blocked
by this).
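
A minimal sketch of the first option (manually deploying the key, by hand or
from puppet); the destination is the standard rpm-gpg path that mock chroot
configs usually reference, and the source host is an assumption:

    # Copy the fc24 key from any host that already has it (e.g. an fc24 box)
    # to the fc21 slave, so mock's yum can verify the fc24 packages.
    scp fc24-host:/etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-24-x86_64 \
        /etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-24-x86_64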


Cheers!

-- 
David Caro





Re: [Jenkins] Passing parameters to build-artifacts.sh

2016-06-22 Thread David Caro
On 06/22 19:21, Nir Soffer wrote:
> On Wed, Jun 22, 2016 at 6:47 PM, Barak Korren <bkor...@redhat.com> wrote:
> > This could be done, but it is not trivial to do, and it also requires you to
> > know, before merging, that this is the patch you are going to release.
> >
> > A different but somewhat common practice is to use git tagging and 'git
> > describe' to set the package version.
> > We can make build_artifacts trigger when a tag is pushed, AFAIK Lago already
> > does that...
> >
> > On 22 June 2016 at 18:39, "Vojtech Szocs" <vsz...@redhat.com> wrote:
> >
> >> Hi,
> >>
> >> I'm just curious whether it's possible to do the following:
> >>
> >> Let's say we have a project (ovirt-engine-dashboard) built by Jenkins,
> >> which means there's a Jenkins job that runs build-artifacts.sh script
> >> whenever a patch gets merged via gerrit.
> >>
> >> Can we somehow pass custom parameters to build-artifacts.sh for such
> >> (Jenkins CI) builds?
> >>
> >> For example, putting something like this into the commit message:
> >>
> >>   My-Param 123
> >>
> >> would be reflected in a `My-Param` env variable when running the script?
> >>
> >> Motivation: for release builds (which shouldn't contain the "snapshot"
> >> part [*] in the RPM release string), pass a parameter to build-artifacts.sh
> >> that ensures the "snapshot" part is empty. This way, we don't need to
> >> patch the project prior to release (remove "snapshot" in spec) & then
> >> patch it again after the release (re-add "snapshot" in spec).
> >>
> >> [*] {date}git{commit}
> 
> How about adding a flag to the project yaml?
> 
> For example:
> 
> version:
>   - master:
>       branch: master
>   - 0.16:
>       branch: ioprocess-0.16
>       release: true
> 
> Then run build-artifacts with a RELEASE=1 environment variable, so we can
> tell that this is a release build, and create release-friendly rpms?

That's no better than adding a commit to the project for each release, IMO.

I'd actually go for the tag approach, just detecting whether you are on a tag
to control whether the extra 'snapshot' part should be added or not.
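
A minimal sketch of that detection, as it could look inside build-artifacts.sh
(the suffix format and the rpmbuild macro name are illustrative assumptions):

    # Release build if HEAD sits exactly on a tag, snapshot build otherwise.
    if git describe --exact-match --tags >/dev/null 2>&1; then
        snapshot=""
    else
        snapshot=".$(date +%Y%m%d)git$(git rev-parse --short HEAD)"
    fi
    rpmbuild --define "release_suffix ${snapshot}" -ba ovirt-engine-dashboard.spec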

> 
> Nir

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605




Re: Another Jenkins failure

2016-06-16 Thread David Caro
On 06/16 10:10, Yevgeny Zaspitsky wrote:
> Hi All,
> 
> Could some one help me to understand what actually failed in the build [1]?
> 
> [1]
> http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-4.0_el7_merged/223/
> 
> Thanks in advance,
> Yevgeny


From the console log, it seems that the engine did not come up after the setup:


FAILED::UPGRADE::WAIT_FOR_ENGINE:: Unrecoverable failure, exitting


From the engine.log [1]


2016-06-16 06:54:30,026 ERROR 
[org.ovirt.engine.core.bll.network.macpool.MacPoolPerCluster] (ServerService 
Thread Pool -- 48) [] Error initializing: PreparedStatementCallback; bad SQL 
grammar [select * from  getallmacsbymacpoolid(?)]; nested exception is 
org.postgresql.util.PSQLException: ERROR: column c.mac_pool_id does not exist
  Where: PL/pgSQL function getallmacsbymacpoolid(uuid) line 3 at RETURN QUERY



[1] 
http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-4.0_el7_merged/223/artifact/logs/_var_log_ovirt-engine_engine.log.tgz
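
For reference, the failing call can be reproduced directly against the engine
DB with something like this (the pool UUID below is illustrative):

    psql -U engine -d engine \
        -c "SELECT * FROM getallmacsbymacpoolid('00000000-0000-0000-0000-000000000000');"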




-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605




Re: ovirt-system-tests 4.0 reposync failure fixed

2016-06-16 Thread David Caro
On 06/16 00:24, Eyal Edri wrote:
> FYI,
> 
> I managed to fix the reposync error with some hacking locally on the hosts
> and also refreshing the repo on resources.ovirt.org.

What did you do?
Shlomi and I removed the cached repos (/var/lib/lago/reposync/ovirt*), and I
regenerated the metadata for the 4.0-snapshot repo on resources.
What were we missing?
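
For reference, the cleanup described above boils down to something like the
following (the createrepo invocation and the repo path on resources are
assumptions about how the metadata was regenerated):

    # On the hosts: drop the stale reposync caches
    rm -rf /var/lib/lago/reposync/ovirt*
    # On resources.ovirt.org: regenerate the repo metadata in place
    createrepo --update <repo-root>/pub/ovirt-4.0-snapshot/rpm/el7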

> 
> if you see these jobs failing again please let me know.
> 
> -- 
> Eyal Edri
> Associate Manager
> RHEV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
> 
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)



-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605




Re: Running arquillian tests on gerrit reviews

2016-06-15 Thread David Caro Estevez
On 06/15 14:36, Roman Mohr wrote:
> Hi,
> 
> Recently we merged some patches which can run arquillian unit tests
> with a database (this means running ovirt commands without mocking and
> working injections). They do a commit rollback after each test in the
> db like the DAO tests:
> 
> https://gerrit.ovirt.org/#/q/status:merged+project:ovirt-engine+branch:master+topic:integration
> 
> I am wondering if we can include them in the jenkins/gerrit flow. They
> can be executed like this:
> 
> mvn clean verify -DskipITs=false
> 
> This command also makes sure that the database is up to date by
> applying the database scripts.
> 
> Would be glad to get some feedback on how to integrate them in our CI flows.

To do so, you just have to add that command to the automation/check-patch.sh
script in the ovirt-engine git repo (making sure the script fails when the
command fails).
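
A minimal sketch of that addition, assuming check-patch.sh keeps the usual
'bash -e' mode so a failing test run fails the job:

    #!/bin/bash -xe
    # automation/check-patch.sh (sketch): run the arquillian/integration tests
    mvn clean verify -DskipITs=false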

> 
> Roman

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605




Re: [ovirt-engine-dashboard] Request stable branch creation

2016-06-14 Thread David Caro
On 06/14 12:29, Vojtech Szocs wrote:
> 
> 
> - Original Message -
> > From: "Eyal Edri" <ee...@redhat.com>
> > To: "Vojtech Szocs" <vsz...@redhat.com>
> > Cc: "infra" <infra@ovirt.org>
> > Sent: Tuesday, June 14, 2016 6:07:51 PM
> > Subject: Re: [ovirt-engine-dashboard] Request stable branch creation
> > 
> > Infra doesn't create branches for projects; that is done by each project
> > maintainer, and you should already have permission to do it in gerrit.
> > 
> > Just create a local branch and push it.
> 
> Sorry, I wasn't able to do that:
> 
> $ git push origin HEAD:refs/for/ovirt-engine-dashboard-1.0
> ! [remote rejected] HEAD -> refs/for/ovirt-engine-dashboard-1.0 (branch 
> ovirt-engine-dashboard-1.0 not found)
> 
> $ git push origin ovirt-engine-dashboard-1.0
> ! [remote rejected] ovirt-engine-dashboard-1.0 -> ovirt-engine-dashboard-1.0 
> (prohibited by Gerrit)
> 
> Gerrit 2.9+ should have UI for creating branches:
> http://stackoverflow.com/q/20606359
> 
> But when I go to Projects / List /  / Branches,
> I don't see any button to create a new branch.
> 
> Also, according to https://gerrit-review.googlesource.com/#/c/52500/
> there should be an ssh command "create-branch", which I assume I don't have
> access to.
> 
> Please advise, thanks.


I've added you to the owners (the whole group); you should be able to create
the branch both from the CLI and the UI. Can you check? (You might have to
refresh the browser cache.)
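
For reference, with the owner rights in place the branch can be created from
the CLI with the create-branch ssh command mentioned above (the username is a
placeholder):

    ssh -p 29418 <user>@gerrit.ovirt.org gerrit create-branch \
        ovirt-engine-dashboard ovirt-engine-dashboard-1.0 master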

> 
> > 
> > Also, any CI changes needed to support it need to be initiated by the
> > maintainer, while if any help is needed, of course we can assist with any
> > gap in knowledge or process.
> > On Jun 14, 2016 6:39 PM, "Vojtech Szocs" <vsz...@redhat.com> wrote:
> > 
> > > Hi,
> > >
> > > please create a stable branch, based on master, for 
> > > ovirt-engine-dashboard:
> > >
> > >   ovirt-engine-dashboard-1.0
> > >
> > > Once done, I'll update Jenkins nightly publisher configs according to [1].
> > >
> > > [1] https://gerrit.ovirt.org/#/c/59150/
> > >
> > > As for stable branch maintainers, I'm proposing Alex (awels) with me
> > > (vszocs) as backup.
> > >
> > > Thanks,
> > > Vojtech
> > >
> > >
> > >
> > 

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605




Re: jenkins check patch broken

2016-06-10 Thread David Caro
On 06/10 12:15, Michal Skrivanek wrote:
> Seems there is some infra issue:
> 
> http://jenkins.ovirt.org/job/vdsm_master_check-patch-el7-x86_64/2784/console


The error is:

10:07:27 py27 runtests: commands[0] | /home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64/vdsm/tox.sh pep8
10:07:48 ./vdsm_hooks/vmdisk/before_vm_start.py:73:80: E501 line too long (86 > 79 characters)
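
The same check can be reproduced locally before pushing (assuming a vdsm
checkout; tox.sh is the same wrapper the job itself runs):

    ./tox.sh pep8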




-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605




Re: 4.0 branch rights

2016-06-07 Thread David Caro Estevez
Already sent and merged; you should now have all the jobs running for the 4.0
branch too.

- Original Message -
> From: "David Caro Estevez" <dcaro...@redhat.com>
> To: "Eyal Edri" <ee...@redhat.com>
> Cc: "Francesco Romani" <from...@redhat.com>, "infra" <infra@ovirt.org>
> Sent: Tuesday, June 7, 2016 3:08:30 PM
> Subject: Re: 4.0 branch rights
> 
> Sure, all you have to do is edit the file
> jobs/confs/projects/vdsm/vdsm_standard.yaml in the jenkins repo, and add
> there a version '4.0' with the branch ovirt-4.0 on each of the three
> projects defined there, for example for the first one:
> 
> 
> - project:
>     <<: *vdsm_standard_common
>     name: vdsm_build-artifacts
>     version:
>       - master:
>           branch: master
>       - 3.6:
>           branch: ovirt-3.6
>       - 4.0:
>           branch: ovirt-4.0
>     stage: build-artifacts
>     distro:
>       - el7
>       - fc23
>       - fc22
>     exclude:
>       - version: master
>         distro: fc22
> 
> 
> - Original Message -
> > From: "Eyal Edri" <ee...@redhat.com>
> > To: "Francesco Romani" <from...@redhat.com>
> > Cc: "David Caro Estevez" <dcaro...@redhat.com>, "infra" <infra@ovirt.org>
> > Sent: Tuesday, June 7, 2016 3:03:33 PM
> > Subject: Re: 4.0 branch rights
> > 
> > David, can you help with sending a patch to add 4.0 jobs for vdsm?
> > 
> > On Tue, Jun 7, 2016 at 3:54 PM, Francesco Romani <from...@redhat.com>
> > wrote:
> > 
> > > - Original Message -
> > > > From: "David Caro Estevez" <dcaro...@redhat.com>
> > > > To: "Francesco Romani" <from...@redhat.com>
> > > > Cc: "Yaniv Bronheim" <ybron...@redhat.com>, "infra" <infra@ovirt.org>
> > > > Sent: Tuesday, June 7, 2016 11:10:32 AM
> > > > Subject: Re: 4.0 branch rights
> > > >
> > > > The automated ci is added on jenkins, not related to gerrit permissions
> > > or
> > > > hooks
> > >
> > > Right! Who should I ask for help about that?
> > >
> > > Thanks,
> > >
> > >
> > > --
> > > Francesco Romani
> > > RedHat Engineering Virtualization R & D
> > > Phone: 8261328
> > > IRC: fromani
> > >
> > >
> > >
> > 
> > 
> > --
> > Eyal Edri
> > Associate Manager
> > RHEV DevOps
> > EMEA ENG Virtualization R&D
> > Red Hat Israel
> > 
> > phone: +972-9-7692018
> > irc: eedri (on #tlv #rhev-dev #rhev-integ)
> > 
> 


Re: 4.0 branch rights

2016-06-07 Thread David Caro Estevez
Sure, all you have to do is edit the file
jobs/confs/projects/vdsm/vdsm_standard.yaml in the jenkins repo and add a
'4.0' version with the branch ovirt-4.0 to each of the three projects defined
there, for example for the first one:


- project:
    <<: *vdsm_standard_common
    name: vdsm_build-artifacts
    version:
      - master:
          branch: master
      - 3.6:
          branch: ovirt-3.6
      - 4.0:
          branch: ovirt-4.0
    stage: build-artifacts
    distro:
      - el7
      - fc23
      - fc22
    exclude:
      - version: master
        distro: fc22
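
As a sanity check, the edited YAML can be rendered locally with
jenkins-job-builder before sending the patch (the exact paths depend on the
jenkins repo layout, so treat this invocation as a sketch):

    jenkins-jobs test jobs/confs/projects/vdsm -o /tmp/rendered-jobs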


- Original Message -
> From: "Eyal Edri" <ee...@redhat.com>
> To: "Francesco Romani" <from...@redhat.com>
> Cc: "David Caro Estevez" <dcaro...@redhat.com>, "infra" <infra@ovirt.org>
> Sent: Tuesday, June 7, 2016 3:03:33 PM
> Subject: Re: 4.0 branch rights
> 
> David, can you help with sending a patch to add 4.0 jobs for vdsm?
> 
> On Tue, Jun 7, 2016 at 3:54 PM, Francesco Romani <from...@redhat.com> wrote:
> 
> > - Original Message -
> > > From: "David Caro Estevez" <dcaro...@redhat.com>
> > > To: "Francesco Romani" <from...@redhat.com>
> > > Cc: "Yaniv Bronheim" <ybron...@redhat.com>, "infra" <infra@ovirt.org>
> > > Sent: Tuesday, June 7, 2016 11:10:32 AM
> > > Subject: Re: 4.0 branch rights
> > >
> > > The automated ci is added on jenkins, not related to gerrit permissions
> > or
> > > hooks
> >
> > Right! Who should I ask for help about that?
> >
> > Thanks,
> >
> >
> > --
> > Francesco Romani
> > RedHat Engineering Virtualization R & D
> > Phone: 8261328
> > IRC: fromani
> >
> >
> >
> 
> 
> --
> Eyal Edri
> Associate Manager
> RHEV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
> 
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
> 


Re: Duplicate upgrade scripts issue

2016-06-07 Thread David Caro Estevez
- Original Message -

> From: "Martin Perina" 
> To: "Eli Mesika" 
> Cc: "Eyal Edri" , "infra" 
> Sent: Tuesday, June 7, 2016 11:45:28 AM
> Subject: Re: Duplicate upgrade scripts issue

> On Tue, Jun 7, 2016 at 11:42 AM, Eli Mesika < emes...@redhat.com > wrote:

> > Ha, one more thing:
> 

> > We would like to force the existence of such hooks if possible ...
> 

> +1

> The hook should be included by default after git clone if possible. And if
> it is, then I'd also force inclusion of the commit-message hook which
> generates the Change-Id.

Git does not allow 'autodownloading' hooks; you can't distribute them via git
clone, so each client must explicitly install them.
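
A sketch of what that one-time, per-clone install looks like for a contributor
(the hook script name and location are hypothetical):

    # Run once after 'git clone'; git will then invoke the check on each push
    ln -s ../../scripts/check-db-upgrade-scripts.sh .git/hooks/pre-push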

> > On Tue, Jun 7, 2016 at 12:41 PM, Eli Mesika < emes...@redhat.com > wrote:
> 

> > > Hi guys
> > 
> 

> > > I have talked with Eyal about the $Subject and he asked me to write and
> > > send
> > > this email
> > 
> 

> > > As you probably know, we have from time to time an issue with duplicate
> > > upgrade scripts that are merged by mistake, each such issue forces us to
> > > publish a fixing patch that renames the duplicated file.
> > 
> 
> > > I discussed this issue today with Martin P in our weekly meeting
> > 
> 

> > > We would like to write some kind of a hook that will check on each patch
> > > set
> > > if it has DB upgrade files and rename them (if necessary) such that it
> > > will
> > > have the correct numbering according to the last existing upgrade patch
> > > on
> > > the related branch.
> > 
> 

> > > The hook should run upon 'git push' so it will also prevent CI
> > > tests from failing on this issue
> > 
> 

> > > I will be happy to get your ideas/comments on that
> > 
> 

> > > Thanks
> > 
> 

> > > Eli Mesika
> > 
> 



Re: Enhancing std-ci for deployment (std-cd)

2016-06-07 Thread David Caro Estevez


- Original Message -
> From: "Barak Korren" <bkor...@redhat.com>
> To: "David Caro Estevez" <dcaro...@redhat.com>
> Cc: "infra" <infra@ovirt.org>
> Sent: Tuesday, June 7, 2016 11:31:17 AM
> Subject: Re: Enhancing std-ci for deployment (std-cd)
> 
> >
> > I'd go a 4th way:
> >
> > * For the non-merged patches, use lago or similar instead of deploying into
> > prod foreman, though it might be a bit cumbersome to generate the env, for
> > most cases, it's way more flexible, and a lot less risky
> 
> This would probably mean all tests would need to be automated; while a
> worthy goal, this is not practical in the short term IMO.
> 

I don't think we should invest time in automating any other solution; that
would mean not just not working on this one, but actually burying it under the
extra effort of adapting whatever 'temporary short term' solution was used
instead.

> > * For the merged patches, I'd use a 'passive' deployment, where the scripts
> > with the deploy logic reside on foreman and are activated by jenkins (for
> > example, by ssh to the slave, similar to how we deploy there today). That
> > puts the deploy logic on the server where it should be deployed. Most
> > probably using the same or very similar script on the non-merged checks to
> > deploy to the virtual environment. This leaves a clean yaml, keeps a
> > strict security (only a specific ssh user with the correct private key can
> > do it, and it can only run that script and nothing else), and maintain the
> > infra config details out of the source code.
> >
> 
> While I agree that infra details should be kept outside the source
> repo. This seems to create the situation where all deployment logic
> will also permanently reside outside of it. I want the deployment
> logic to be self-contained and movable. I'm actually looking at this
> right now because I want to deploy the Puppet code on the DS Sat6 and
> not the US foreman.

I don't think you should use the same deploy procedure on the upstream foreman
and the DS satellite; each env has its own particularities, and unless you want
to deploy the whole env (like deploying full VMs, or containers) it's not worth
it IMO to try to keep such a generic deploy script, taking into account all the
limitations and maintenance that genericness requires.

> I can see the security benefits of the keyed ssh commands, but I'm not
> sure those are required in all cases and outweigh the lack of
> transparency in the logic and the probable need for manual
> maintenance.
> 

I don't think the manual maintenance will be that high; the deploy scripts can
easily be puppetized themselves. And IMO, upstream, the keyed ssh commands are
more than just required, they should be the bare minimum.

> 
> --
> Barak Korren
> bkor...@redhat.com
> RHEV-CI Team
> 


Re: 4.0 branch rights

2016-06-07 Thread David Caro Estevez
The automated ci is added on jenkins, not related to gerrit permissions or hooks

- Original Message -
> From: "Francesco Romani" <from...@redhat.com>
> To: "David Caro Estevez" <dcaro...@redhat.com>
> Cc: "Yaniv Bronheim" <ybron...@redhat.com>, "infra" <infra@ovirt.org>
> Sent: Tuesday, June 7, 2016 11:06:31 AM
> Subject: Re: 4.0 branch rights
> 
> Thanks David,
> 
> I have now CI+1, +2 and merge rights!
> 
> No automated CI runs still, it seems.
> 
> Bests,
> 
> - Original Message -
> > From: "David Caro Estevez" <dcaro...@redhat.com>
> > To: "Yaniv Bronheim" <ybron...@redhat.com>
> > Cc: "infra" <infra@ovirt.org>
> > Sent: Tuesday, June 7, 2016 10:47:51 AM
> > Subject: Re: 4.0 branch rights
> > 
> > You should have those rights now, let me know if you have issues
> > 
> > - Original Message -
> > > From: "Yaniv Bronheim" <ybron...@redhat.com>
> > > To: "infra" <infra@ovirt.org>
> > > Sent: Tuesday, June 7, 2016 10:02:10 AM
> > > Subject: 4.0 branch rights
> > > 
> > > Hi,
> > > I need rights to merge and create tags in vdsm ovirt-4.0 branch.
> > > 
> > > Thanks in advance
> > > 
> > > --
> > > Yaniv Bronhaim.
> > > 
> > 
> 
> --
> Francesco Romani
> RedHat Engineering Virtualization R & D
> Phone: 8261328
> IRC: fromani
> 


Re: Enhancing std-ci for deployment (std-cd)

2016-06-07 Thread David Caro Estevez


- Original Message -
> From: "Barak Korren" 
> To: "infra" 
> Sent: Tuesday, June 7, 2016 10:45:06 AM
> Subject: Enhancing std-ci for deployment (std-cd)
> 
> Hi all,
> 
> I'm contemplating the best way to enable including deployment logic in
> standard-CI scripts.
> 

I'm working on a first POC of something similar to that right now, deploying 
engine rpms to an 'experimental' repo on build-artifacts success

> Case in point: embedding the deployment logic of our infra-puppet
> repo. One thing to note about this, is that deployment in this
> scenario can happen either post-merge (Like it does today) or
> pre-merge (Create a per-patch puppet env to enable easy testing)
> 
> I can think of a few ways to go about this:
> 
> 1. Copy the full generated puppet configuration into
> 'exported-artifacts' and add logic to the YAML to copy it to the
> foreman server.
> 
> The main shortcoming of this is that we will have to maintain quite a
> bit of custom logic in the YAML. This beats the purpose of embedding
> the logic in the source repo in the 1st place.
> 
> 2. Mount the '/etc/puppet' directory into the chroot
> 
> This will require having the foreman be a Jenkins slave and some
> custom YAML to ensure the jobs run on it (not a big deal IMO)
> 
> The shortcoming is that running tests locally with mock_runner would
> be cumbersome (It will touch your local /etc/puppet directory and
> probably fail). Another issue is that we will have to find a way to
> figure out Gerrit patch information from inside mock. Possibly we
> could use the commit message or git hash for that.
> 
> 3. Invent some kind of a new deploy_*.sh script
> 
> This makes it possible to run the checking code locally without the
> deployment code. The YAML changes for this could be quite generic and
> shared with other projects. We could possibly also invent a
> 'deploy_*.target' to specify where to run the deploy script (E.g. a
> Jenkins label).
> 
> We could even consider not running the script inside mock, though I
> think mock's benefits outweigh the limits it imposes on accessing the
> outside system (which can be mostly bypassed anyway with bind mounts).
> 
> So,
> WDYT?
> 

I'd go a 4th way:

* For the non-merged patches, use lago or similar instead of deploying into 
prod foreman, though it might be a bit cumbersome to generate the env, for most 
cases, it's way more flexible, and a lot less risky
* For the merged patches, I'd use a 'passive' deployment, where the scripts 
with the deploy logic reside on foreman and are activated by jenkins (for 
example, by ssh to the slave, similar to how we deploy there today). That puts 
the deploy logic on the server where it should be deployed. Most probably using 
the same or very similar script on the non-merged checks to deploy to the 
virtual environment. This leaves a clean yaml, keeps a strict security (only a 
specific ssh user with the correct private key can do it, and it can only run 
that script and nothing else), and maintain the infra config details out of the 
source code.
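
A sketch of the keyed ssh part (all names and paths are placeholders): a forced
command in authorized_keys on the foreman side lets the jenkins key trigger the
deploy script and nothing else:

    # ~deploy/.ssh/authorized_keys on the foreman host (one line):
    command="/usr/local/bin/deploy-puppet.sh",no-port-forwarding,no-pty ssh-rsa AAAA...jenkins-key... jenkins@slave

    # On the jenkins side, triggering a deploy is then just:
    ssh deploy@foreman.example.org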

> --
> Barak Korren
> bkor...@redhat.com
> RHEV-CI Team


Re: 4.0 branch rights

2016-06-07 Thread David Caro Estevez
You should have those rights now, let me know if you have issues

- Original Message -
> From: "Yaniv Bronheim" 
> To: "infra" 
> Sent: Tuesday, June 7, 2016 10:02:10 AM
> Subject: 4.0 branch rights
> 
> Hi,
> I need rights to merge and create tags in vdsm ovirt-4.0 branch.
> 
> Thanks in advance
> 
> --
> Yaniv Bronhaim.
> 


Re: [Vdsm] infra support for the new stable branch ovirt-4.0

2016-06-07 Thread David Caro Estevez
Shlomi is sick today, so I've changed it.

The hooks were already enabled (maybe Shlomi had time to do that :) ); I just
had to add the privileges to the branch. It now gets the same treatment as the
3.6 one. Let me know if you miss anything or want something to work differently.

Cheers!

- Original Message -
> From: "Eyal Edri" 
> To: "Dan Kenigsberg" , "Shlomo Ben David" 
> , infra-supp...@ovirt.org
> Cc: "infra" 
> Sent: Monday, June 6, 2016 8:24:36 AM
> Subject: Re: [Vdsm] infra support for the new stable branch ovirt-4.0
> 
> Shlomi,
> Can you handle the permissions adding on the VDSM project for the new branch?
> We'll also need to enable hooks for the ovirt-4.0 branch.
> 
> e.
> 
> On Sun, Jun 5, 2016 at 10:51 AM, Dan Kenigsberg < dan...@redhat.com > wrote:
> 
> 
> On Sat, Jun 04, 2016 at 04:16:18PM +0300, Nir Soffer wrote:
> > On Fri, Jun 3, 2016 at 10:58 AM, Francesco Romani < from...@redhat.com >
> > wrote:
> > > Hi Infra,
> > > 
> > > (Dan, please ACK/NACK the following)
> > > 
> > > I'm not sure this is already been worked on, or if it was already
> > > configured automatically,
> > > sending just in case to be sure.
> > > 
> > > Me and Yaniv (CC'd) agreed to continue our maintainer duties and take
> > > care of the ovirt-4.0
> > > Vdsm stable branch which was recently created.
> > > 
> > > I'd like to ask if we have the gerrit permissions and CI jobs ready for
> > > the new
> > > branch.
> > 
> > +1
> +1
> 
> 
> 
> 
> 
> --
> Eyal Edri
> Associate Manager
> RHEV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
> 
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
> 


Re: Adding a new flag to a specific gerrit project?

2016-06-06 Thread David Caro
On 06/06 21:02, Andrew Dahms wrote:
> Hi David,
> 
> Thank you for following up.
> 
> Let's keep the older one, then, and thank you for calling this out.

Sure, no problem, accounts merged and rights granted, let me know if you have
any other issues.

> 
> Kind regards,
> 
> Andrew Dahms
> Documentation Program Manager
> *Phone:* +61 07-3514-7124
> *Email:* ada...@redhat.com
> 
> 
> 
> On Mon, Jun 6, 2016 at 6:50 PM, David Caro <dc...@redhat.com> wrote:
> 
> > On 06/06 09:10, Andrew Dahms wrote:
> > > Hi David,
> > >
> > > Thank you for following up on this thread.
> > >
> > > I would like to keep my ada...@redhat.com account, for which the
> > account ID
> > > is the following:
> > >
> > > 1001289
> > >
> > > Let me know if there is anything else you need to merge my accounts.
> >
> >
> > I have two accounts for you that use the redhat email. The older one
> > (1000582) also has the gmail ids added; the newer one (1001289) only has
> > the redhat email. Usually the older account is the one you would want to
> > keep; are you sure you want to keep the newer one?
> >
> > >
> > > Kind regards,
> > >
> > > Andrew Dahms
> > > Documentation Program Manager
> > > *Phone:* +61 07-3514-7124
> > > *Email:* ada...@redhat.com
> > >
> > >
> > >
> > > On Fri, Jun 3, 2016 at 5:45 PM, David Caro <dc...@redhat.com> wrote:
> > >
> > > > On 06/03 02:01, Lucy Bopf wrote:
> > > > > ​Hi Juan, David,
> > > > >
> > > > > Thank you for including us on this thread. From my side, I think what
> > > > Juan has suggested makes sense, and should work for our needs.
> > > > >
> > > > > Please also add the following names to the group of users that can
> > set
> > > > this flag:
> > > > >
> > > > > Tahlia Richardson 
> > > > > Megan Lewis 
> > > > > Byron Gravenorst 
> > > > > Julie Wu 
> > > > >
> > > > > Just let me know if you need any further information, or if there are
> > > > any issues.
> > > >
> > > >
> > > > Done, but two of the members have duplicated gerrit accounts:
> > > >
> > > > * Byron Gravenorst
> > > > * Andrew Dahms
> > > >
> > > > To fix it, log into gerrit with the account you want to keep, and go
> > to:
> > > >
> > > > https://gerrit.ovirt.org/#/settings/
> > > >
> > > > There write down the value of the field 'Account ID', and send me an
> > email
> > > > with
> > > > that id, I'll merge the duplicated accounts into that one.
> > > >
> > > > Btw. This usually happens when using a new login method
> > > > (oauth/google/github)
> > > > without having linked it to the existing account, to do so, you must
> > go to:
> > > >
> > > > https://gerrit.ovirt.org/#/settings/web-identities
> > > >
> > > > And hit there the 'link another identity' button, that will ask you to
> > log
> > > > in
> > > > with the new auth method, and will link it to the existing one without
> > > > creating
> > > > a new one each time.
> > > >
> > > >
> > > >
> > > > Cheers!
> > > >
> > > > >
> > > > > Kind Regards,
> > > > >
> > > > > Lucy
> > > > >
> > > > > Lucy Bopf
> > > > > Associate Documentation Program Manager
> > > > > Phone: + 61 07-3514-7181
> > > > > Email: lb...@redhat.com
> > > > >
> > > > > - Original Message -
> > > > >
> > > > > > From: "Juan Hernández" <jhern...@redhat.com>
> > > > > > To: "David Caro" <dc...@redhat.com>, "Andrew Dahms" <
> > ada...@redhat.com
> > > > >,
> > > > > > "Lucy Bopf" <lb...@redhat.com>, "Derek Cadzow" <dcad...@redhat.com
> > >
> > > > > > Cc: "infra" <infra@ovirt.org>, "Yaniv Dary" <yd...@redhat.com>
> > > > > > Sent: Thursday, June 2, 2016 7:11:35 PM
> > > > > > Subject: Re: Adding a new flag to a specific gerrit project?

Re: Adding a new flag to a specific gerrit project?

2016-06-06 Thread David Caro
On 06/05 21:14, Lucy Bopf wrote:
> Hi David, 
> 
> Thanks for letting us know. We did notice that Byron's account was having 
> some permissions issues, so this may explain it. 
> 
> The account ID to use for bgrav...@redhat.com is 1001161. 
> 
> Just let me know if there is anything else we need to do on our side. 


Done

> 
> Kind Regards, 
> 
> Lucy 
> 
> Lucy Bopf 
> Associate Documentation Program Manager 
> Phone: + 61 07-3514-7181 
> Email: lb...@redhat.com 
> 
> - Original Message -
> 
> > From: "Andrew Dahms" <ada...@redhat.com>
> > To: "David Caro" <dc...@redhat.com>
> > Cc: "Lucy Bopf" <lb...@redhat.com>, "Juan Hernández" <jhern...@redhat.com>,
> > "infra" <infra@ovirt.org>, "Yaniv Dary" <yd...@redhat.com>, "Derek Cadzow"
> > <dcad...@redhat.com>
> > Sent: Monday, June 6, 2016 9:10:05 AM
> > Subject: Re: Adding a new flag to a specific gerrit project?
> 
> > Hi David,
> 
> > Thank you for following up on this thread.
> 
> > I would like to keep my ada...@redhat.com account, for which the account ID
> > is the following:
> 
> > 1001289
> 
> > Let me know if there is anything else you need to merge my accounts.
> 
> > Kind regards,
> 
> > Andrew Dahms
> > Documentation Program Manager
> > Phone: + 61 07-3514-7124
> > Email: ada...@redhat.com
> 
> > On Fri, Jun 3, 2016 at 5:45 PM, David Caro < dc...@redhat.com > wrote:
> 
> > > On 06/03 02:01, Lucy Bopf wrote:
> > 
> > > > ​Hi Juan, David,
> > 
> > > >
> > 
> > > > Thank you for including us on this thread. From my side, I think what
> > > > Juan
> > > > has suggested makes sense, and should work for our needs.
> > 
> > > >
> > 
> > > > Please also add the following names to the group of users that can set
> > > > this
> > > > flag:
> > 
> > > >
> > 
> > > > Tahlia Richardson 
> > 
> > > > Megan Lewis 
> > 
> > > > Byron Gravenorst 
> > 
> > > > Julie Wu 
> > 
> > > >
> > 
> > > > Just let me know if you need any further information, or if there are 
> > > > any
> > > > issues.
> > 
> 
> > > Done, but two of the members have duplicated gerrit accounts:
> > 
> 
> > > * Byron Gravenorst
> > 
> > > * Andrew Dahms
> > 
> 
> > > To fix it, log into gerrit with the account you want to keep, and go to:
> > 
> 
> > > https://gerrit.ovirt.org/#/settings/
> > 
> 
> > > There write down the value of the field 'Account ID', and send me an email
> > > with
> > 
> > > that id, I'll merge the duplicated accounts into that one.
> > 
> 
> > > Btw. This usually happens when using a new login method
> > > (oauth/google/github)
> > 
> > > without having linked it to the existing account, to do so, you must go 
> > > to:
> > 
> 
> > > https://gerrit.ovirt.org/#/settings/web-identities
> > 
> 
> > > And hit there the 'link another identity' button, that will ask you to log
> > > in
> > 
> > > with the new auth method, and will link it to the existing one without
> > > creating
> > 
> > > a new one each time.
> > 
> 
> > > Cheers!
> > 
> 
> > > >
> > 
> > > > Kind Regards,
> > 
> > > >
> > 
> > > > Lucy
> > 
> > > >
> > 
> > > > Lucy Bopf
> > 
> > > > Associate Documentation Program Manager
> > 
> > > > Phone: + 61 07-3514-7181
> > 
> > > > Email: lb...@redhat.com
> > 
> > > >
> > 
> > > > - Original Message -
> > 
> 
> > > >
> > 
> > > > > From: "Juan Hernández" < jhern...@redhat.com >
> > 
> > > > > To: "David Caro" < dc...@redhat.com >, "Andrew Dahms" <
> > > > > ada...@redhat.com
> > > > > >,
> > 
> > > > > "Lucy Bopf" < lb...@redhat.com >, "Derek Cadzow" < dcad...@redhat.com 
> > > > > >
> > 
> > > > > Cc: "infra" < infra@ovirt.org >, "Yaniv Dary" < yd...@redhat.com >
> > 
> > > > > Se

Re: Adding a new flag to a specific gerrit project?

2016-06-06 Thread David Caro
On 06/06 09:10, Andrew Dahms wrote:
> Hi David,
> 
> Thank you for following up on this thread.
> 
> I would like to keep my ada...@redhat.com account, for which the account ID
> is the following:
> 
> 1001289
> 
> Let me know if there is anything else you need to merge my accounts.


I have two accounts for you that use the redhat email. The older one (1000582)
also has the gmail ids added; the newer one (1001289) only has the redhat
email. Usually the older account is the one you would want to keep; are you
sure you want to keep the newer one?

> 
> Kind regards,
> 
> Andrew Dahms
> Documentation Program Manager
> *Phone:* +61 07-3514-7124
> *Email:* ada...@redhat.com
> 
> 
> 
> On Fri, Jun 3, 2016 at 5:45 PM, David Caro <dc...@redhat.com> wrote:
> 
> > On 06/03 02:01, Lucy Bopf wrote:
> > > ​Hi Juan, David,
> > >
> > > Thank you for including us on this thread. From my side, I think what
> > Juan has suggested makes sense, and should work for our needs.
> > >
> > > Please also add the following names to the group of users that can set
> > this flag:
> > >
> > > Tahlia Richardson 
> > > Megan Lewis 
> > > Byron Gravenorst 
> > > Julie Wu 
> > >
> > > Just let me know if you need any further information, or if there are
> > any issues.
> >
> >
> > Done, but two of the members have duplicated gerrit accounts:
> >
> > * Byron Gravenorst
> > * Andrew Dahms
> >
> > To fix it, log into gerrit with the account you want to keep, and go to:
> >
> > https://gerrit.ovirt.org/#/settings/
> >
> > There write down the value of the field 'Account ID', and send me an email
> > with
> > that id, I'll merge the duplicated accounts into that one.
> >
> > Btw. This usually happens when using a new login method
> > (oauth/google/github)
> > without having linked it to the existing account, to do so, you must go to:
> >
> > https://gerrit.ovirt.org/#/settings/web-identities
> >
> > And hit there the 'link another identity' button, that will ask you to log
> > in
> > with the new auth method, and will link it to the existing one without
> > creating
> > a new one each time.
> >
> >
> >
> > Cheers!
> >
> > >
> > > Kind Regards,
> > >
> > > Lucy
> > >
> > > Lucy Bopf
> > > Associate Documentation Program Manager
> > > Phone: + 61 07-3514-7181
> > > Email: lb...@redhat.com
> > >
> > > - Original Message -
> > >
> > > > From: "Juan Hernández" <jhern...@redhat.com>
> > > > To: "David Caro" <dc...@redhat.com>, "Andrew Dahms" <ada...@redhat.com
> > >,
> > > > "Lucy Bopf" <lb...@redhat.com>, "Derek Cadzow" <dcad...@redhat.com>
> > > > Cc: "infra" <infra@ovirt.org>, "Yaniv Dary" <yd...@redhat.com>
> > > > Sent: Thursday, June 2, 2016 7:11:35 PM
> > > > Subject: Re: Adding a new flag to a specific gerrit project?
> > >
> > > > Thanks David. I am answering your questions below. Derek, Lucy,
> > > > Andrew, please review and confirm if you agree.
> > >
> > > > On 06/02/2016 09:35 AM, David Caro wrote:
> > > > > On 06/01 16:54, Juan Hernández wrote:
> > > > >> Hello,
> > > > >>
> > > > >> As part of the effort to improve the documentation of the REST
> > > > >> API the documentation team will help us review the patches for
> > > > >> the specification. I was wondering if it is possible to add a new
> > > > >> flag, like the CR, CI, and V that we currently have. Would be
> > > > >> nice to have a "Documentation" flag (or similar) so that the
> > > > >> documentation team can +1 that flag. Is this possible? Can this
> > > > >> flag be added to the ovirt-engine-api-model project?
> > > > >
> > > > > Yes, it's possible, I'll need more details about it though like: *
> > > > > max and min values?
> > >
> > > > The min should be -1, and the max +1. The associated descriptions
> > > > could be the following:
> > >
> > > > -1: Documentation doesn't look good
> > > > 0: Documentation hasn't been reviewed
> > > > +1: Documentation looks good
> > >
> > > > > * Should a +

Re: One patch - two failures

2016-06-06 Thread David Caro
On 06/06 00:37, Yevgeny Zaspitsky wrote:
> Below are the details of one of my patches that failed in CI.
> How surprising that it failed for two different reasons, and both
> aren't patch-related!
> -- Forwarded message --
> From: "Jenkins CI" <gerr...@gerrit.ovirt.org>
> Date: Jun 5, 2016 11:37 PM
> Subject: Change in ovirt-engine[master]: core: update vdsm-jsonrpc-java to
> verion 1.2.3
> To: "Moti Asayag" <masa...@redhat.com>, "Yevgeny Zaspitsky" <
> yzasp...@redhat.com>
> Cc:
> 
> Jenkins CI has posted comments on this change.
> 
> Change subject: core: update vdsm-jsonrpc-java to verion 1.2.3
> ..
> 
> 
> Patch Set 2:
> 
> Build Failed
> 
> http://jenkins.ovirt.org/job/ovirt-engine_master_check-merged-el7-x86_64/534/
> : FAILURE
> 
> http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-master_el7_merged/547/
> : FAILURE

This one looks like a real issue:



2016-06-05 20:29:27 DEBUG 
otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema plugin.execute:926 
execute-output: ['/usr/share/ovirt-engine/dbscripts/schema.sh', '-s', 
'localhost', '-p', '5432', '-u', 'engine', '-d', 'engine', '-l', 
'/var/log/ovirt-engine/setup/ovirt-engine-setup-20160605202840-w8dvo1.log', 
'-c', 'apply'] stderr:
psql:/usr/share/ovirt-engine/dbscripts/upgrade/03_05_0580_add_default_instance_types.sql:131:
 ERROR:  duplicate key value violates unique constraint "pk_permissions_id"
DETAIL:  Key (id)=(0004-0004-0004-0004-0355) already exists.
CONTEXT:  SQL statement "INSERT INTO permissions(id,
 role_id,
 ad_element_id,
 object_id,
 object_type_id)
 SELECT uuid_generate_v1(),
 'DEF9----DEF9', -- UserTemplateBasedVm
 'EEE0----**FILTERED**789EEE', -- Everyone
 v_instance_type_id,
 4 -- template"
PL/pgSQL function do_insert_instance_type(character varying,character 
varying,integer,integer,integer) line 93 at SQL statement
SQL statement "SELECT do_insert_instance_type('Tiny', 'Tiny instance type', 
512, 1, 1)"
PL/pgSQL function insert_default_instance_types() line 5 at PERFORM
FATAL: Cannot execute sql command: 
--file=/usr/share/ovirt-engine/dbscripts/upgrade/03_05_0580_add_default_instance_types.sql


Does not seem related to your patch though.

> 
> http://jenkins.ovirt.org/job/ovirt-engine_master_check-merged-fc23-x86_64/534/
> : SUCCESS
> 
> http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-4.0_el7_merged/78/
> : SUCCESS
> 
> --
> To view, visit https://gerrit.ovirt.org/58584
> To unsubscribe, visit https://gerrit.ovirt.org/settings
> 
> Gerrit-MessageType: comment
> Gerrit-Change-Id: Iae15520c831c0b8de36169f0287057684033d376
> Gerrit-PatchSet: 2
> Gerrit-Project: ovirt-engine
> Gerrit-Branch: master
> Gerrit-Owner: Yevgeny Zaspitsky <yzasp...@redhat.com>
> Gerrit-Reviewer: Alona Kaplan <alkap...@redhat.com>
> Gerrit-Reviewer: Jenkins CI
> Gerrit-Reviewer: Moti Asayag <masa...@redhat.com>
> Gerrit-Reviewer: Piotr Kliczewski <piotr.kliczew...@gmail.com>
> Gerrit-Reviewer: Yevgeny Zaspitsky <yzasp...@redhat.com>
> Gerrit-Reviewer: gerrit-hooks <automat...@ovirt.org>
> Gerrit-HasComments: No



-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605




Re: Adding a new flag to a specific gerrit project?

2016-06-03 Thread David Caro
On 06/03 02:01, Lucy Bopf wrote:
> ​Hi Juan, David, 
> 
> Thank you for including us on this thread. From my side, I think what Juan 
> has suggested makes sense, and should work for our needs. 
> 
> Please also add the following names to the group of users that can set this 
> flag: 
> 
> Tahlia Richardson  
> Megan Lewis  
> Byron Gravenorst  
> Julie Wu  
> 
> Just let me know if you need any further information, or if there are any 
> issues. 


Done, but two of the members have duplicated gerrit accounts:

* Byron Gravenorst
* Andrew Dahms

To fix it, log into gerrit with the account you want to keep, and go to:

https://gerrit.ovirt.org/#/settings/

There, write down the value of the 'Account ID' field and send me an email with
that id; I'll merge the duplicated accounts into that one.

Btw, this usually happens when using a new login method (oauth/google/github)
without having linked it to the existing account. To link it, go to:

https://gerrit.ovirt.org/#/settings/web-identities

and hit the 'link another identity' button there; it will ask you to log in
with the new auth method and will link it to the existing account without
creating a new one each time.



Cheers!

> 
> Kind Regards, 
> 
> Lucy 
> 
> Lucy Bopf 
> Associate Documentation Program Manager 
> Phone: + 61 07-3514-7181 
> Email: lb...@redhat.com 
> 
> - Original Message -
> 
> > From: "Juan Hernández" <jhern...@redhat.com>
> > To: "David Caro" <dc...@redhat.com>, "Andrew Dahms" <ada...@redhat.com>,
> > "Lucy Bopf" <lb...@redhat.com>, "Derek Cadzow" <dcad...@redhat.com>
> > Cc: "infra" <infra@ovirt.org>, "Yaniv Dary" <yd...@redhat.com>
> > Sent: Thursday, June 2, 2016 7:11:35 PM
> > Subject: Re: Adding a new flag to a specific gerrit project?
> 
> > Thanks David. I am answering your questions below. Derek, Lucy,
> > Andrew, please review and confirm if you agree.
> 
> > On 06/02/2016 09:35 AM, David Caro wrote:
> > > On 06/01 16:54, Juan Hernández wrote:
> > >> Hello,
> > >>
> > >> As part of the effort to improve the documentation of the REST
> > >> API the documentation team will help us review the patches for
> > >> the specification. I was wondering if it is possible to add a new
> > >> flag, like the CR, CI, and V that we currently have. Would be
> > >> nice to have a "Documentation" flag (or similar) so that the
> > >> documentation team can +1 that flag. Is this possible? Can this
> > >> flag be added to the ovirt-engine-api-model project?
> > >
> > > Yes, it's possible, I'll need more details about it though like: *
> > > max and min values?
> 
> > The min should be -1, and the max +1. The associated descriptions
> > could be the following:
> 
> > -1: Documentation doesn't look good
> > 0: Documentation hasn't been reviewed
> > +1: Documentation looks good
> 
> > > * Should a +1/+2 be required to submit?
> 
> > Yes, +1 should be required to submit.
> 
> > > * Should -1/-2 block submits?
> 
> > Yes, -1 should block submits.
> 
> > >
> > >> Can we define a group of users that have permissions to set this
> > >> flag?
> > >
> > > Yep, for that a list of the gerrit usernames will be required. Also
> > > think if you want to give any other group (like maintainers) the
> > > ability to set it too, as depending on the policy you select in the
> > > previous question it might be required to merge.
> 
> > Yes, the maintainers group should have permission to set this flag.
> 
> > To start with, the names of the users of that group should be the
> > following:
> 
> > Lucy Bopf 
> > Andrew Dahms 
> 
> > Derek, Lucy, Andrew, any other names to add to this group?
> 
> > David, please wait for comments from Derek, Lucy and Andrew before
> > making this changes.
> 
> > --
> > Dirección Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta
> > 3ºD, 28016 Madrid, Spain
> > Inscrita en el Reg. Mercantil de Madrid – C.I.F. B82657941 - Red Hat S.L.



-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605




Re: ovirt-4.0-snapshot repo is missing ovirt-release40-snapshot.rpm

2016-06-02 Thread David Caro
On 06/02 17:42, Sandro Bonazzola wrote:
> On Thu, Jun 2, 2016 at 5:40 PM, Eyal Edri <ee...@redhat.com> wrote:
> 
> > Are those rpms generated automatically via a job or manually?
> >
> 
> automatically by a jenkins job:
> http://jenkins.ovirt.org/job/ovirt-release_4.0_build-artifacts-el7-x86_64/26/
> and published by a jenkins publisher:
> http://jenkins.ovirt.org/view/Publishers/job/ovirt_4.0_publish-rpms_nightly/

Neat :), with standard ci and all

> 
> 
> 
> 
> > Can we have a jenkins job that can generate them if not?
> >
> 
> 
> 
> 
> >
> > e.
> >
> > On Thu, Jun 2, 2016 at 12:42 PM, Fabian Deutsch <fdeut...@redhat.com>
> > wrote:
> >
> >> Hey,
> >>
> >> please add the ovirt-release40-snapshot.rpm to th e4.0-snapshot repo:
> >> http://plain.resources.ovirt.org/pub/ovirt-4.0-snapshot/rpm/el7/noarch/
> >>
> >> The missing file is currently blocking the 4.0-snapshot builds:
> >>
> >> http://jenkins.ovirt.org/user/fabiand/my-views/view/ovirt-node-ng/job/ovirt-node-ng_ovirt-4.0-snapshot_build-artifacts-fc22-x86_64/
> >>
> >> - fabian
> >>
> >> --
> >> Fabian Deutsch <fdeut...@redhat.com>
> >> RHEV Hypervisor
> >> Red Hat
> >>
> >>
> >>
> >
> >
> > --
> > Eyal Edri
> > Associate Manager
> > RHEV DevOps
> > EMEA ENG Virtualization R&D
> > Red Hat Israel
> >
> > phone: +972-9-7692018
> > irc: eedri (on #tlv #rhev-dev #rhev-integ)
> >
> 
> 
> 
> -- 
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605




Upgraded all the fc23 vms

2016-06-02 Thread David Caro

Hi, just fyi I've upgraded all the fc23 vms in order to make sure they have the
latest libvirt installed (I kept the puppet 3 though). It's required in order
to get rid of the 'monitor socket not showing up' error that started happening
more often lately with the storage issues.


If you see any strange issues happening with them, this might be related,
though it should be safe as most of the jobs use mock.

Cheers!


-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605




Re: PHX updates - hypervisors in Jenkins datacenter

2016-06-02 Thread David Caro
On 06/02 05:40, Evgheni Dereveanchin wrote:
> Hi all,
> 
> As part of the upgrade process to 3.6 I will be rebuilding
> all hypervisors to CentOS7, and I'm starting with the Jenkins
> Datacenter. For this, I am evacuating slave VMs by shutting
> them down and restarting in the new cluster with CentOS hosts.
> 
> I'm currently working on ovirt-srv04 so please don't schedule
> any VMs on that host until it's rebuilt.
> 
> I found several VMs running on that host which are not listed
> on Jenkins so I can't mark them offline there for the update period:
> 
> fc21-vm01.phx.ovirt.org
> el6-vm06.phx.ovirt.org
> 
> If anyone knows what these VMs are used for - please tell me,
> otherwise I will just restart them as well.
> 
> Also this VM is present on this hypervisor:
> artifactory.phx.ovirt.org
> 
> Is it used by the slaves? Can it be shut down for a few minutes?
> I'm not sure I will be able to live migrate it even as the F20
> hosts we have in the old cluster have issues with live migration
> even to other F20 hosts.


This VM is quite critical actually; I'd move it to the prod cluster.
It's used by all the engine jobs to pull the jar dependencies from, and also
to publish some oVirt jars.

The jobs should fall back to the official maven repos if it's not available,
but some might not use that fallback config. I'd say it's ok to take it out for
a few minutes; just send an email to the devel list so they are not surprised
if any jobs fail during that time.

> 
> Regards, 
> Evgheni Dereveanchin 
> 

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605




Re: Adding a new flag to a specific gerrit project?

2016-06-02 Thread David Caro
On 06/01 16:54, Juan Hernández wrote:
> Hello,
> 
> As part of the effort to improve the documentation of the REST API the
> documentation team will help us review the patches for the
> specification. I was wondering if it is possible to add a new flag, like
> the CR, CI, and V that we currently have. Would be nice to have a
> "Documentation" flag (or similar) so that the documentation team can +1
> that flag. Is this possible? Can this flag be added to the
> ovirt-engine-api-model project?

Yes, it's possible; I'll need more details about it though, like:
* max and min values?
* Should a +1/+2 be required to submit?
* Should -1/-2 block submits?

> Can we define a group of users that have
> permissions to set this flag?

Yep, for that a list of the gerrit usernames will be required. Also think about
whether you want to give any other group (like maintainers) the ability to set
it too, as depending on the policy you select in the previous questions it
might be required in order to merge.
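
For reference, such a flag ends up as a label definition in the project's
Gerrit config (project.config on refs/meta/config); a sketch using the values
discussed in this thread, where MaxWithBlock makes +1 required to submit and
-1 blocking:

    [label "Documentation"]
        function = MaxWithBlock
        value = -1 Documentation doesn't look good
        value = 0 Documentation hasn't been reviewed
        value = +1 Documentation looks good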

> 
> Thanks in advance,
> Juan Hernández
> 
> -- 
> Dirección Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta
> 3ºD, 28016 Madrid, Spain
> Inscrita en el Reg. Mercantil de Madrid – C.I.F. B82657941 - Red Hat S.L.

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605




Re: Debugging stuck vdsm jobs

2016-05-30 Thread David Caro Estevez
On 05/29 02:24, Nir Soffer wrote:
> On Sun, May 29, 2016 at 2:10 AM, Nir Soffer <nsof...@redhat.com> wrote:
> > It looks like this when tests time out:
> >
> > 23:04:44 miscTests.EventTests
> > 23:04:44 testEmitOK
> > 23:04:44 testEmitCallbackException   OK
> > 23:04:49 testEmitStale   OK
> > 23:04:49 testInstanceMethod  OK
> > 23:04:50 testInstanceMethodDead  OK
> > 23:04:55 testOneShot
> > 23:04:55 
> > 
> > 23:04:55 =   Timeout completing tests - extracting stacktrace
> >  =
> > 23:04:55 
> > 
> > 23:04:55
> > 23:04:55 attach: No such file or directory.
> > 23:04:55 [New LWP 7887]
> > 23:04:55 [New LWP 7880]
> > 23:04:55 [New LWP 7873]
> > 23:04:55 [New LWP 7866]
> > 23:04:55 [New LWP 7859]
> > 23:04:55 [New LWP 7852]
> > 23:04:55 [New LWP 7845]
> > 23:04:55 [Thread debugging using libthread_db enabled]
> > 23:04:55 Using host libthread_db library "/lib64/libthread_db.so.1".
> > 23:04:56 0x7f17f0a1fa82 in pthread_cond_timedwait@@GLIBC_2.3.2 ()
> > from /lib64/libpthread.so.0
> > 23:04:56
> > 23:04:56 Thread 8 (Thread 0x7f17df860700 (LWP 7845)):
> > 23:04:56 Undefined command: "py-bt".  Try "help".
> > 23:04:56 OK
> > 23:04:56 testUnregister
> > 23:04:56 
> > 
> > 23:04:56 =Aborting tests
> >  =
> > 23:04:56 
> > 
> > 23:04:56 ../tests/run_tests_local.sh: line 35:  7743 Killed
> >   "$PYTHON_EXE" ../tests/testrunner.py --local-modules $@
> >
> >
> >
> > On Sun, May 29, 2016 at 2:07 AM, Nir Soffer <nsof...@redhat.com> wrote:
> >> On Thu, May 26, 2016 at 11:08 PM, Nir Soffer <nsof...@redhat.com> wrote:
> >>> Hi all,
> >>>
> >>> We had 2 issues causing vdsm check-patch and check-merge jobs to get 
> >>> stuck.
> >>>
> >>> I fixed the one that caused most trouble:
> >>> https://gerrit.ovirt.org/57993
> >>>
> >>> The other issue may be related to ioprocess, I fixed a related issue:
> >>> https://gerrit.ovirt.org/57473
> >>>
> >>> But I have seen stuck jobs after this change, so the issue may not
> >>> be fixed yet.
> >>>
> >>> If you see a stuck vdsm job - a job that runs more than 15 minutes, please
> >>> get me a backtrace:
> >>>
> >>> 1. locate the test_runner process pid:
> >>>
> >>> $ ps aux | grep testrunner.py | grep -v grep
> >>> nsoffer  26297 82.6  0.9 389592 44 pts/3   R+   22:52   0:02
> >>> /usr/bin/python ../tests/testrunner.py ...
> >>>
> >>> 2. save a backtrace:
> >>>
> >>> gdb attach 26297 --batch -ex "thread apply all py-bt" > py-bt.out
> >>
> >> This requires the python-debuginfo package, typically installed using:
> >>
> >> dnf debuginfo-install python
> >>
> >> I sent this patch, detecting stuck vdsm tests, printing a backtrace, and 
> >> killing
> >> the stuck process:
> >> https://gerrit.ovirt.org/58212
> >>
> >> It works, but we don't get a backtrace, since python-debuginfo is not 
> >> installed
> >> although I require it - probably we need to add the fedora-debug repository
> >> to check-patch.repos. I tried to use the urls from 
> >> /etc/yum.repos.d/fedora.repo,
> >> but none of them work.
> >>
> >> I will need help from infra to get it working.
> 
> I sent also this patch, that should fix the issue on jenkins, but I
> cannot test it on jenkins:
> https://gerrit.ovirt.org/58213

Instead of forcing the repo to be added for all the projects, you should use
the *.repos files that vdsm has in the automation directory to add any extra
repos you want available when running/installing
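
For the debuginfo case that would be just one extra line there, something like
this in automation/check-patch.repos (a sketch; I'm assuming the usual
'name,url' format of those files, and the debug mirror path should be
double-checked):

    fedora-debuginfo,http://dl.fedoraproject.org/pub/fedora/linux/releases/$releasever/Everything/$basearch/debug/

That way the repo only gets added to that project's chroots instead of
globally.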

> 
> Nir

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Foreman support for puppet 4

2016-05-27 Thread David Caro

It's getting closer!

http://projects.theforeman.org/issues/8447#change-65691


-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Appliance job build failure because of ovirt-3.6-epel

2016-05-27 Thread David Caro
On 05/26 19:59, Fabian Deutsch wrote:
> Hey,
> 
> the 3.6 job completes, but without an engine:
> 
> http://jenkins.ovirt.org/user/fabiand/my-views/view/appliance/job/ovirt-appliance_ovirt-3.6_build-artifacts-el7-x86_64/lastSuccessfulBuild/artifact/exported-artifacts/anaconda.log/*view*/
> 
> The problenm should also be present in any other following job until
> it's fixed :)

Just fyi, the ovirt_3.6_image-ng-system-tests is one of those jobs ;)

> 
> 16:01:06,691 INFO program: + yum install -y ovirt-engine
> 16:01:06,691 INFO program: Loaded plugins: fastestmirror
> 16:01:06,692 INFO program:
> http://download.fedoraproject.org/pub/epel/7/x86_64/repodata/55d4bcbc6bcd8727167925d216c94c7f5217b921d892da747b84d079c5905a7b-updateinfo.xml.bz2:
> [Errno 14] HTTP Error 404 - Not Found
> 16:01:06,693 INFO program: Trying other mirror.
> 16:01:06,694 INFO program: To address this issue please refer to the
> below knowledge base article
> 16:01:06,694 INFO program:
> 16:01:06,695 INFO program: https://access.redhat.com/articles/1320623
> 16:01:06,695 INFO program:
> 16:01:06,695 INFO program: If above article doesn't help to resolve
> this issue please create a bug on https://bugs.centos.org/
> 16:01:06,696 INFO program:
> 16:01:06,697 INFO program:
> http://download.fedoraproject.org/pub/epel/7/x86_64/repodata/3abc3e70be643a17bb37e3f3e1dd057d8c6242c579412fc50de180b9882e0a99-primary.sqlite.xz:
> [Errno 14] HTTP Error 404 - Not Found
> 16:01:06,699 INFO program: Trying other mirror.
> 16:01:06,699 INFO program: Determining fastest mirrors
> 16:01:06,700 INFO program: * base: centos-distro.cavecreek.net
> 16:01:06,701 INFO program: * extras: centos.host-engine.com
> 16:01:06,701 INFO program: * updates: mirror.n5tech.com
> 16:01:06,702 INFO program:
> http://download.fedoraproject.org/pub/epel/7/x86_64/repodata/3abc3e70be643a17bb37e3f3e1dd057d8c6242c579412fc50de180b9882e0a99-primary.sqlite.xz:
> [Errno 14] HTTP Error 404 - Not Found
> 16:01:06,702 INFO program: Trying other mirror.
> 16:01:06,702 INFO program:
> http://download.fedoraproject.org/pub/epel/7/x86_64/repodata/3abc3e70be643a17bb37e3f3e1dd057d8c6242c579412fc50de180b9882e0a99-primary.sqlite.xz:
> [Errno 14] HTTP Error 404 - Not Found
> 16:01:06,703 INFO program: Trying other mirror.
> 16:01:06,703 INFO program:
> 16:01:06,703 INFO program:
> 16:01:06,704 INFO program: One of the configured repositories failed
> (Extra Packages for Enterprise Linux 7 - x86_64),
> 16:01:06,705 INFO program: and yum doesn't have enough cached data to
> continue. At this point the only
> 16:01:06,706 INFO program: safe thing yum can do is fail. There are a
> few ways to work "fix" this:
> 16:01:06,706 INFO program:
> 16:01:06,706 INFO program: 1. Contact the upstream for the repository
> and get them to fix the problem.
> 16:01:06,707 INFO program:
> 16:01:06,707 INFO program: 2. Reconfigure the baseurl/etc. for the
> repository, to point to a working
> 16:01:06,707 INFO program: upstream. This is most often useful if you
> are using a newer
> 16:01:06,708 INFO program: distribution release than is supported by
> the repository (and the
> 16:01:06,708 INFO program: packages for the previous distribution
> release still work).
> 16:01:06,708 INFO program:
> 16:01:06,708 INFO program: 3. Disable the repository, so yum won't use
> it by default. Yum will then
> 16:01:06,709 INFO program: just ignore the repository until you
> permanently enable it again or use
> 16:01:06,709 INFO program: --enablerepo for temporary usage:
> 16:01:06,709 INFO program:
> 16:01:06,710 INFO program: yum-config-manager --disable ovirt-3.6-epel
> 
> 
> - fabian
> 
> -- 
> Fabian Deutsch <fdeut...@redhat.com>
> RHEV Hypervisor
> Red Hat
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Maintenance on the Mailing-Lists

2016-05-26 Thread David Caro
On 05/26 23:57, Marc Dequènes (Duck) wrote:
> Quack,
> 
> Changes done. Do you copy?
> 

Yep, copy (and good signature too)



> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra


-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: ngn build jobs take more than twice (x) as long as in the last days

2016-05-26 Thread David Caro
On 05/26 10:20, Barak Korren wrote:
> >
> >
> > I agree a stable distributed storage solution is the way to go if we can
> > find one :)
> >
> 
> Distributed storages usually suffer from a large overhead because:
> 1. They try to be resilient to node failure, which means keeping two
> or more copies of the same file, which results in I/O overhead.
> 2. They need to coordinate metadata access for large amounts of files.
> Bottlenecks in the metadata management system are a common issue for
> distributed FS storages.
> 
> Since most of our data is ephemeral anyway I don't think we need to
> pay this overhead.

The solution for our current temporary ephemeral data would be for each node to
create the vms locally; that's the scratch disks solution we started with.

The distributed storage would be used to store the jenkins machine templates,
which would mostly be read by the hosts and thus be properly cached locally with
a low miss rate (as they don't usually change). That way we would not use the
central storage at all, since its extra levels of redundancy are only useful for
more critical data (aka the production datacenter machines).

> 
> 
> -- 
> Barak Korren
> bkor...@redhat.com
> RHEV-CI Team

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: ngn build jobs take more than twice (x) as long as in the last days

2016-05-25 Thread David Caro
On 05/25 17:06, David Caro wrote:
> On 05/25 16:09, Barak Korren wrote:
> > On 25 May 2016 at 14:52, David Caro <dc...@redhat.com> wrote:
> > > On 05/25 14:42, Barak Korren wrote:
> > >> On 25 May 2016 at 12:44, Eyal Edri <ee...@redhat.com> wrote:
> > >> > OK,
> > >> > I suggest to test using a VM with local disk (preferably on a host 
> > >> > with SSD
> > >> > configured), if its working,
> > >> > lets expedite moving all VMs or at least a large amount of VMs to it 
> > >> > until
> > >> > we see network load reduced.
> > >> >
> > >>
> > >> This is not that easy, oVirt doesn't support mixing local disk and
> > >> storage in the same cluster, so we will need to move hosts to a new
> > >> cluster for this.
> > >> Also we will lose the ability to use templates, or otherwise have to
> > >> create the templates on each and every disk.
> > >>
> > >> The scratch disk is a good solution for this, where you can have the
> > >> OS image on the central storage and the ephemeral data on the local
> > >> disk.
> > >>
> > >> WRT to the storage architecture - a single huge (10.9T) ext4 is used
> > >> as the FS on top of the DRBD, this is probably not the most efficient
> > >> thing one can do (XFS would probably have been better, RAW via iSCSI -
> > >> even better).
> > >
> > > That was done >3 years ago; xfs was not yet stable, widely used or well
> > > supported back then.
> > >
> > AFAIK it pre-dates EXT4
> 
> It does, but on el6 it was performing much worse, and had more bugs (from
> what the reviews said at the time).
> 
> > in any case this does not detract from the
> > fact that the current configuration is not as efficient as we can make
> > it.
> > 
> 
> It does not; I agree it's better to focus on what we can do from now on, not
> what should have been done then.
> 
> > 
> > >>
> > >> I'm guessing that those 10/9TB are not made from a single disk but
> > >> with a hardware RAID of some sort. In this case deactivating the
> > >> hardware RAID and re-exposing it as multiple separate iSCSI LUNs (That
> > >> are then re-joined to a single storage domain in oVirt) will enable
> > >> different VMs to concurrently work on different disks. This should
> > >> lower the per-vm storage latency.
> > >
> > > That would get rid of the drbd too, it's a totally different setup, from
> > > scratch (no nfs either).
> > 
> > We can and should still use DRBD, just setup a device for each disk.
> > But yeah, NFS should probably go away.
> > (We are seeing dramatically better performance for iSCSI in
> > integration-engine)
> 
> I don't understand then what you said about splitting the hardware RAIDs; do
> you mean to set up one drbd device on top of each hard drive instead?


Though I really think we should move to gluster/ceph instead for the jenkins
vms; does anyone know what the current status of the hyperconverged setup is?

That would give us better scalable distributed storage, and properly use the
hosts' local disks (we have more space on the combined hosts right now than on
the storage servers).

> 
> 
> btw, I think the nfs is also used for something more than just the engine
> storage domain (just keep in mind that it has to be checked if we are going
> to get rid of it)
> 
> > 
> > >
> > >>
> > >> Looking at the storage machine I see strong indication it is IO bound
> > >> - the load average is ~12 while there are just 1-5 working processes
> > >> and the CPU is ~80% idle and the rest is IO wait.
> > >>
> > >> Running 'du *' at:
> > >> /srv/ovirt_storage/jenkins-dc/658e5b87-1207-4226-9fcc-4e5fa02b86b4/images
> > >> one can see that most images are ~40G in size (that is _real_ 40G not
> > >> sparse!). This means that despite having most VMs created based on
> > >> templates, the VMs are full template copies rather then COW clones.
> > >
> > > That should not be like that; maybe the templates are wrongly configured?
> > > Or the foreman images?
> > 
> > This is the expected behaviour when creating a VM from template in the
> > oVirt admin UI. I thought Foreman might behave differently, but it
> > seems it does not.
> > 
> > This behaviour is determined by the parameters you pass to the engine
> > API when instantiating a VM, so it most probably doesn't have anything
> > to do with the template configuration.

Re: ngn build jobs take more than twice (x) as long as in the last days

2016-05-25 Thread David Caro
On 05/25 16:09, Barak Korren wrote:
> On 25 May 2016 at 14:52, David Caro <dc...@redhat.com> wrote:
> > On 05/25 14:42, Barak Korren wrote:
> >> On 25 May 2016 at 12:44, Eyal Edri <ee...@redhat.com> wrote:
> >> > OK,
> >> > I suggest to test using a VM with local disk (preferably on a host with 
> >> > SSD
> >> > configured), if its working,
> >> > lets expedite moving all VMs or at least a large amount of VMs to it 
> >> > until
> >> > we see network load reduced.
> >> >
> >>
> >> This is not that easy, oVirt doesn't support mixing local disk and
> >> storage in the same cluster, so we will need to move hosts to a new
> >> cluster for this.
> >> Also we will lose the ability to use templates, or otherwise have to
> >> create the templates on each and every disk.
> >>
> >> The scratch disk is a good solution for this, where you can have the
> >> OS image on the central storage and the ephemeral data on the local
> >> disk.
> >>
> >> WRT to the storage architecture - a single huge (10.9T) ext4 is used
> >> as the FS on top of the DRBD, this is probably not the most efficient
> >> thing one can do (XFS would probably have been better, RAW via iSCSI -
> >> even better).
> >
> > That was done >3 years ago; xfs was not yet stable, widely used or well
> > supported back then.
> >
> AFAIK it pre-dates EXT4

It does, but on el6 it was performing much worse, and had more bugs (from
what the reviews said at the time).

> in any case this does not detract from the
> fact that the current configuration is not as efficient as we can make
> it.
> 

It does not; I agree it's better to focus on what we can do from now on, not
what should have been done then.

> 
> >>
> >> I'm guessing that those 10/9TB are not made from a single disk but
> >> with a hardware RAID of some sort. In this case deactivating the
> >> hardware RAID and re-exposing it as multiple separate iSCSI LUNs (That
> >> are then re-joined to a single storage domain in oVirt) will enable
> >> different VMs to concurrently work on different disks. This should
> >> lower the per-vm storage latency.
> >
> > That would get rid of the drbd too, it's a totally different setup, from
> > scratch (no nfs either).
> 
> We can and should still use DRBD, just setup a device for each disk.
> But yeah, NFS should probably go away.
> (We are seeing dramatically better performance for iSCSI in
> integration-engine)

I don't understand then what you said about splitting the hardware RAIDs; do
you mean to set up one drbd device on top of each hard drive instead?
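
(i.e. something like this repeated once per spindle in drbd.conf? A sketch with
made-up resource names and addresses, just to check I got the idea:

    resource disk1 {
        device    /dev/drbd1;
        disk      /dev/sdb;
        meta-disk internal;
        on storage01 { address 10.0.0.1:7789; }
        on storage02 { address 10.0.0.2:7789; }
    }
)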


btw, I think the nfs is also used for something more than just the engine
storage domain (just keep in mind that it has to be checked if we are going to
get rid of it)

> 
> >
> >>
> >> Looking at the storage machine I see strong indication it is IO bound
> >> - the load average is ~12 while there are just 1-5 working processes
> >> and the CPU is ~80% idle and the rest is IO wait.
> >>
> >> Running 'du *' at:
> >> /srv/ovirt_storage/jenkins-dc/658e5b87-1207-4226-9fcc-4e5fa02b86b4/images
> >> one can see that most images are ~40G in size (that is _real_ 40G not
> >> sparse!). This means that despite having most VMs created based on
> >> templates, the VMs are full template copies rather then COW clones.
> >
> > That should not be like that; maybe the templates are wrongly configured?
> > Or the foreman images?
> 
> This is the expected behaviour when creating a VM from template in the
> oVirt admin UI. I thought Foreman might behave differently, but it
> seems it does not.
> 
> This behaviour is determined by the parameters you pass to the engine
> API when instantiating a VM, so it most probably doesn't have anything
> to do with the template configuration.

So maybe a misconfiguration in foreman?

> 
> >
> >> What this means is that using pools (where all VMs are COW copies of
> >> the single pool template) is expected to significantly reduce the
> >> storage utilization and therefore the IO load on it (the less you
> >> store, the less you need to read back).
> >
> > That should happen too without pools, with normal qcow templates.
> 
> Not unless you create all the VMs via the API and pass the right
> parameters. Pools are the easiest way to ensure you never mess that
> up...

That was the idea
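
(For the record, on the API side it boils down to the disks' clone flag when
creating a VM from a template; a rough, untested sketch with made-up names:

    POST /api/vms
    <vm>
      <name>jenkins-slave-01</name>
      <cluster><name>jenkins</name></cluster>
      <template><name>el7-base</name></template>
      <disks><clone>false</clone></disks>
    </vm>

clone=false keeps the disks as COW layers on top of the template, while
clone=true is what gives the full 40G copies we are seeing.)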

> 
> > And in any case, that will not lower the normal io when not actually
> > creating vms, as any read and write will still hit the disk anyhow; it only
> > alleviates the io when creating new vms. The local disk (scratch disk) is
> > the best option imo, now and for the foreseeable future.

Re: DAO test and upgrade appear not to run on 4.0 branch

2016-05-25 Thread David Caro
On 05/25 16:03, Tal Nisan wrote:
> On Wed, May 25, 2016 at 4:00 PM, Eyal Edri <ee...@redhat.com> wrote:
> 
> >
> >
> > On Wed, May 25, 2016 at 3:43 PM, Tal Nisan <tni...@redhat.com> wrote:
> >
> >>
> >>
> >> On Wed, May 25, 2016 at 3:37 PM, Eyal Edri <ee...@redhat.com> wrote:
> >>
> >>>
> >>>
> >>> On Wed, May 25, 2016 at 3:33 PM, Tal Nisan <tni...@redhat.com> wrote:
> >>>
> >>>>
> >>>>
> >>>> On Wed, May 25, 2016 at 3:29 PM, Eyal Edri <ee...@redhat.com> wrote:
> >>>>
> >>>>> db upgrade jobs should be covered by engine-setup/upgrade so I don't
> >>>>> see a reason to keep running them.
> >>>>>
> >>>> You mean the checkpatch jobs cover them? Cause upgrade script changes
> >>>> need to be tested as well
> >>>>
> >>>
> >>> 2 things IMO cover this:
> >>>   1. the db duplicate script in check-patch.sh
> >>>   2. ovirt-engine setup/upgrade not in standard ci (since we can't run
> >>> setup on mock)
> >>>
> >>> If this is not the case, we need to migrate the db scripts to
> >>> check-patch.
> >>>
> >> Unless I got it all wrong I recall we had an upgrade script check on the
> >> old master, it seems like it's still running:
> >>
> >> http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-master_el7_merged/
> >>
> >>
> >>
> > That is not db upgrade, it's the setup job I talked about and I think
> > sandro added it to 4.0 already.
> >
> It didn't run on the patch I sent as an example


It only runs on merges
> 
> >
> >
> >
> >>
> >>>
> >>>>
> >>>>> On Wed, May 25, 2016 at 3:29 PM, Eyal Edri <ee...@redhat.com> wrote:
> >>>>>
> >>>>>> Dao tests are run in the old jenkins still (pending migration to
> >>>>>> standard ci - we'll appreciate help from DEV migrating it).
> >>>>>> Right now i'm cloning the jobs to run on 4.0 on old-jenkins.
> >>>>>>
> >>>>> What's needed for that migration?
> >>>>
> >>>
> >>> Just to copy the code to a bash script and test it, I started a very
> >>> rough and ugly draft here:
> >>>
> >>> https://gerrit.ovirt.org/#/c/55808/
> >>>
> >> Unfortunately we don't have the resources currently to help in that
> >> front, maybe other teams can help?
> >> Why not copy the existing behavior from old Jenkins though?
> >>
> >
> > We can't since they require local changes to the VMs which are not enabled
> > on new slaves in new jenkins.
> > E.g. dao tests need postgresql installed and configured - which we
> > can't enable on the new jenkins without breaking other stuff.
> >
> > The best solution will be to migrate this job to standard CI - so we need
> > a dev to sit with a CI engineer - it's also more complicated since we moved to
> > el7 and new postgres.
> >
> OK, guess that for now I'll either test DAO myself or make a system based
> on trust :)
> 
> >
> >
> >
> >>
> >>>
> >>>
> >>>
> >>>>
> >>>>>>
> >>>>>>
> >>>>>> On Wed, May 25, 2016 at 3:11 PM, Tal Nisan <tni...@redhat.com> wrote:
> >>>>>>
> >>>>>>> Encountered that in this patch:
> >>>>>>> https://gerrit.ovirt.org/#/c/58034/
> >>>>>>>
> >>>>>>> It introduces both an upgrade script change and a change in the DAO
> >>>>>>> layer yet it seems that DAO tests and upgrade test did not run
> >>>>>>>
> >>>>>>>
> >>>>>>> _______
> >>>>>>> Infra mailing list
> >>>>>>> Infra@ovirt.org
> >>>>>>> http://lists.ovirt.org/mailman/listinfo/infra
> >>>>>>>
> >>>>>>>
> >>>>>>
> >>>>>>
> >>>>>> --
> >>>>>> Eyal Edri
> >>>>>> Associate Manager
> >>>>>> RHEV DevOps
> >>>>>> EMEA ENG Virtualization R&D
> >>>>>> Red Hat Israel
> >>>>>>
> >>>>>> phone: +972-9-7692018
> >>>>>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
> >>>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>> --
> >>>>> Eyal Edri
> >>>>> Associate Manager
> >>>>> RHEV DevOps
> >>>>> EMEA ENG Virtualization R&D
> >>>>> Red Hat Israel
> >>>>>
> >>>>> phone: +972-9-7692018
> >>>>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
> >>>>>
> >>>>
> >>>>
> >>>
> >>>
> >>> --
> >>> Eyal Edri
> >>> Associate Manager
> >>> RHEV DevOps
> >>> EMEA ENG Virtualization R&D
> >>> Red Hat Israel
> >>>
> >>> phone: +972-9-7692018
> >>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
> >>>
> >>
> >>
> >
> >
> > --
> > Eyal Edri
> > Associate Manager
> > RHEV DevOps
> > EMEA ENG Virtualization R&D
> > Red Hat Israel
> >
> > phone: +972-9-7692018
> > irc: eedri (on #tlv #rhev-dev #rhev-integ)
> >

> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra


-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: ngn build jobs take more than twice (x) as long as in the last days

2016-05-25 Thread David Caro
On 05/25 14:42, Barak Korren wrote:
> On 25 May 2016 at 12:44, Eyal Edri <ee...@redhat.com> wrote:
> > OK,
> > I suggest to test using a VM with local disk (preferably on a host with SSD
> > configured), if its working,
> > lets expedite moving all VMs or at least a large amount of VMs to it until
> > we see network load reduced.
> >
> 
> This is not that easy, oVirt doesn't support mixing local disk and
> storage in the same cluster, so we will need to move hosts to a new
> cluster for this.
> Also we will lose the ability to use templates, or otherwise have to
> create the templates on each and every disk.
> 
> The scratch disk is a good solution for this, where you can have the
> OS image on the central storage and the ephemeral data on the local
> disk.
> 
> WRT to the storage architecture - a single huge (10.9T) ext4 is used
> as the FS on top of the DRBD, this is probably not the most efficient
> thing one can do (XFS would probably have been better, RAW via iSCSI -
> even better).

That was done >3 years ago; xfs was not yet stable, widely used or well
supported back then.

> 
> I'm guessing that those 10/9TB are not made from a single disk but
> with a hardware RAID of some sort. In this case deactivating the
> hardware RAID and re-exposing it as multiple separate iSCSI LUNs (That
> are then re-joined to a single storage domain in oVirt) will enable
> different VMs to concurrently work on different disks. This should
> lower the per-vm storage latency.

That would get rid of the drbd too, it's a totally different setup, from
scratch (no nfs either).

> 
> Looking at the storage machine I see strong indication it is IO bound
> - the load average is ~12 while there are just 1-5 working processes
> and the CPU is ~80% idle and the rest is IO wait.
> 
> Running 'du *' at:
> /srv/ovirt_storage/jenkins-dc/658e5b87-1207-4226-9fcc-4e5fa02b86b4/images
> one can see that most images are ~40G in size (that is _real_ 40G not
> sparse!). This means that despite having most VMs created based on
> templates, the VMs are full template copies rather then COW clones.

That should not be like that; maybe the templates are wrongly configured? Or
the foreman images?

> What this means is that using pools (where all VMs are COW copies of
> the single pool template) is expected to significantly reduce the
> storage utilization and therefore the IO load on it (the less you
> store, the less you need to read back).

That should happen without pools too, with normal qcow templates.
And in any case, that will not lower the normal io when not actually creating
vms, as any read and write will still hit the disk anyhow; it only alleviates
the io when creating new vms. The local disk (scratch disk) is the best option
imo, now and for the foreseeable future.

> 
> -- 
> Barak Korren
> bkor...@redhat.com
> RHEV-CI Team
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: ngn build jobs take more than twice (x) as long as in the last days

2016-05-24 Thread David Caro
On 05/24 18:03, David Caro wrote:
> On 05/24 17:57, Fabian Deutsch wrote:
> > Hey,
> > 
> > $subj says it all.
> > 
> > Affected jobs are:
> > http://jenkins.ovirt.org/user/fabiand/my-views/view/ovirt-node-ng/
> > 
> > I.e. 3.6 - before: ~46min, now 1:23hrs
> > 
> > In master it's even worse: >1:30hrs
> > 
> > Can someone help to identify the reason?
> 
> 
> I see that this is where there's a big jump in time:
> 
> 06:39:38 Domain installation still in progress. You can reconnect to 
> 06:39:38 the console to complete the installation process.
> 07:21:51 .2016-05-24 03:21:51,341: Install finished. Or at least virt shut down.
> 
> So it looks as if the code that checks whether the domain is shut down is not
> working properly, or maybe the virt-install is just taking a very long time
> to run.


It seems that the virt-install log is not being archived, and the workdir has
already been cleaned up, so I can't check the logfile:

   
/home/jenkins/workspace/ovirt-node-ng_ovirt-3.6_build-artifacts-fc22-x86_64/ovirt-node-ng/virt-install.log


Maybe you can archive it too on the next run, to help debug
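
Something as simple as this at the end of the build script should do it (a
sketch, using the path above; everything left under exported-artifacts gets
archived by the standard-CI jobs):

    mkdir -p exported-artifacts
    cp -v ovirt-node-ng/virt-install.log exported-artifacts/ || :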

> 
> > 
> > - fabian
> > 
> > -- 
> > Fabian Deutsch <fdeut...@redhat.com>
> > RHEV Hypervisor
> > Red Hat
> > ___
> > Infra mailing list
> > Infra@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/infra
> 
> -- 
> David Caro
> 
> Red Hat S.L.
> Continuous Integration Engineer - EMEA ENG Virtualization R&D
> 
> Tel.: +420 532 294 605
> Email: dc...@redhat.com
> IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> Web: www.redhat.com
> RHT Global #: 82-62605



-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: ngn build jobs take more than twice (x) as long as in the last days

2016-05-24 Thread David Caro
On 05/24 17:57, Fabian Deutsch wrote:
> Hey,
> 
> $subj says it all.
> 
> Affected jobs are:
> http://jenkins.ovirt.org/user/fabiand/my-views/view/ovirt-node-ng/
> 
> I.e. 3.6 - before: ~46min, now 1:23hrs
> 
> In master it's even worse: >1:30hrs
> 
> Can someone help to identify the reason?


I see that this is where there's a big jump in time:

06:39:38 Domain installation still in progress. You can reconnect to 
06:39:38 the console to complete the installation process.
07:21:51 .2016-05-24 03:21:51,341: Install finished. Or at least virt shut down.

So it looks as if the code that checks whether the domain is shut down is not
working properly, or maybe the virt-install is just taking a very long time to
run.
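
(If the wait loop is anything like this sketch, polling the domain state:

    while virsh domstate "$domain" 2>/dev/null | grep -q running; do
        sleep 10
    done

then a too-long poll interval or a missed state change would explain the gap,
so it's worth checking both that and whether virt-install itself got slower.)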

> 
> - fabian
> 
> -- 
> Fabian Deutsch <fdeut...@redhat.com>
> RHEV Hypervisor
> Red Hat
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: IPv6 RR disabled on lists.ovirt.org -- WHY???

2016-05-24 Thread David Caro
On 05/24 18:38, Marc Dequènes (Duck) wrote:
> Quack,
> 
> On 05/24/2016 05:32 PM, David Caro wrote:
> > On 05/24 11:27, Eyal Edri wrote:
> >> Misc,David?
> 
> Misc is on PTO
> 
> > I don't know, when was that done?
> > 
> > Maybe it's old enough that Quaid was involved back then? Or dnary?
> 
> Added Quaid, please help us.
> Who is dnary? Could not find this nick/mail-prefix/…

Dave Neary, he was community manager for oVirt ~3 years ago

> 
> Here is the original mail with the unsolved question:
> 
> >> On Tue, May 24, 2016 at 8:31 AM, Marc Dequènes (Duck) <d...@redhat.com>
> >> wrote:
> >>
> >>> Quack,
> >>>
> >>> I'm having a look at OVIRT-357 and found that IPv6 was disabled for
> >>> Postfix as a workaround.
> >>>
> >>> It seems to me the IPv6 address should be added to the DNS RR (so that
> >>> SPF would allow this address too) and Postfix could have IPv6
> >>> reactivated. I see no other problem with other services on the machine
> >>> if we do so.
> >>>
> >>> Nevertheless, I found out this in the dns-maps:
> >>> ; TASK0043529 - TASK0108580 overwriten
> >>> ;linode01IN  2600:3c01::f03c:91ff:fe93:4b0d
> >>>
> >>> Which means IPv6 RR were activated and then later disabled. I don't know
> >>> how to have access to these TASKs (SNOW?) but I'd really like to know
> >>> the reason for this before any action.
> >>>
> >>> Do any one know why this DNS RR was removed? or were I could find it?
> >>>
> >>> Regards.
> 



-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: IPv6 RR disabled on lists.ovirt.org -- WHY???

2016-05-24 Thread David Caro
On 05/24 11:27, Eyal Edri wrote:
> Misc,David?

I don't know, when was that done?

Maybe it's old enough that Quaid was involved back then? Or dnary?

> 
> On Tue, May 24, 2016 at 8:31 AM, Marc Dequènes (Duck) <d...@redhat.com>
> wrote:
> 
> > Quack,
> >
> > I'm having a look at OVIRT-357 and found that IPv6 was disabled for
> > Postfix as a workaround.
> >
> > It seems to me the IPv6 address should be added to the DNS RR (so that
> > SPF would allow this address too) and Postfix could have IPv6
> > reactivated. I see no other problem with other services on the machine
> > if we do so.
> >
> > Nevertheless, I found out this in the dns-maps:
> > ; TASK0043529 - TASK0108580 overwriten
> > ;linode01IN  2600:3c01::f03c:91ff:fe93:4b0d
> >
> > Which means IPv6 RR were activated and then later disabled. I don't know
> > how to have access to these TASKs (SNOW?) but I'd really like to know
> > the reason for this before any action.
> >
> > Do any one know why this DNS RR was removed? or were I could find it?
> >
> > Regards.
> >
> >
> > ___
> > Infra mailing list
> > Infra@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/infra
> >
> >
> 
> 
> -- 
> Eyal Edri
> Associate Manager
> RHEV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
> 
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: vdsm_master_check-patch-el7 job is failing

2016-05-24 Thread David Caro
On 05/24 11:07, Amit Aviram wrote:
> Hi.
> For the last day I am getting this error over and over again from jenkins:
> 
> 07:23:53 Start: yum install
> 07:23:55 ERROR: Command failed. See logs for output.
> 07:23:55  # /usr/bin/yum-deprecated --installroot
> /var/lib/mock/epel-7-x86_64-cc6e9a99555654260f7f229c124a6940-31053/root/
> --releasever 7 install @buildsys-build --setopt=tsflags=nocontexts
> 07:23:55 WARNING: unable to delete selinux filesystems
> (/tmp/mock-selinux-plugin.3tk4zgr4): [Errno 1] Operation not permitted:
> '/tmp/mock-selinux-plugin.3tk4zgr4'
> 07:23:55 Init took 3 seconds
> 
> 
> (see http://jenkins.ovirt.org/job/vdsm_master_check-patch-el7-x86_64/2026/)
> 
> 
> This fails the job, so I get -1 from Jenkins CI for my patch.


That's not what's failing the job, it's just a warning; the failure is
happening earlier, when installing the chroot:

07:23:53 Start: yum install
07:23:55 ERROR: Command failed. See logs for output.
07:23:55  # /usr/bin/yum-deprecated --installroot 
/var/lib/mock/epel-7-x86_64-cc6e9a99555654260f7f229c124a6940-31053/root/ 
--releasever 7 install @buildsys-build --setopt=tsflags=nocontexts

Checking the logs (logs.tgz file, archived on the job, under
vdsm/logs/mocker-epel-7-x86_64.el7.init/root.log):


DEBUG util.py:417:  
https://repos.fedorapeople.org/repos/openstack/openstack-kilo/el7/repodata/repomd.xml:
 [Errno 14] HTTPS Error 404 - Not Found
DEBUG util.py:417:  Trying other mirror.
DEBUG util.py:417:   One of the configured repositories failed ("Custom 
openstack-kilo"),
DEBUG util.py:417:   and yum doesn't have enough cached data to continue. At 
this point the only
DEBUG util.py:417:   safe thing yum can do is fail. There are a few ways to 
work "fix" this:
DEBUG util.py:417:   1. Contact the upstream for the repository and get 
them to fix the problem.
DEBUG util.py:417:   2. Reconfigure the baseurl/etc. for the repository, to 
point to a working
DEBUG util.py:417:  upstream. This is most often useful if you are 
using a newer
DEBUG util.py:417:  distribution release than is supported by the 
repository (and the
DEBUG util.py:417:  packages for the previous distribution release 
still work).
DEBUG util.py:417:   3. Disable the repository, so yum won't use it by 
default. Yum will then
DEBUG util.py:417:  just ignore the repository until you permanently 
enable it again or use
DEBUG util.py:417:  --enablerepo for temporary usage:
DEBUG util.py:417:  yum-config-manager --disable openstack-kilo
DEBUG util.py:417:   4. Configure the failing repository to be skipped, if 
it is unavailable.
DEBUG util.py:417:  Note that yum will try to contact the repo. when it 
runs most commands,
DEBUG util.py:417:  so will have to try and fail each time (and thus. 
yum will be be much
DEBUG util.py:417:  slower). If it is a very temporary problem though, 
this is often a nice
DEBUG util.py:417:  compromise:
DEBUG util.py:417:  yum-config-manager --save 
--setopt=openstack-kilo.skip_if_unavailable=true
DEBUG util.py:417:  failure: repodata/repomd.xml from openstack-kilo: [Errno 
256] No more mirrors to try.


So it seems that the repo does not exist anymore; there's a README.txt file
there though that says:

RDO Kilo is hosted in CentOS Cloud SIG repository
http://mirror.centos.org/centos/7/cloud/x86_64/openstack-kilo/

And that new link seems to work ok, so probably you just need to change the
automation/*.repos files in the vdsm git repo to point to the new openstack
repo url instead of the old one, and everything should work ok.
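
E.g. in vdsm's automation/check-patch.repos, replacing the old fedorapeople
line with something like this (a sketch, assuming the usual 'name,url' format
of those files):

    openstack-kilo,http://mirror.centos.org/centos/7/cloud/x86_64/openstack-kilo/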



> 
> I am pretty sure it is not related to the patch. also fc23 job passes.
> 
> 
> Any idea what's the problem?
> 
> 
> Thanks

> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra


-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-515) Re: Cannot clone bugs with more than 2^16 characters in the comments

2016-05-23 Thread David Caro Estevez (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Caro Estevez updated OVIRT-515:
-
Resolution: Won't Fix
Status: Done  (was: To Do)

> Re: Cannot clone bugs with more than 2^16 characters in the comments
> 
>
> Key: OVIRT-515
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-515
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: dcaro
>Assignee: infra
> Attachments: signature.asc, signature.asc
>
>
> Looks like a restriction of the api itself, probably triggered by us adding
> the 'comment from ...' header to the comments while cloning.
> Opening a bug on it to keep track
> On 05/02 12:36, Tal Nisan wrote:
> > I've encountered this in this job:
> > http://jenkins-ci.eng.lab.tlv.redhat.com/job/system_bugzilla_clone_zstream_milestone/141/console
> > 
> > For this bug:
> > https://bugzilla.redhat.com/1301083
> > ___
> > Infra mailing list
> > Infra@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/infra
> -- 
> David Caro
> Red Hat S.L.
> Continuous Integration Engineer - EMEA ENG Virtualization R&D
> Tel.: +420 532 294 605
> Email: dc...@redhat.com
> IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> Web: www.redhat.com
> RHT Global #: 82-62605



--
This message was sent by Atlassian JIRA
(v1000.5.2#72002)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-515) Re: Cannot clone bugs with more than 2^16 characters in the comments

2016-05-23 Thread David Caro Estevez (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16209#comment-16209
 ] 

David Caro Estevez commented on OVIRT-515:
--

From Tal Nisan (in an email):

I can't comment in Jira since I don't have a user there but I've
investigated a bit and it seems to be a Bugzilla limitation, when I tried
to clone manually it gave me the same error till I cut the comment.
I guess the comment field there is limited to 2^16 which makes sense.
In light of this this issue can be closed.


> Re: Cannot clone bugs with more than 2^16 characters in the comments
> 
>
> Key: OVIRT-515
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-515
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: dcaro
>Assignee: infra
> Attachments: signature.asc, signature.asc
>
>
> Looks like a restriction of the api itself, probably triggered by us adding
> the 'comment from ...' header to the comments while cloning.
> Opening a bug on it to keep track
> On 05/02 12:36, Tal Nisan wrote:
> > I've encountered this in this job:
> > http://jenkins-ci.eng.lab.tlv.redhat.com/job/system_bugzilla_clone_zstream_milestone/141/console
> > 
> > For this bug:
> > https://bugzilla.redhat.com/1301083
> > ___
> > Infra mailing list
> > Infra@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/infra
> -- 
> David Caro
> Red Hat S.L.
> Continuous Integration Engineer - EMEA ENG Virtualization R&D
> Tel.: +420 532 294 605
> Email: dc...@redhat.com
> IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> Web: www.redhat.com
> RHT Global #: 82-62605



--
This message was sent by Atlassian JIRA
(v1000.5.2#72002)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-562) find a way not to fail CI jobs if git clone fails

2016-05-23 Thread David Caro Estevez (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16204#comment-16204
 ] 

David Caro Estevez commented on OVIRT-562:
--

What should we do then on an infra issue?

> find a way not to fail CI jobs if git clone fails
> -
>
> Key: OVIRT-562
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-562
> Project: oVirt - virtualization made easy
>  Issue Type: Improvement
>Reporter: eyal edri [Administrator]
>Assignee: infra
>
> We need to find a way to tell the gerrit trigger plugin not to put -1 if git
> clone fails (obviously an infra issue)



--
This message was sent by Atlassian JIRA
(v1000.5.2#72002)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-562) find a way not to fail CI jobs if git clone fails

2016-05-23 Thread David Caro Estevez (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202#comment-16202
 ] 

David Caro Estevez commented on OVIRT-562:
--

That is ok imo, and only a maintainer should be able to merge such a patch 
(being fully aware that the patch did not run the tests and might introduce 
regressions)

> find a way not to fail CI jobs if git clone fails
> -
>
> Key: OVIRT-562
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-562
> Project: oVirt - virtualization made easy
>  Issue Type: Improvement
>Reporter: eyal edri [Administrator]
>Assignee: infra
>
> We need to find a way to tell the gerrit trigger plugin not to put -1 if git
> clone fails (obviously an infra issue)



--
This message was sent by Atlassian JIRA
(v1000.5.2#72002)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-562) find a way not to fail CI jobs if git clone fails

2016-05-23 Thread David Caro Estevez (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200#comment-16200
 ] 

David Caro Estevez commented on OVIRT-562:
--

I think that unless it retries again, the -1 should be there to indicate that
there was an issue and that the job did run unsuccessfully (so patches are not
merged without any jobs running, and the developer does not get frustrated
while waiting for jenkins)

> find a way not to fail CI jobs if git clone fails
> -
>
> Key: OVIRT-562
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-562
> Project: oVirt - virtualization made easy
>  Issue Type: Improvement
>Reporter: eyal edri [Administrator]
>Assignee: infra
>
> We need to find a way to tell the gerrit trigger plugin not to put -1 if git
> clone fails (obviously an infra issue)



--
This message was sent by Atlassian JIRA
(v1000.5.2#72002)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Build broken cause of commit aeb98b5a5ac807d23008ed26293244efab103bde, please fix ASAP

2016-05-19 Thread David Caro
On 05/19 16:20, Martin Perina wrote:
> On Thu, May 19, 2016 at 4:15 PM, David Caro <dc...@redhat.com> wrote:
> 
> > On 05/19 16:14, Martin Perina wrote:
> > > Hi,
> > >
> > > so I probably found the issue why this broke the build. I have checked
> > > the CI before merging and it was OK,
> > > see CI+1 on patch set 20.
> > >
> > > Now looking at console outputs of those jobs, the build itself failed,
> > > but the build jobs are marked as SUCCESSFUL.
> > > So it seems we have a bug in those jobs!!!
> >
> >
> > We will need a bit more info on which project/jenkins job/build, etc.;
> > from the email alone I can't extract the project that failed
> >
> 
> Sorry, here is the problematic patch: https://gerrit.ovirt.org/#/c/57052/
> 
> The problematic build is on patch set 20. It's marked as successful,
> although the engine build failed:
> 
> http://jenkins.ovirt.org/job/ovirt-engine_master_check-patch-el7-x86_64/1250/console
> 
> 
> Anyway, we are preparing a reverting patch ...


I see, the issue was introduced by me in the mock_runner.sh script, will send a
patch right away
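
(A job can only go green on a failed build if the exit code of the build step
gets lost along the way, so the fix boils down to propagating it; roughly like
this, a sketch of the idea rather than the actual patch:

    mock ... --chroot "bash automation/check-patch.sh"
    res=$?
    # ... collect logs, cleanup ...
    exit "$res"  # otherwise the job finishes 0 even when the build failed
)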

> 
> Sorry for the issue.
> 
> Martin
> ​
> 
> 
> >
> > >
> > >
> > >
> > >
> > >
> > > On Thu, May 19, 2016 at 3:53 PM, Tal Nisan <tni...@redhat.com> wrote:
> > >
> > > > This commit broke the build, missing KernelEnv.
> > > > Normally I'd revert but since it's feature freeze I'm more forgiving :)
> > > > Please either revert or send a fix asap.
> > > >
> > > > Thanks.
> > > >
> > > >
> >
> > > ___
> > > Infra mailing list
> > > Infra@ovirt.org
> > > http://lists.ovirt.org/mailman/listinfo/infra
> >
> >
> > --
> > David Caro
> >
> > Red Hat S.L.
> > Continuous Integration Engineer - EMEA ENG Virtualization R&D
> >
> > Tel.: +420 532 294 605
> > Email: dc...@redhat.com
> > IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> > Web: www.redhat.com
> > RHT Global #: 82-62605
> >

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Build broken cause of commit aeb98b5a5ac807d23008ed26293244efab103bde, please fix ASAP

2016-05-19 Thread David Caro
On 05/19 16:14, Martin Perina wrote:
> Hi,
> 
> so I probably found the issue why this broke the build. I have checked the
> CI before merging and it was OK,
> see CI+1 on patch set 20.
> 
> Now looking at console outputs of those jobs, the build itself failed, but
> the build jobs are marked as SUCCESSFUL.
> So it seems we have a bug in those jobs!!!


We will need a bit more info on which project/jenkins job/build, etc.; from
the email alone I can't extract the project that failed

> 
> 
> 
> 
> 
> On Thu, May 19, 2016 at 3:53 PM, Tal Nisan <tni...@redhat.com> wrote:
> 
> > This commit broke the build, missing KernelEnv.
> > Normally I'd revert but since it's feature freeze I'm more forgiving :)
> > Please either revert or send a fix asap.
> >
> > Thanks.
> >
> >

> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra


-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Ovirt system tests review recording

2016-05-18 Thread David Caro

Here you have the recording of the ovirt system tests review meeting:

https://bluejeans.com/s/9FaJ/


There will be another one with a hands-on focus, in which you'll end up
running the tests on your laptop yourself.
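
If you want a head start before that one, the short version is roughly this (a
sketch, assuming a recent checkout; pick the suite matching the version you
care about):

    git clone git://gerrit.ovirt.org/ovirt-system-tests
    cd ovirt-system-tests
    ./run_suite.sh basic_suite_master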

Enjoy!

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Maintenance and reboot of ovirt-srv17

2016-05-17 Thread David Caro
On 05/17 09:59, Nadav Goldin wrote:
> Hi David,
> I also had a similar problem last week where some el7 vms didn't have nested
> enabled although all configurations were OK. Restarting helped for some, and
> for others I had to manually reload the module, which solved it without
> rebooting.

I prefer rebooting to make sure that it will not go away when we reboot
(manually reloading the module will not persist across reboots).
If it does not work by itself after a reboot, that's an issue and should be
fixed too (probably config related).
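
(For reference, the persistent bit is just the module option on the hosts,
something like this, assuming Intel CPUs:

    # /etc/modprobe.d/kvm.conf
    options kvm_intel nested=1

and after the reboot it can be checked with:

    cat /sys/module/kvm_intel/parameters/nested   # should print Y or 1
)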

> 
> 
> On Tue, May 17, 2016 at 12:47 AM, David Caro <dc...@redhat.com> wrote:
> 
> > On 05/16 23:39, David Caro wrote:
> > > On 05/16 23:36, David Caro wrote:
> > > > On 05/16 23:07, David Caro wrote:
> > > > >
> > > > > Hey, just so everyone is on the same page, I'm putting ovirt-srv17
> > > > > into maintenance and will reboot it to enable nested vms on it (the
> > > > > config is there, but the host was not restarted and thus the params
> > > > > were not applied to the module)
> > > >
> > > > Finished with ovirt-srv17, now with ovirt-srv01
> > >
> > >
> > > Sorry, srv01 has nested enabled... debugging
> >
> > It was not 01 but ovirt-srv14 that had issues, and the issue was not that
> > nested was not enabled; for some reason the vm did not pick it up (maybe it
> > was migrated from a non-nested host) and I had to restart the vm to refresh
> > the hardware capabilities.
> >
> > > >
> > > > >
> > > > >
> > > > > --
> > > > > David Caro
> > > > >
> > > > > Red Hat S.L.
> > > > > Continuous Integration Engineer - EMEA ENG Virtualization R&D
> > > > >
> > > > > Tel.: +420 532 294 605
> > > > > Email: dc...@redhat.com
> > > > > IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> > > > > Web: www.redhat.com
> > > > > RHT Global #: 82-62605
> > > >
> > > >
> > > >
> > > > --
> > > > David Caro
> > > >
> > > > Red Hat S.L.
> > > > Continuous Integration Engineer - EMEA ENG Virtualization R&D
> > > >
> > > > Tel.: +420 532 294 605
> > > > Email: dc...@redhat.com
> > > > IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> > > > Web: www.redhat.com
> > > > RHT Global #: 82-62605
> > >
> > >
> > >
> > > > ___
> > > > Infra mailing list
> > > > Infra@ovirt.org
> > > > http://lists.ovirt.org/mailman/listinfo/infra
> > >
> > >
> > > --
> > > David Caro
> > >
> > > Red Hat S.L.
> > > Continuous Integration Engineer - EMEA ENG Virtualization R&D
> > >
> > > Tel.: +420 532 294 605
> > > Email: dc...@redhat.com
> > > IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> > > Web: www.redhat.com
> > > RHT Global #: 82-62605
> >
> >
> >
> > > ___
> > > Infra mailing list
> > > Infra@ovirt.org
> > > http://lists.ovirt.org/mailman/listinfo/infra
> >
> >
> > --
> > David Caro
> >
> > Red Hat S.L.
> > Continuous Integration Engineer - EMEA ENG Virtualization R&D
> >
> > Tel.: +420 532 294 605
> > Email: dc...@redhat.com
> > IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> > Web: www.redhat.com
> > RHT Global #: 82-62605
> >
> > ___
> > Infra mailing list
> > Infra@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/infra
> >
> >

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Maintenance and reboot of ovirt-srv17

2016-05-16 Thread David Caro
On 05/16 23:39, David Caro wrote:
> On 05/16 23:36, David Caro wrote:
> > On 05/16 23:07, David Caro wrote:
> > > 
> > > Hey, just so everyone is on the same page, I'm putting ovirt-srv17 into
> > > maintenance and will reboot it to enable nested vms on it (the config is
> > > there, but the host was not restarted and thus the params were not
> > > applied to the module)
> > 
> > Finished with ovirt-srv17, now with ovirt-srv01
> 
> 
> Sorry, srv01 has nested enabled... debugging

It was not 01 but ovirt-srv14 that had issues, and the issue was not that
nested was not enabled; for some reason the vm did not pick it up (maybe it was
migrated from a non-nested host) and I had to restart the vm to refresh the
hardware capabilities.

> > 
> > > 
> > > 
> > > -- 
> > > David Caro
> > > 
> > > Red Hat S.L.
> > > Continuous Integration Engineer - EMEA ENG Virtualization R&D
> > > 
> > > Tel.: +420 532 294 605
> > > Email: dc...@redhat.com
> > > IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> > > Web: www.redhat.com
> > > RHT Global #: 82-62605
> > 
> > 
> > 
> > -- 
> > David Caro
> > 
> > Red Hat S.L.
> > Continuous Integration Engineer - EMEA ENG Virtualization R&D
> > 
> > Tel.: +420 532 294 605
> > Email: dc...@redhat.com
> > IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> > Web: www.redhat.com
> > RHT Global #: 82-62605
> 
> 
> 
> > ___
> > Infra mailing list
> > Infra@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/infra
> 
> 
> -- 
> David Caro
> 
> Red Hat S.L.
> Continuous Integration Engineer - EMEA ENG Virtualization R&D
> 
> Tel.: +420 532 294 605
> Email: dc...@redhat.com
> IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> Web: www.redhat.com
> RHT Global #: 82-62605



> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra


-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Maintenance and reboot of ovirt-srv17

2016-05-16 Thread David Caro
On 05/16 23:36, David Caro wrote:
> On 05/16 23:07, David Caro wrote:
> > 
> > Hey, just so everyone is on the same page, I'm putting ovirt-srv17 into
> > maintenance and will reboot it to enable nested vms on it (the config is
> > there, but the host was not restarted and thus the params were not applied
> > to the module)
> 
> Finished with ovirt-srv17, now with ovirt-srv01


Sorry, srv01 has nested enabled... debugging
> 
> > 
> > 
> > -- 
> > David Caro
> > 
> > Red Hat S.L.
> > Continuous Integration Engineer - EMEA ENG Virtualization R&D
> > 
> > Tel.: +420 532 294 605
> > Email: dc...@redhat.com
> > IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> > Web: www.redhat.com
> > RHT Global #: 82-62605
> 
> 
> 
> -- 
> David Caro
> 
> Red Hat S.L.
> Continuous Integration Engineer - EMEA ENG Virtualization R&D
> 
> Tel.: +420 532 294 605
> Email: dc...@redhat.com
> IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> Web: www.redhat.com
> RHT Global #: 82-62605



> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra


-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Maintenance and reboot of ovirt-srv17

2016-05-16 Thread David Caro
On 05/16 23:07, David Caro wrote:
> 
> Hey, just so everyone is on the same page, I'm putting ovirt-srv17 into
> maintenance and will reboot it to enable nested vms on it (the config is
> there, but the host was not restarted and thus the params were not applied to
> the module)

Finished with ovirt-srv17, now with ovirt-srv01

> 
> 
> -- 
> David Caro
> 
> Red Hat S.L.
> Continuous Integration Engineer - EMEA ENG Virtualization R&D
> 
> Tel.: +420 532 294 605
> Email: dc...@redhat.com
> IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> Web: www.redhat.com
> RHT Global #: 82-62605



-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Another job failure: "Command failed. See logs for output"

2016-05-16 Thread David Caro
chroot.
> > Trying to umount."
> > 19:31:03 fi
> > 19:31:03 for mount in "${mounts[@]}"; do
> > 19:31:03 sudo umount "$mount" \
> > 19:31:03 || {
> > 19:31:03 echo "ERROR:  Failed to umount $mount."
> > 19:31:03 failed=true
> > 19:31:03 }
> > 19:31:03 done
> > 19:31:03 done
> > 19:31:03
> > 19:31:03 # Clean any leftover chroot from other jobs
> > 19:31:03 for mock_root in /var/lib/mock/*; do
> > 19:31:03 this_chroot_failed=false
> > 19:31:03 mounts=($(mount | awk '{print $3}' | grep "$mock_root")) || :
> > 19:31:03 if [[ "$mounts" ]]; then
> > 19:31:03 echo "Found mounted dirs inside the chroot $mock_root." \
> > 19:31:03  "Trying to umount."
> > 19:31:03 fi
> > 19:31:03 for mount in "${mounts[@]}"; do
> > 19:31:03 sudo umount "$mount" \
> > 19:31:03 || {
> > 19:31:03 echo "ERROR:  Failed to umount $mount."
> > 19:31:03 failed=true
> > 19:31:03 this_chroot_failed=true
> > 19:31:03 }
> > 19:31:03 done
> > 19:31:03 if ! $this_chroot_failed; then
> > 19:31:03 sudo rm -rf "$mock_root"
> > 19:31:03 fi
> > 19:31:03 done
> > 19:31:03
> > 19:31:03 if $failed; then
> > 19:31:03 echo "Aborting."
> > 19:31:03 exit 1
> > 19:31:03 fi
> > 19:31:03
> > 19:31:03 # remove mock system cache, we will setup proxies to do the
> > caching and this
> > 19:31:03 # takes lots of space between runs
> > 19:31:03 shopt -u nullglob
> > 19:31:03 sudo rm -Rf /var/cache/mock/*
> > 19:31:03
> > 19:31:03 # restore the permissions in the working dir, as sometimes it leaves files
> > 19:31:03 # owned by root and then the 'cleanup workspace' from jenkins job fails to
> > 19:31:03 # clean and breaks the jobs
> > 19:31:03 sudo chown -R "$USER" "$WORKSPACE"
> > 19:31:03
> > 19:31:03 [vdsm_master_check-patch-el7-x86_64] $ /bin/bash -xe /tmp/hudson3681642653853563663.sh
> > 19:31:03 + echo shell-scripts/mock_cleanup.sh
> > 19:31:03 shell-scripts/mock_cleanup.sh
> > 19:31:03 + shopt -s nullglob
> > 19:31:03 + WORKSPACE=/home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64
> > 19:31:03 + cat
> > 19:31:03 _______________________________________
> > 19:31:03 #######################################
> > 19:31:03 #                                     #
> > 19:31:03 #              CLEANUP                #
> > 19:31:03 #                                     #
> > 19:31:03 #######################################
> > 19:31:03 + logs=(./*log ./*/logs)
> > 19:31:03 + [[ -n ./vdsm/logs ]]
> > 19:31:03 + tar cvzf exported-artifacts/logs.tgz ./vdsm/logs
> > 19:31:03 ./vdsm/logs/
> > 19:31:03 ./vdsm/logs/mocker-epel-7-x86_64.el7.init/
> > 19:31:03 ./vdsm/logs/mocker-epel-7-x86_64.el7.init/root.log
> > 19:31:03 ./vdsm/logs/mocker-epel-7-x86_64.el7.init/state.log
> > 19:31:03 ./vdsm/logs/mocker-epel-7-x86_64.el7.init/stdout_stderr.log
> > 19:31:03 ./vdsm/logs/mocker-epel-7-x86_64.el7.init/build.log
> > 19:31:03 + rm -rf ./vdsm/logs
> > 19:31:03 + failed=false
> > 19:31:03 + mock_confs=("$WORKSPACE"/*/mocker*)
> > 19:31:03 + for mock_conf_file in '"${mock_confs[@]}"'
> > 19:31:03 + [[ -n /home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64/vdsm/mocker-epel-7-x86_64.el7.cfg ]]
> > 19:31:03 + echo 'Cleaning up mock '
> > 19:31:03 Cleaning up mock
> > 19:31:03 + mock_root=mocker-epel-7-x86_64.el7.cfg
> > 19:31:03 + mock_root=mocker-epel-7-x86_64.el7
> > 19:31:03 + my_mock=/usr/bin/mock
> > 19:31:03 + my_mock+=' --configdir=/home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64/vdsm'
> > 19:31:03 + my_mock+=' --root=mocker-epel-7-x86_64.el7'
> > 19:31:03 + my_mock+=' --resultdir=/home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64'
> > 19:31:03 + echo 'Killing all mock orphan processes, if any.'
> > 19:31:03 Killing all mock orphan processes, if any.
> > 19:31:03 + /usr/bin/mock --configdir=/home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64/vdsm --root=mocker-epel-7-x86_64.el7 --resultdir=/home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64 --orphanskill
> > 19:31:04 WARNING: Could not find required logging config file:
> > /home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64/vdsm/logging.ini.
> > Using default...
> > 19:31:04 INFO: mock.py version 1.2.14 starting (python version = 3.4.3)...
> > 19:31:04 Start: init plugins
> > 19:31:04 INFO: selinux enabled
> > 19:31:04 Finish: init plugins
> > 19:31:04 Start: run
> > 19:31:04 Finish: run
> > 19:31:04 ++ grep -Po '(?<=config_opts\['\''root'\''\] = '\'')[^'\'']*' /home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64/vdsm/mocker-epel-7-x86_64.el7.cfg
> > 19:31:04 + mock_root=epel-7-x86_64-46ef12ce4362729a0f4c411e00edd8fc
> > 19:31:04 + [[ -n epel-7-x86_64-46ef12ce4362729a0f4c411e00edd8fc ]]
> > 19:31:04 + mounts=($(mount | awk '{print $3}' | grep "$mock_root"))
> > 19:31:04 ++ mount
> > 19:31:04 ++ grep epel-7-x86_64-46ef12ce4362729a0f4c411e00edd8fc
> > 19:31:04 ++ awk '{print $3}'
> > 19:31:04 + :
> > 19:31:04 + [[ -n '' ]]
> > 19:31:04 + for mock_root in '/var/lib/mock/*'
> > 19:31:04 + this_chroot_failed=false
> > 19:31:04 + mounts=($(mount | awk '{print $3}' | grep "$mock_root"))
> > 19:31:04 ++ mount
> > 19:31:04 ++ grep /var/lib/mock/epel-7-x86_64-46ef12ce4362729a0f4c411e00edd8fc
> > 19:31:04 ++ awk '{print $3}'
> > 19:31:04 + :
> > 19:31:04 + [[ -n '' ]]
> > 19:31:04 + false
> > 19:31:04 + sudo rm -rf /var/lib/mock/epel-7-x86_64-46ef12ce4362729a0f4c411e00edd8fc
> > 19:31:04 + false
> > 19:31:04 + shopt -u nullglob
> > 19:31:04 + sudo rm -Rf /var/cache/mock/epel-7-x86_64-46ef12ce4362729a0f4c411e00edd8fc
> > 19:31:04 + sudo chown -R jenkins /home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64
> > 19:31:04 POST BUILD TASK : SUCCESS
> > 19:31:04 END OF POST BUILD TASK : 0
> > 19:31:04 Archiving artifacts
> > 19:31:04 Build step 'Groovy Postbuild' marked build as failure
> > 19:31:04 Started calculate disk usage of build
> > 19:31:04 Finished Calculation of disk usage of build in 0 seconds
> > 19:31:04 Started calculate disk usage of workspace
> > 19:31:04 Finished Calculation of disk usage of workspace in 0 seconds
> > 19:31:04 Finished: FAILURE
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Jenkins failure

2016-05-16 Thread David Caro Estevez
On 05/15 22:59, Nir Soffer wrote:
> Here is another failure, looks same.
> 
> http://jenkins.ovirt.org/job/vdsm_master_check-patch-el7-x86_64/1202/console

Adding a little more info:

The failure is a timeout when fetching from gerrit:
  19:22:02 ERROR: Timeout after 10 minutes
  19:22:14 ERROR: Error cloning remote repo 'origin'
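
A possible mitigation, sketched here as an example only (this is not what the
job does today; the URL, refspec and timeout values are just illustrations),
would be to retry the fetch with a shorter hard timeout instead of hanging for
the full 10 minutes:

  for attempt in 1 2 3; do
      # give each attempt 5 minutes instead of one single 10 minute wait
      timeout 300 git fetch --tags --progress \
          git://gerrit.ovirt.org/vdsm.git \
          '+refs/heads/*:refs/remotes/origin/*' && break
      echo "fetch attempt $attempt timed out, retrying..."
  done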

> 
> 
> 
> 19:12:01 Triggered by Gerrit: https://gerrit.ovirt.org/57428
> 19:12:01 Building remotely on fc21-vm15.phx.ovirt.org (phx fc21) in
> workspace /home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64
> 19:12:02 Cloning the remote Git repository
> 19:12:02 Cloning repository git://gerrit.ovirt.org/vdsm.git
> 19:12:02  > git init
> /home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64/vdsm #
> timeout=10
> 19:12:02 Fetching upstream changes from git://gerrit.ovirt.org/vdsm.git
> 19:12:02  > git --version # timeout=10
> 19:12:02  > git -c core.askpass=true fetch --tags --progress
> git://gerrit.ovirt.org/vdsm.git +refs/heads/*:refs/remotes/origin/*
> 19:22:02 ERROR: Timeout after 10 minutes
> 19:22:14 ERROR: Error cloning remote repo 'origin'
> 19:22:14 hudson.plugins.git.GitException: Command "git -c
> core.askpass=true fetch --tags --progress
> git://gerrit.ovirt.org/vdsm.git +refs/heads/*:refs/remotes/origin/*"
> returned status code 143:
> 19:22:14 stdout:
> 19:22:14 stderr: remote: Counting objects: 49121, done.
> 19:22:14 remote: Compressing objects:   0% (1/10990)
> remote: Compressing objects:   1% (110/10990)
> remote: Compressing objects:   2% (220/10990)
> remote: Compressing objects:   3% (330/10990)
> remote: Compressing objects:   4% (440/10990)
> remote: Compressing objects:   5% (550/10990)
> remote: Compressing objects:   6% (660/10990)
> remote: Compressing objects:   7% (770/10990)
> remote: Compressing objects:   8% (880/10990)
> remote: Compressing objects:   9% (990/10990)
> remote: Compressing objects:  10% (1099/10990)
> remote: Compressing objects:  11% (1209/10990)
> remote: Compressing objects:  12% (1319/10990)
> remote: Compressing objects:  13% (1429/10990)
> remote: Compressing objects:  14% (1539/10990)
> remote: Compressing objects:  15% (1649/10990)
> remote: Compressing objects:  16% (1759/10990)
> remote: Compressing objects:  17% (1869/10990)
> remote: Compressing objects:  18% (1979/10990)
> remote: Compressing objects:  19% (2089/10990)
> remote: Compressing objects:  20% (2198/10990)
> remote: Compressing objects:  21% (2308/10990)
> remote: Compressing objects:  22% (2418/10990)
> remote: Compressing objects:  23% (2528/10990)
> remote: Compressing objects:  24% (2638/10990)
> remote: Compressing objects:  25% (2748/10990)
> remote: Compressing objects:  26% (2858/10990)
> remote: Compressing objects:  27% (2968/10990)
> remote: Compressing objects:  28% (3078/10990)
> remote: Compressing objects:  29% (3188/10990)
> remote: Compressing objects:  30% (3297/10990)
> remote: Compressing objects:  31% (3407/10990)
> remote: Compressing objects:  32% (3517/10990)
> remote: Compressing objects:  33% (3627/10990)
> remote: Compressing objects:  34% (3737/10990)
> remote: Compressing objects:  35% (3847/10990)
> remote: Compressing objects:  36% (3957/10990)
> remote: Compressing objects:  37% (4067/10990)
> remote: Compressing objects:  38% (4177/10990)
> remote: Compressing objects:  39% (4287/10990)
> remote: Compressing objects:  40% (4396/10990)
> remote: Compressing objects:  41% (4506/10990)
> remote: Compressing objects:  42% (4616/10990)
> remote: Compressing objects:  43% (4726/10990)
> remote: Compressing objects:  44% (4836/10990)
> remote: Compressing objects:  45% (4946/10990)
> remote: Compressing objects:  46% (5056/10990)
> remote: Compressing objects:  47% (5166/10990)
> remote: Compressing objects:  48% (5276/10990)
> remote: Compressing objects:  49% (5386/10990)
> remote: Compressing objects:  50% (5495/10990)
> remote: Compressing objects:  51% (5605/10990)
> remote: Compressing objects:  52% (5715/10990)
> remote: Compressing objects:  53% (5825/10990)
> remote: Compressing objects:  54% (5935/10990)
> remote: Compressing objects:  55% (6045/10990)
> remote: Compressing objects:  56% (6155/10990)
> remote: Compressing objects:  57% (6265/10990)
> remote: Compressing objects:  58% (6375/10990)
> remote: Compressing objects:  59% (6485/10990)
> remote: Compressing objects:  60% (6594/10990)
> remote: Compressing objects:  61% (6704/10990)
> remote: Compressing objects:  62% (6814/10990)
> remote: Compressing objects:  63% (6924/10990)
> remote: Compressing objects:  64% (7034/10990)
> remote: Compressing objects:  65% (7144/10990)
> remote: Compressing objects:  66% (7254/10990)
> remote: Compressing objects:  67% (7364/10990)
> remote: Compressing objects:  68% (7474/10990)
> remote: Compressing objects:  69% (7584/10990)
> remote: Compressing objects:  70% (7693/10990)
> remote: Compressing objects:  71% (7803/10990)
> remote: Compressing objects:  72% (7913/10990)
> remote: Compressing objects:  73% 

Re: capacity for more slaves

2016-05-13 Thread David Caro Estevez
One possibility that would give us a lot of IPs is using internal ones; it will
require a bit more configuration, but now that the jenkins master is in phx I
think it might be a good option

David Caro

On May 13, 2016 7:57, Eyal Edri <ee...@redhat.com> wrote:

Please open a ticket in jira on extending the ip range, I guess we need to
handle it soon

On May 12, 2016 8:58 PM, "Nadav Goldin" <ngol...@redhat.com> wrote:

sure, added 4 more FC23 slaves, and on the way hit the IPs limit,
had to remove some old unused testing VMs to release IPs.
we have some useless leases there, such as an IP for each template,
though I'm not 100% sure if it's safe to simply delete them from foreman
(they are saved as templates in the engine, so theoretically it shouldn't
be a problem)




On Thu, May 12, 2016 at 4:53 PM, David Caro <dc...@redhat.com> wrote:

On 05/12 16:50, Nadav Goldin wrote:
> ok I'll add also FC23, though I think the problem is the vdsm jobs, not sure
> why they use only fc* instead of the el7 ones

Thanks man, that's because (at least the check-merged jobs) use libvirt to
start up vms, and trying to use libvirt from a fc23 chroot on an el7 host ends
up in errors, so they have to run on fc23/21 slaves


>
> On Thu, May 12, 2016 at 4:33 PM, David Caro <dc...@redhat.com> wrote:
>
> > On 05/12 12:46, Nadav Goldin wrote:
> > > Hi, I added few more el7 slaves, there is a (relatively) new template
> > > 'centos72-jenkins-slave' in the Jenkins_CentOS cluster.
> > > iirc there was a discussion few months ago about our IPs limit, I think
> > > we're approaching that(there are
> > > 102 fixed addresses in foreman.phx.ovirt.org dhcpd server, and this is
> > > without the old ovirt-srv* which are not managed there)
> >
> > Actually, we are scarce on fc23 slaves, not el (right now we have 28 idle
> > el7
> > slaves, and I've been waiting for more than 45min for a lago check-patch to
> > start on a fc23 slave)
> >
> > >
> > >
> > > On Tue, May 10, 2016 at 7:04 PM, David Caro <dc...@redhat.com> wrote:
> > >
> > > > On 05/10 18:02, David Caro wrote:
> > > > > On 05/10 18:53, Eyal Edri wrote:
> > > > > > Looking at the load on our hypervisors I'm sure we can add more
> > slaves
> > > > to
> > > > > > jenkins.
> > > > > > Is there a documented procedure on how to add a new slave so
> > anyone in
> > > > the
> > > > > > team can do it?
> > > > > >
> > > > >
> > > > > I remember writing something about the templates and such, will look
> > for
> > > > it.
> > > > >
> > > > > Though it might be a bit old, not sure if we have changed anything in
> > > > the last
> > > > > months, maybe someone has newer info.
> > > >
> > > >
> > > > There's some docs here:
> > > >
> > > >
> > > >
> > http://ovirt-infra-docs.readthedocs.io/en/latest/Phoenix_Lab/oVirt_Hosts.html#ovirt-datacenter-organization
> > > >
> > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > Eyal Edri
> > > > > > Associate Manager
> > > > > > RHEV DevOps
> > > > > > EMEA ENG Virtualization R&D
> > > > > > Red Hat Israel
> > > > > >
> > > > > > phone: +972-9-7692018
> > > > > > irc: eedri (on #tlv #rhev-dev #rhev-integ)
> > > > >
> > > > > > ___
> > > > > > Infra mailing list
> > > > > > Infra@ovirt.org
> > > > > > http://lists.ovirt.org/mailman/listinfo/infra
> > > > >
> > > > >
> > > > > --
> > > > > David Caro
> > > > >
> > > > > Red Hat S.L.
> > > > > Continuous Integration Engineer - EMEA ENG Virtualization R&D
> > > > >
> > > > > Tel.: +420 532 294 605
> > > > > Email: dc...@redhat.com
> > > > > IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> > > > > Web: www.redhat.com
> > > > > RHT Global #: 82-62605
> > > >
> > > >
> > > >
> > > > > ___
> > > > > Infra mailing list
> > > > > Infra@ovirt.org
> > > > > http

Re: ovirt-srv11

2016-05-12 Thread David Caro
On 05/12 15:49, Anton Marchukov wrote:
> Hello All.
> 
> The fixed hook is only in 3.6. The hook version in 3.5 does not work.
> Although we can install the hook from 3.6 into 3.5 as it should be
> compatible and this is how it was done on the test slave.

I was talking about using the hooks before installing the ssds, but if that can
be done reliably before the upgrade too, it's a solution that will help scratch
our current itches sooner.

> 
> Also, just to recall some drawbacks of that solution as it was:
> 
> 1. It was not puppetized (not a drawback, just a reminder).
> 2. It will break vm migration. That means we will have to manually move the vms
> when we do host maintenance, or automate this as a fabric task.

^ well, the machines being slaves, I think there's no problem having to stop
them to migrate them; we already have experience with both fabric and jenkins,
so I guess it should not be hard to automate with those tools ;)
> 
> Anton.
> 
> On Thu, May 12, 2016 at 3:36 PM, David Caro <dc...@redhat.com> wrote:
> 
> > On 05/11 18:44, Eyal Edri wrote:
> > > On Wed, May 11, 2016 at 6:27 PM, David Caro <dc...@redhat.com> wrote:
> > >
> > > > On 05/11 18:21, Eyal Edri wrote:
> > > > > On Wed, May 11, 2016 at 5:36 PM, Nadav Goldin <ngol...@redhat.com>
> > > > wrote:
> > > > >
> > > > > > Hi,
> > > > > > ovirt-srv11 host is in an empty cluster called
> > 'Production_CentOS', its
> > > > > > quite a strong machine with 251GB of ram, currently it has no VMs
> > and
> > > > as
> > > > > > far as I can tell isn't used at all.
> > > > > > I want to move it to the 'Jenkins_CentOS' cluster in order to add
> > more
> > > > VMs
> > > > > > and later upgrade the older clusters to el7(if we have enough
> > slaves
> > > > in the
> > > > > > Jenkins_CentOS cluster, we could just take the VMs down in the
> > Jenkins
> > > > > > cluster and upgrade). this is unrelated to the new hosts
> > > > ovirt-srv17-26.
> > > > > >
> > > > > >
> > > > > There is no reason to keep such a strong server there when we can
> > use it
> > > > > for many more slaves on Jenkins.
> > > > > If we need that extra server in Production DC (for hosted engine
> > > > redundancy
> > > > > and to allow maintenance) then lets take the lower end new servers
> > from
> > > > > 17-26 and replace it with the strong one.
> > > > > We need to utilize our servers, I don't think we're at 50%
> > utilization
> > > > > even, looking at the memory consumption last time i checked when all
> > > > slaves
> > > > > were working.
> > > >
> > > > As we already discussed, I strongly recommend implementing the local
> > disk
> > > > vms,
> > > > and keep an eye to the nfs load and net counters
> > > >
> > >
> > > I agree, and this is the plan, though before that we need:
> > >
> > >
> > >1. backup & upgrade the HE instance to 3.5 (local hook is a 3.6, i
> > >prefer using it with engine 3.6)
> > >2. reinstall all servers from 01-10 to use the SSD
> >
> > ^ Do we need that? can't we start using the local disk hook on the new
> > servers?
> > >
> > >
> > > Once we do that we can start moving all VMs to use the local disk hook.
> > >
> > >
> > >
> > > >
> > > > >
> > > > >
> > > > >
> > > > > > I'm not sure why it was put there, so posting here if anyone
> > objects or
> > > > > > I'm missing something
> > > > > >
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > > Nadav.
> > > > > >
> > > > > >
> > > > > > ___
> > > > > > Infra mailing list
> > > > > > Infra@ovirt.org
> > > > > > http://lists.ovirt.org/mailman/listinfo/infra
> > > > > >
> > > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Eyal Edri
> > > > > Associate Manager
> > > > > RHEV DevOps
> > > > > EMEA ENG Virtualization R&D
> > > > > Red Hat Israel

Re: capacity for more slaves

2016-05-12 Thread David Caro
On 05/12 16:50, Nadav Goldin wrote:
> ok I'll add also FC23, though I think the problem is the vdsm jobs, not sure
> why they use only fc* instead of the el7 ones

Thanks man, that's because (at least the check-merged jobs) use libvirt to
start up vms, and trying to use libvirt from a fc23 chroot on an el7 host ends
up in errors, so they have to run on fc23/21 slaves

> 
> On Thu, May 12, 2016 at 4:33 PM, David Caro <dc...@redhat.com> wrote:
> 
> > On 05/12 12:46, Nadav Goldin wrote:
> > > Hi, I added few more el7 slaves, there is a (relatively) new template
> > > 'centos72-jenkins-slave' in the Jenkins_CentOS cluster.
> > > iirc there was a discussion few months ago about our IPs limit, I think
> > > we're approaching that(there are
> > > 102 fixed addresses in foreman.phx.ovirt.org dhcpd server, and this is
> > > without the old ovirt-srv* which are not managed there)
> >
> > Actually, we are scarce on fc23 slaves, not el (right now we have 28 idle
> > el7
> > slaves, and I've been waiting for more than 45min for a lago check-patch to
> > start on a fc23 slave)
> >
> > >
> > >
> > > On Tue, May 10, 2016 at 7:04 PM, David Caro <dc...@redhat.com> wrote:
> > >
> > > > On 05/10 18:02, David Caro wrote:
> > > > > On 05/10 18:53, Eyal Edri wrote:
> > > > > > Looking at the load on our hypervisors I'm sure we can add more
> > slaves
> > > > to
> > > > > > jenkins.
> > > > > > Is there a documented procedure on how to add a new slave so
> > anyone in
> > > > the
> > > > > > team can do it?
> > > > > >
> > > > >
> > > > > I remember writing something about the templates and such, will look
> > for
> > > > it.
> > > > >
> > > > > Though it might be a bit old, not sure if we have changed anything in
> > > > the last
> > > > > months, maybe someone has newer info.
> > > >
> > > >
> > > > There's some docs here:
> > > >
> > > >
> > > >
> > http://ovirt-infra-docs.readthedocs.io/en/latest/Phoenix_Lab/oVirt_Hosts.html#ovirt-datacenter-organization
> > > >
> > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > Eyal Edri
> > > > > > Associate Manager
> > > > > > RHEV DevOps
> > > > > > EMEA ENG Virtualization R&D
> > > > > > Red Hat Israel
> > > > > >
> > > > > > phone: +972-9-7692018
> > > > > > irc: eedri (on #tlv #rhev-dev #rhev-integ)
> > > > >
> > > > > > ___
> > > > > > Infra mailing list
> > > > > > Infra@ovirt.org
> > > > > > http://lists.ovirt.org/mailman/listinfo/infra
> > > > >
> > > > >
> > > > > --
> > > > > David Caro
> > > > >
> > > > > Red Hat S.L.
> > > > > Continuous Integration Engineer - EMEA ENG Virtualization R&D
> > > > >
> > > > > Tel.: +420 532 294 605
> > > > > Email: dc...@redhat.com
> > > > > IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> > > > > Web: www.redhat.com
> > > > > RHT Global #: 82-62605
> > > >
> > > >
> > > >
> > > > > ___
> > > > > Infra mailing list
> > > > > Infra@ovirt.org
> > > > > http://lists.ovirt.org/mailman/listinfo/infra
> > > >
> > > >
> > > > --
> > > > David Caro
> > > >
> > > > Red Hat S.L.
> > > > Continuous Integration Engineer - EMEA ENG Virtualization R&D
> > > >
> > > > Tel.: +420 532 294 605
> > > > Email: dc...@redhat.com
> > > > IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> > > > Web: www.redhat.com
> > > > RHT Global #: 82-62605
> > > >
> > > > ___
> > > > Infra mailing list
> > > > Infra@ovirt.org
> > > > http://lists.ovirt.org/mailman/listinfo/infra
> > > >
> > > >
> >
> > > ___
> > > Infra mailing list
> > > Infra@ovirt.org
> > > http://lists.ovirt.org/mailman/listinfo/infra
> >
> >
> > --
> > David Caro
> >
> > Red Hat S.L.
> > Continuous Integration Engineer - EMEA ENG Virtualization R&D
> >
> > Tel.: +420 532 294 605
> > Email: dc...@redhat.com
> > IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> > Web: www.redhat.com
> > RHT Global #: 82-62605
> >

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: ovirt-srv11

2016-05-12 Thread David Caro
On 05/11 18:44, Eyal Edri wrote:
> On Wed, May 11, 2016 at 6:27 PM, David Caro <dc...@redhat.com> wrote:
> 
> > On 05/11 18:21, Eyal Edri wrote:
> > > On Wed, May 11, 2016 at 5:36 PM, Nadav Goldin <ngol...@redhat.com>
> > wrote:
> > >
> > > > Hi,
> > > > ovirt-srv11 host is in an empty cluster called 'Production_CentOS', its
> > > > quite a strong machine with 251GB of ram, currently it has no VMs and
> > as
> > > > far as I can tell isn't used at all.
> > > > I want to move it to the 'Jenkins_CentOS' cluster in order to add more
> > VMs
> > > > and later upgrade the older clusters to el7(if we have enough slaves
> > in the
> > > > Jenkins_CentOS cluster, we could just take the VMs down in the Jenkins
> > > > cluster and upgrade). this is unrelated to the new hosts
> > ovirt-srv17-26.
> > > >
> > > >
> > > There is no reason to keep such a strong server there when we can use it
> > > for many more slaves on Jenkins.
> > > If we need that extra server in Production DC (for hosted engine
> > redundancy
> > > and to allow maintenance) then lets take the lower end new servers from
> > > 17-26 and replace it with the strong one.
> > > We need to utilize our servers, I don't think we're at 50% utilization
> > > even, looking at the memory consumption last time i checked when all
> > slaves
> > > were working.
> >
> > As we already discussed, I strongly recommend implementing the local disk
> > vms,
> > and keep an eye to the nfs load and net counters
> >
> 
> I agree, and this is the plan, though before that we need:
> 
> 
>1. backup & upgrade the HE instance to 3.5 (local hook is a 3.6, i
>prefer using it with engine 3.6)
>2. reinstall all servers from 01-10 to use the SSD

^ Do we need that? can't we start using the local disk hook on the new servers?
> 
> 
> Once we do that we can start moving all VMs to use the local disk hook.
> 
> 
> 
> >
> > >
> > >
> > >
> > > > I'm not sure why it was put there, so posting here if anyone objects or
> > > > I'm missing something
> > > >
> > > >
> > > > Thanks
> > > >
> > > > Nadav.
> > > >
> > > >
> > > > ___
> > > > Infra mailing list
> > > > Infra@ovirt.org
> > > > http://lists.ovirt.org/mailman/listinfo/infra
> > > >
> > > >
> > >
> > >
> > > --
> > > Eyal Edri
> > > Associate Manager
> > > RHEV DevOps
> > > EMEA ENG Virtualization R&D
> > > Red Hat Israel
> > >
> > > phone: +972-9-7692018
> > > irc: eedri (on #tlv #rhev-dev #rhev-integ)
> >
> > > ___
> > > Infra mailing list
> > > Infra@ovirt.org
> > > http://lists.ovirt.org/mailman/listinfo/infra
> >
> >
> > --
> > David Caro
> >
> > Red Hat S.L.
> > Continuous Integration Engineer - EMEA ENG Virtualization R&D
> >
> > Tel.: +420 532 294 605
> > Email: dc...@redhat.com
> > IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> > Web: www.redhat.com
> > RHT Global #: 82-62605
> >
> 
> 
> 
> -- 
> Eyal Edri
> Associate Manager
> RHEV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
> 
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: capacity for more slaves

2016-05-12 Thread David Caro
On 05/12 12:46, Nadav Goldin wrote:
> Hi, I added few more el7 slaves, there is a (relatively) new template
> 'centos72-jenkins-slave' in the Jenkins_CentOS cluster.
> iirc there was a discussion few months ago about our IPs limit, I think
> we're approaching that(there are
> 102 fixed addresses in foreman.phx.ovirt.org dhcpd server, and this is
> without the old ovirt-srv* which are not managed there)

Actually, we are scarce on fc23 slaves, not el (right now we have 28 idle el7
slaves, and I've been waiting for more than 45min for a lago check-patch to
start on a fc23 slave)

> 
> 
> On Tue, May 10, 2016 at 7:04 PM, David Caro <dc...@redhat.com> wrote:
> 
> > On 05/10 18:02, David Caro wrote:
> > > On 05/10 18:53, Eyal Edri wrote:
> > > > Looking at the load on our hypervisors I'm sure we can add more slaves
> > to
> > > > jenkins.
> > > > Is there a documented procedure on how to add a new slave so anyone in
> > the
> > > > team can do it?
> > > >
> > >
> > > I remember writing something about the templates and such, will look for
> > it.
> > >
> > > Though it might be a bit old, not sure if we have changed anything in
> > the last
> > > months, maybe someone has newer info.
> >
> >
> > There's some docs here:
> >
> >
> > http://ovirt-infra-docs.readthedocs.io/en/latest/Phoenix_Lab/oVirt_Hosts.html#ovirt-datacenter-organization
> >
> > >
> > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Eyal Edri
> > > > Associate Manager
> > > > RHEV DevOps
> > > > EMEA ENG Virtualization R&D
> > > > Red Hat Israel
> > > >
> > > > phone: +972-9-7692018
> > > > irc: eedri (on #tlv #rhev-dev #rhev-integ)
> > >
> > > > ___
> > > > Infra mailing list
> > > > Infra@ovirt.org
> > > > http://lists.ovirt.org/mailman/listinfo/infra
> > >
> > >
> > > --
> > > David Caro
> > >
> > > Red Hat S.L.
> > > Continuous Integration Engineer - EMEA ENG Virtualization R&D
> > >
> > > Tel.: +420 532 294 605
> > > Email: dc...@redhat.com
> > > IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> > > Web: www.redhat.com
> > > RHT Global #: 82-62605
> >
> >
> >
> > > ___
> > > Infra mailing list
> > > Infra@ovirt.org
> > > http://lists.ovirt.org/mailman/listinfo/infra
> >
> >
> > --
> > David Caro
> >
> > Red Hat S.L.
> > Continuous Integration Engineer - EMEA ENG Virtualization R&D
> >
> > Tel.: +420 532 294 605
> > Email: dc...@redhat.com
> > IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> > Web: www.redhat.com
> > RHT Global #: 82-62605
> >
> > ___
> > Infra mailing list
> > Infra@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/infra
> >
> >

> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra


-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: ovirt-srv11

2016-05-11 Thread David Caro
On 05/11 18:21, Eyal Edri wrote:
> On Wed, May 11, 2016 at 5:36 PM, Nadav Goldin <ngol...@redhat.com> wrote:
> 
> > Hi,
> > ovirt-srv11 host is in an empty cluster called 'Production_CentOS', its
> > quite a strong machine with 251GB of ram, currently it has no VMs and as
> > far as I can tell isn't used at all.
> > I want to move it to the 'Jenkins_CentOS' cluster in order to add more VMs
> > and later upgrade the older clusters to el7(if we have enough slaves in the
> > Jenkins_CentOS cluster, we could just take the VMs down in the Jenkins
> > cluster and upgrade). this is unrelated to the new hosts ovirt-srv17-26.
> >
> >
> There is no reason to keep such a strong server there when we can use it
> for many more slaves on Jenkins.
> If we need that extra server in Production DC (for hosted engine redundancy
> and to allow maintenance) then let's take the lower-end new servers from
> 17-26 and replace it with the strong one.
> We need to utilize our servers, I don't think we're even at 50% utilization,
> looking at the memory consumption last time I checked when all slaves
> were working.

As we already discussed, I strongly recommend implementing the local disk vms,
and keeping an eye on the nfs load and net counters
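
For the monitoring part, something quick like this would do as a starting
point (just a sketch; it assumes sysstat and nfs-utils are installed on the
hypervisors):

  nfsstat -c        # client-side NFS op counters, watch the read/write deltas
  sar -n DEV 5 3    # per-interface network throughput, 3 samples 5s apart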

> 
> 
> 
> > I'm not sure why it was put there, so posting here if anyone objects or
> > I'm missing something
> >
> >
> > Thanks
> >
> > Nadav.
> >
> >
> > ___
> > Infra mailing list
> > Infra@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/infra
> >
> >
> 
> 
> -- 
> Eyal Edri
> Associate Manager
> RHEV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
> 
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)

> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra


-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: CI cleanup failure

2016-05-11 Thread David Caro
On 05/11 18:06, Nir Soffer wrote:
> On Wed, May 11, 2016 at 4:20 PM, David Caro <dc...@redhat.com> wrote:
> > On 05/11 15:38, Nir Soffer wrote:
> >> This ioprocess build failed with ci related issue:
> >>
> >> http://jenkins.ovirt.org/job/ioprocess_master_build-artifacts-el7-x86_64/1/
> >>
> >> 12:18:34 + mounts=($(mount | awk '{print $3}' | grep "$mock_root"))
> >> 12:18:34 ++ mount
> >> 12:18:34 ++ awk '{print $3}'
> >> 12:18:34 ++ grep 
> >> /var/lib/mock/fedora-22-x86_64-d496d3daa673001f46dbbd9addf9ba1b
> >> 12:18:34 + [[ -n
> >> /var/lib/mock/fedora-22-x86_64-d496d3daa673001f46dbbd9addf9ba1b/root/proc
> >> ]]
> >> 12:18:34 + echo 'Found mounted dirs inside the chroot
> >> /var/lib/mock/fedora-22-x86_64-d496d3daa673001f46dbbd9addf9ba1b.'
> >> 'Trying to umount.'
> >> 12:18:34 Found mounted dirs inside the chroot
> >> /var/lib/mock/fedora-22-x86_64-d496d3daa673001f46dbbd9addf9ba1b.
> >> Trying to umount.
> >> 12:18:34 + for mount in '"${mounts[@]}"'
> >> 12:18:34 + sudo umount
> >> /var/lib/mock/fedora-22-x86_64-d496d3daa673001f46dbbd9addf9ba1b/root/proc
> >> 12:18:34 umount:
> >> /var/lib/mock/fedora-22-x86_64-d496d3daa673001f46dbbd9addf9ba1b/root/proc:
> >> target is busy.
> >> 12:18:34 (In some cases useful info about processes that use
> >> 12:18:34  the device is found by lsof(8) or fuser(1))
> >> 12:18:34 + echo 'ERROR:  Failed to umount
> >> /var/lib/mock/fedora-22-x86_64-d496d3daa673001f46dbbd9addf9ba1b/root/proc.'
> >> 12:18:34 ERROR:  Failed to umount
> >> /var/lib/mock/fedora-22-x86_64-d496d3daa673001f46dbbd9addf9ba1b/root/proc.
> >>
> >> We should check why the mountpoint is busy, but a quick fix may be using
> >> the --lazy option:
> >>
> >> umount --lazy /mountpoint
> >
> > That's only available on newer kernels, for the el7 slaves it's not yet 
> > there
> 
> Are you sure? Vdsm has been using this for years.


You are right, from the manual:

   -l, --lazy
          Lazy unmount. Detach the filesystem from the filesystem hierarchy
          now, and cleanup all references to the filesystem as soon as it is
          not busy anymore. (Requires kernel 2.4.11 or later.)


I just confused 2.4.11 for 4.2.11... should be ok to use it
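
So the cleanup script could fall back to it, something like this sketch (the
mountpoint variable is illustrative, and we should still check what keeps it
busy first):

  mp="$mock_root/root/proc"
  sudo umount "$mp" || {
      # show what is holding the mountpoint before giving up on a clean umount
      sudo fuser -vm "$mp" || :
      # detach it now; the kernel cleans up once it is no longer busy
      sudo umount --lazy "$mp"
  }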

> 
> Nir

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: ovirt-srv11

2016-05-11 Thread David Caro
On 05/11 17:36, Nadav Goldin wrote:
> Hi,
> ovirt-srv11 host is in an empty cluster called 'Production_CentOS'; it's
> quite a strong machine with 251GB of ram, currently it has no VMs and as
> far as I can tell isn't used at all.
> I want to move it to the 'Jenkins_CentOS' cluster in order to add more VMs
> and later upgrade the older clusters to el7(if we have enough slaves in the
> Jenkins_CentOS cluster, we could just take the VMs down in the Jenkins
> cluster and upgrade). this is unrelated to the new hosts ovirt-srv17-26.
> 
> I'm not sure why it was put there, so posting here if anyone objects or I'm
> missing something

I think it was being used to test the local disk hooks, amarchuk might know
more

> 
> 
> Thanks
> 
> Nadav.

> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra


-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: CI cleanup failure

2016-05-11 Thread David Caro
On 05/11 15:38, Nir Soffer wrote:
> This ioprocess build failed with ci related issue:
> 
> http://jenkins.ovirt.org/job/ioprocess_master_build-artifacts-el7-x86_64/1/
> 
> 12:18:34 + mounts=($(mount | awk '{print $3}' | grep "$mock_root"))
> 12:18:34 ++ mount
> 12:18:34 ++ awk '{print $3}'
> 12:18:34 ++ grep 
> /var/lib/mock/fedora-22-x86_64-d496d3daa673001f46dbbd9addf9ba1b
> 12:18:34 + [[ -n
> /var/lib/mock/fedora-22-x86_64-d496d3daa673001f46dbbd9addf9ba1b/root/proc
> ]]
> 12:18:34 + echo 'Found mounted dirs inside the chroot
> /var/lib/mock/fedora-22-x86_64-d496d3daa673001f46dbbd9addf9ba1b.'
> 'Trying to umount.'
> 12:18:34 Found mounted dirs inside the chroot
> /var/lib/mock/fedora-22-x86_64-d496d3daa673001f46dbbd9addf9ba1b.
> Trying to umount.
> 12:18:34 + for mount in '"${mounts[@]}"'
> 12:18:34 + sudo umount
> /var/lib/mock/fedora-22-x86_64-d496d3daa673001f46dbbd9addf9ba1b/root/proc
> 12:18:34 umount:
> /var/lib/mock/fedora-22-x86_64-d496d3daa673001f46dbbd9addf9ba1b/root/proc:
> target is busy.
> 12:18:34 (In some cases useful info about processes that use
> 12:18:34  the device is found by lsof(8) or fuser(1))
> 12:18:34 + echo 'ERROR:  Failed to umount
> /var/lib/mock/fedora-22-x86_64-d496d3daa673001f46dbbd9addf9ba1b/root/proc.'
> 12:18:34 ERROR:  Failed to umount
> /var/lib/mock/fedora-22-x86_64-d496d3daa673001f46dbbd9addf9ba1b/root/proc.
> 
> We should check why the mountpoint is busy, but a quick fix may be using
> the --lazy option:
> 
> umount --lazy /mountpoint

That's only available on newer kernels, for the el7 slaves it's not yet there

> 
> Nir
> _______
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: restricting check patch/merged parallel jobs

2016-05-11 Thread David Caro
On 05/11 15:00, Nadav Goldin wrote:
> I enhanced this[1] graph to compare slave utilization vs build queue; note
> that slave utilization is measured in percentages while the number of builds
> in the queue is absolute. Basically, when the red lines are high (large queue
> size) and the green ones (slave utilization) are low, we could possibly have
> had more builds running. We can see that in the past few days we've reached a
> nice utilization of ~90%, and following that the queue size decreased pretty
> quickly; on the other hand, there were times of only 16% utilization and a
> large queue of ~70.
> Last I checked, the least significant problem is the OS, as most standard-ci
> jobs are agnostic to EL/FC; usually it was the jobs limit, or sudden peaks in
> patches sent, but I didn't get to add a 'reason each job is waiting' metric
> yet, so it's just a feeling.

You are not taking into account that we are not yet using most of the hardware
we have; that will allow us to have more than twice the amount of slaves we
have now (given that we solve any other bottleneck, like nfs storage)

> 
> maybe the Priority Sorter Plugin[2] which comes bundled with Jenkins
> could address the problem of jobs waiting 'unfairly' a long time in the
> queue,
> though it will require defining the priorities in the yamls.

Last time we tried using it, it ended up messing up all the executions, mixing
slaves and creating a lot of failures. If you try it out, be very vigilant.

> 
> 
> 
> [1]
> http://graphite.phx.ovirt.org/dashboard/db/jenkins-monitoring?panelId=16=146265480=1462966158602_interval=12h_labels=All_jobs_labels=All
> [2] https://wiki.jenkins-ci.org/display/JENKINS/Priority+Sorter+Plugin
> 
> On Wed, May 11, 2016 at 1:43 PM, Sandro Bonazzola <sbona...@redhat.com>
> wrote:
> 
> >
> >
> > On Wed, May 11, 2016 at 12:34 PM, Eyal Edri <ee...@redhat.com> wrote:
> >
> >> From what I saw, it was mostly ovirt-engine and vdsm jobs pending on the
> >> queue while other slaves are idle.
> >> we have over 40 slaves and we're about to add more, so I don't think that
> >> will be an issue and IMO 3 per job is not enough, especially if you get
> >> idle slaves.
> >>
> >>
> > +1 on raising then.
> >
> >
> >
> >> We are thinking about a more dynamic approach of vm allocation on
> >> demand, so in the long run we'll have more control over it,
> >> for now i'm monitoring the queue size and slaves on a regular basis [1],
> >> so if anything will get blocked too much time we'll act and adjust
> >> accordingly.
> >>
> >>
> >> [1] http://graphite.phx.ovirt.org/dashboard/db/jenkins-monitoring
> >>
> >> On Wed, May 11, 2016 at 1:10 PM, Sandro Bonazzola <sbona...@redhat.com>
> >> wrote:
> >>
> >>>
> >>>
> >>> On Tue, May 10, 2016 at 1:01 PM, Eyal Edri <ee...@redhat.com> wrote:
> >>>
> >>>> Shlomi,
> >>>> Can you submit a patch to increase the limit to 6 for (i think all jobs
> >>>> are using the same yaml template) and we'll continue to monitor to queue
> >>>> and see if there is an improvement in the utilization of slaves?
> >>>>
> >>>
> >>> The issue was that long-lasting jobs caused the queue to grow too much.
> >>> Example: a patch set rebased on master and merged will cause triggering
> >>> of check-merged jobs, upgrade jobs, ...; running 6 instances of each of
> >>> them will cause all other projects to be queued for a lot of time.
> >>>
> >>>
> >>>
> >>>>
> >>>> E.
> >>>>
> >>>> On Tue, May 10, 2016 at 1:58 PM, David Caro <dc...@redhat.com> wrote:
> >>>>
> >>>>> On 05/10 13:54, Eyal Edri wrote:
> >>>>> > Is there any reason we're limiting the amount of check patch & check
> >>>>> merged
> >>>>> > jobs to run only 3 in parallel?
> >>>>> >
> >>>>>
> >>>>> We had some mess in the past where enabling parallel runs did not
> >>>>> really force
> >>>>> not using the same slave at the same time, I guess we never reenabled
> >>>>> them.
> >>>>>
> >>>>> > Each jobs runs in mock and on its own VM, anything presenting us from
> >>>>> > removing this limitation so we won't have idle slaves while other
> >>>>> jobs a

Re: CI tests are failing, please check

2016-05-11 Thread David Caro
On 05/11 11:33, Eli Mesika wrote:
> Still failing after rebasing on the patch fixing that:
> 
> Proxy Error
> 
> The proxy server received an invalid
> response from an upstream server.
> 
> The proxy server could not handle the request POST
> /job/ovirt-engine_master_check-patch-fc23-x86_64/879/logText/progressiveHtml.
> 
> 
> Reason: Error reading from remote server


Where do you see that? It seems to me like a jenkins service issue (though I'm
not seeing it myself).

Can you send a screenshot?

> 
> 
> 
> 
> On Wed, May 11, 2016 at 11:19 AM, David Caro <dc...@redhat.com> wrote:
> 
> > On 05/11 11:16, Eli Mesika wrote:
> > > Hi
> > >
> > > CI tests are failing
> > > For example : https://gerrit.ovirt.org/#/c/53790/
> > >
> > > Please handle ASAP
> >
> >
> > These look like real test failures to me:
> >
> > 06:49:59 Results :
> > 06:49:59
> > 06:49:59 Failed tests:
> > 06:49:59   listOfMigrationPoliciesIsValid(org.ovirt.engine.core.config.entity.helper.MigrationPoliciesValueHelperTest): expected: but was:
> >
> >
> > 06:55:59 Failed tests:
> > 06:55:59   listOfMigrationPoliciesIsValid(org.ovirt.engine.core.config.entity.helper.MigrationPoliciesValueHelperTest): expected: but was:
> > 06:55:59
> >
> > >
> > > Thanks
> > > Eli
> >
> > > ___
> > > Infra mailing list
> > > Infra@ovirt.org
> > > http://lists.ovirt.org/mailman/listinfo/infra
> >
> >
> > --
> > David Caro
> >
> > Red Hat S.L.
> > Continuous Integration Engineer - EMEA ENG Virtualization R&D
> >
> > Tel.: +420 532 294 605
> > Email: dc...@redhat.com
> > IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> > Web: www.redhat.com
> > RHT Global #: 82-62605
> >

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: CI tests are failing, please check

2016-05-11 Thread David Caro
On 05/11 11:16, Eli Mesika wrote:
> Hi
> 
> CI tests are failing
> For example : https://gerrit.ovirt.org/#/c/53790/
> 
> Please handle ASAP


These look like real test failures to me:

06:49:59 Results :
06:49:59 
06:49:59 Failed tests:
06:49:59   listOfMigrationPoliciesIsValid(org.ovirt.engine.core.config.entity.helper.MigrationPoliciesValueHelperTest): expected: but was:


06:55:59 Failed tests: 
06:55:59   listOfMigrationPoliciesIsValid(org.ovirt.engine.core.config.entity.helper.MigrationPoliciesValueHelperTest): expected: but was:
06:55:59
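
If it helps, the single failing test should be reproducible locally with
something like this (the flags are surefire's; the exact module layout may
differ):

  mvn test -Dtest=MigrationPoliciesValueHelperTest -DfailIfNoTests=false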

> 
> Thanks
> Eli

> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra


-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: capacity for more slaves

2016-05-10 Thread David Caro
On 05/10 18:02, David Caro wrote:
> On 05/10 18:53, Eyal Edri wrote:
> > Looking at the load on our hypervisors I'm sure we can add more slaves to
> > jenkins.
> > Is there a documented procedure on how to add a new slave so anyone in the
> > team can do it?
> > 
> 
> I remember writing something about the templates and such, will look for it.
> 
> Though it might be a bit old, not sure if we have changed anything in the last
> months, maybe someone has newer info.


There's some docs here:

   
http://ovirt-infra-docs.readthedocs.io/en/latest/Phoenix_Lab/oVirt_Hosts.html#ovirt-datacenter-organization

> 
> > 
> > 
> > 
> > 
> > -- 
> > Eyal Edri
> > Associate Manager
> > RHEV DevOps
> > EMEA ENG Virtualization R&D
> > Red Hat Israel
> > 
> > phone: +972-9-7692018
> > irc: eedri (on #tlv #rhev-dev #rhev-integ)
> 
> > ___
> > Infra mailing list
> > Infra@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/infra
> 
> 
> -- 
> David Caro
> 
> Red Hat S.L.
> Continuous Integration Engineer - EMEA ENG Virtualization R&D
> 
> Tel.: +420 532 294 605
> Email: dc...@redhat.com
> IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> Web: www.redhat.com
> RHT Global #: 82-62605



> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra


-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: capacity for more slaves

2016-05-10 Thread David Caro
On 05/10 18:53, Eyal Edri wrote:
> Looking at the load on our hypervisors I'm sure we can add more slaves to
> jenkins.
> Is there a documented procedure on how to add a new slave so anyone in the
> team can do it?
> 

I remember writing something about the templates and such, will look for it.

Though it might be a bit old, not sure if we have changed anything in the last
months, maybe someone has newer info.

> 
> 
> 
> 
> -- 
> Eyal Edri
> Associate Manager
> RHEV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
> 
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)

> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra


-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Need rsync anonymous access from Red Hat office in TLV

2016-05-10 Thread David Caro
On 05/10 15:21, Anton Marchukov wrote:
> Hello All.
> 
> Maybe it is time to start providing general anonymous access to resources
> over rsync protocol.
> 
> Technically we can do the following:
> 
> We now have the resources files on a separate shared disk; we can create a new
> vm specially for rsync (and possibly move all other protocols there) and
> then mount it read-only there, so we mitigate any security risks and will
> never be able to change files from that vm. This is how we planned to
> improve resources initially.
> 
> The only thing is that afaik the rsync protocol is not authenticated or
> encrypted. There is nothing secret on resources, but the files might be
> tampered with along the way, and I am not sure all rpms there have crypto
> signatures.

Only the official releases are signed, though I'm not 100% sure that alone
ensures integrity (I guess it does, and it would be easy and highly beneficial)
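
For the signed rpms at least, tampering should be detectable on the client
side; a quick sketch of the check (the package filename is just an example):

  rpm -K ovirt-engine-3.6.5-1.el7.noarch.rpm   # shows digest + signature status

Unsigned rpms only get a digest line there, so they would catch corruption but
not deliberate tampering.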

> 
> Anton.
> 
> 
> On Tue, May 10, 2016 at 3:13 PM, Dotan Paz <d...@redhat.com> wrote:
> 
> > Hi,
> > In order to support the RHEV CI's request to sync the repo to tlv,  i'd
> > need to have anonymous  from tlv over rsync , IP : 82.81.161.50
> >
> > Thanks
> >
> > --
> >
> > Dotan Paz , Systems Administrator
> > Labs & Capital Management ,
> > PnT DevOps
> > Red Hat inc.
> >
> >
> >
> >
> 
> 
> -- 
> Anton Marchukov
> Senior Software Engineer - RHEV CI - Red Hat

> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra


-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-518) Re: github mirroring broken?

2016-05-10 Thread David Caro Estevez (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15640#comment-15640
 ] 

David Caro Estevez commented on OVIRT-518:
--

Right now the repo that is not syncing is ovirt-engine:

{code:java}

[2016-05-10 07:07:02,514] [] scheduling replication ovirt-engine:refs/heads/master => g...@github.com:oVirt/ovirt-engine.git
[2016-05-10 07:07:02,514] [] scheduled ovirt-engine:refs/heads/master => [280250e8] push g...@github.com:oVirt/ovirt-engine.git to run after 0s
[2016-05-10 07:07:02,514] [280250e8] Replication to g...@github.com:oVirt/ovirt-engine.git started...
[2016-05-10 07:07:04,228] [280250e8] Push to g...@github.com:oVirt/ovirt-engine.git references: [RemoteRefUpdate[remoteName=refs/heads/master, NOT_ATTEMPTED, (null)...980f9621f2d029973397c94874bae4908e1cbe4b, srcRef=refs/heads/master, forceUpdate, message=null]]
[2016-05-10 07:07:05,804] [280250e8] Unexpected error during replication to g...@github.com:oVirt/ovirt-engine.git
java.lang.NoSuchMethodError: org.eclipse.jgit.internal.storage.pack.PackWriter.release()V
    at org.eclipse.jgit.transport.BasePackPushConnection.writePack(BasePackPushConnection.java:307)
    at org.eclipse.jgit.transport.BasePackPushConnection.doPush(BasePackPushConnection.java:197)
    at org.eclipse.jgit.transport.BasePackPushConnection.push(BasePackPushConnection.java:152)
    at org.eclipse.jgit.transport.PushProcess.execute(PushProcess.java:166)
    at org.eclipse.jgit.transport.Transport.push(Transport.java:1200)
    at org.eclipse.jgit.transport.Transport.push(Transport.java:1246)
    at com.googlesource.gerrit.plugins.replication.PushOne.pushVia(PushOne.java:403)
    at com.googlesource.gerrit.plugins.replication.PushOne.runImpl(PushOne.java:375)
    at com.googlesource.gerrit.plugins.replication.PushOne.runPushOperation(PushOne.java:288)
    at com.googlesource.gerrit.plugins.replication.PushOne.access$000(PushOne.java:81)
    at com.googlesource.gerrit.plugins.replication.PushOne$1.call(PushOne.java:258)
    at com.googlesource.gerrit.plugins.replication.PushOne$1.call(PushOne.java:255)
    at com.google.gerrit.server.util.RequestScopePropagator$5.call(RequestScopePropagator.java:222)
    at com.google.gerrit.server.util.RequestScopePropagator$4.call(RequestScopePropagator.java:201)
    at com.google.gerrit.server.git.PerThreadRequestScope$Propagator$1.call(PerThreadRequestScope.java:75)
    at com.googlesource.gerrit.plugins.replication.PushOne.run(PushOne.java:261)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
    at com.google.gerrit.server.git.WorkQueue$Task.run(WorkQueue.java:377)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
{code}



> Re: github mirroring broken?
> 
>
> Key: OVIRT-518
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-518
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: eyal edri [Administrator]
>Assignee: Shlomo Ben David
>
> On May 4, 2016 11:03 AM, "Nir Soffer" <nsof...@redhat.com> wrote:
> > Hi all,
> >
> > Looks like vdsm github mirroring is broken, github is about 2 days after
> > master.
> > https://github.com/oVirt/vdsm
> >
> > Latest patch on github is from May 2 15:58:37 2016 -0400
> >
> > Latest patch in master is from May 4 03:14:47 2016 -0400
> >
> > Nir
> > ___
> > Infra mailing list
> > Infra@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/infra
> >
> >
> >



--
This message was sent by Atlassian JIRA
(v1000.5.2#72002)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-518) Re: github mirroring broken?

2016-05-10 Thread David Caro Estevez (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15638#comment-15638
 ] 

David Caro Estevez commented on OVIRT-518:
--

This issue is still happening though, just take a peek at the gerrit
replication logs. It seems that the repos that fail to sync change from
time to time.

As we talked about through irc, I think the issue is that old jar at
/home/gerrit2 that is being picked up by gerrit and messing it up; I think
it's worth trying to move the jar out of the way and restarting gerrit
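
Something along these lines, as a sketch (the jar name is just an example, I'd
first check what is actually lying around there):

{code}
# list any stray jars gerrit could be picking up
find /home/gerrit2 -maxdepth 1 -name '*.jar'
# move the offender aside (filename is an example) and restart gerrit
mv /home/gerrit2/old-jgit.jar /home/gerrit2/old-jgit.jar.disabled
service gerrit restart   # or however the service is managed on that host
{code}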

> Re: github mirroring broken?
> 
>
> Key: OVIRT-518
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-518
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: eyal edri [Administrator]
>Assignee: Shlomo Ben David
>
> On May 4, 2016 11:03 AM, "Nir Soffer" <nsof...@redhat.com> wrote:
> > Hi all,
> >
> > Looks like vdsm github mirroring is broken, github is about 2 days after
> > master.
> > https://github.com/oVirt/vdsm
> >
> > Latest patch on github is from May 2 15:58:37 2016 -0400
> >
> > Latest patch in master is from May 4 03:14:47 2016 -0400
> >
> > Nir
> > ___
> > Infra mailing list
> > Infra@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/infra
> >
> >
> >



--
This message was sent by Atlassian JIRA
(v1000.5.2#72002)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: restricting check patch/merged parallel jobs

2016-05-10 Thread David Caro
On 05/10 13:54, Eyal Edri wrote:
> Is there any reason we're limiting the amount of check patch & check merged
> jobs to run only 3 in parallel?
> 

We had some mess in the past where enabling parallel runs did not really
prevent two runs from using the same slave at the same time; I guess we never
reenabled them.

> Each job runs in mock and on its own VM; is anything preventing us from
> removing this limitation, so we won't have idle slaves while other jobs are
> in the queue?
> 
> We can at least increase it to a higher level if we don't want one specific job
> to take over all slaves and starve other jobs, but I think ovirt-engine
> jobs are probably the biggest consumer of ci, so the threshold should be
> updated.

+1

> 
> -- 
> Eyal Edri
> Associate Manager
> RHEV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
> 
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)

> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra


-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: GitHub - pinterest/knox: Knox is a secret management service

2016-05-09 Thread David Caro Estevez
Yet another one: https://sobolevn.github.io/git-secret/#usage

David Caro

On May 8, 2016 22:36, Eyal Edri <ee...@redhat.com> wrote:

Another password saving solution: https://github.com/pinterest/knox

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-522) slaves going offline during the build

2016-05-06 Thread David Caro Estevez (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15503#comment-15503
 ] 

David Caro Estevez commented on OVIRT-522:
--

btw. the date and time the disconnect happened is around 10:55 (in the
machine's timezone, which is also jenkins' one, UTC)

> slaves going offline during the build
> -
>
> Key: OVIRT-522
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-522
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: sbonazzo
>Assignee: infra
>
> http://jenkins.ovirt.org/job/ovirt-engine_3.6.6_build-artifacts-el6-x86_64/42/console
> 09:47:15 [INFO]    Compiling 32 permutations
> 09:47:16 [INFO]       Compiling permutation 0...
> 09:50:17 [INFO]       Compiling permutation 1...
> 09:53:02 [INFO]       Compiling permutation 2...
> 09:55:15 [INFO]       Compiling permutation 3...
> 09:57:26 [INFO]       Compiling permutation 4...
> 09:59:53 [INFO]       Compiling permutation 5...
> 10:21:47 [INFO]       Compiling permutation 6...
> 10:27:36 [INFO]       Compiling permutation 7...
> 10:39:02 [INFO]       Compiling permutation 8...
> 10:55:46 [INFO]       Compiling permutation 9...
> 10:55:47 Slave went offline during the build
> <http://jenkins.ovirt.org/computer/el7-vm13.phx.ovirt.org/log>
> 10:55:47 Build step 'Execute shell' marked build as failure
> -- 
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com



--
This message was sent by Atlassian JIRA
(v1000.5.2#72002)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-522) slaves going offline during the build

2016-05-06 Thread David Caro Estevez (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Caro Estevez reassigned OVIRT-522:


Assignee: David Caro Estevez  (was: infra)

> slaves going offline during the build
> -
>
> Key: OVIRT-522
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-522
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: sbonazzo
>    Assignee: David Caro Estevez
>
> *http://jenkins.ovirt.org/job/ovirt-engine_3.6.6_build-artifacts-el6-x86_64/42/console
> <http://jenkins.ovirt.org/job/ovirt-engine_3.6.6_build-artifacts-el6-x86_64/42/console>*
> *09:47:15* [INFO]Compiling 32 permutations*09:47:16* [INFO]
> Compiling permutation 0...*09:50:17* [INFO]   Compiling
> permutation 1...*09:53:02* [INFO]   Compiling permutation
> 2...*09:55:15* [INFO]   Compiling permutation 3...*09:57:26*
> [INFO]   Compiling permutation 4...*09:59:53* [INFO]
> Compiling permutation 5...*10:21:47* [INFO]   Compiling
> permutation 6...*10:27:36* [INFO]   Compiling permutation
> 7...*10:39:02* [INFO]   Compiling permutation 8...*10:55:46*
> [INFO]   Compiling permutation 9...*10:55:47* Slave went offline
> during the build
> <http://jenkins.ovirt.org/computer/el7-vm13.phx.ovirt.org/log>*10:55:47*
> Build step 'Execute shell' marked build as failure
> -- 
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com



--
This message was sent by Atlassian JIRA
(v1000.5.2#72002)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-522) slaves going offline during the build

2016-05-06 Thread David Caro Estevez (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15502#comment-15502
 ] 

David Caro Estevez commented on OVIRT-522:
--

The jenkins slave died, yes, that's in the logs.

On the slave side, the only thing I've found so far is that puppet was running 
at that time:

May  6 10:51:39 el7-vm13 systemd: Starting Cleanup of Temporary Directories...
May  6 10:52:04 el7-vm13 systemd: Started Cleanup of Temporary Directories.
May  6 10:55:56 el7-vm13 puppet-agent[1368]: Finished catalog run in 735.75 
seconds
May  6 10:57:38 el7-vm13 systemd: Started Session 4446 of user jenkins.
May  6 10:57:38 el7-vm13 systemd-logind: New session 4446 of user jenkins.


Will try to check what it was doing, maybe there's a manifest that messes
with the slave jar or something

> slaves going offline during the build
> -
>
> Key: OVIRT-522
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-522
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: sbonazzo
>Assignee: infra
>
> *http://jenkins.ovirt.org/job/ovirt-engine_3.6.6_build-artifacts-el6-x86_64/42/console
> <http://jenkins.ovirt.org/job/ovirt-engine_3.6.6_build-artifacts-el6-x86_64/42/console>*
> *09:47:15* [INFO]Compiling 32 permutations*09:47:16* [INFO]
> Compiling permutation 0...*09:50:17* [INFO]   Compiling
> permutation 1...*09:53:02* [INFO]   Compiling permutation
> 2...*09:55:15* [INFO]   Compiling permutation 3...*09:57:26*
> [INFO]   Compiling permutation 4...*09:59:53* [INFO]
> Compiling permutation 5...*10:21:47* [INFO]   Compiling
> permutation 6...*10:27:36* [INFO]   Compiling permutation
> 7...*10:39:02* [INFO]   Compiling permutation 8...*10:55:46*
> [INFO]   Compiling permutation 9...*10:55:47* Slave went offline
> during the build
> <http://jenkins.ovirt.org/computer/el7-vm13.phx.ovirt.org/log>*10:55:47*
> Build step 'Execute shell' marked build as failure
> -- 
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com



--
This message was sent by Atlassian JIRA
(v1000.5.2#72002)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: sonar and github

2016-05-05 Thread David Caro Estevez
On 05/05 09:13, David Caro Estevez wrote:
> 
> Hey Roman,
> 
> Adding the infra list

Forgot to add them XP

> 
> On 05/05 08:57, Roman Mohr wrote:
> > Hi David,
> > 
> > I have asked sonarqube if they would add ovirt-engine to
> > https://nemo.sonarqube.org/.
> > 
> > sonarqube is a pretty nice tool for source code analysis. It has a slightly
> > different focus than coverity and could be very useful for us.
> 
> Have you discussed this with the ovirt-engine maintainers/devs? Not that I
> think it would be an issue, but usually people don't like surprises :)
> 
> > 
> > They are happy to add us. In the past they just built everything on nemo
> > and published the results but they are switching to building on travis and
> > just upload the results.
> > 
> > Do you think you could give me access to our ovirt-engine github repo?
> 
> I can add the project, no problem, you can just make sure to create the new
> branch with the travis yaml (if no one has issues with it).
> 
> > 
> > I would do the following:
> >  - prepare a .travis.yml file on a separate branch
> >  - configure an account on nemo with the help of a sonarqube guy
> 
> ^ the accounts are free? Can we create a project and add multiple admin
> accounts? If not, we should find a way to share that account to avoid a single
> maintainer
> 
> >  - enable travis builds
> >  - when everything works I would add the .travis.yml file through a normal
> > gerrit patch
> >  - give up my github permissions if required ;)
> > 
> > Roman
> 
> -- 
> David Caro
> 
> Red Hat S.L.
> Continuous Integration Engineer - EMEA ENG Virtualization R&D
> 
> Tel.: +420 532 294 605
> Email: dc...@redhat.com
> IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> Web: www.redhat.com
> RHT Global #: 82-62605



-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-520) Document infra user addition (probably automate)

2016-05-04 Thread David Caro Estevez (oVirt JIRA)
David Caro Estevez created OVIRT-520:


 Summary: Document infra user addition (probably automate)
 Key: OVIRT-520
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-520
 Project: oVirt - virtualization made easy
  Issue Type: Bug
Reporter: David Caro Estevez
Assignee: infra


Current steps:

  * send patch and get it merged
  * Import puppet classes from proxy
  * Add the new class to the production hostgroup
  * Add the password parameter with the hashed password



--
This message was sent by Atlassian JIRA
(v1000.5.1#72002)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-519) Document encrypted passwords file handling

2016-05-04 Thread David Caro Estevez (oVirt JIRA)
David Caro Estevez created OVIRT-519:


 Summary: Document encrypted passwords file handling
 Key: OVIRT-519
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-519
 Project: oVirt - virtualization made easy
  Issue Type: Bug
Reporter: David Caro Estevez
Assignee: infra






--
This message was sent by Atlassian JIRA
(v1000.5.1#72002)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-511) Puppetize/Document PHX Hypervisors

2016-05-02 Thread David Caro Estevez (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15306#comment-15306
 ] 

David Caro Estevez commented on OVIRT-511:
--

The docs already exist: 
http://ovirt-infra-docs.readthedocs.io/en/latest/Phoenix_Lab/oVirt_Hosts.html#network-configuration

> Puppetize/Document PHX Hypervisors 
> ---
>
> Key: OVIRT-511
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-511
> Project: oVirt - virtualization made easy
>  Issue Type: Task
>Reporter: eyal edri [Administrator]
>Assignee: infra
>Priority: High
>
> We have some non-standard configuration on the PHX hypervisors, including 
> specific network conf for bond/LACP.
> We need to ensure this configuration is documented or, if possible, puppetized.



--
This message was sent by Atlassian JIRA
(v1000.5.1#72002)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Do we have or can have ppc64 and ppc64le slaves in Jenkins?

2016-04-29 Thread David Caro
On 04/29 13:08, Juan Hernández wrote:
> On 04/29/2016 01:07 PM, David Caro wrote:
> > On 04/29 10:52, Juan Hernández wrote:
> >> Hello,
> >> 
> >> Some of the packages that I maintain have native components that
> >> depend on the underlying processor architecture. Currently we are
> >> building these packages only for x86_64. Do we have or can we
> >> have ppc64 and ppc64 le Jenkins slaves in order to perform the
> >> tests and builds there?
> >> 
> > 
> > It's in the works
> > 
> 
> Any estimation of when will it be available? Will it be usable for the
> oVirt 4 builds?

We are trying to get them up and running asap, but we are having some issues
with the hosting. It's quite high priority, so I guess they will be up and
running in ~2 weeks, but well, you know, best effort.


> 
> -- 
> Commercial Address: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta
> 3ºD, 28016 Madrid, Spain
> Registered in the Mercantile Registry of Madrid – C.I.F. B82657941 - Red Hat S.L.
> _______
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Do we have or can have ppc64 and ppc64le slaves in Jenkins?

2016-04-29 Thread David Caro
On 04/29 10:52, Juan Hernández wrote:
> Hello,
> 
> Some of the packages that I maintain have native components that depend
> on the underlying processor architecture. Currently we are building
> these packages only for x86_64. Do we have or can we have ppc64 and
> ppc64le Jenkins slaves in order to perform the tests and builds there?
> 

It's in the works

> Thanks in advance,
> Juan Hernandez
> 
> -- 
> Commercial Address: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta
> 3ºD, 28016 Madrid, Spain
> Registered in the Mercantile Registry of Madrid – C.I.F. B82657941 - Red Hat S.L.
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-507) Can't move jira bugs from blocked status

2016-04-28 Thread David Caro Estevez (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Caro Estevez reassigned OVIRT-507:


Assignee: eyal edri [Administrator]  (was: infra)

> Can't move jira bugs from blocked status
> 
>
> Key: OVIRT-507
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-507
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>    Reporter: David Caro Estevez
>Assignee: eyal edri [Administrator]
>Priority: High
>
> Since a few days ago, if an issue is in blocked status, you can't (at least I
> can't) move it to any other status, not even close it, so issues remain there,
> for example:
> https://ovirt-jira.atlassian.net/browse/OVIRT-504



--
This message was sent by Atlassian JIRA
(v1000.5.0#72002)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-507) Can't move jira bugs from blocked status

2016-04-28 Thread David Caro Estevez (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Caro Estevez updated OVIRT-507:
-
Status: Blocked  (was: Blocked)

> Can't move jira bugs from blocked status
> 
>
> Key: OVIRT-507
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-507
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>    Reporter: David Caro Estevez
>Assignee: infra
>Priority: High
>
> Since a few days ago, if an issue is in blocked status, you can't (at least I
> can't) move it to any other status, not even close it, so issues remain there,
> for example:
> https://ovirt-jira.atlassian.net/browse/OVIRT-504



--
This message was sent by Atlassian JIRA
(v1000.5.0#72002)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-507) Can't move jira bugs from blocked status

2016-04-28 Thread David Caro Estevez (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Caro Estevez updated OVIRT-507:
-
Status: Blocked  (was: Blocked)

> Can't move jira bugs from blocked status
> 
>
> Key: OVIRT-507
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-507
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>    Reporter: David Caro Estevez
>Assignee: infra
>Priority: High
>
> Since a few days ago, if an issue is in blocked status, you can't (at least I
> can't) move it to any other status, not even close it, so issues remain there,
> for example:
> https://ovirt-jira.atlassian.net/browse/OVIRT-504



--
This message was sent by Atlassian JIRA
(v1000.5.0#72002)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-507) Can't move jira bugs from blocked status

2016-04-28 Thread David Caro Estevez (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Caro Estevez updated OVIRT-507:
-
Status: Blocked  (was: New)

> Can't move jira bugs from blocked status
> 
>
> Key: OVIRT-507
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-507
> Project: oVirt - virtualization made easy
>  Issue Type: Bug
>    Reporter: David Caro Estevez
>Assignee: infra
>Priority: High
>
> Since a few days ago, if an issue is in blocked status, you can't (at least I
> can't) move it to any other status, not even close it, so issues remain there,
> for example:
> https://ovirt-jira.atlassian.net/browse/OVIRT-504



--
This message was sent by Atlassian JIRA
(v1000.5.0#72002)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-507) Can't move jira bugs from blocked status

2016-04-28 Thread David Caro Estevez (oVirt JIRA)
David Caro Estevez created OVIRT-507:


 Summary: Can't move jira bugs from blocked status
 Key: OVIRT-507
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-507
 Project: oVirt - virtualization made easy
  Issue Type: Bug
Reporter: David Caro Estevez
Assignee: infra
Priority: High


Since a few days ago, if an issue is in blocked status, you can't (at least I
can't) move it to any other status, not even close it, so issues remain there,
for example:

https://ovirt-jira.atlassian.net/browse/OVIRT-504



--
This message was sent by Atlassian JIRA
(v1000.5.0#72002)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [oVirt/ovirt-engine] Eayunos 4.2 (#5)

2016-04-27 Thread David Caro
On 04/27 14:53, Eyal Edri wrote:
> Not the first time.
> I think there was a task or thread on blocking this option if possible and
> adding a message on GitHub that requests will be accepted only via Gerrit.
> OTOH, If we see there is a large group of developers contributing from
> GitHub, we might want to consider adding the GitHub gerrit plugin [1] which
> converts PR to ChangeSets.
> 
> [1] https://gerrit.googlesource.com/plugins/github/+/master/README.md


Most of the commits there are from current oVirt devels, not new devs, though
it's interesting to see where they come from :)

https://github.com/eayun

That also led me to:
https://github.com/ovirt-china

^ are we related to that last one in any way?

> 
> On Wed, Apr 27, 2016 at 2:49 PM, Barak Korren  wrote:
> 
> > This looks like a merged PR for ovirt-engine on github! wtf is going on?
> > -- Forwarded message --
> > From: "Hubtang" 
> > Date: 27 Apr 2016 11:56
> > Subject: [oVirt/ovirt-engine] Eayunos 4.2 (#5)
> > To: "oVirt/ovirt-engine" 
> > Cc:
> >
> > Modify the welcome page & login page, and develop some widgets with an
> >> icon on the left side by extending AbstractValidatedWidgetWithLabel.
> >> --
> >> You can view, comment on, or merge this pull request online at:
> >>
> >>   https://github.com/oVirt/ovirt-engine/pull/5
> >> Commit Summary
> >>
> >>- core: make OsRepository injectable using @Inject
> >>- core: hosted-engine: set the os type to linux on import
> >>- core: hosted-engine: missing config from engine-config
> >>- restapi: Fix update of virtio_scsi.enabled
> >>- core,webadmin: Not validating HostNetworkQos values
> >>- userportal: show fill in fields message
> >>- userportal: Fixed New Template dialog size
> >>- engine,userportal,webadmin: log uncaught UI exceptions
> >>- webadmin: auto-scroll SetupNetworks interfaces list during
> >>drag-and-drop
> >>- host-deploy: attemptConnection() should not throw exceptions
> >>- engine: both null and empty gateway should be considered as no
> >>gateway
> >>- packaging: rename: fixes for vmconsole proxy helper
> >>- core: Ignore expected exception during connection attempts
> >>- packaging: moving the ovirt extentions api dependencies to his own
> >>line
> >>- webadmin: Added fontend null checks
> >>- automation: add quick validation
> >>- core: Fix of merging of VmDynamics form DB and VDSM
> >>- automation: reducing GWT workers
> >>- jsonrpc: bump version
> >>- webadmin: Fix ClassCastException when importing VMs with pinned host
> >>- core: VmDevice null safe hashCode() method
> >>- engine: Do not assign creator as owner to VMs from pool
> >>- Add Attach Disk/Cpu profile permissions to all Import/Export
> >>capable roles
> >>- core: adding missing ctors
> >>- engine: Null ip configuration should be considered as NONE
> >>- webadmin: Add warning on live snapshot on VM without guest agent
> >>- core: missing constructor
> >>- frontend: Fix name uniqueness chack in ImportVmModel
> >>- frontend: Fix NPE in ImportVms Dialog
> >>- webadmin: Fix ClassCastException in SearchableListModel
> >>- core: Allow GetVmTemplatesByStoragePoolIdQuery in UserPortal
> >>- restapi: Populate vm.placement_policy.affinity
> >>- utils: Remove extra parameters in copyVmDevices
> >>- core: Prevent host device copy on pinned host mismatch
> >>- core: allow to balance migrate vm that run on specific host
> >>- build: post ovirt-engine-3.6.3 branching
> >>- core: fix remove disk snapshots for cinder
> >>- Use the proper whitelist for MigrateVmCommand.canDoAction
> >>- webadmin: Release keys removed from the console title
> >>- packaging: spec: Fix semanage requirement in fedora 23
> >>- packaging: setup: Do not require ovirt-log-collector
> >>- core: CreateSnapshot - add parameters ctor
> >>- webadmin: Uninitialized host fails to open hardware tab
> >>- webadmin: new pool dialog contained the sub version field
> >>- core: Reimplement PKIResources.Resource as class
> >>- Revert "core,webadmin: Not validating HostNetworkQos values"
> >>- webadmin: added warning when import windows without virtio
> >>- webadmin: change sizes of suggest boxes
> >>- webadmin: show the "latest" template only when the new VM is
> >>stateless
> >>- webadmin: use the cluster version rather than host one
> >>- webadmin: Consistent relation memory - guaranteed memory
> >>- core: Only allow continuous numa node indices
> >>- packaging: rename: Handle storage domains more nicely
> >>- core: cleanup cmd data when compensation deleted
> >>- webadmin: Warn for suspended VMs when cluster change
> >>- restapi: Fix RSDL for adding clusters
> >>- core: 

Re: Fwd: [oVirt/ovirt-engine] Eayunos 4.2 (#5)

2016-04-27 Thread David Caro
On 04/27 14:49, Barak Korren wrote:
> This looks like a merged PR for ovirt-engine on github! wtf is going on?

I don't see it as merged, actually I see:

This branch has conflicts that must be resolved


There's no way to disable pull requests on github :/
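
A workaround could be a small bot that closes new PRs with a canned note
pointing people to gerrit. A rough sketch (GitHub API v3 via requests; the
token and message are placeholders):

    import requests

    API = 'https://api.github.com/repos/oVirt/ovirt-engine'
    HEADERS = {'Authorization': 'token <api-token>'}
    MSG = ('Thanks for the patch! oVirt accepts patches only through '
           'gerrit.ovirt.org, please resubmit it there.')

    for pr in requests.get(API + '/pulls', params={'state': 'open'},
                           headers=HEADERS).json():
        number = pr['number']
        # leave the canned note, then close the pull request
        requests.post(API + '/issues/%d/comments' % number,
                      json={'body': MSG}, headers=HEADERS)
        requests.patch(API + '/pulls/%d' % number,
                       json={'state': 'closed'}, headers=HEADERS)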

> -- Forwarded message --
> From: "Hubtang" 
> Date: 27 Apr 2016 11:56
> Subject: [oVirt/ovirt-engine] Eayunos 4.2 (#5)
> To: "oVirt/ovirt-engine" 
> Cc:
> 
> Modify the welcome page & login page, and develop some widgets with an icon
> > on the left side by extending AbstractValidatedWidgetWithLabel.
> > --
> > You can view, comment on, or merge this pull request online at:
> >
> >   https://github.com/oVirt/ovirt-engine/pull/5
> > Commit Summary
> >
> >- core: make OsRepository injectable using @Inject
> >- core: hosted-engine: set the os type to linux on import
> >- core: hosted-engine: missing config from engine-config
> >- restapi: Fix update of virtio_scsi.enabled
> >- core,webadmin: Not validating HostNetworkQos values
> >- userportal: show fill in fields message
> >- userportal: Fixed New Template dialog size
> >- engine,userportal,webadmin: log uncaught UI exceptions
> >- webadmin: auto-scroll SetupNetworks interfaces list during
> >drag-and-drop
> >- host-deploy: attemptConnection() should not throw exceptions
> >- engine: both null and empty gateway should be considered as no
> >gateway
> >- packaging: rename: fixes for vmconsole proxy helper
> >- core: Ignore expected exception during connection attempts
> >- packaging: moving the ovirt extentions api dependencies to his own
> >line
> >- webadmin: Added fontend null checks
> >- automation: add quick validation
> >- core: Fix of merging of VmDynamics form DB and VDSM
> >- automation: reducing GWT workers
> >- jsonrpc: bump version
> >- webadmin: Fix ClassCastException when importing VMs with pinned host
> >- core: VmDevice null safe hashCode() method
> >- engine: Do not assign creator as owner to VMs from pool
> >- Add Attach Disk/Cpu profile permissions to all Import/Export capable
> >roles
> >- core: adding missing ctors
> >- engine: Null ip configuration should be considered as NONE
> >- webadmin: Add warning on live snapshot on VM without guest agent
> >- core: missing constructor
> >- frontend: Fix name uniqueness chack in ImportVmModel
> >- frontend: Fix NPE in ImportVms Dialog
> >- webadmin: Fix ClassCastException in SearchableListModel
> >- core: Allow GetVmTemplatesByStoragePoolIdQuery in UserPortal
> >- restapi: Populate vm.placement_policy.affinity
> >- utils: Remove extra parameters in copyVmDevices
> >- core: Prevent host device copy on pinned host mismatch
> >- core: allow to balance migrate vm that run on specific host
> >- build: post ovirt-engine-3.6.3 branching
> >- core: fix remove disk snapshots for cinder
> >- Use the proper whitelist for MigrateVmCommand.canDoAction
> >- webadmin: Release keys removed from the console title
> >- packaging: spec: Fix semanage requirement in fedora 23
> >- packaging: setup: Do not require ovirt-log-collector
> >- core: CreateSnapshot - add parameters ctor
> >- webadmin: Uninitialized host fails to open hardware tab
> >- webadmin: new pool dialog contained the sub version field
> >- core: Reimplement PKIResources.Resource as class
> >- Revert "core,webadmin: Not validating HostNetworkQos values"
> >- webadmin: added warning when import windows without virtio
> >- webadmin: change sizes of suggest boxes
> >- webadmin: show the "latest" template only when the new VM is
> >stateless
> >- webadmin: use the cluster version rather than host one
> >- webadmin: Consistent relation memory - guaranteed memory
> >- core: Only allow continuous numa node indices
> >- packaging: rename: Handle storage domains more nicely
> >- core: cleanup cmd data when compensation deleted
> >- webadmin: Warn for suspended VMs when cluster change
> >- restapi: Fix RSDL for adding clusters
> >- core: CreateAllSnapshotsFromVmCommand - use snapshot type from params
> >- core: serialization/deserialization of SnapshotType
> >- core: Do not acquire in left-most-available order
> >- core: add code setting storage pool id in
> >DetachNetworkFromVdsInterfaceCommand
> >- core: Case-insensitive match of file name patterns
> >- webadmin: Edit Fence Agent
> >- core: Copy non template disks correctly
> >- webadmin: Updated "Specific" label in Edit VM dialog
> >- webadmin: Edit VM dialog -- capitalize "Host"
> >- core: hosted engine: Avoid choosing the hosted engine sd as master
> >- webadmin: Block restore memory on newer compatibility versions
> >- scheduler: Add cpu pinning policy 

Re: non-yamlized jobs

2016-04-26 Thread David Caro
On 04/26 14:04, Sandro Bonazzola wrote:
> On Sun, Apr 24, 2016 at 8:02 PM, Nadav Goldin <ngol...@redhat.com> wrote:
> 
> > Hey Sandro,
> > [1] is a list of all the non-yamlized jobs in jenkins-old.ovirt.org, can
> > you help us map which jobs still need to be enabled? we already mapped dao
> > and find_bugs, we want to minimize the number of jobs that are not yamlized
> > yet and must be enabled in jenkins-old.ovirt.org
> >
> >
> > Thanks,
> >
> > Nadav.
> >
> >
> >
> > [1] https://paste.fedoraproject.org/359265/
> >
> 
> 
> 
> httpcomponents-client_master_create-rpms_merged
> httpcomponents-core_master_create-rpms_merged
> vhostmd_create-rpms_el6
> vhostmd_create-rpms_el7
> all can be dropped after publishing rpms in static repos
> 
> 
> archive_jobs_removed_from_yaml_test_pdangur
> mom_any_create-rpms_manual
> otopi_any_create-rpms_manual
> ovirt-dwh_any_create-rpms_manual
> ovirt-engine-cli_any_create-rpms_manual
> ovirt-engine-extension-aaa-ldap_any_create-rpms_manual
> ovirt-engine-extension-aaa-misc_any_create-rpms_manual
> ovirt-engine-extension-logger-log4j_any_create-rpms_manual
> ovirt-hosted-engine-setup_any_create-rpms_manual
> vdsm_any_create-rpms_manual
> vdsm-jsonrpc-java_any_create-rpms_manual
> ovirt-host-deploy_any_create-rpms_manual
> ovirt-hosted-engine-ha_any_create-rpms_manual
> ovirt-image-uploader_any_create-rpms_manual
> ovirt-iso-uploader_any_create-rpms_manual
> ovirt-live_3.6-create-iso
> ovirt-live_master-create-iso
> ovirt-log-collector_any_create-rpms_manual
> ovirt-reports_any_create-rpms_manual
> ovirt-setup-lib_any_create-rpms_manual
> repos_3.6_check-closure_merged
> repos_master_check-closure_merged
> spagobi_repo_merged
> All needed
> 
> 
> ovirt-hosted-engine-ha_gerrit
> ovirt-engine-jboss-as_master_create-rpms_merged
> all can be dropped
> 
> PatchMate-commit-hook
> ovirt-appliance_for-testing_build-artifacts-el7-x86_64
> ovirt-engine_3.6_dao-unit-tests_created
> ovirt-engine_3.6_dao-unit-tests_merged
> ovirt-engine_3.6_find-bugs_gerrit
> ovirt-engine_3.6_find-bugs_merged
> ovirt-engine_3.6_style_gerrit
> ovirt-engine_3.6_unit-tests_gerrit
> ovirt-engine_3.6_unit-tests_merged
> ovirt-engine_master_build-artifacts-el6-x86_64_no_spm_testing
> ovirt-engine_master_build-artifacts-el7-x86_64_no_spm_testing
> ovirt-engine_master_check-merged-el7-x86_64-testing-clone-github
> ovirt-engine_master_coverity-analysis_merged
> ovirt-engine_master_dao-unit-tests_created
> ovirt-engine_master_dao-unit-tests_merged
> ovirt-engine_master_find-bugs_gerrit
> ovirt-engine_master_find-bugs_gerrit_juan
> ovirt-engine_master_find-bugs_merged
> ovirt-engine_master_style_gerrit
> ovirt-engine_master_unit-tests_gerrit
> ovirt-engine_master_unit-tests_merged
> ovirt-node-plugin-hosted-engine_master_create-rpms_merged
> ovirt-node_master_check-local_gerrit
> ovirt-optimizer_master_create-rpms_gerrit
> ovirt-scheduler-proxy_compile_gerrit
> ovirt-setup-lib_unit-tests
> ovirt-vmconsole_any_create-rpms_manual
> ovirt-vmconsole_master_create-rpms-el6-x86_64_created
> ovirt-vmconsole_master_create-rpms-el6-x86_64_merged
> ovirt_3.6_image-system-tests
> ovirt_integration-test-poc_created
> qemu-kvm-rhev_create-rpms_el6
> rbarry_ovirt-node-ng_master_check-merged-fc23-x86_64

> system_backup-jenkins-org
> system_restart-gerrit-service
these are yamlized iirc, if not, must be kept (and yamlized if possible)

> validate_open_sources_licenses_ovirt
> vdsm-jsonrpc-java_master_build-artifacts-fc22-x86_64
> vdsm-jsonrpc-java_master_check-patch-fc22-x86_64
> vdsm_master_create-rpms-el7-x86_64_no_spm_testing
> vdsm_master_storage_functional_tests_posix_gerrit
> vdsm_master_verify-error-codes_merged
> vdsm_master_virt_functional_tests_gerrit

> lago_master_dcaro-tests-check-merged-fc22-x86_64
> dcaro_test
> dcaro_test_dummy1
> dcaro_test_wf1
Anything with *dcaro*test* can be dropped

> fabiand_boo_build_testing
> All, no idea
> 
> 
> 
> -- 
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com

> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra


-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Yaml-ized jobs: Can they be unstable?

2016-04-25 Thread David Caro Estevez
With the standard CI it is not possible, as the distinction between 'unstable' and
'failure' is quite ambiguous and usually leads to confusion.

So we decided not to have that third state: runs either pass, or they do not.

You can print/archive/log anything you want to help you debug the
issue, but on the CI side the decision is whether to -1 a patch or not, so just two
states.

Outside the standard CI you can define post-build scripts that can modify the state
of the job (you can set it as failed, unstable, or even pass a job that
otherwise would be marked as failed).

Though for the reasons I exposed, I don't recommend that.
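
If you want the debugging info anyway, I'd keep the check script itself binary
and archive everything else. A minimal sketch (the test runner name and paths
are made-up examples, exported-artifacts/ is what standard CI archives for you):

    #!/usr/bin/env python
    import os
    import subprocess
    import sys

    if not os.path.isdir('exported-artifacts'):
        os.makedirs('exported-artifacts')

    # keep the full output as an artifact so you can debug it later
    with open('exported-artifacts/sanity.log', 'w') as log:
        result = subprocess.call(
            ['./run_node_sanity_tests.sh'],  # hypothetical test runner
            stdout=log,
            stderr=subprocess.STDOUT,
        )

    # the log is archived either way, the run itself just passes or fails
    sys.exit(0 if result == 0 else 1)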

David Caro

On 25/4/2016 8:30 p.m., Fabian Deutsch <fdeut...@redhat.com> wrote:

Hey,

is there a way how a yamlized job can become unstable?

I'd like to run some sanity node tests after a run, and mark the run as
unstable if they fail.
According to the jenkins docs, a job is unstable if one or more publishers fail.

Can this be achieved with yamlized jobs?

- fabian
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-497) [docs] Add documents on what's on the ci-tools repo and how it gets there

2016-04-21 Thread David Caro Estevez (oVirt JIRA)
David Caro Estevez created OVIRT-497:


 Summary: [docs] Add documents on what's on the ci-tools repo and 
how it gets there
 Key: OVIRT-497
 URL: https://ovirt-jira.atlassian.net/browse/OVIRT-497
 Project: oVirt - virtualization made easy
  Issue Type: Improvement
Reporter: David Caro Estevez
Assignee: infra






--
This message was sent by Atlassian JIRA
(v7.2.0-OD-05-030#72002)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: CI fails

2016-04-20 Thread David Caro
On 04/20 10:44, Eyal Edri wrote:
> OK,
> But if we move to using a git server instead of the gerrit server (only for
> post merge), we won't use the Gerrit trigger plugin, so I don't see how it's
> still relevant.
> Instead of using the gerrit trigger to run post merge jobs, we'll use the
> SCM plugin instead like a normal git.
> 
> Will this approach have any issues?

Well, no feedback on gerrit, no link to the gerrit change that caused it, and
only triggering on merges.

Though we already do that on some projects due to the long time they take to
run.

> 
> On Wed, Apr 20, 2016 at 10:41 AM, David Caro <dc...@redhat.com> wrote:
> 
> > On 04/20 09:40, David Caro wrote:
> > > On 04/20 10:25, Eyal Edri wrote:
> > > > On Wed, Apr 20, 2016 at 10:20 AM, Barak Korren <bkor...@redhat.com>
> > wrote:
> > > >
> > > > > On 20 April 2016 at 10:16, Eyal Edri <ee...@redhat.com> wrote:
> > > > > >
> > > > > >
> > > > > > On Wed, Apr 20, 2016 at 9:28 AM, Barak Korren <bkor...@redhat.com>
> > > > > wrote:
> > > > > >>
> > > > > >> > I'd try that approach first, though the mirror is a good idea
> > that
> > > > > will
> > > > > >> > probably have to be implemented anyhow once we start adding
> > slaves,
> > > > > >> > having real
> > > > > >> > info on the network usage/errors will give us insight to
> > actually
> > > > > >> > determine
> > > > > >> > what's the issue, and thus, what's the best solution.
> > > > > >> >
> > > > > >> > @infra what do you think?
> > > > > >> >
> > > > > >>
> > > > > >> The issue with mirroring is how can you make sure that you mirror
> > fast
> > > > > >> enough to enable CI. Even if Gerrit can push to the mirror on
> > patch
> > > > > >> submission, there will still be some time delta between the
> > submission
> > > > > >> happening (and the patch event showing up in Jenins) and the
> > mirror
> > > > > >> being synced. This looks like a nasty race condition.
> > > > > >> What the mirror essentially does is make sure that bits are copied
> > > > > >> from Amazom to PHX just once. I wonder if we can get the same
> > benefit
> > > > > >> with a simple HTTP proxy, how proxy-able is the Git HTTP protocol?
> > > > > >>
> > > > > >
> > > > > > I think we should prioritize mirroring the GIT (not gerrit) repos
> > to PHX,
> > > > > > this will help:
> > > > > >
> > > > > > Speed up all post merge jobs and reduce potential of errors from
> > git
> > > > > clone
> > > > > > (they will be in the same network)
> > > > > > Reduce load (?) from the gerrit server and perhaps reduce errors
> > of the
> > > > > per
> > > > > > patch jobs that will still run from gerrit.ovirt.org (AMAZON)
> > > > > > A longer goal will be either to migrate the gerrit server to PHX
> > or to
> > > > > find
> > > > > > away to properly mirror the gerrit server (but then i fear there
> > might be
> > > > > > race/problem as mentioned)
> > > > > >
> > > > >
> > > > > Please look at my comment about possible race conditions caused by
> > > > > mirroring. Simple mirroring may cause more trouble then its worth. We
> > > > > need to consider proxying instead.
> > > > >
> > > >
> > > > I don't see how a race condition can occur with a merge commit,
> > > > Can you elaborate?
> > >
> > >
> > > From the gerrit config on jenkins:
> > >
> > >
> > > Replication cache expiration time in minutes
> > >
> > > If one of the servers supports replication events, these events are
> > cached in memory because they can be received before the build is triggered
> > and this plugin gets called to evaluate if the build can run. Cache allows
> > the plugin to look if the replication events were already received when it
> > gets called to evaluate if the build can run. If the time elapsed between
> > this plugin gets called and the time the build entered the queue is greater
> 

Re: CI fails

2016-04-20 Thread David Caro
On 04/20 09:40, David Caro wrote:
> On 04/20 10:25, Eyal Edri wrote:
> > On Wed, Apr 20, 2016 at 10:20 AM, Barak Korren <bkor...@redhat.com> wrote:
> > 
> > > On 20 April 2016 at 10:16, Eyal Edri <ee...@redhat.com> wrote:
> > > >
> > > >
> > > > On Wed, Apr 20, 2016 at 9:28 AM, Barak Korren <bkor...@redhat.com>
> > > wrote:
> > > >>
> > > >> > I'd try that approach first, though the mirror is a good idea that
> > > will
> > > >> > probably have to be implemented anyhow once we start adding slaves,
> > > >> > having real
> > > >> > info on the network usage/errors will give us insight to actually
> > > >> > determine
> > > >> > what's the issue, and thus, what's the best solution.
> > > >> >
> > > >> > @infra what do you think?
> > > >> >
> > > >>
> > > >> The issue with mirroring is how can you make sure that you mirror fast
> > > >> enough to enable CI. Even if Gerrit can push to the mirror on patch
> > > >> submission, there will still be some time delta between the submission
> > > >> happening (and the patch event showing up in Jenins) and the mirror
> > > >> being synced. This looks like a nasty race condition.
> > > >> What the mirror essentially does is make sure that bits are copied
> > > >> from Amazom to PHX just once. I wonder if we can get the same benefit
> > > >> with a simple HTTP proxy, how proxy-able is the Git HTTP protocol?
> > > >>
> > > >
> > > > I think we should prioritize mirroring the GIT (not gerrit) repos to 
> > > > PHX,
> > > > this will help:
> > > >
> > > > Speed up all post merge jobs and reduce potential of errors from git
> > > clone
> > > > (they will be in the same network)
> > > > Reduce load (?) from the gerrit server and perhaps reduce errors of the
> > > per
> > > > patch jobs that will still run from gerrit.ovirt.org (AMAZON)
> > > > A longer goal will be either to migrate the gerrit server to PHX or to
> > > find
> > > > away to properly mirror the gerrit server (but then i fear there might 
> > > > be
> > > > race/problem as mentioned)
> > > >
> > >
> > > Please look at my comment about possible race conditions caused by
> > > mirroring. Simple mirroring may cause more trouble then its worth. We
> > > need to consider proxying instead.
> > >
> > 
> > I don't see how a race condition can occur with a merge commit,
> > Can you elaborate?
> 
> 
> From the gerrit config on jenkins:
> 
> 
> Replication cache expiration time in minutes
> 
> If one of the servers supports replication events, these events are cached in
> memory because they can be received before the build is triggered and this 
> plugin gets called to evaluate if the build can run. Cache allows the plugin 
> to look if the replication events were already received when it gets called 
> to evaluate if the build can run. If the time elapsed between this plugin 
> gets called and the time the build entered the queue is greater than the
> cache expiration time, the plugin will assume that replication events were 
> received and will let the build run.
> 
> Changing this value will only take effect when Jenkins is restarted 
> 


And from the specific server options:

Block builds in the queue until the replication events for the configured 
Gerrit slave(s) are received.

> 
> > 
> > 
> > 
> > 
> > -- 
> > Eyal Edri
> > Associate Manager
> > RHEV DevOps
> EMEA ENG Virtualization R&D
> > Red Hat Israel
> > 
> > phone: +972-9-7692018
> > irc: eedri (on #tlv #rhev-dev #rhev-integ)
> 
> -- 
> David Caro
> 
> Red Hat S.L.
> Continuous Integration Engineer - EMEA ENG Virtualization R&D
> 
> Tel.: +420 532 294 605
> Email: dc...@redhat.com
> IRC: dcaro|dcaroest@{freenode|oftc|redhat}
> Web: www.redhat.com
> RHT Global #: 82-62605



-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: CI fails

2016-04-20 Thread David Caro
On 04/20 10:25, Eyal Edri wrote:
> On Wed, Apr 20, 2016 at 10:20 AM, Barak Korren <bkor...@redhat.com> wrote:
> 
> > On 20 April 2016 at 10:16, Eyal Edri <ee...@redhat.com> wrote:
> > >
> > >
> > > On Wed, Apr 20, 2016 at 9:28 AM, Barak Korren <bkor...@redhat.com>
> > wrote:
> > >>
> > >> > I'd try that approach first, though the mirror is a good idea that
> > will
> > >> > probably have to be implemented anyhow once we start adding slaves,
> > >> > having real
> > >> > info on the network usage/errors will give us insight to actually
> > >> > determine
> > >> > what's the issue, and thus, what's the best solution.
> > >> >
> > >> > @infra what do you think?
> > >> >
> > >>
> > >> The issue with mirroring is how can you make sure that you mirror fast
> > >> enough to enable CI. Even if Gerrit can push to the mirror on patch
> > >> submission, there will still be some time delta between the submission
> > >> happening (and the patch event showing up in Jenins) and the mirror
> > >> being synced. This looks like a nasty race condition.
> > >> What the mirror essentially does is make sure that bits are copied
> > >> from Amazom to PHX just once. I wonder if we can get the same benefit
> > >> with a simple HTTP proxy, how proxy-able is the Git HTTP protocol?
> > >>
> > >
> > > I think we should prioritize mirroring the GIT (not gerrit) repos to PHX,
> > > this will help:
> > >
> > > Speed up all post merge jobs and reduce potential of errors from git
> > clone
> > > (they will be in the same network)
> > > Reduce load (?) from the gerrit server and perhaps reduce errors of the
> > per
> > > patch jobs that will still run from gerrit.ovirt.org (AMAZON)
> > > A longer goal will be either to migrate the gerrit server to PHX or to
> > find
> > > away to properly mirror the gerrit server (but then i fear there might be
> > > race/problem as mentioned)
> > >
> >
> > Please look at my comment about possible race conditions caused by
> > mirroring. Simple mirroring may cause more trouble then its worth. We
> > need to consider proxying instead.
> >
> 
> I don't see how a race condition can occur with a merge commit,
> Can you elaborate?


From the gerrit config on jenkins:


Replication cache expiration time in minutes

If one of the servers supports replication events, these events are cached in
memory because they can be received before the build is triggered and this 
plugin gets called to evaluate if the build can run. Cache allows the plugin to 
look if the replication events were already received when it gets called to 
evaluate if the build can run. If the time elapsed between this plugin gets 
called and the time the build entered the queue is greater than the cache
expiration time, the plugin will assume that replication events were received 
and will let the build run.

Changing this value will only take effect when Jenkins is restarted 


> 
> 
> 
> 
> -- 
> Eyal Edri
> Associate Manager
> RHEV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
> 
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: CI fails

2016-04-19 Thread David Caro
On 04/19 20:13, Yevgeny Zaspitsky wrote:
> http://jenkins.ovirt.org/job/ovirt-engine_master_find-bugs_gerrit/44920/ :
> There was an infra issue, please contact infra@ovirt.org
> 
> From looking into the job log it appears that git failed fetching the
> updates from the server.
> That isn't the first time a git problem appears on the Jenkins CI nodes - a
> similar failure happened on another of my patches today.
> Is there a way to improve git communication stability on the Jenkins CI
> nodes?

Yep, it timed out:

   17:48:28 ERROR: Timeout after 10 minutes

Currently our gerrit server is on amazon, and the jenkins slaves are at
phoenix, which sometimes has network issues.

It might be possible to try to add a gerrit mirror locally at phoenix, though
it's not trivial.

I see that the speed is <50K/s, that's actually really slow, probably a network
issue on phx, worth taking a look there too... we still have to add proper
monitoring to the network, but we are on our way.

I'd try that approach first, though the mirror is a good idea that will
probably have to be implemented anyhow once we start adding slaves; having real
info on the network usage/errors will give us insight to actually determine
what's the issue, and thus, what's the best solution.

@infra what do you think?
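
To get some of those real numbers in the meantime, a rough probe run from one
of the phx slaves could help (just a sketch; it assumes anonymous http clone
access to gerrit):

    import shutil
    import subprocess
    import tempfile
    import time

    REPO = 'https://gerrit.ovirt.org/ovirt-engine'  # assumed anonymous access

    for attempt in range(5):
        workdir = tempfile.mkdtemp(prefix='clone-probe-')
        start = time.time()
        subprocess.check_call(
            ['git', 'clone', '--bare', '--quiet', REPO, workdir])
        print('clone #%d took %.1fs' % (attempt, time.time() - start))
        shutil.rmtree(workdir)

Run over a day, that would at least tell us if the slowness is constant or
load-related.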

> 
> Regards,
> Yevgeny

> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra


-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Ovirt installation on fedora 23

2016-04-19 Thread David Caro
On 04/19 04:47, Guy Chen wrote:
> 
> I am trying to install oVirt 3.6 on Fedora 23; I installed it according to
> the site http://resources.ovirt.org/pub/yum-repo/ovirt-release36.rpm and the
> yum repository was installed.
> Then when trying to install ovirt-engine I get the error message:
> "No package ovirt-engine available"
> 
> From looking at the yum repository, the mirrorlist path that is enabled does not
> exist, and under the baseurl for fc23 there are no RPMs:
> 
> #baseurl=http://resources.ovirt.org/pub/ovirt-3.6/rpm/fc$releasever/
> mirrorlist=http://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-3.6-fc$releasever
> 
> How can oVirt be installed on Fedora 23?

Well, ovirt 3.6 is not supported on fc23, only fc22 right now, though you can
try installing the fc22 rpms there (you might hit nasty errors).

For fedora 23 you can try the latest master, but it's right now a bit unstable
as a lot of features and improvements are going in.

I'd recommend trying to run it in a VM at first, probably with CentOS, if
all you want to do is play around with it, for example using oVirt Live:

* oVirt live: http://www.ovirt.org/download/ovirt-live/


And if you feel adventurous, I can help you set up a full virtual environment
with multiple hosts and nfs+iscsi storages using lago:

   https://github.com/lago-project/lago

Though might require a bit more effort

> 
> Thanks in advance
> ___
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-494) Master broken again - networking tests

2016-04-18 Thread David Caro Estevez (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14912#comment-14912
 ] 

David Caro Estevez commented on OVIRT-494:
--

No prob, thanks for reporting!

> Master broken again - networking tests
> --
>
> Key: OVIRT-494
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-494
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Martin Sivak
>Assignee: infra
>
> Hi,
> I just found out that I can't build the master branch
> (d31dfb3b0492af708bb303c82bddf1a51e5d9aa0)  because of
> Tests in error:
>   
> testGetViolationMessage(org.ovirt.engine.core.bll.validator.network.LegacyNetworkExclusivenessValidatorTest):
> NETWORK_INTERFACES_NOT_EXCLUSIVELY_USED_BY_NETWORK
> Martin



--
This message was sent by Atlassian JIRA
(v7.2.0-OD-05-030#72002)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


[JIRA] (OVIRT-494) Master broken again - networking tests

2016-04-18 Thread David Caro Estevez (oVirt JIRA)

[ 
https://ovirt-jira.atlassian.net/browse/OVIRT-494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14908#comment-14908
 ] 

David Caro Estevez commented on OVIRT-494:
--

So, what you are asking is to find out which patch was the one that
introduced the error?

> Master broken again - networking tests
> --
>
> Key: OVIRT-494
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-494
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Martin Sivak
>Assignee: infra
>
> Hi,
> I just found out that I can't build the master branch
> (d31dfb3b0492af708bb303c82bddf1a51e5d9aa0)  because of
> Tests in error:
>   
> testGetViolationMessage(org.ovirt.engine.core.bll.validator.network.LegacyNetworkExclusivenessValidatorTest):
> NETWORK_INTERFACES_NOT_EXCLUSIVELY_USED_BY_NETWORK
> Martin



--
This message was sent by Atlassian JIRA
(v7.2.0-OD-05-030#72002)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Experimental Jenkins monitoring

2016-04-18 Thread David Caro
On 04/17 23:55, Nadav Goldin wrote:
> >
> > I think that will change a lot per-project basis, if we can get that info
> > per
> > job, with grafana then we can aggregate and create secondary stats (like
> > bilds
> > per hour as you say).
> > So I'd say just to collect the 'bare' data, like job built event, job
> > ended,
> > duration and such.
> 
> agree, will need to improve that; right now it 'pulls' every X seconds via
> the CLI,
> instead of Jenkins sending the events, so it is limited to what the CLI can
> provide and not that efficient. I plan to install [1] and do the opposite
> (Jenkins will send a POST request with the data on each build
> event and then it would be sent to graphite)

Amarchuk already had some ideas on integrating collectd with jenkins; imo that
will work well for 'master'-related stats and will be more difficult for others
like job-started events, etc., but it's worth looking at

> 
> Have you checked the current ds fabric checks?
> > There are already a bunch of fabric tasks that monitor jenkins, if we
> > install
> > the nagiosgraph (see ds for details) to send the nagios performance data
> > into
> > graphite, we can use them as is to also start alarms and such
> >
> Icinga2 has integrated graphite support, so after the upgrade we will
> get all of our alarms data sent to graphite 'out-of-the-box'.

+1!

> 
> >
> > dcaro@akhos$ fab -l | grep nagi
> > do.jenkins.nagios.check_build_load  Checks if the
> > bui...
> > do.jenkins.nagios.check_executors   Checks if the
> > exe...
> > do.jenkins.nagios.check_queue   Check if the
> > buil...
> > do.provision.nagios_check   Show a summary
> > of...
> >
> > Though those will not give you the bare data (were designed with nagios in
> > mind, not graphite so they are just checks, the stats were added later)
> >
> > There's also a bunch of helpers functions to create nagios checks too.
> >
> 
> cool, wasn't aware of those fabric checks.
> I think for simple metrics(loads and such) we could use that(i.e. query
> Jenkins from fabric)
> but for more complicated queries we'd need to query graphite itself,
> with this[2] I could create scripts that query graphite and trigger Icinga
> alerts.
> such as: calculate the 'expected' slaves load for the next hour(in graphite)
> and then:
> Icinga queries graphite -> triggers another Icinga alert -> triggers custom
> script(such as
> fab task to create slaves)

I'd be careful with the reactions for now, but yes, that's great.

> 
> for now, added two more metrics: top 10 jobs in past X time, and
> avg number of builds running / builds waiting in queue in the past X time.
> some metrics might 'glitch' from time to time as there is not a lot of data
> yet
> and it mainly counts integer values while graphite is oriented towards
> floats, so the data has to be smoothed (usually with movingAverage())
> 
> 
> 
> [1]
> https://wiki.jenkins-ci.org/display/JENKINS/Statistics+Notification+Plugin
> [2] https://github.com/klen/graphite-beacon
> 
> On Fri, Apr 15, 2016 at 9:39 AM, David Caro <dc...@redhat.com> wrote:
> 
> > On 04/15 01:24, Nadav Goldin wrote:
> > > Hi,
> > > I've created an experimental dashboard for Jenkins at our Grafana
> > instance:
> > > http://graphite.phx.ovirt.org/dashboard/db/jenkins-monitoring
> > > (if you don't have an account, you can enrol with github/google)
> >
> > Nice! \o/
> >
> > >
> > > currently it collects the following metrics:
> > > 1) How many jobs in the Build Queue are waiting per slaves' label:
> > >
> > > for instance: if there are 4 builds of a job that is restricted to 'el7'
> > > and 2 builds of another job
> > > which is restricted to 'el7' in the build queue we will see 6 for 'el7'
> > in
> > > the first graph.
> > > 'No label' sums jobs which are waiting but are unrestricted.
> > >
> > > 2) How many slaves are idle per label.
> > > note that the slave's labels are contained in the job's labels, but not
> > > vice versa, as
> > > we allow regex expressions such as (fc21 || fc22 ). right now it treats
> > > them as simple
> > > strings.
> > >
> > > 3) Total number of online/offline/idle slaves
> > >
> > > besides the normal monitoring, it can help us:
> > > 1) minimize the difference between 'idle' slaves per label and jobs
> > waiting
> > > in the build queue per label.
>

[JIRA] (OVIRT-491) Request to verify

2016-04-15 Thread David Caro Estevez (oVirt JIRA)

 [ 
https://ovirt-jira.atlassian.net/browse/OVIRT-491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Caro Estevez updated OVIRT-491:
-
Assignee: David Caro Estevez  (was: infra)
  Status: Accepted  (was: New)

I'm currently running it on lago locally, I think [~landgraf] is also running 
it somewhere else.

> Request to verify
> -
>
> Key: OVIRT-491
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-491
> Project: oVirt - virtualization made easy
>  Issue Type: By-EMAIL
>Reporter: Piotr Kliczewski
>    Assignee: David Caro Estevez
>
> Can you please verify a patch [1] whether it has any influence on add host
> issue?
> Thanks,
> Piotr
> [1] https://gerrit.ovirt.org/56178



--
This message was sent by Atlassian JIRA
(v7.2.0-OD-05-030#72002)
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: Experimental Jenkins monitoring

2016-04-15 Thread David Caro
On 04/15 01:24, Nadav Goldin wrote:
> Hi,
> I've created an experimental dashboard for Jenkins at our Grafana instance:
> http://graphite.phx.ovirt.org/dashboard/db/jenkins-monitoring
> (if you don't have an account, you can enrol with github/google)

Nice! \o/

> 
> currently it collects the following metrics:
> 1) How many jobs in the Build Queue are waiting per slaves' label:
> 
> for instance: if there are 4 builds of a job that is restricted to 'el7'
> and 2 builds of another job
> which is restricted to 'el7' in the build queue we will see 6 for 'el7' in
> the first graph.
> 'No label' sums jobs which are waiting but are unrestricted.
> 
> 2) How many slaves are idle per label.
> note that the slave's labels are contained in the job's labels, but not
> vice versa, as
> we allow regex expressions such as (fc21 || fc22 ). right now it treats
> them as simple
> strings.
> 
> 3) Total number of online/offline/idle slaves
> 
> besides the normal monitoring, it can help us:
> 1) minimize the difference between 'idle' slaves per label and jobs waiting
> in the build queue per label.
> this might be caused by unnecessary restrictions on the label, or maybe by
> the
> 'Throttle Concurrent Builds' plugin.
> 2) decide how many VMs and which OS to install on the new hosts.
> 3) in the future, once we have the 'slave pools' implemented, we could
> implement
> auto-scaling based on thresholds or some other function.
> 
> 
> 'experimental' - as it still needs to be tested for stability(it is based
> on python-jenkins
> and graphite-send) and also more metrics can be added(maybe avg running time
> per job? builds per hour? ) - will be happy to hear.

I think that will change a lot on a per-project basis; if we can get that info per
job, with grafana we can then aggregate and create secondary stats (like builds
per hour as you say).
So I'd say just collect the 'bare' data, like job-started events, job-ended
events, duration and such.
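
E.g. a minimal sketch of what 'bare' data could look like on the sending side
(graphitesend assumed, since the collector already uses it; the metric names
are just examples):

    import graphitesend

    g = graphitesend.init(graphite_server='graphite.phx.ovirt.org',
                          prefix='jenkins.jobs')

    def report_build(job_name, build_info):
        # build_info as returned by python-jenkins' get_build_info()
        g.send('%s.started' % job_name, 1)
        g.send('%s.duration_ms' % job_name, build_info['duration'])
        g.send('%s.passed' % job_name,
               1 if build_info['result'] == 'SUCCESS' else 0)

Builds per hour, average durations and the rest can then be derived in grafana
from those raw series.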

> 
> I plan later to pack it all into independent fabric tasks(i.e. fab
> do.jenkins.slaves.show)

Have you checked the current ds fabric checks?
There are already a bunch of fabric tasks that monitor jenkins; if we install
nagiosgraph (see ds for details) to send the nagios performance data into
graphite, we can use them as-is to also start alarms and such.

dcaro@akhos$ fab -l | grep nagi
do.jenkins.nagios.check_build_load  Checks if the bui...
do.jenkins.nagios.check_executors   Checks if the exe...
do.jenkins.nagios.check_queue   Check if the buil...
do.provision.nagios_check   Show a summary of...

Though those will not give you the bare data (they were designed with nagios in
mind, not graphite, so they are just checks; the stats were added later)

There's also a bunch of helper functions to create nagios checks too.
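
The pattern those helpers follow is roughly this (a sketch only, fabric 1.x +
python-jenkins assumed, thresholds and names made up): print a nagios status
line with the perfdata after the pipe and exit 0/1/2, so nagiosgraph can graph
it:

    import sys

    import jenkins  # python-jenkins
    from fabric.api import task

    @task
    def check_queue(warn=50, crit=100):
        server = jenkins.Jenkins('http://jenkins.ovirt.org')
        queued = len(server.get_queue_info())
        state, code = 'OK', 0
        if queued >= int(crit):  # fab passes arguments as strings
            state, code = 'CRITICAL', 2
        elif queued >= int(warn):
            state, code = 'WARNING', 1
        # everything after '|' is perfdata that nagiosgraph sends to graphite
        print('QUEUE %s - %d builds queued | queued=%d'
              % (state, queued, queued))
        sys.exit(code)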


> 
> 
> Nadav

> _______
> Infra mailing list
> Infra@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra


-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D

Tel.: +420 532 294 605
Email: dc...@redhat.com
IRC: dcaro|dcaroest@{freenode|oftc|redhat}
Web: www.redhat.com
RHT Global #: 82-62605


signature.asc
Description: PGP signature
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra

