[oVirt Jenkins] ovirt-engine_master_animal-sniffer_merged - Build # 7306 - Failure!

2014-06-24 Thread Jenkins ci oVirt Server
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_master_animal-sniffer_merged/ 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_master_animal-sniffer_merged/7306/
Build Number: 7306
Build Status:  Failure
Triggered By: Triggered by Gerrit: http://gerrit.ovirt.org/29106

-
Changes Since Last Success:
-
Changes for Build #7306
[Roy Golan] core: CDI workaournd jboss 7.1.1 bug




-
Failed Tests:
-
No tests ran. 

___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


Re: [oVirt Jenkins] ovirt-engine_master_animal-sniffer_merged - Build # 7306 - Failure!

2014-06-24 Thread Roy Golan

On 06/24/2014 09:18 AM, Jenkins ci oVirt Server wrote:

Project: http://jenkins.ovirt.org/job/ovirt-engine_master_animal-sniffer_merged/
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_master_animal-sniffer_merged/7306/
Build Number: 7306
Build Status:  Failure
Triggered By: Triggered by Gerrit: http://gerrit.ovirt.org/29106

-
Changes Since Last Success:
-
Changes for Build #7306
[Roy Golan] core: CDI workaournd jboss 7.1.1 bug




-
Failed Tests:
-
No tests ran.



The build is failing because ovirt-host-deploy needs a refresh using -U.
alonbl updated its artifacts just now, so it should take a while to refresh.
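For reference, the "refresh using -U" is Maven's --update-snapshots flag. A minimal sketch (the goals shown are illustrative; a real run needs a project checkout and network access, so the command is only assembled here):

```shell
# Illustrative sketch: -U (--update-snapshots) forces Maven to re-check
# remote repositories for newer SNAPSHOT artifacts instead of reusing a
# stale cached jar. A CI job would execute this directly.
mvn_cmd='mvn -U clean install'
echo "refresh with: $mvn_cmd"
```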



Re: Proposal for hosting our own set of modules

2014-06-24 Thread Ewoud Kohl van Wijngaarden
On Mon, Jun 23, 2014 at 06:02:51PM +0200, Michael Scherer wrote:
 Le lundi 23 juin 2014 à 16:42 +0200, Ewoud Kohl van Wijngaarden a
 écrit :
  On Mon, Jun 23, 2014 at 04:30:55PM +0200, Michael Scherer wrote:
   Hi,
   
   so following the discussion on puppet module, I would propose that we
   create a new infra-puppet-modules module on gerrit. 
   
   The division between this module and the current one would be like
   this :
   
   infra-puppet would hold the manifests, the site, the Puppetfile , etc
   
   infra-puppet-modules would hold directories, 1 per module we develop
   (not the external one we use, they would still be on github or anywhere,
   pulled by librarian-puppet) .
   
   Since we are using librarian-puppet
   ( https://github.com/rodjek/librarian-puppet ) and r10k
  
  We're only using r10k, not librarian-puppet.
 
 Indeed. 
 
 But in the end, we still use Puppetfile nonetheless, no ?

Correct, but librarian handles dependencies, whereas r10k ignores them.
That means you must manually specify every dependency.
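To make that concrete, here is a sketch of what manual dependency listing looks like in a Puppetfile (module names and URLs are made up for illustration):

```shell
# Since r10k does not resolve dependencies, a module's dependencies must
# each get their own explicit "mod" entry (names/URLs are illustrative).
cd "$(mktemp -d)"
cat > Puppetfile <<'EOF'
mod 'example/webserver',
  :git => 'git://github.com/fake/puppet-webserver.git'
# webserver depends on stdlib; r10k will not pull it in automatically,
# so it has to be listed as well:
mod 'puppetlabs/stdlib',
  :git => 'git://github.com/puppetlabs/puppetlabs-stdlib.git'
EOF
grep -c '^mod' Puppetfile   # prints 2
```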

   ( https://github.com/adrienthebo/r10k ), it requires us to have the
   modules/ directory managed by librarian-puppet. In turn we need to
   have the modules in git; we can address them by path:
   
  mod 'puppetlabs/apt',
    :git => 'git://github.com/fake/puppet-modules.git',
    :path => 'modules/apt'
   
   
   This wouldn't require changing much, besides adding the module to the
   Puppetfile and creating a git repository.
   
   If no one disagrees, I will request the git repository.
   
   (in the meantime, I did create a sample awstats repository for
   stats.ovirt.org, so we have something to push)
  
  Unless we plan to make them reusable for other projects, I don't see the
  benefit. If we do plan to make them reusable, we should IMHO also
  publish them on the forge.
  
  Another potential issue is how we decide when to deploy. We could have a
  specific commit ID and update our Puppetfile every time, but again,
  little benefit over having them in one tree.
 
 I was under the impression you could just give a tag and so use master?
 (and use a branch for development)

You can, but how do we know when we want to update, and to which version?
Branches can easily break compatibility, and it will bite you at some
point.
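One way to make the "which version?" decision explicit is to pin each module to a tag with :ref instead of tracking a branch. A sketch (module name, URL, and tag are illustrative):

```shell
cd "$(mktemp -d)"
cat > Puppetfile <<'EOF'
# Pin to a tag instead of following a branch, so moving to a new
# version is a deliberate, reviewable Puppetfile change:
mod 'puppetlabs/apt',
  :git => 'git://github.com/fake/puppet-modules.git',
  :ref => '1.4.0'
EOF
grep -c ':ref' Puppetfile   # prints 1
```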

  In case you're unaware, we already load modules/* (which is managed by
  r10k) and site/* (tracked in git). That means we can host our modules in
  site and split them off when they become reusable.
 
 It was not very obvious that site/* was for modules, indeed :)
 
 But my fault, I should also have read the doc in the git repository,
 which clearly says that.
 
 I must also say that using r10k for different environments seems a bit
 overkill, as we seem to have only a single environment anyway (but I do
 not usually use r10k, as I do not have enough systems for that and
 write all the stuff myself).

The alternative is either including them directly or using git submodules.
I'm not a fan of the former, and I've seen people run into issues with the
latter, accidentally committing an older version. It's also hard to spot
in reviews because you only see two git commit IDs, with no idea whether
it's an upgrade or a downgrade. That's why a Puppetfile-based solution was
chosen.

Minor nitpick: technically we have 2 environments, and we should make an
effort to remove master. This is done by changing the default branch to
production and then removing master. The only way I know is logging into
gerrit through SSH and changing infra-puppet.git/HEAD to point to
production instead of master. Then you can use the gerrit UI to delete
the master branch.
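That HEAD change amounts to repointing the bare repository's symbolic ref. A sketch, demonstrated on a throwaway repo since the real path on the Gerrit host is an assumption:

```shell
# Demonstrate the default-branch switch on a scratch bare repo; on the
# real server the --git-dir would be Gerrit's infra-puppet.git.
repo="$(mktemp -d)/infra-puppet.git"
git init --bare "$repo" >/dev/null
git --git-dir="$repo" symbolic-ref HEAD refs/heads/production
git --git-dir="$repo" symbolic-ref HEAD   # prints refs/heads/production
```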

  And if we do plan on creating modules repos, I'd be in favor of having
  one git repo per puppet module since that is what most people would
  expect.
 
 One git repo per module is a bit annoying when we are planning to write
 a lot of modules. But I guess it all depends on the time it takes to
 create one. I would definitely prefer an approach of one big git repo as
 long as we are growing fast, and maybe split later, since it permits
 faster growth?

That's exactly what I was thinking.


ovirt-engine compilation errors

2014-06-24 Thread Moti Asayag
Hi,

The ovirt-engine jobs fail (e.g. [1]) due to an issue with the
ovirt-host-deploy jar missing classes.

It could be solved by purging the folder:
$M2_REPOSITORY/org/ovirt/ovirt-host-deploy/ovirt-host-deploy/1.3.0-master-SNAPSHOT/

which will cause maven to download a proper version of that artifact.

It probably needs to be executed on each server that runs
the compilation job.
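A sketch of that purge, assuming the Maven default repository location when $M2_REPOSITORY isn't set:

```shell
# Remove the cached SNAPSHOT directory so the next build re-downloads a
# fresh ovirt-host-deploy artifact (path taken from the mail above).
M2_REPOSITORY="${M2_REPOSITORY:-$HOME/.m2/repository}"
snapshot="$M2_REPOSITORY/org/ovirt/ovirt-host-deploy/ovirt-host-deploy/1.3.0-master-SNAPSHOT"
rm -rf "$snapshot"
```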

[1] 
http://jenkins.ovirt.org/job/ovirt_engine_master_compile_checkstyle_gerrit/27599/console

Thanks,
Moti


Re: ovirt-engine compilation errors

2014-06-24 Thread Ohad Basan
It looks like the maven-metadata.xml file is outdated and doesn't contain
the latest ovirt-host-deploy artifact:
http://artifactory.ovirt.org:8081/artifactory/ovirt-mirror/org/ovirt/ovirt-host-deploy/ovirt-host-deploy/1.3.0-master-SNAPSHOT/

It has to be regenerated. Does anyone have permissions to do that?


- Original Message -
 From: Moti Asayag masa...@redhat.com
 To: infra infra@ovirt.org
 Sent: Tuesday, June 24, 2014 3:24:53 PM
 Subject: ovirt-engine compilation errors
 
 Hi,
 
 The ovirt-engine jobs fails (i.e. [1]) due an issue with ovirt-host-deploy
 jar
 missing classes.
 
 It could be solved by purging the folder:
 $M2_REPOSITORY/org/ovirt/ovirt-host-deploy/ovirt-host-deploy/1.3.0-master-SNAPSHOT/
 
 which will cause maven to download a proper version of that artifact.
 
 It is probably required to be executed on each server which executes
 the compilation job.
 
 [1]
 http://jenkins.ovirt.org/job/ovirt_engine_master_compile_checkstyle_gerrit/27599/console
 
 Thanks,
 Moti


Re: ovirt-engine compilation errors

2014-06-24 Thread Moti Asayag


- Original Message -
 From: Ohad Basan oba...@redhat.com
 To: Moti Asayag masa...@redhat.com
 Cc: infra infra@ovirt.org
 Sent: Tuesday, June 24, 2014 3:47:04 PM
 Subject: Re: ovirt-engine compilation errors
 
 it looks like the maven-metadata.xml file is outdated and doesn't contain the
 latest ovirt-host-deploy artifact
 http://artifactory.ovirt.org:8081/artifactory/ovirt-mirror/org/ovirt/ovirt-host-deploy/ovirt-host-deploy/1.3.0-master-SNAPSHOT/
 
 it has to be regenerated.
 anyone has permissions to do that?
 
 

Isn't there an option to just delete the content of that folder, so the
artifacts will be retrieved from maven-central?



[oVirt Jenkins] ovirt-engine_master_animal-sniffer_merged - Build # 7315 - Fixed!

2014-06-24 Thread Jenkins ci oVirt Server
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_master_animal-sniffer_merged/ 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_master_animal-sniffer_merged/7315/
Build Number: 7315
Build Status:  Fixed
Triggered By: Triggered by Gerrit: http://gerrit.ovirt.org/28832

-
Changes Since Last Success:
-
Changes for Build #7306
[Roy Golan] core: CDI workaournd jboss 7.1.1 bug


Changes for Build #7307
[Karnan] webadmin:Tasks with longer description overlaps date/time fixed


Changes for Build #7308
[Roy Golan] core: use CDI version 1.0-SP4


Changes for Build #7309
[Tal Nisan] core: Added alert on number of LVs config value


Changes for Build #7310
[Sandro Bonazzola] packaging: setup: re-enable app mode selection


Changes for Build #7311
[Daniel Erez] webadmin: AsyncDataProvider - removed unused methods


Changes for Build #7312
[Alexander Wels] userportal,webadmin: Revert Tooltip infrastructure


Changes for Build #7313
[Arik Hadas] core: remove VmPoolDAO#getVmPoolMapByVmGuid


Changes for Build #7314
[Daniel Erez] core: StorageDomain - containsUnregisteredEntities flag


Changes for Build #7315
[Daniel Erez] webadmin: data domain - hide import tabs when not relevant




-
Failed Tests:
-
No tests ran. 



artifactory problem is resolved

2014-06-24 Thread Ohad Basan
Hello,

The artifactory problem we had today (fetching the wrong ovirt-host-deploy
artifact) should be fixed.
Send an email if you hit more problems.

thanks,

Ohad


[oVirt Jenkins] ovirt-engine_master_animal-sniffer_merged - Build # 7316 - Failure!

2014-06-24 Thread Jenkins ci oVirt Server
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_master_animal-sniffer_merged/ 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_master_animal-sniffer_merged/7316/
Build Number: 7316
Build Status:  Failure
Triggered By: Triggered by Gerrit: http://gerrit.ovirt.org/28965

-
Changes Since Last Success:
-
Changes for Build #7316
[Ravi Nori] tools : Upgrades from 3.5 on should look for Command Coordinator 
related changes to Aysc Tasks




-
Failed Tests:
-
No tests ran. 



INFRA ISSUE: [oVirt Jenkins] ovirt-engine_3.4_upgrade-from-3.3_merged - Build # 293 - Failure!

2014-06-24 Thread Jenkins ci oVirt Server
Project: http://jenkins.ovirt.org/job/ovirt-engine_3.4_upgrade-from-3.3_merged/ 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_3.4_upgrade-from-3.3_merged/293/
Build Number: 293
Build Status:  Failure
Triggered By: Triggered by Gerrit: http://gerrit.ovirt.org/28847

-
Changes Since Last Success:
-
Changes for Build #293
[Juan Hernandez] restapi: IO exception mapper

[David Caro] Added first mock job and some improvements




-
Failed Tests:
-
No tests ran. 



Update: artifactory problem is resolved (incident report)

2014-06-24 Thread Eyal Edri
Hi,

Today jenkins.ovirt.org suffered a major incident, which occurred due to
bad syncing from the Sonatype Maven repository.
It all started with the commit core: CDI workaournd jboss 7.1.1 bug,
which was fixed later by Roy, but for some reason (possibly the Sonatype
repo deleting oVirt artifacts) we were left with outdated jars in all the
Jenkins local repos and on artifactory.ovirt.org.

Roy sent this morning how it should be resolved:
 - the build is failing because ovirt-host-deploy needs a refresh using -U;
   alonbl updated its artifacts just now, so it should take a while to refresh

but it didn't work.

These also didn't work:
  - cleaning the cache from artifactory
  - deleting the local maven repos from the jenkins jobs

What finally did work:
 - deleting the Sonatype repo from artifactory.ovirt.org and creating it
   again, plus updating the ovirt-mirror virtual repo as well.
 - some jobs might still need their local workspace updated/deleted
   manually - please report to infra any jobs that still fail.

We probably need to think about how to handle such incidents in the future
(if Sonatype continues to delete our jars...).
Any suggestions will be gladly considered.

oVirt infra team.


- Original Message -
 From: Sandro Bonazzola sbona...@redhat.com
 To: Ohad Basan oba...@redhat.com, infra infra@ovirt.org
 Sent: Tuesday, June 24, 2014 6:22:35 PM
 Subject: Re: artifactory problem is resolved
 
 Il 24/06/2014 17:10, Ohad Basan ha scritto:
  Hello
  
  the artifactory problem we had today (fetching the wrong host deploy)
  should be fixed
  send an email if you hit more problems
 
 http://jenkins.ovirt.org/job/ovirt-engine_master_create-rpms-quick_gerrit/2804/label=fedora20/console
 
  
  thanks,
  
  Ohad
 
 
 --
 Sandro Bonazzola
 Better technology. Faster innovation. Powered by community collaboration.
 See how it works at redhat.com


[oVirt Jenkins] ovirt-engine_master_animal-sniffer_merged - Build # 7317 - Fixed!

2014-06-24 Thread Jenkins ci oVirt Server
Project: 
http://jenkins.ovirt.org/job/ovirt-engine_master_animal-sniffer_merged/ 
Build: 
http://jenkins.ovirt.org/job/ovirt-engine_master_animal-sniffer_merged/7317/
Build Number: 7317
Build Status:  Fixed
Triggered By: Started by user eyal edri

-
Changes Since Last Success:
-
Changes for Build #7316
[Ravi Nori] tools : Upgrades from 3.5 on should look for Command Coordinator 
related changes to Aysc Tasks


Changes for Build #7317
[Ravi Nori] tools : Upgrades from 3.5 on should look for Command Coordinator 
related changes to Aysc Tasks




-
Failed Tests:
-
No tests ran. 



Re: Proposal for hosting our own set of modules

2014-06-24 Thread Michael Scherer
Le mardi 24 juin 2014 à 09:59 +0200, Ewoud Kohl van Wijngaarden a
écrit :
 On Mon, Jun 23, 2014 at 06:02:51PM +0200, Michael Scherer wrote:
  Le lundi 23 juin 2014 à 16:42 +0200, Ewoud Kohl van Wijngaarden a
  écrit :
   On Mon, Jun 23, 2014 at 04:30:55PM +0200, Michael Scherer wrote:

( https://github.com/adrienthebo/r10k ), it requires us to have the
modules/ directory managed by librarian-puppet. In turn we need to
have the modules in git; we can address them by path:

  mod 'puppetlabs/apt',
    :git => 'git://github.com/fake/puppet-modules.git',
    :path => 'modules/apt'


This wouldn't require changing much, besides adding the module to the
Puppetfile and creating a git repository.

If no one disagrees, I will request the git repository.

(in the meantime, I did create a sample awstats repository for
stats.ovirt.org, so we have something to push)
   
   Unless we plan to make them reusable for other projects, I don't see the
   benefit. If we do plan to make them reusable, we should IMHO also
   publish them on the forge.
   
   Another potential issue is how we decide when to deploy. We could have a
   specific commit ID and update our Puppetfile every time, but again,
   little benefit over having them in one tree.
  
  I was under the impression you could just give a tag and so use master?
  (and use a branch for development)
 
 You can, but how do we know when we want to update, and to which version?
 Branches can easily break compatibility, and it will bite you at some
 point.

It depends on the release model of our own modules. We could have a policy
that master should be deployable and forever backwards compatible, but
once expressed like this, it doesn't sound sustainable (even if that's
what we would do with a single git repository).


   In case you're unaware, we already load modules/* (which is managed by
   r10k) and site/* (tracked in git). That means we can host our modules in
   site and split them off when they become reusable.
  
  It was not very obvious that site/* was for modules, indeed :)
  
  But my fault, I should also have read the doc in the git repository,
  which clearly says that.
  
  I must also say that using r10k for different environments seems a bit
  overkill, as we seem to have only a single environment anyway (but I do
  not usually use r10k, as I do not have enough systems for that and
  write all the stuff myself).
 
 The alternative is either including them directly or using git submodules.
 I'm not a fan of the former, and I've seen people run into issues with the
 latter, accidentally committing an older version. It's also hard to spot
 in reviews because you only see two git commit IDs, with no idea whether
 it's an upgrade or a downgrade. That's why a Puppetfile-based solution was
 chosen.

I am not a fan of git submodules either. But the Puppetfile part is
good; it's just the multiple environments that I found weird. Now, if
that's unavoidable, no problem.


 Minor nitpick: technically we have 2 environments, and we should make an
 effort to remove master. This is done by changing the default branch to
 production and then removing master. The only way I know is logging into
 gerrit through SSH and changing infra-puppet.git/HEAD to point to
 production instead of master. Then you can use the gerrit UI to delete
 the master branch.

As we discussed on IRC later, I would be +1 for this, provided that it
doesn't break too many assumptions in the docs (especially the gerrit
docs). But as we also discussed, maybe we should just push people to use
git-review.

   And if we do plan on creating modules repos, I'd be in favor of having
   one git repo per puppet module since that is what most people would
   expect.
  
  One git repo per module is a bit annoying when we are planning to write
  a lot of modules. But I guess it all depends on the time it takes to
  create one. I would definitely prefer an approach of one big git repo as
  long as we are growing fast, and maybe split later, since it permits
  faster growth?
 
 That's exactly what I was thinking.

This also brings up the question of using external modules vs. our own.
I am more the kind of guy who writes his own (while I am not the kind of
guy who writes his own code), mostly because I started using puppet
before the Forge was up, and because I had very precise ideas about what
I wanted to achieve; but I am not against using existing stuff if that's
the best practice :)

I would however make sure we have proper guidelines on when we adopt a
module, as I am not sure they are all equally good :/

-- 
Michael Scherer
Open Source and Standards, Sysadmin




