Re: findbugs fails with no reason

2015-06-16 Thread Omer Frenkel


- Original Message -
 From: Eyal Edri ee...@redhat.com
 To: Omer Frenkel ofren...@redhat.com
 Cc: infra infra@ovirt.org
 Sent: Tuesday, June 16, 2015 3:35:15 PM
 Subject: Re: findbugs fails with no reason
 
 FindBugs complains about the VdsDeploy file, and the patch did alter that file,
 so if you believe that's a false positive, you will need to update the XML
 filter file to ignore it.
 
 http://jenkins.ovirt.org/job/ovirt-engine_master_find-bugs_gerrit/39057/findbugsResult/
 
 VdsDeploy.java:720, DLS_DEAD_LOCAL_STORE, Priority: Low
 Dead store to cer in
 org.ovirt.engine.core.bll.hostdeploy.VdsDeploy.processEvent(Event$Base)
 
 This instruction assigns a value to a local variable, but the value is not
 read or used in any subsequent instruction. Often, this indicates an error,
 because the value computed is never used.
 
 Note that Sun's javac compiler often generates dead stores for final local
 variables. Because FindBugs is a bytecode-based tool, there is no easy way
 to eliminate these false positives.
 

this is what this patch is fixing;
I think you are looking at the wrong result.

jenkins fails at
http://jenkins.ovirt.org/job/ovirt-engine_master_find-bugs_gerrit/39053/

 
 
 - Original Message -
  From: Omer Frenkel ofren...@redhat.com
  To: infra infra@ovirt.org
  Sent: Tuesday, June 16, 2015 3:09:58 PM
  Subject: findbugs fails with no reason
  
  can someone please check
  https://gerrit.ovirt.org/#/c/42422/
  Thanks
  
  
  
 
 --
 Eyal Edri
 Supervisor, RHEV CI
 EMEA ENG Virtualization RD
 Red Hat Israel
 
 phone: +972-9-7692018
 irc: eedri (on #tlv #rhev-dev #rhev-integ)
 
___
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra


findbugs fails with no reason

2015-06-16 Thread Omer Frenkel
can someone please check
https://gerrit.ovirt.org/#/c/42422/
Thanks


Re: findbugs fails with no reason

2015-06-16 Thread Eyal Edri
ok,
the job fails due to a network error accessing the maven repo (not related to
findbugs):

13:06:28 [INFO] Finished at: Tue Jun 16 12:06:27 UTC 2015
13:06:29 [INFO] Final Memory: 293M/851M
13:06:29 [INFO] 

13:06:29 [ERROR] Error resolving version for plugin 
'org.apache.maven.plugins:maven-eclipse-plugin' from the repositories [local 
(/home/jenkins/workspace/ovirt-engine_master_find-bugs_gerrit/.repository), 
ovirt-maven-repository 
(http://artifactory.ovirt.org:8081/artifactory/ovirt-mirror)]: Plugin not found 
in any plugin repository - [Help 1]
13:06:29 [ERROR] 
13:06:29 [ERROR] To see the full stack trace of the errors, re-run Maven with 
the -e switch.
13:06:29 [ERROR] Re-run Maven using the -X switch to enable full debug logging.
13:06:29 [ERROR] 
13:06:29 [ERROR] For more information about the errors and possible solutions, 
please read the following articles:
13:06:29 [ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/PluginVersionResolutionException
13:06:29 Build step 'Invoke top-level Maven targets' marked build as failure
13:06:29 [FINDBUGS] Skipping publisher since build result is FAILURE
13:06:30 Build step 'Groovy Postbuild' changed build result to UNSTABLE
13:06:30 Finished: UNSTABLE

you can try to rerun it; if it fails again we can also try deleting the
artifactory cache - maven central might have blocked it temporarily due to too
many requests.

I also changed something in the job that should help speed it up and keep a
local cache of such artifacts.
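A common way to avoid this class of failure is to pin the plugin version in the pom so Maven never has to resolve it from a remote repository. This is only a sketch against a hypothetical pom.xml - the version shown is an example, not necessarily what the oVirt build should use:

```xml
<!-- Sketch: declare the plugin version under pluginManagement so Maven
     does not have to query remote repositories to resolve it. The
     version is an example; pick whatever the project actually needs. -->
<build>
  <pluginManagement>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-eclipse-plugin</artifactId>
        <version>2.10</version>
      </plugin>
    </plugins>
  </pluginManagement>
</build>
```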

- Original Message -
 From: Omer Frenkel ofren...@redhat.com
 To: Eyal Edri ee...@redhat.com
 Cc: infra infra@ovirt.org
 Sent: Tuesday, June 16, 2015 3:41:41 PM
 Subject: Re: findbugs fails with no reason
 
 
 
 - Original Message -
  From: Eyal Edri ee...@redhat.com
  To: Omer Frenkel ofren...@redhat.com
  Cc: infra infra@ovirt.org
  Sent: Tuesday, June 16, 2015 3:35:15 PM
  Subject: Re: findbugs fails with no reason
  
  FindBugs complains about the VdsDeploy file, and the patch did alter that file,
  so if you believe that's a false positive, you will need to update the XML
  filter file to ignore it.
  
  http://jenkins.ovirt.org/job/ovirt-engine_master_find-bugs_gerrit/39057/findbugsResult/
  
  VdsDeploy.java:720, DLS_DEAD_LOCAL_STORE, Priority: Low
  Dead store to cer in
  org.ovirt.engine.core.bll.hostdeploy.VdsDeploy.processEvent(Event$Base)
  
  This instruction assigns a value to a local variable, but the value is not
  read or used in any subsequent instruction. Often, this indicates an error,
  because the value computed is never used.
  
  Note that Sun's javac compiler often generates dead stores for final local
  variables. Because FindBugs is a bytecode-based tool, there is no easy way
  to eliminate these false positives.
  
 
 this is what this patch is fixing;
 I think you are looking at the wrong result.
 
 jenkins fails at
 http://jenkins.ovirt.org/job/ovirt-engine_master_find-bugs_gerrit/39053/
 
  
  
  - Original Message -
   From: Omer Frenkel ofren...@redhat.com
   To: infra infra@ovirt.org
   Sent: Tuesday, June 16, 2015 3:09:58 PM
   Subject: findbugs fails with no reason
   
   can someone please check
   https://gerrit.ovirt.org/#/c/42422/
   Thanks
   
   
   
  
  --
  Eyal Edri
  Supervisor, RHEV CI
  EMEA ENG Virtualization RD
  Red Hat Israel
  
  phone: +972-9-7692018
  irc: eedri (on #tlv #rhev-dev #rhev-integ)
  
 
 
 

-- 
Eyal Edri
Supervisor, RHEV CI
EMEA ENG Virtualization RD
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)


Re: findbugs fails with no reason

2015-06-16 Thread Eyal Edri
FindBugs complains about the VdsDeploy file, and the patch did alter that file,
so if you believe that's a false positive, you will need to update the XML
filter file to ignore it.

http://jenkins.ovirt.org/job/ovirt-engine_master_find-bugs_gerrit/39057/findbugsResult/

VdsDeploy.java:720, DLS_DEAD_LOCAL_STORE, Priority: Low
Dead store to cer in 
org.ovirt.engine.core.bll.hostdeploy.VdsDeploy.processEvent(Event$Base)

This instruction assigns a value to a local variable, but the value is not read 
or used in any subsequent instruction. Often, this indicates an error, because 
the value computed is never used.
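As an illustration, DLS_DEAD_LOCAL_STORE fires on code shaped like the following (a made-up minimal example - this is not the actual VdsDeploy code):

```java
// Minimal, invented example of the pattern DLS_DEAD_LOCAL_STORE flags:
// the first assignment to 'cer' is never read before it is overwritten.
public class DeadStoreExample {
    static String process(boolean signed) {
        String cer = "default";            // dead store: value never read
        cer = signed ? "signed" : "unsigned";
        return cer;
    }

    public static void main(String[] args) {
        System.out.println(process(true)); // prints "signed"
    }
}
```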

Note that Sun's javac compiler often generates dead stores for final local 
variables. Because FindBugs is a bytecode-based tool, there is no easy way to 
eliminate these false positives.
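If the warning really were a false positive, the exclude entry in the XML filter file would look roughly like this (a sketch of the FindBugs exclude-filter syntax; class, method and bug pattern are taken from the report above, while the filter file itself is a hypothetical stand-in for the project's real one):

```xml
<!-- Sketch of a FindBugs exclude-filter entry matching the warning
     reported above; the surrounding file is hypothetical. -->
<FindBugsFilter>
  <Match>
    <Class name="org.ovirt.engine.core.bll.hostdeploy.VdsDeploy"/>
    <Method name="processEvent"/>
    <Bug pattern="DLS_DEAD_LOCAL_STORE"/>
  </Match>
</FindBugsFilter>
```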



- Original Message -
 From: Omer Frenkel ofren...@redhat.com
 To: infra infra@ovirt.org
 Sent: Tuesday, June 16, 2015 3:09:58 PM
 Subject: findbugs fails with no reason
 
 can someone please check
 https://gerrit.ovirt.org/#/c/42422/
 Thanks
 
 
 

-- 
Eyal Edri
Supervisor, RHEV CI
EMEA ENG Virtualization RD
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)


Jenkins CI granting excessive +1s

2015-06-16 Thread Dan Kenigsberg
As of yesterday, Jenkins CI started granting CR+1, V+1 and CI+1 for
every patchset that passed it successfully.

Was this change somehow intentional?

It is confusing and unwanted. Only a human developer can give a
meaningful Code-Review. Only a human user/QE can say that a patch solved
the relevant bug and grant it Verified+1. The flags that Jenkins grants
are meaningless.

Can this be stopped? Jenkins should give CI+1 if it's happy with a
patch, or CI-1 if it has not yet run (or failed).

Regards,
Dan.



Re: Jenkins CI granting excessive +1s

2015-06-16 Thread Eyal Edri
Sounds like a bug.
Jenkins should only use the CI flag and not change CR and V flags.

Might have been a misunderstanding.
David, can you fix it so Jenkins only updates the CI flag?

E.

Eyal Edri
Supervisor, RHEV CI
Red Hat

From: Dan Kenigsberg
Sent: Jun 16, 2015 7:06 PM
To: infra@ovirt.org
Cc: ee...@redhat.com; dc...@redhat.com
Subject: Jenkins CI granting excessive +1s

 As of yesterday, Jenkins CI started granting CR+1, V+1 and CI+1 for
 every patchset that passed it successfully.

 Was this change somehow intentional?

 It is confusing and unwanted. Only a human developer can give a
 meaningful Code-Review. Only a human user/QE can say that a patch solved
 the relevant bug and grant it Verified+1. The flags that Jenkins grants
 are meaningless.

 Can this be stopped? Jenkins should give CI+1 if it's happy with a
 patch, or CI-1 if it has not yet run (or failed).

 Regards, 
 Dan. 



Re: Jenkins CI granting excessive +1s

2015-06-16 Thread dcaro
On 06/16, Eyal Edri wrote:
 Sounds like a bug.
 Jenkins should only use the CI flag and not change CR and V flags.
 
 Might have been a misunderstanding.
 David, can you fix it so Jenkins only updates the CI flag?

Can someone pass me an example of a patch that got the reviews? It's possible
that the jobs were previously modified by hand to set only the CI flag, and
that when we updated them from YAML the manual changes got reverted. Having a
sample patch would allow me to narrow down the list of possible jobs that give
that review.

That could easily be prevented globally, but until all projects have the CI
flag we can't enforce it globally.
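For reference, the per-job side of this looks roughly like the following jenkins-job-builder YAML (a sketch only: the trigger name and project/branch patterns are invented, the value keys follow the JJB gerrit trigger module, and voting 0 on Verified/Code-Review is what keeps Jenkins from touching those flags - a custom CI label still has to be configured on the Gerrit side):

```yaml
# Sketch: a gerrit trigger that votes 0 on Code-Review and Verified so
# a Jenkins run never changes those flags. Names/patterns are invented.
- trigger:
    name: gerrit-ci-only
    triggers:
      - gerrit:
          trigger-on:
            - patchset-created-event
          gerrit-build-successful-verified-value: 0
          gerrit-build-failed-verified-value: 0
          gerrit-build-successful-codereview-value: 0
          gerrit-build-failed-codereview-value: 0
          projects:
            - project-compare-type: 'PLAIN'
              project-pattern: 'ovirt-engine'
              branches:
                - branch-compare-type: 'PLAIN'
                  branch-pattern: 'master'
```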

 
 E.
 
 Eyal Edri
 Supervisor, RHEV CI
 Red Hat
 
 From: Dan Kenigsberg
 Sent: Jun 16, 2015 7:06 PM
 To: infra@ovirt.org
 Cc: ee...@redhat.com; dc...@redhat.com
 Subject: Jenkins CI granting excessive +1s
 
  As of yesterday, Jenkins CI started granting CR+1, V+1 and CI+1 for
  every patchset that passed it successfully.

  Was this change somehow intentional?

  It is confusing and unwanted. Only a human developer can give a
  meaningful Code-Review. Only a human user/QE can say that a patch solved
  the relevant bug and grant it Verified+1. The flags that Jenkins grants
  are meaningless.

  Can this be stopped? Jenkins should give CI+1 if it's happy with a
  patch, or CI-1 if it has not yet run (or failed).
 
  Regards, 
  Dan. 
 

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization RD

Tel.: +420 532 294 605
Email: dc...@redhat.com
Web: www.redhat.com
RHT Global #: 82-62605




Re: migration of services from linode server to phx2 lab

2015-06-16 Thread Michael Scherer
On Tuesday, 16 June 2015 at 03:38 -0400, Eyal Edri wrote:
 
 - Original Message -
  From: Karsten Wade kw...@redhat.com
  To: infra@ovirt.org
  Sent: Tuesday, June 16, 2015 6:29:04 AM
  Subject: Re: migration of services from linode server to phx2 lab
  
  -BEGIN PGP SIGNED MESSAGE-
  Hash: SHA1
  
  Hi,
  
  Between myself (original linode01.ovirt.org admin) and Michael (misc,
  aka knower-of-stuff), what can we do to get us off this Linode
  instance? From what I can tell, at least Mailman is running from there.
  
  If we still need service failover, can we switch to another Red
  Hat-provided service such as an OpenShift or OpenStack instance?
 
 Hi Karsten,
 I know this has been taking too long, but unfortunately many reasons and
 obstacles prevented us from moving, which I'll explain below. There is some
 risk right now in moving services to PHX, but I think we can try.

 Blocking items:
  - we were missing more hypervisors for the production DC (we finally got
 them installed last week, and they are now in the final stages of being
 brought up)
  - storage issues: NFS storage is currently quite slow; we are testing a
 mixed mode of local + NFS for the jenkins slaves, though the production
 services might not be affected too much - worth a try
  - lack of monitoring for the servers, which adds risk if we hit an issue.
 
 there are some other issues, but none are blocking, imo.
 
 Here is what needs to be done (per service to migrate off linode), and we'd
 appreciate any help:
 1. create a new VM in the production DC in PHX (assign DNS/IP/etc.)
 2. create puppet manifests to manage that VM, so it is easy to reproduce and
 maintain
 3. install the relevant software on it (e.g. mailman/ircbot/etc.)
 4. test that it works
 5. do the actual migration (downtime of the existing service, and bringing up
 the new one)

So, the last time I looked, the puppet setup was a bit unfinished and rather
overkill (using r10k to deploy counts as overkill when you do not have
stage/prod/qa, at least for me). Has anything changed, or should some effort
first be made to fix/automate/improve the puppet setup?


-- 
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS





Logwatch for linode01.ovirt.org (Linux)

2015-06-16 Thread logwatch

### Logwatch 7.3.6 (05/19/07) ###
Processing Initiated: Tue Jun 16 03:46:26 2015
Date Range Processed: yesterday ( 2015-Jun-15 )
Period is day.
Detail Level of Output: 0
Type of Output: unformatted
Logfiles for Host: linode01.ovirt.org
 
 - httpd Begin  

 Requests with error response codes
400 Bad Request
   /: 2 Time(s)
   /blog/robots.txt: 1 Time(s)
   /tmUnblock.cgi: 1 Time(s)
404 Not Found
   //wp-admin/admin-ajax.php?action=revslider ... ./wp-config.php: 1 Time(s)
   //xmlrpc.php: 1 Time(s)
   /_h5ai/client/images/app-16x16.ico: 10 Time(s)
   /admin.php: 14 Time(s)
   /admin/: 13 Time(s)
   /admin/banner_manager.php/login.php: 20 Time(s)
   /admin/categories.php/login.php: 20 Time(s)
   /admin/file_manager.php/login.php: 20 Time(s)
   /admin/login.php: 13 Time(s)
   /administrator/components/com_acymailing/i ... ?name=magic.php: 12 
Time(s)
   /administrator/components/com_acymailing/i ... e=magic.php.pHp: 12 
Time(s)
   /administrator/components/com_civicrm/civi ... ?name=magic.php: 12 
Time(s)
   /administrator/components/com_civicrm/civi ... e=magic.php.pHp: 12 
Time(s)
   /administrator/components/com_jinc/classes ... ?name=magic.php: 12 
Time(s)
   /administrator/components/com_jinc/classes ... e=magic.php.pHp: 12 
Time(s)
   /administrator/components/com_jnews/includ ... ?name=magic.php: 12 
Time(s)
   /administrator/components/com_jnews/includ ... e=magic.php.pHp: 12 
Time(s)
   /administrator/components/com_jnewsletter/ ... ?name=magic.php: 12 
Time(s)
   /administrator/components/com_jnewsletter/ ... e=magic.php.pHp: 12 
Time(s)
   /administrator/components/com_maian15/char ... ?name=magic.php: 12 
Time(s)
   /administrator/components/com_maian15/char ... e=magic.php.pHp: 12 
Time(s)
   /administrator/index.php: 14 Time(s)
   /apple-touch-icon-precomposed.png: 3 Time(s)
   /apple-touch-icon.png: 3 Time(s)
   /bitrix/admin/index.php?lang=en: 13 Time(s)
   /blog/: 1 Time(s)
   /blog/wp-admin/: 24 Time(s)
   /category/news/feed: 2 Time(s)
   /category/news/feed/: 15 Time(s)
   /components/com_acymailing/inc/openflash/p ... ?name=magic.php: 12 
Time(s)
   /components/com_acymailing/inc/openflash/p ... e=magic.php.pHp: 12 
Time(s)
   /components/com_civicrm/civicrm/packages/O ... ?name=magic.php: 12 
Time(s)
   /components/com_civicrm/civicrm/packages/O ... e=magic.php.pHp: 12 
Time(s)
   /components/com_jinc/classes/graphics/php- ... ?name=magic.php: 12 
Time(s)
   /components/com_jinc/classes/graphics/php- ... e=magic.php.pHp: 12 
Time(s)
   /components/com_jnews/includes/openflashch ... ?name=magic.php: 12 
Time(s)
   /components/com_jnews/includes/openflashch ... e=magic.php.pHp: 12 
Time(s)
   /components/com_jnewsletter/includes/openf ... ?name=magic.php: 12 
Time(s)
   /components/com_jnewsletter/includes/openf ... e=magic.php.pHp: 12 
Time(s)
   /components/com_maian15/charts/php-ofc-lib ... ?name=magic.php: 12 
Time(s)
   /components/com_maian15/charts/php-ofc-lib ... e=magic.php.pHp: 12 
Time(s)
   /dbreports/engine: 1 Time(s)
   /favicon.ico: 758 Time(s)
   /feng/readme.txt: 1 Time(s)
   /fengoffice/readme.txt: 1 Time(s)
   /index.php?gf_page=upload: 1 Time(s)
   /js/plugin.php: 1 Time(s)
   /m/pipermail/users/2012-November/010647.html: 1 Time(s)
   /m/pipermail/users/2012-November/010838.html: 1 Time(s)
   /m/pipermail/users/2013-November/018309.html: 1 Time(s)
   /m/pipermail/users/2014-September/027700.html: 2 Time(s)
   /m/pipermail/users/2015-January/030740.html: 2 Time(s)
   /meetings/ovirt/2015/candidates: 1 Time(s)
   /mobile/pipermail/node-devel/2013-August/000503.html: 1 Time(s)
   /mobile/pipermail/users/2012-May/007779.html: 2 Time(s)
   /mobile/pipermail/users/2013-November/018309.html: 1 Time(s)
   /mobile/pipermail/users/2014-September/027700.html: 1 Time(s)
   /office/readme.txt: 1 Time(s)
   /old/wp-admin/: 25 Time(s)
   /ovirt-release-el6-10.0.1-3.noarch.rpm: 1 Time(s)
   /ovirt-release-el6-8-1.noarch.rpm: 1 Time(s)
   /pipermail/index.php?act=RegCODE=00: 3 Time(s)
   /pipermail/index.php?app=coremodule=globalsection=register: 2 Time(s)
   /pipermail/infra/2012-September/admin/bann ... r.php/login.php: 1 Time(s)
   /pipermail/infra/2012-September/admin/cate ... s.php/login.php: 1 Time(s)
   /pipermail/infra/2012-September/admin/file ... r.php/login.php: 1 Time(s)
   /pipermail/infra/2012-august/000899.html: 1 Time(s)
   /pipermail/infra/2012-december/001588.html: 1 Time(s)
   

vdsm_master_unit-tests_created and vdsm_master_unit-tests_merged have been disabled

2015-06-16 Thread Sandro Bonazzola
Hi,
I disabled vdsm_master_unit-tests_created and vdsm_master_unit-tests_merged
since they get stuck testing jsonrpc and cause the jenkins queue to grow to
over 500 jobs, keeping the slaves busy for days.

Please either fix vdsm or the unit tests before enabling these jobs again.
Thanks.
-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com


Re: [ovirt-devel] vdsm_master_unit-tests_created and vdsm_master_unit-tests_merged have been disabled

2015-06-16 Thread Piotr Kliczewski
On Tue, Jun 16, 2015 at 9:05 AM, Eyal Edri ee...@redhat.com wrote:
 thanks for the quick action on this,
 any idea why it was stuck? we solved the mom issues yesterday by updating the
 mock repos to take the latest mom from the master snapshot repo.

 s.

 - Original Message -
 From: Sandro Bonazzola sbona...@redhat.com
 To: de...@ovirt.org, infra infra@ovirt.org
 Sent: Tuesday, June 16, 2015 9:56:36 AM
 Subject: [ovirt-devel] vdsm_master_unit-tests_created and 
 vdsm_master_unit-tests_merged have been disabled

 Hi,
 I disabled vdsm_master_unit-tests_created and vdsm_master_unit-tests_merged
 since they get stuck testing jsonrpc and cause the jenkins queue to grow to
 over 500 jobs, keeping the slaves busy for days.


Are there any logs that I could take a look at?

 Please either fix vdsm or the unit tests before enabling these jobs again.
 Thanks.
 --
 Sandro Bonazzola
 Better technology. Faster innovation. Powered by community collaboration.
 See how it works at redhat.com
 ___
 Devel mailing list
 de...@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/devel


 --
 Eyal Edri
 Supervisor, RHEV CI
 EMEA ENG Virtualization RD
 Red Hat Israel

 phone: +972-9-7692018
 irc: eedri (on #tlv #rhev-dev #rhev-integ)


[ticket] not enough disk space on slaves for building node appliance

2015-06-16 Thread Sandro Bonazzola
http://jenkins.ovirt.org/job/ovirt-appliance-engine_3.5_merged/66/console

15:05:33 Max needed: 9.8G.  Free: 9.0G.  May need another 748.2M.


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com


Re: [ticket] not enough disk space on slaves for building node appliance

2015-06-16 Thread Eyal Edri
looking at the slave it has 12G free:

[eedri@fc20-vm06 ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda3        18G  4.9G   12G  30% /
devtmpfs        4.4G     0  4.4G   0% /dev
tmpfs           4.4G     0  4.4G   0% /dev/shm
tmpfs           4.4G  368K  4.4G   1% /run
tmpfs           4.4G     0  4.4G   0% /sys/fs/cgroup
tmpfs           4.4G  424K  4.4G   1% /tmp
/dev/vda1        93M   71M   16M  83% /boot

I can't tell which slave it used since build 66 is no longer there.

e.


- Original Message -
 From: Sandro Bonazzola sbona...@redhat.com
 To: Fabian Deutsch fabi...@redhat.com, infra infra@ovirt.org
 Sent: Tuesday, June 16, 2015 11:43:26 AM
 Subject: [ticket] not enough disk space on slaves for building node appliance
 
 http://jenkins.ovirt.org/job/ovirt-appliance-engine_3.5_merged/66/console
 
 15:05:33 Max needed: 9.8G.  Free: 9.0G.  May need another 748.2M.
 
 
 --
 Sandro Bonazzola
 Better technology. Faster innovation. Powered by community collaboration.
 See how it works at redhat.com
 
 
 

-- 
Eyal Edri
Supervisor, RHEV CI
EMEA ENG Virtualization RD
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)


Re: [ticket] not enough disk space on slaves for building node appliance

2015-06-16 Thread David Caro

Have the requirements changed or something?

In the cleanup I see that the slave has 13GB free prior to starting to do
anything (as usual)

10:40:09 /dev/vda3        18G  4.6G   13G  28% /

On 06/16, Sandro Bonazzola wrote:
 http://jenkins.ovirt.org/job/ovirt-appliance-engine_3.5_merged/66/console
 
 15:05:33 Max needed: 9.8G.  Free: 9.0G.  May need another 748.2M.
 
 
 -- 
 Sandro Bonazzola
 Better technology. Faster innovation. Powered by community collaboration.
 See how it works at redhat.com

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization RD

Tel.: +420 532 294 605
Email: dc...@redhat.com
Web: www.redhat.com
RHT Global #: 82-62605




Re: [ticket] not enough disk space on slaves for building node appliance

2015-06-16 Thread Fabian Deutsch
- Original Message -
 
 The requirements have changed or something?

No, the requirements did not change.

But as discussed previously, the build process involves several images (because
they need to be converted from qcow2 to ova), and this can require a lot of
space.

We are currently being very optimistic that 9 GB of free space is enough to
build an image of (unsparse) 5GB in size.

To have some headroom we need 20GB of free space.

- fabian
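A cheap way to make that visible earlier is a pre-build guard that fails fast when the workspace is short on space (a sketch: the 20G default comes from the estimate above, while the function names and the use of GNU df's --output option are assumptions about the slave environment):

```shell
#!/bin/sh
# Fail fast when the filesystem holding the workspace has less free
# space than the build is expected to need. Assumes GNU coreutils df.
REQUIRED_GB=${REQUIRED_GB:-20}

avail_gb() {
    # Available space (whole GiB) on the filesystem holding $1.
    df -BG --output=avail "$1" | tail -n1 | tr -dc '0-9'
}

check_space() {
    free=$(avail_gb "${1:-.}")
    if [ "$free" -lt "$REQUIRED_GB" ]; then
        echo "not enough disk space: ${free}G free, need ${REQUIRED_GB}G"
        return 1
    fi
    echo "ok: ${free}G free"
}

# usage (e.g. as the first build step):
#   check_space "$WORKSPACE" || exit 1
```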

 In the cleanup I see that the slave has 13GB free prior to starting to do
 anything (as usual)
 
 10:40:09 /dev/vda3        18G  4.6G   13G  28% /
 
 On 06/16, Sandro Bonazzola wrote:
  http://jenkins.ovirt.org/job/ovirt-appliance-engine_3.5_merged/66/console
  
  15:05:33 Max needed: 9.8G.  Free: 9.0G.  May need another 748.2M.
  
  
  --
  Sandro Bonazzola
  Better technology. Faster innovation. Powered by community collaboration.
  See how it works at redhat.com
 
 --
 David Caro
 
 Red Hat S.L.
 Continuous Integration Engineer - EMEA ENG Virtualization RD
 
 Tel.: +420 532 294 605
 Email: dc...@redhat.com
 Web: www.redhat.com
 RHT Global #: 82-62605
 


Re: [ticket] not enough disk space on slaves for building node appliance

2015-06-16 Thread David Caro

We are pending a major rebuild of all the slaves; the new ones should have 40G
disks, which would solve it, but that's been planned for so long that I'm not
sure it's worth waiting for if it's blocking you.

Adding a tag to some slaves that 'should' have more space is something that
will create maintenance overhead in the future (and we already have too much),
so it's best avoided whenever possible.

How many slaves would you need for the jobs to run normally? (how many jobs do
you run daily?)

On 06/16, Fabian Deutsch wrote:
 - Original Message -
  
  The requirements have changed or something?
 
 No, the requirements did not change.
 
 But as discussed previously, the build process involves several images 
 (because they need to be converted from qcow2 to ova), and this can require a 
 lot of space.
 
 We are currently being very optimistic that 9 GB of free space is enough to
 build an image of (unsparse) 5GB in size.

 To have some headroom we need 20GB of free space.
 
 - fabian
 
  In the cleanup I see that the slave has 13GB free prior to starting to do
  anything (as usual)
  
  10:40:09 /dev/vda3        18G  4.6G   13G  28% /
  
  On 06/16, Sandro Bonazzola wrote:
   http://jenkins.ovirt.org/job/ovirt-appliance-engine_3.5_merged/66/console
   
   15:05:33 Max needed: 9.8G.  Free: 9.0G.  May need another 748.2M.
   
   
   --
   Sandro Bonazzola
   Better technology. Faster innovation. Powered by community collaboration.
   See how it works at redhat.com
  
  --
  David Caro
  
  Red Hat S.L.
  Continuous Integration Engineer - EMEA ENG Virtualization RD
  
  Tel.: +420 532 294 605
  Email: dc...@redhat.com
  Web: www.redhat.com
  RHT Global #: 82-62605
  

-- 
David Caro

Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization RD

Tel.: +420 532 294 605
Email: dc...@redhat.com
Web: www.redhat.com
RHT Global #: 82-62605




Re: [ovirt-devel] vdsm_master_unit-tests_created and vdsm_master_unit-tests_merged have been disabled

2015-06-16 Thread Sandro Bonazzola
On 16/06/2015 09:53, Piotr Kliczewski wrote:
 On Tue, Jun 16, 2015 at 9:05 AM, Eyal Edri ee...@redhat.com wrote:
 thanks for the quick action on this,
 any idea why it was stuck? we solved the mom issues yesterday by updating 
 the mock
 repos to take latest mom from master snapshot repo.

 s.

 - Original Message -
 From: Sandro Bonazzola sbona...@redhat.com
 To: de...@ovirt.org, infra infra@ovirt.org
 Sent: Tuesday, June 16, 2015 9:56:36 AM
 Subject: [ovirt-devel] vdsm_master_unit-tests_created and 
 vdsm_master_unit-tests_merged have been disabled

 Hi,
 I disabled vdsm_master_unit-tests_created and vdsm_master_unit-tests_merged
 since they get stuck testing jsonrpc and cause the jenkins queue to grow to
 over 500 jobs, keeping the slaves busy for days.

 
 Are there any logs that I could take a look at?

http://jenkins.ovirt.org/job/vdsm_master_unit-tests_merged/287

 
 Please either fix vdsm or the unit tests before enabling these jobs again.
 Thanks.
 --
 Sandro Bonazzola
 Better technology. Faster innovation. Powered by community collaboration.
 See how it works at redhat.com


 --
 Eyal Edri
 Supervisor, RHEV CI
 EMEA ENG Virtualization RD
 Red Hat Israel

 phone: +972-9-7692018
 irc: eedri (on #tlv #rhev-dev #rhev-integ)


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com


Re: [ticket] not enough disk space on slaves for building node appliance

2015-06-16 Thread Fabian Deutsch
- Original Message -
 looking at the slave it has 12G free:
 
 [eedri@fc20-vm06 ~]$ df -h
 Filesystem      Size  Used Avail Use% Mounted on
 /dev/vda3        18G  4.9G   12G  30% /
 devtmpfs        4.4G     0  4.4G   0% /dev
 tmpfs           4.4G     0  4.4G   0% /dev/shm
 tmpfs           4.4G  368K  4.4G   1% /run
 tmpfs           4.4G     0  4.4G   0% /sys/fs/cgroup
 tmpfs           4.4G  424K  4.4G   1% /tmp
 /dev/vda1        93M   71M   16M  83% /boot
 
 I can't tell which slave it used since build 66 is no longer there.

I can tell that this happens with several slaves currently.
In the last 10 days or so we had only a few successful builds.

Do we have a label for slaves with big disks?

- fabian

 e.
 
 
 - Original Message -
  From: Sandro Bonazzola sbona...@redhat.com
  To: Fabian Deutsch fabi...@redhat.com, infra infra@ovirt.org
  Sent: Tuesday, June 16, 2015 11:43:26 AM
  Subject: [ticket] not enough disk space on slaves for building node
  appliance
  
  http://jenkins.ovirt.org/job/ovirt-appliance-engine_3.5_merged/66/console
  
  15:05:33 Max needed: 9.8G.  Free: 9.0G.  May need another 748.2M.
  
  
  --
  Sandro Bonazzola
  Better technology. Faster innovation. Powered by community collaboration.
  See how it works at redhat.com
  
  
  
 
 --
 Eyal Edri
 Supervisor, RHEV CI
 EMEA ENG Virtualization RD
 Red Hat Israel
 
 phone: +972-9-7692018
 irc: eedri (on #tlv #rhev-dev #rhev-integ)
 


Re: [ovirt-devel] vdsm_master_unit-tests_created and vdsm_master_unit-tests_merged have been disabled

2015-06-16 Thread Piotr Kliczewski
On Tue, Jun 16, 2015 at 10:25 AM, Sandro Bonazzola sbona...@redhat.com wrote:
 On 16/06/2015 09:53, Piotr Kliczewski wrote:
 On Tue, Jun 16, 2015 at 9:05 AM, Eyal Edri ee...@redhat.com wrote:
 thanks for the quick action on this,
 any idea why it was stuck? we solved the mom issues yesterday by updating 
 the mock
 repos to take latest mom from master snapshot repo.

 s.

 - Original Message -
 From: Sandro Bonazzola sbona...@redhat.com
 To: de...@ovirt.org, infra infra@ovirt.org
 Sent: Tuesday, June 16, 2015 9:56:36 AM
 Subject: [ovirt-devel] vdsm_master_unit-tests_created and 
 vdsm_master_unit-tests_merged have been disabled

 Hi,
 I disabled vdsm_master_unit-tests_created and vdsm_master_unit-tests_merged
 since they get stuck testing jsonrpc and cause the jenkins queue to grow to
 over 500 jobs, keeping the slaves busy for days.


 Are there any logs that I could take a look at?

 http://jenkins.ovirt.org/job/vdsm_master_unit-tests_merged/287


I reran the job [1] and there seems to be no issue related to the jsonrpc
tests. It failed, but due to a different issue. There was a patch merged
yesterday which hopefully fixed it. I will monitor the jobs to make sure
that the problem really does not exist.

[1] http://jenkins.ovirt.org/job/vdsm_master_unit-tests_merged/288/console


 Please either fix vdsm or the unit tests before enabling these jobs again.
 Thanks.
 --
 Sandro Bonazzola
 Better technology. Faster innovation. Powered by community collaboration.
 See how it works at redhat.com


 --
 Eyal Edri
 Supervisor, RHEV CI
 EMEA ENG Virtualization RD
 Red Hat Israel

 phone: +972-9-7692018
 irc: eedri (on #tlv #rhev-dev #rhev-integ)


 --
 Sandro Bonazzola
 Better technology. Faster innovation. Powered by community collaboration.
 See how it works at redhat.com


Re: [ovirt-devel] vdsm_master_unit-tests_created and vdsm_master_unit-tests_merged have been disabled

2015-06-16 Thread Sandro Bonazzola
On 16/06/2015 13:27, Piotr Kliczewski wrote:
 On Tue, Jun 16, 2015 at 10:25 AM, Sandro Bonazzola sbona...@redhat.com 
 wrote:
 On 16/06/2015 09:53, Piotr Kliczewski wrote:
 On Tue, Jun 16, 2015 at 9:05 AM, Eyal Edri ee...@redhat.com wrote:
 thanks for the quick action on this,
 any idea why it was stuck? we solved the mom issues yesterday by updating 
 the mock
 repos to take latest mom from master snapshot repo.

 s.

 - Original Message -
 From: Sandro Bonazzola sbona...@redhat.com
 To: de...@ovirt.org, infra infra@ovirt.org
 Sent: Tuesday, June 16, 2015 9:56:36 AM
 Subject: [ovirt-devel] vdsm_master_unit-tests_created and 
 vdsm_master_unit-tests_merged have been disabled

 Hi,
 I disabled vdsm_master_unit-tests_created and 
 vdsm_master_unit-tests_merged
  since they get stuck testing jsonrpc and cause the jenkins queue to grow to
  over 500 jobs, keeping the slaves busy for days.


 Are there any logs that I could take a look at?

 http://jenkins.ovirt.org/job/vdsm_master_unit-tests_merged/287

 
 I reran the job [1] and there seems to be no issue related to the jsonrpc
 tests. It failed, but due to a different issue. There was a patch merged
 yesterday which hopefully fixed it. I will monitor the jobs to make sure
 that the problem really does not exist.

Thanks


 
 [1] http://jenkins.ovirt.org/job/vdsm_master_unit-tests_merged/288/console
 

 Please either fix vdsm or the unit tests before enabling these jobs again.
 Thanks.
 --
 Sandro Bonazzola
 Better technology. Faster innovation. Powered by community collaboration.
 See how it works at redhat.com


 --
 Eyal Edri
 Supervisor, RHEV CI
 EMEA ENG Virtualization RD
 Red Hat Israel

 phone: +972-9-7692018
 irc: eedri (on #tlv #rhev-dev #rhev-integ)


 --
 Sandro Bonazzola
 Better technology. Faster innovation. Powered by community collaboration.
 See how it works at redhat.com


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com