Re: builds failing on H9 with cannot access java.lang.Runnable

2014-10-03 Thread Giridharan Kesavan
all the slaves are getting rebooted; give it some more time

-giri

On Fri, Oct 3, 2014 at 1:13 PM, Ted Yu yuzhih...@gmail.com wrote:

 Adding builds@

 On Fri, Oct 3, 2014 at 1:07 PM, Colin McCabe cmcc...@alumni.cmu.edu
 wrote:

  It looks like builds are failing on the H9 host with "cannot access
  java.lang.Runnable".
 
  Example from
 
 https://builds.apache.org/job/PreCommit-HDFS-Build/8313/artifact/patchprocess/trunkJavacWarnings.txt
  :
 
  [INFO] ------------------------------------------------------------------------
  [INFO] BUILD FAILURE
  [INFO] ------------------------------------------------------------------------
  [INFO] Total time: 03:13 min
  [INFO] Finished at: 2014-10-03T18:04:35+00:00
  [INFO] Final Memory: 57M/839M
  [INFO] ------------------------------------------------------------------------
  [ERROR] Failed to execute goal
  org.apache.maven.plugins:maven-compiler-plugin:2.5.1:testCompile
  (default-testCompile) on project hadoop-mapreduce-client-app:
  Compilation failure
  [ERROR]
 
 /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/commit/TestCommitterEventHandler.java:[189,-1]
  cannot access java.lang.Runnable
  [ERROR] bad class file:
 java/lang/Runnable.class(java/lang:Runnable.class)
 
  I don't have shell access to this; does anyone know what's going on on
 H9?
 
  best,
  Colin
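
A "bad class file" error for a core class like java.lang.Runnable usually
points at a corrupted or unreadable JDK install on the slave rather than at
the patch under test. A minimal diagnostic sketch one could run on H9,
assuming a standard JDK layout with rt.jar (paths illustrative):

    # check the integrity of the JDK's core class archive
    unzip -t "${JAVA_HOME}/jre/lib/rt.jar" | tail -n 1   # expect "No errors detected"
    # confirm javac can still compile against java.lang.Runnable at all
    echo 'class Probe implements Runnable { public void run() {} }' > /tmp/Probe.java
    javac -d /tmp /tmp/Probe.java && echo "javac OK"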
 




Re: Git repo ready to use

2014-08-28 Thread Giridharan Kesavan
I'm looking into it.

-giri


On Thu, Aug 28, 2014 at 3:18 AM, Ted Yu yuzhih...@gmail.com wrote:

 I spent some time on PreCommit-hdfs-Build.
 Looks like the following command was not effective:

 mkdir -p ${WORKSPACE}/patchprocess

 In build output, I saw:


 /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/jira:
 No such file or directory


 I will work with Giri in the morning.


 Cheers



 On Thu, Aug 28, 2014 at 2:04 AM, Ted Yu yuzhih...@gmail.com wrote:

  build #7808 failed due to QA bot trying to apply the following as patch:
 
 
 http://issues.apache.org/jira/secure/attachment/12552318/dfsio-x86-trunk-vs-3529.png
 
 
  FYI
 
 
 
  On Thu, Aug 28, 2014 at 1:52 AM, Ted Yu yuzhih...@gmail.com wrote:
 
  I modified config for the following builds:
 
  https://builds.apache.org/job/PreCommit-HDFS-Build/ (build #7808 would
  be checking out trunk using git).
 
  https://builds.apache.org/job/PreCommit-yarn-Build/
  https://builds.apache.org/job/PreCommit-mapreduce-Build/
 
  Should I modify the other Jenkins jobs e.g.:
 
  https://builds.apache.org/job/Hadoop-Yarn-trunk/
 
  Cheers
 
 
  On Wed, Aug 27, 2014 at 11:25 PM, Karthik Kambatla ka...@cloudera.com
  wrote:
 
  We just got HADOOP-11001 in. If you have access, can you please try
  modifying the Jenkins jobs taking the patch on HADOOP-11001 into
  consideration.
 
 
 
  On Wed, Aug 27, 2014 at 4:38 PM, Ted Yu yuzhih...@gmail.com wrote:
 
   I have access.
  
   I can switch the repository if you think it is time to do so.
  
  
   On Wed, Aug 27, 2014 at 4:35 PM, Karthik Kambatla 
 ka...@cloudera.com
   wrote:
  
Thanks for reporting it, Ted. We are aware of it - second follow-up
  item
   in
my earlier email.
   
Unfortunately, I don't have access to the builds to fix them and
  don't
quite know the procedure to get access either. I am waiting for
  someone
with access to help us out.
   
   
On Wed, Aug 27, 2014 at 3:45 PM, Ted Yu yuzhih...@gmail.com
 wrote:
   
 Precommit builds are still using svn :

 https://builds.apache.org/job/PreCommit-HDFS-Build/configure
 https://builds.apache.org/job/PreCommit-YARN-Build/configure

 FYI


 On Wed, Aug 27, 2014 at 7:00 AM, Ted Yu yuzhih...@gmail.com
  wrote:

  Currently Jenkins builds still use subversion as source.
 
  Should Jenkins point to git ?
 
  Cheers
 
 
  On Wed, Aug 27, 2014 at 1:40 AM, Karthik Kambatla 
   ka...@cloudera.com
  wrote:
 
  Oh.. a couple more things.
 
  The git commit hashes have changed and are different from what
  we
   had
on
  our github. This might interfere with any build automations
 that
   folks
  have.
 
  Another follow-up item: email and JIRA integration
 
 
  On Wed, Aug 27, 2014 at 1:33 AM, Karthik Kambatla 
   ka...@cloudera.com

  wrote:
 
   Hi folks,
  
   I am very excited to let you know that the git repo is now
writable. I
   committed a few changes (CHANGES.txt fixes and branching for
   2.5.1)
 and
   everything looks good.
  
   Current status:
  
  1. All branches have the same names, including trunk.
  2. Force push is disabled on trunk, branch-2 and tags.
  3. Even if you are experienced with git, take a look at
  https://wiki.apache.org/hadoop/HowToCommitWithGit .
Particularly,
  let
  us avoid merge commits.
  
   Follow-up items:
  
  1. Update rest of the wiki documentation
  2. Update precommit Jenkins jobs and get HADOOP-11001
  committed
  (reviews appreciated). Until this is done, the precommit
  jobs
will
  run
  against our old svn repo.
  3. git mirrors etc. to use the new repo instead of the
 old
  svn
 repo.
  
   Thanks again for your cooperation through the migration
  process.
 Please
   reach out to me (or the list) if you find anything missing
 or
  have
   suggestions.
  
   Cheers!
   Karthik
  
  
 
 
 

   
  
 
 
 
 




Re: Git repo ready to use

2014-08-28 Thread Giridharan Kesavan
Fixed all the 3 pre-commit builds. test-patch's git reset --hard was removing
the patchprocess dir, so I moved it out of the workspace.



-giri
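
A minimal sketch of that kind of job change, assuming a Jenkins shell build
step (directory name illustrative): git reset --hard, together with any git
clean the test-patch script runs, wipes paths inside the checkout, so the
artifacts directory has to live outside the git working tree.

    # before: wiped on every workspace reset
    # PATCH_DIR="${WORKSPACE}/patchprocess"
    # after: outside the checkout, so resets can't touch it
    PATCH_DIR="${HOME}/patchprocess/${JOB_NAME}"
    mkdir -p "${PATCH_DIR}"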


On Thu, Aug 28, 2014 at 8:48 AM, Giridharan Kesavan 
gkesa...@hortonworks.com wrote:

 I'm looking into it.

 -giri


 On Thu, Aug 28, 2014 at 3:18 AM, Ted Yu yuzhih...@gmail.com wrote:

 I spent some time on PreCommit-hdfs-Build.
 Looks like the following command was not effective:

 mkdir -p ${WORKSPACE}/patchprocess

 In build output, I saw:


 /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/jira:
 No such file or directory


 I will work with Giri in the morning.


 Cheers



 On Thu, Aug 28, 2014 at 2:04 AM, Ted Yu yuzhih...@gmail.com wrote:

  build #7808 failed due to QA bot trying to apply the following as patch:
 
 
 http://issues.apache.org/jira/secure/attachment/12552318/dfsio-x86-trunk-vs-3529.png
 
 
  FYI
 
 
 
  On Thu, Aug 28, 2014 at 1:52 AM, Ted Yu yuzhih...@gmail.com wrote:
 
  I modified config for the following builds:
 
  https://builds.apache.org/job/PreCommit-HDFS-Build/ (build #7808 would
  be checking out trunk using git).
 
  https://builds.apache.org/job/PreCommit-yarn-Build/
  https://builds.apache.org/job/PreCommit-mapreduce-Build/
 
  Should I modify the other Jenkins jobs e.g.:
 
  https://builds.apache.org/job/Hadoop-Yarn-trunk/
 
  Cheers
 
 
  On Wed, Aug 27, 2014 at 11:25 PM, Karthik Kambatla ka...@cloudera.com
 
  wrote:
 
  We just got HADOOP-11001 in. If you have access, can you please try
  modifying the Jenkins jobs taking the patch on HADOOP-11001 into
  consideration.
 
 
 
  On Wed, Aug 27, 2014 at 4:38 PM, Ted Yu yuzhih...@gmail.com wrote:
 
   I have access.
  
   I can switch the repository if you think it is time to do so.
  
  
   On Wed, Aug 27, 2014 at 4:35 PM, Karthik Kambatla 
 ka...@cloudera.com
   wrote:
  
Thanks for reporting it, Ted. We are aware of it - second
 follow-up
  item
   in
my earlier email.
   
Unfortunately, I don't have access to the builds to fix them and
  don't
quite know the procedure to get access either. I am waiting for
  someone
with access to help us out.
   
   
On Wed, Aug 27, 2014 at 3:45 PM, Ted Yu yuzhih...@gmail.com
 wrote:
   
 Precommit builds are still using svn :

 https://builds.apache.org/job/PreCommit-HDFS-Build/configure
 https://builds.apache.org/job/PreCommit-YARN-Build/configure

 FYI


 On Wed, Aug 27, 2014 at 7:00 AM, Ted Yu yuzhih...@gmail.com
  wrote:

  Currently Jenkins builds still use subversion as source.
 
  Should Jenkins point to git ?
 
  Cheers
 
 
  On Wed, Aug 27, 2014 at 1:40 AM, Karthik Kambatla 
   ka...@cloudera.com
  wrote:
 
  Oh.. a couple more things.
 
  The git commit hashes have changed and are different from
 what
  we
   had
on
  our github. This might interfere with any build automations
 that
   folks
  have.
 
  Another follow-up item: email and JIRA integration
 
 
  On Wed, Aug 27, 2014 at 1:33 AM, Karthik Kambatla 
   ka...@cloudera.com

  wrote:
 
   Hi folks,
  
   I am very excited to let you know that the git repo is now
writable. I
   committed a few changes (CHANGES.txt fixes and branching
 for
   2.5.1)
 and
   everything looks good.
  
   Current status:
  
  1. All branches have the same names, including trunk.
  2. Force push is disabled on trunk, branch-2 and tags.
  3. Even if you are experienced with git, take a look at
  https://wiki.apache.org/hadoop/HowToCommitWithGit .
Particularly,
  let
  us avoid merge commits.
  
   Follow-up items:
  
  1. Update rest of the wiki documentation
  2. Update precommit Jenkins jobs and get HADOOP-11001
  committed
  (reviews appreciated). Until this is done, the precommit
  jobs
will
  run
  against our old svn repo.
  3. git mirrors etc. to use the new repo instead of the
 old
  svn
 repo.
  
   Thanks again for your cooperation through the migration
  process.
 Please
   reach out to me (or the list) if you find anything missing
 or
  have
   suggestions.
  
   Cheers!
   Karthik
  
  
 
 
 

   
  
 
 
 
 






Re: TestIPC failures in Jenkins

2014-07-26 Thread Giridharan Kesavan
ASF infra folks don't support the Jenkins build infrastructure. You should
send an email to bui...@apache.org


-giri


On Fri, Jul 25, 2014 at 7:13 PM, Yongjun Zhang yzh...@cloudera.com wrote:

 Thanks Ted, I agree. As reported in INFRA-8097, I have seen this at least
 with two different testcases. I just changed it to critical.

 --Yongjun


 On Fri, Jul 25, 2014 at 7:04 PM, Ted Yu yuzhih...@gmail.com wrote:

  In my opinion, INFRA-8097
  https://issues.apache.org/jira/browse/INFRA-8097 should
  be critical.
 
  Cheers
 
 
  On Fri, Jul 25, 2014 at 6:46 AM, Yongjun Zhang yzh...@cloudera.com
  wrote:
 
   Thanks Ted, I just filed
  https://issues.apache.org/jira/browse/INFRA-8097.
  
   --Yongjun
  
  
   On Thu, Jul 24, 2014 at 10:23 PM, Ted Yu yuzhih...@gmail.com wrote:
  
Have you filed an INFRA JIRA so that infrastructure team can fix
 this ?
   
Cheers
   
   
On Thu, Jul 24, 2014 at 9:18 PM, Yongjun Zhang yzh...@cloudera.com
wrote:
   
 Many thanks to Arpit for observing an extra newline in the dumped
 /etc/hosts:

 127.0.0.1   localhost

asf900.ygridcore.net   *asf900*


 would anyone who has administrator access please help to take a
 look?

 Specifically, the above should be

 127.0.0.1   localhost asf900.ygridcore.net *asf900*


 It would be good to look at all hosts here
 https://builds.apache.org/computer/H<idx>/

 (https://builds.apache.org/computer/H0/,
 https://builds.apache.org/computer/H01/ ...)

 because they might have same issue.

 Thanks a lot.

 --Yongjun
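
For anyone checking a slave, a short verification sketch (asf900 is just the
example host from above; the fix itself needs root to edit /etc/hosts):

    # the hostname must share one line with the loopback entry
    grep '^127\.0\.0\.1' /etc/hosts   # expect: 127.0.0.1  localhost asf900.ygridcore.net asf900
    getent hosts "$(hostname)"        # should return promptly instead of hanging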





 On Thu, Jul 24, 2014 at 8:22 PM, Yongjun Zhang 
 yzh...@cloudera.com
 wrote:

  HI,
 
  I did a test run which dumped /etc/hosts and reported hostname
  info:
 
  YJD */etc/hosts* contents:
 
 
  127.0.0.1 localhost
 
 asf900.ygridcore.net   *asf900*
 
   # The following lines are desirable for IPv6 capable hosts
 
  ::1 localhost ip6-localhost ip6-loopback
 
  ff02::1 ip6-allnodes
 
  ff02::2 ip6-allrouters
 
   YJD *hostname* contents:
 
  asf900
 
 
  (see HADOOP-10888
 
 
   https://builds.apache.org/job/PreCommit-HADOOP-Build/4362//testReport/
,
 
  the host is Slave H0 (Build slave for Hadoop project builds :
 asf900.gq1.ygridcore.net)
 
  )
 
 
  I see hostname asf900 in 127.0.0.1 row for IPv4  but I don't
  see
   it
 in the ::1 row for IPv6 in /etc/hosts file. I wonder if adding
   asf900
 as an entry to the ::1 row would make it work. The method it gets stuck in is
 java.net.Inet4AddressImpl.getLocalHostName (IPv4), though.
 
 
  Thanks.
 
  --Yongjun
 
  On Wed, Jul 23, 2014 at 10:14 PM, Yongjun Zhang 
  yzh...@cloudera.com
   
  wrote:
 
  Thanks Arpit for throwing this discussion as part of
 HADOOP-10888
  investigation! It's a good guess of Arpit's about possible
 missing
  /etc/hosts entry.
 
  Please feel free to comment in HADOOP-10888 so information can
 be
  centralized there.
 
  Best regards,
 
  --Yongjun
 
 
 
  On Wed, Jul 23, 2014 at 9:07 PM, Arpit Agarwal 
 aagar...@hortonworks.com
  wrote:
 
  Can someone with administrator access to the Jenkins VMs please
   take
a
  look
  at the /etc/hosts configuration?
 
  TestIPC often fails in Jenkins runs due to a timeout in
  InetAddress.getLocalHost. Most likely a missing entry in
  /etc/hosts
for
  the
  system hostname.
 
  e.g.
 
 

   
  
 
 https://builds.apache.org/job/PreCommit-HADOOP-Build/4352//testReport/org.apache.hadoop.ipc/TestIPC/testRetryProxy/
 
 
 

   
  
 
 https://builds.apache.org/job/PreCommit-HADOOP-Build/4355/testReport/org.apache.hadoop.ipc/TestIPC/testRetryProxy/
 
 
 

   
  
 
 https://builds.apache.org/job/PreCommit-HADOOP-Build/4347/testReport/org.apache.hadoop.ipc/TestIPC/testRetryProxy/
 
  java.lang.Exception: test timed out after 30 milliseconds
at java.net.Inet4AddressImpl.getLocalHostName(Native Method)
at java.net.InetAddress.getLocalHost(InetAddress.java:1374)
at
 org.apache.hadoop.net.NetUtils.getConnectAddress(NetUtils.java:372)
at
 org.apache.hadoop.net.NetUtils.getConnectAddress(NetUtils.java:359)
at
 
 

   
  
 
 org.apache.hadoop.ipc.TestIPC$TestInvocationHandler.invoke(TestIPC.java:212)
at org.apache.hadoop.ipc.$Proxy11.dummyRun(Unknown Source)
at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown
 Source)
at
 
 

   
  
 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at
 
 
  

Re: Jenkins Build Slaves

2014-07-10 Thread Giridharan Kesavan
Chris,

Newer hosts are on ubuntu-14.04 and I'm not aware of any ln related change
there.
Please let me know if you need any help with debugging the hosts.

-giri


On Thu, Jul 10, 2014 at 9:53 AM, Chris Nauroth cnaur...@hortonworks.com
wrote:

 Thanks, Giri, for taking care of pkgconfig.

 It looks like most (all?) pre-commit builds have some new failing tests:

 https://builds.apache.org/job/PreCommit-HADOOP-Build/4247/testReport/

 On the symlink tests, is there any chance that the new hosts have a
 different version/different behavior for the ln command?

 The TestIPC failure is in a stress test that checks behavior after
 spamming a lot of connections at an RPC server.  Maybe the new hosts have
 something different in the TCP stack, such as TCP backlog?

 I likely won't get a chance to investigate any more today, but I wanted to
 raise the issue in case someone else gets an opportunity to look.

 Chris Nauroth
 Hortonworks
 http://hortonworks.com/
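
A quick way to compare the two suspects across an old and a new slave; this is
just a diagnostic sketch, not a known root cause:

    ln --version | head -n 1    # coreutils version behind ln, for the symlink tests
    uname -r                    # kernel version, in case TCP behavior changed
    sysctl net.core.somaxconn   # listen-backlog cap a connection-spam test can hit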



 On Wed, Jul 9, 2014 at 10:33 AM, Giridharan Kesavan 
 gkesa...@hortonworks.com wrote:

 I don't think so, let me fix that. Thanks, Chris, for pointing that out.


 -giri


 On Wed, Jul 9, 2014 at 9:50 AM, Chris Nauroth cnaur...@hortonworks.com
 wrote:

 Hi Giri,

 Is pkgconfig deployed on the new Jenkins slaves?  I noticed this build
 failed:

 https://builds.apache.org/job/PreCommit-HADOOP-Build/4237/

 Looking in the console output, it appears the HDFS native code failed to
 build due to missing pkgconfig.

  [exec] CMake Error at
 /usr/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:108
 (message):
  [exec]   Could NOT find PkgConfig (missing: PKG_CONFIG_EXECUTABLE)

 Chris Nauroth
 Hortonworks
 http://hortonworks.com/
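
The missing piece is the pkg-config executable that CMake's FindPkgConfig
module probes for. A hedged sketch of the fix on an ubuntu-14.04 slave:

    sudo apt-get update
    sudo apt-get install -y pkg-config
    pkg-config --version   # any version output lets FindPkgConfig succeed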



 On Wed, Jul 9, 2014 at 7:08 AM, Giridharan Kesavan 
 gkesa...@hortonworks.com wrote:

 Build jobs are now configured to run on the newer set of slaves.



 -giri


 On Mon, Jul 7, 2014 at 4:12 PM, Giridharan Kesavan 
 gkesa...@hortonworks.com
  wrote:

  All
 
  Yahoo is in the process of retiring all the hadoop jenkins build
 slaves,
  *hadoop[1-9]* and

  replace them with a newer set of beefier hosts. These new machines are
  configured
  with *ubuntu-14.04*.

 
  Over the next couple of days I will be configuring the build jobs to
 run
  on these newly
  configured build slaves.  To automate the installation of tools and
 build
  libraries I have
  put together ansible scripts and here is the link to the toolchain
 repo.
 
 
  *https://github.com/apache/toolchain 
 https://github.com/apache/toolchain

  *
 
  During the transition, the old build slave will be accessible, and
  expected to be shutdown by 07/15.
 
  I will send out an update later this week when this transition is
  complete.
 
  *Mean while, I would like to request the project owners to
 remove/cleanup
  any stale *
  *jenkins job for their respective project and help with any builds
 issue
  to make this *
  *transition seamless. *
 
  Thanks
 
  -
  Giri
 









Re: Jenkins build fails

2014-07-09 Thread Giridharan Kesavan
I'm looking into this.

-giri


On Tue, Jul 8, 2014 at 8:15 PM, Akira AJISAKA ajisa...@oss.nttdata.co.jp
wrote:

 Filed https://issues.apache.org/jira/browse/HADOOP-10804
 Please correct me if I am wrong..

 Thanks,
 Akira

 (2014/07/09 11:24), Akira AJISAKA wrote:
  Hi Hadoop developers,
 
  Now Jenkins is failing with the below message.
  I'm thinking this is caused by the upgrade of the Jenkins server.
  After the upgrade, the version of svn client was also upgraded,
  so the following errors occurred.
 
  It will be fixed by executing 'svn upgrade' before executing
  other svn commands. I'll file a JIRA and create a patch shortly.
 
  Regards,
  Akira
 
  ======================================================================
  ======================================================================
    Testing patch for HADOOP-10661.
  ======================================================================
  ======================================================================
 
 
  svn: E155036: Please see the 'svn upgrade' command
  svn: E155036: The working copy at
  '/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk'
  is too old (format 10) to work with client version '1.8.8 (r1568071)'
  (expects format 31). You need to upgrade the working copy first.
 
  svn: E155036: Please see the 'svn upgrade' command
  svn: E155036: The working copy at
  '/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk'
  is too old (format 10) to work with client version '1.8.8 (r1568071)'
  (expects format 31). You need to upgrade the working copy first.
 
  svn: E155036: Please see the 'svn upgrade' command
  svn: E155036: The working copy at
  '/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk'
  is too old (format 10) to work with client version '1.8.8 (r1568071)'
  (expects format 31). You need to upgrade the working copy first.
 
  svn: E155036: Please see the 'svn upgrade' command
  svn: E155036: The working copy at
  '/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk'
  is too old (format 10) to work with client version '1.8.8 (r1568071)'
  (expects format 31). You need to upgrade the working copy first.
 
  Build step 'Execute shell' marked build as failure
  Archiving artifacts
  Description set: HADOOP-10661
  Recording test results
  Finished: FAILURE
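
A minimal sketch of the workaround, run once per affected workspace before any
other svn command (the path is taken from the error output above):

    # upgrade the on-disk working-copy format so the 1.8.8 client accepts it
    cd /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk
    svn upgrade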
 





Re: Jenkins build fails

2014-07-09 Thread Giridharan Kesavan
I took care of the svn upgrade issue


-giri


On Wed, Jul 9, 2014 at 5:05 AM, Giridharan Kesavan gkesa...@hortonworks.com
 wrote:


 I'm looking into this.

 -giri


 On Tue, Jul 8, 2014 at 8:15 PM, Akira AJISAKA ajisa...@oss.nttdata.co.jp
 wrote:

 Filed https://issues.apache.org/jira/browse/HADOOP-10804
 Please correct me if I am wrong..

 Thanks,
 Akira

 (2014/07/09 11:24), Akira AJISAKA wrote:
  Hi Hadoop developers,
 
  Now Jenkins is failing with the below message.
  I'm thinking this is caused by the upgrade of the Jenkins server.
  After the upgrade, the version of svn client was also upgraded,
  so the following errors occurred.
 
  It will be fixed by executing 'svn upgrade' before executing
  other svn commands. I'll file a JIRA and create a patch shortly.
 
  Regards,
  Akira
 
  ======================================================================
  ======================================================================
    Testing patch for HADOOP-10661.
  ======================================================================
  ======================================================================
 
 
  svn: E155036: Please see the 'svn upgrade' command
  svn: E155036: The working copy at
  '/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk'
  is too old (format 10) to work with client version '1.8.8 (r1568071)'
  (expects format 31). You need to upgrade the working copy first.
 
  svn: E155036: Please see the 'svn upgrade' command
  svn: E155036: The working copy at
  '/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk'
  is too old (format 10) to work with client version '1.8.8 (r1568071)'
  (expects format 31). You need to upgrade the working copy first.
 
  svn: E155036: Please see the 'svn upgrade' command
  svn: E155036: The working copy at
  '/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk'
  is too old (format 10) to work with client version '1.8.8 (r1568071)'
  (expects format 31). You need to upgrade the working copy first.
 
  svn: E155036: Please see the 'svn upgrade' command
  svn: E155036: The working copy at
  '/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk'
  is too old (format 10) to work with client version '1.8.8 (r1568071)'
  (expects format 31). You need to upgrade the working copy first.
 
  Build step 'Execute shell' marked build as failure
  Archiving artifacts
  Description set: HADOOP-10661
  Recording test results
  Finished: FAILURE
 






Re: Jenkins Build Slaves

2014-07-09 Thread Giridharan Kesavan
Build jobs are now configured to run on the newer set of slaves.



-giri


On Mon, Jul 7, 2014 at 4:12 PM, Giridharan Kesavan gkesa...@hortonworks.com
 wrote:

 All

 Yahoo is in the process of retiring all the hadoop jenkins build slaves,
 *hadoop[1-9]* and
 replace them with a newer set of beefier hosts. These new machines are
 configured
 with *ubuntu-14.04*.

 Over the next couple of days I will be configuring the build jobs to run
 on these newly
 configured build slaves.  To automate the installation of tools and build
 libraries I have
 put together ansible scripts and here is the link to the toolchain repo.


 *https://github.com/apache/toolchain https://github.com/apache/toolchain
 *

 During the transition, the old build slave will be accessible, and
 expected to be shutdown by 07/15.

 I will send out an update later this week when this transition is
 complete.

 *Mean while, I would like to request the project owners to remove/cleanup
 any stale *
 *jenkins job for their respective project and help with any builds issue
 to make this *
 *transition seamless. *

 Thanks

 -
 Giri




Re: Jenkins Build Slaves

2014-07-09 Thread Giridharan Kesavan
I don't think so, let me fix that. Thanks, Chris, for pointing that out.


-giri


On Wed, Jul 9, 2014 at 9:50 AM, Chris Nauroth cnaur...@hortonworks.com
wrote:

 Hi Giri,

 Is pkgconfig deployed on the new Jenkins slaves?  I noticed this build
 failed:

 https://builds.apache.org/job/PreCommit-HADOOP-Build/4237/

 Looking in the console output, it appears the HDFS native code failed to
 build due to missing pkgconfig.

  [exec] CMake Error at
 /usr/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:108
 (message):
  [exec]   Could NOT find PkgConfig (missing: PKG_CONFIG_EXECUTABLE)

 Chris Nauroth
 Hortonworks
 http://hortonworks.com/



 On Wed, Jul 9, 2014 at 7:08 AM, Giridharan Kesavan 
 gkesa...@hortonworks.com wrote:

 Build jobs are now configured to run on the newer set of slaves.



 -giri


 On Mon, Jul 7, 2014 at 4:12 PM, Giridharan Kesavan 
 gkesa...@hortonworks.com
  wrote:

  All
 
  Yahoo is in the process of retiring all the hadoop jenkins build slaves,
  *hadoop[1-9]* and

  replace them with a newer set of beefier hosts. These new machines are
  configured
  with *ubuntu-14.04*.

 
  Over the next couple of days I will be configuring the build jobs to run
  on these newly
  configured build slaves.  To automate the installation of tools and
 build
  libraries I have
  put together ansible scripts and here is the link to the toolchain repo.
 
 
  *https://github.com/apache/toolchain 
 https://github.com/apache/toolchain

  *
 
  During the transition, the old build slave will be accessible, and
  expected to be shutdown by 07/15.
 
  I will send out an update later this week when this transition is
  complete.
 
  *Mean while, I would like to request the project owners to
 remove/cleanup
  any stale *
  *jenkins job for their respective project and help with any builds issue
  to make this *
  *transition seamless. *
 
  Thanks
 
  -
  Giri
 







Jenkins Build Slaves

2014-07-07 Thread Giridharan Kesavan
All

Yahoo is in the process of retiring all the hadoop jenkins build slaves,
*hadoop[1-9]*, and replacing them with a newer set of beefier hosts. These new
machines are configured with *ubuntu-14.04*.

Over the next couple of days I will be configuring the build jobs to run on
these newly
configured build slaves.  To automate the installation of tools and build
libraries I have
put together ansible scripts and here is the link to the toolchain repo.


*https://github.com/apache/toolchain https://github.com/apache/toolchain*
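
A sketch of how such a toolchain repo would typically be applied to a slave;
the playbook and inventory names below are illustrative assumptions, not taken
from the repo:

    git clone https://github.com/apache/toolchain
    cd toolchain
    # hypothetical invocation; the real playbook/inventory names live in the repo
    ansible-playbook -i hosts.ini site.yml --limit new-build-slaves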

During the transition, the old build slave will be accessible, and
expected to be shutdown by 07/15.

I will send out an update later this week when this transition is complete.

*Meanwhile, I would like to request the project owners to remove/clean up any
stale jenkins jobs for their respective projects and help with any build
issues to make this transition seamless.*

Thanks

-
Giri



Re: [VOTE] Change by-laws on release votes: 5 days instead of 7

2014-06-25 Thread Giridharan Kesavan
+1

-giri


On Wed, Jun 25, 2014 at 12:02 PM, Arpit Agarwal aagar...@hortonworks.com
wrote:

 +1 Arpit


 On Tue, Jun 24, 2014 at 1:53 AM, Arun C Murthy a...@hortonworks.com
 wrote:

  Folks,
 
   As discussed, I'd like to call a vote on changing our by-laws to change
  release votes from 7 days to 5.
 
   I've attached the change to by-laws I'm proposing.
 
   Please vote; the vote will run the usual period of 7 days.
 
  thanks,
  Arun
 
  
 
  [main]$ svn diff
  Index: author/src/documentation/content/xdocs/bylaws.xml
  ===
  --- author/src/documentation/content/xdocs/bylaws.xml   (revision
 1605015)
  +++ author/src/documentation/content/xdocs/bylaws.xml   (working copy)
  @@ -344,7 +344,16 @@
   <p>Votes are open for a period of 7 days to allow all active
   voters time to consider the vote. Votes relating to code
   changes are not subject to a strict timetable but should be
  -made as timely as possible.</p></li>
  +made as timely as possible.</p>
  +
  + <ul>
  + <li> <strong>Product Release - Vote Timeframe</strong>
  +   <p>Release votes, alone, run for a period of 5 days. All other
  + votes are subject to the above timeframe of 7 days.</p>
  + </li>
  +   </ul>
  +   </li>
  +
   </ul>
   </section>
    </body>
 



pre-commit admin is fixed.

2013-08-25 Thread Giridharan Kesavan
Pre-commit Admin job on jenkins is fixed and back online

-Giri



Re: Upgrade to protobuf 2.5.0 for the 2.1.0 release, HADOOP-9845

2013-08-12 Thread Giridharan Kesavan
Like I said, protoc was upgraded from 2.4 to 2.5, and 2.5 is in the default
path. If we still need 2.4 I may have to install it. Let me know.

-Giri
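
If 2.4 does get reinstalled alongside 2.5, a sketch of how a job could select
one of them (the per-version install prefixes are assumptions):

    # assume each protobuf is installed under its own prefix
    export PROTOBUF_HOME=/usr/local/protobuf-2.5.0   # or /usr/local/protobuf-2.4.1
    export PATH="${PROTOBUF_HOME}/bin:${PATH}"
    protoc --version   # expect: libprotoc 2.5.0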


On Sat, Aug 10, 2013 at 7:01 AM, Alejandro Abdelnur t...@cloudera.com wrote:

 thanks giri, how do we set 2.4 or 2.5? what is the path to both so we can
 use an env to set it in the jobs?

 thx

 Alejandro
 (phone typing)

 On Aug 9, 2013, at 23:10, Giridharan Kesavan gkesa...@hortonworks.com
 wrote:

  build slaves hadoop1-hadoop9 now has libprotoc 2.5.0
 
 
 
  -Giri
 
 
  On Fri, Aug 9, 2013 at 10:56 PM, Giridharan Kesavan 
  gkesa...@hortonworks.com wrote:
 
  Alejandro,
 
  I'm upgrading protobuf on slaves hadoop1-hadoop9.
 
  -Giri
 
 
  On Fri, Aug 9, 2013 at 1:15 PM, Alejandro Abdelnur t...@cloudera.com
 wrote:
 
  pinging again, I need help from somebody with sudo access to the hadoop
  jenkins boxes to do this or to get sudo access for a couple of hours to
  set
  up myself.
 
  Please!!!
 
  thx
 
 
  On Thu, Aug 8, 2013 at 2:29 PM, Alejandro Abdelnur t...@cloudera.com
  wrote:
 
  To move forward with this we need protoc 2.5.0 in the apache hadoop
  jenkins boxes.
 
  Who can help with this? I assume somebody at Y!, right?
 
  Thx
 
 
  On Thu, Aug 8, 2013 at 2:24 PM, Elliott Clark ecl...@apache.org
  wrote:
 
  In HBase land we've pretty well discovered that we'll need to have
 the
  same version of protobuf that the HDFS/Yarn/MR servers are running.
  That is to say there are issues with ever having 2.4.x and 2.5.x on
  the same class path.
 
  Upgrading to 2.5.x would be great, as it brings some new classes we
  could use.  With that said HBase is getting pretty close to a rather
  large release (0.96.0 aka The Singularity) so getting this in sooner
  rather than later would be great.  If we could get this into 2.1.0 it
  would be great as that would allow us to have a pretty easy story to
  users with regards to protobuf version.
 
  On Thu, Aug 8, 2013 at 8:18 AM, Kihwal Lee kih...@yahoo-inc.com
  wrote:
  Sorry to hijack the thread but, I also wanted to mention Avro. See
  HADOOP-9672.
  The version we are using has memory leak and inefficiency issues.
  We've
  seen users running into it.
 
  Kihwal
 
 
  
  From: Tsuyoshi OZAWA ozawa.tsuyo...@gmail.com
  To: common-dev@hadoop.apache.org common-dev@hadoop.apache.org
  Cc: hdfs-...@hadoop.apache.org hdfs-...@hadoop.apache.org; 
  yarn-...@hadoop.apache.org yarn-...@hadoop.apache.org; 
  mapreduce-...@hadoop.apache.org mapreduce-...@hadoop.apache.org
  Sent: Thursday, August 8, 2013 1:59 AM
  Subject: Re: Upgrade to protobuf 2.5.0 for the 2.1.0 release,
  HADOOP-9845
 
 
  Hi,
 
  About Hadoop, Harsh is dealing with this problem in HADOOP-9346.
  For more detail, please see the JIRA ticket:
  https://issues.apache.org/jira/browse/HADOOP-9346
 
  - Tsuyoshi
 
  On Thu, Aug 8, 2013 at 1:49 AM, Alejandro Abdelnur 
  t...@cloudera.com
  wrote:
  I' like to upgrade to protobuf 2.5.0 for the 2.1.0 release.
 
  As mentioned in HADOOP-9845, Protobuf 2.5 has significant benefits
  to
  justify the upgrade.
 
  Doing the upgrade now, with the first beta, will make things easier
  for
  downstream projects (like HBase) using protobuf and adopting Hadoop
  2.
  If
  we do the upgrade later, downstream projects will have to support 2
   different versions and they may get in nasty waters due to classpath
  issues.
 
  I've locally tested the patch in a pseudo deployment of 2.1.0-beta
  branch
  and it works fine (something is broken in trunk in the RPC layer
  YARN-885).
 
  Now, to do this it will require a few things:
 
  * Make sure protobuf 2.5.0 is available in the jenkins box
  * A follow up email to dev@ aliases indicating developers should
  install
  locally protobuf 2.5.0
 
  Thanks.
 
  --
  Alejandro
 
 
 
  --
  Alejandro
 
 
 
  --
  Alejandro
 
 



Re: Upgrade to protobuf 2.5.0 for the 2.1.0 release, HADOOP-9845

2013-08-12 Thread Giridharan Kesavan
I can take care of re-installing 2.4 and installing 2.5 in a different
location. This would fix 2.0 branch builds as well.
Thoughts?

-Giri


On Mon, Aug 12, 2013 at 11:37 AM, Alejandro Abdelnur t...@cloudera.com wrote:

 Giri,

 first of all, thanks for installing protoc 2.5.0.

 I didn't know we were installing them as the only version and not driven by
 env/path settings.

 Now we have a bit of a problem, precommit builds are broken because of
 a mismatch of protoc (2.5.0) and the protobuf JAR (2.4.1).

 We have two options:

 1* commit HADOOP-9845 that will bring protobuf to 2.5.0 and iron out any
 follow up issues.
 2* reinstall protoc 2.4.1 in the jenkins machines and have 2.4.1 and 2.5.0
 coexisting

 My take would be to commit HADOOP-9845 in trunk, iron out any issues and
 then merge it to the other branches.

 We need to sort this out quickly as precommits are not working.

 I'll wait till 3PM today for objections to option #1; if none, I'll commit
 it to trunk.

 Thanks.

 Alejandro



 On Mon, Aug 12, 2013 at 11:30 AM, Giridharan Kesavan 
 gkesa...@hortonworks.com wrote:

  Like I said protoc is upgraded from 2.4 to 2.5. 2.5 is in the default
 path.
  If we still need 2.4 I may have to install it. Let me know
 
  -Giri
 
 
  On Sat, Aug 10, 2013 at 7:01 AM, Alejandro Abdelnur t...@cloudera.com
  wrote:
 
   thanks giri, how do we set 2.4 or 2.5? what is the path to both so we
  can
   use an env to set it in the jobs?
  
   thx
  
   Alejandro
   (phone typing)
  
   On Aug 9, 2013, at 23:10, Giridharan Kesavan gkesa...@hortonworks.com
 
   wrote:
  
build slaves hadoop1-hadoop9 now has libprotoc 2.5.0
   
   
   
-Giri
   
   
On Fri, Aug 9, 2013 at 10:56 PM, Giridharan Kesavan 
gkesa...@hortonworks.com wrote:
   
Alejandro,
   
I'm upgrading protobuf on slaves hadoop1-hadoop9.
   
-Giri
   
   
On Fri, Aug 9, 2013 at 1:15 PM, Alejandro Abdelnur 
 t...@cloudera.com
   wrote:
   
pinging again, I need help from somebody with sudo access to the
  hadoop
jenkins boxes to do this or to get sudo access for a couple of
 hours
  to
set
up myself.
   
Please!!!
   
thx
   
   
On Thu, Aug 8, 2013 at 2:29 PM, Alejandro Abdelnur 
  t...@cloudera.com
wrote:
   
To move forward with this we need protoc 2.5.0 in the apache
 hadoop
jenkins boxes.
   
Who can help with this? I assume somebody at Y!, right?
   
Thx
   
   
On Thu, Aug 8, 2013 at 2:24 PM, Elliott Clark ecl...@apache.org
wrote:
   
In HBase land we've pretty well discovered that we'll need to
 have
   the
same version of protobuf that the HDFS/Yarn/MR servers are
 running.
That is to say there are issues with ever having 2.4.x and 2.5.x
 on
the same class path.
   
Upgrading to 2.5.x would be great, as it brings some new classes
 we
could use.  With that said HBase is getting pretty close to a
  rather
large release (0.96.0 aka The Singularity) so getting this in
  sooner
rather than later would be great.  If we could get this into
 2.1.0
  it
would be great as that would allow us to have a pretty easy story
  to
users with regards to protobuf version.
   
On Thu, Aug 8, 2013 at 8:18 AM, Kihwal Lee kih...@yahoo-inc.com
 
wrote:
Sorry to hijack the thread but, I also wanted to mention Avro.
 See
HADOOP-9672.
The version we are using has memory leak and inefficiency
 issues.
We've
seen users running into it.
   
Kihwal
   
   

From: Tsuyoshi OZAWA ozawa.tsuyo...@gmail.com
To: common-dev@hadoop.apache.org 
 common-dev@hadoop.apache.org
Cc: hdfs-...@hadoop.apache.org hdfs-...@hadoop.apache.org;
 
yarn-...@hadoop.apache.org yarn-...@hadoop.apache.org; 
mapreduce-...@hadoop.apache.org 
 mapreduce-...@hadoop.apache.org
Sent: Thursday, August 8, 2013 1:59 AM
Subject: Re: Upgrade to protobuf 2.5.0 for the 2.1.0 release,
HADOOP-9845
   
   
Hi,
   
About Hadoop, Harsh is dealing with this problem in HADOOP-9346.
For more detail, please see the JIRA ticket:
https://issues.apache.org/jira/browse/HADOOP-9346
   
- Tsuyoshi
   
On Thu, Aug 8, 2013 at 1:49 AM, Alejandro Abdelnur 
t...@cloudera.com
wrote:
I' like to upgrade to protobuf 2.5.0 for the 2.1.0 release.
   
As mentioned in HADOOP-9845, Protobuf 2.5 has significant
  benefits
to
justify the upgrade.
   
Doing the upgrade now, with the first beta, will make things
  easier
for
downstream projects (like HBase) using protobuf and adopting
  Hadoop
2.
If
we do the upgrade later, downstream projects will have to
  support 2
different versions and they may get in nasty waters due to
  classpath
issues.
   
I've locally tested the patch in a pseudo deployment of
  2.1.0-beta
branch
and it works fine (something is broken in trunk in the RPC
 layer
YARN-885).
   
Now, to do this it will require

Re: Upgrade to protobuf 2.5.0 for the 2.1.0 release, HADOOP-9845

2013-08-09 Thread Giridharan Kesavan
Alejandro,

I'm upgrading protobuf on slaves hadoop1-hadoop9.

-Giri


On Fri, Aug 9, 2013 at 1:15 PM, Alejandro Abdelnur t...@cloudera.com wrote:

 pinging again, I need help from somebody with sudo access to the hadoop
 jenkins boxes to do this or to get sudo access for a couple of hours to set
 up myself.

 Please!!!

 thx


 On Thu, Aug 8, 2013 at 2:29 PM, Alejandro Abdelnur t...@cloudera.com
 wrote:

  To move forward with this we need protoc 2.5.0 in the apache hadoop
  jenkins boxes.
 
  Who can help with this? I assume somebody at Y!, right?
 
  Thx
 
 
  On Thu, Aug 8, 2013 at 2:24 PM, Elliott Clark ecl...@apache.org wrote:
 
  In HBase land we've pretty well discovered that we'll need to have the
  same version of protobuf that the HDFS/Yarn/MR servers are running.
  That is to say there are issues with ever having 2.4.x and 2.5.x on
  the same class path.
 
  Upgrading to 2.5.x would be great, as it brings some new classes we
  could use.  With that said HBase is getting pretty close to a rather
  large release (0.96.0 aka The Singularity) so getting this in sooner
  rather than later would be great.  If we could get this into 2.1.0 it
  would be great as that would allow us to have a pretty easy story to
  users with regards to protobuf version.
 
  On Thu, Aug 8, 2013 at 8:18 AM, Kihwal Lee kih...@yahoo-inc.com
 wrote:
   Sorry to hijack the thread but, I also wanted to mention Avro. See
  HADOOP-9672.
   The version we are using has memory leak and inefficiency issues.
 We've
  seen users running into it.
  
   Kihwal
  
  
   
From: Tsuyoshi OZAWA ozawa.tsuyo...@gmail.com
   To: common-dev@hadoop.apache.org common-dev@hadoop.apache.org
   Cc: hdfs-...@hadoop.apache.org hdfs-...@hadoop.apache.org; 
  yarn-...@hadoop.apache.org yarn-...@hadoop.apache.org; 
  mapreduce-...@hadoop.apache.org mapreduce-...@hadoop.apache.org
   Sent: Thursday, August 8, 2013 1:59 AM
   Subject: Re: Upgrade to protobuf 2.5.0 for the 2.1.0 release,
  HADOOP-9845
  
  
   Hi,
  
   About Hadoop, Harsh is dealing with this problem in HADOOP-9346.
   For more detail, please see the JIRA ticket:
   https://issues.apache.org/jira/browse/HADOOP-9346
  
   - Tsuyoshi
  
   On Thu, Aug 8, 2013 at 1:49 AM, Alejandro Abdelnur t...@cloudera.com
 
  wrote:
   I' like to upgrade to protobuf 2.5.0 for the 2.1.0 release.
  
   As mentioned in HADOOP-9845, Protobuf 2.5 has significant benefits to
   justify the upgrade.
  
   Doing the upgrade now, with the first beta, will make things easier
 for
   downstream projects (like HBase) using protobuf and adopting Hadoop
 2.
  If
   we do the upgrade later, downstream projects will have to support 2
   different versions and they may get in nasty waters due to classpath
  issues.
  
   I've locally tested the patch in a pseudo deployment of 2.1.0-beta
  branch
   and it works fine (something is broken in trunk in the RPC layer
  YARN-885).
  
   Now, to do this it will require a few things:
  
   * Make sure protobuf 2.5.0 is available in the jenkins box
   * A follow up email to dev@ aliases indicating developers should
  install
   locally protobuf 2.5.0
  
   Thanks.
  
   --
   Alejandro
 
 
 
 
  --
  Alejandro
 



 --
 Alejandro



[jira] [Created] (HADOOP-9730) fix hadoop.spec to add task-log4j.properties

2013-07-15 Thread Giridharan Kesavan (JIRA)
Giridharan Kesavan created HADOOP-9730:
--

 Summary: fix hadoop.spec to add task-log4j.properties 
 Key: HADOOP-9730
 URL: https://issues.apache.org/jira/browse/HADOOP-9730
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan






[jira] [Resolved] (HADOOP-9628) Setup a daily build job for branch-2.1.0-beta

2013-06-07 Thread Giridharan Kesavan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giridharan Kesavan resolved HADOOP-9628.


Resolution: Fixed

jenkins job configured to run daily

https://builds.apache.org/job/Hadoop-branch-2.1-beta/


 Setup a daily build job for branch-2.1.0-beta
 -

 Key: HADOOP-9628
 URL: https://issues.apache.org/jira/browse/HADOOP-9628
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.1.0-beta
Reporter: Hitesh Shah
Assignee: Giridharan Kesavan





[jira] [Resolved] (HADOOP-9573) Fix test-patch script to work with the enhanced PreCommit-Admin script.

2013-05-28 Thread Giridharan Kesavan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giridharan Kesavan resolved HADOOP-9573.


   Resolution: Fixed
Fix Version/s: 1.3.0

Committed to branch-1

 Fix test-patch script to work with the enhanced PreCommit-Admin script.
 ---

 Key: HADOOP-9573
 URL: https://issues.apache.org/jira/browse/HADOOP-9573
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 1.0.3
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan
 Fix For: 1.3.0

 Attachments: 
 0001-Fix-test-patch-scrit-to-work-with-the-enhanced-PreCo.patch, 
 hadoop-9573.patch


 The test-patch script currently takes the latest available patch for a given jira
 and performs the test. This jira is to enhance the test-patch script to take the
 attachment-id of a patch as an input and perform the tests using that
 attachment-id.



Re: Problem building branch-2

2013-05-24 Thread Giridharan Kesavan
looks like protobuf (https://protobuf.googlecode.com/files/protobuf-2.4.1.zip)
is missing.

-Giri
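
The "cannot find symbol: class Parser" errors below are what generated sources
look like when the protoc on the PATH is newer than the protobuf-java 2.4.1
JAR the build links against. A hedged sketch of installing the matching protoc
from the archive linked above (install prefix assumed):

    wget https://protobuf.googlecode.com/files/protobuf-2.4.1.zip
    unzip protobuf-2.4.1.zip && cd protobuf-2.4.1
    ./configure --prefix=/usr/local
    make && sudo make install && sudo ldconfig
    protoc --version   # expect: libprotoc 2.4.1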


On Fri, May 24, 2013 at 11:51 AM, Ralph Castain r...@open-mpi.org wrote:

 Hi folks

 I'm trying to build the head of branch-2 on a CentOS box and hitting a
 rash of errors like the following (all from the protobuf support area):

 [ERROR] Failed to execute goal
 org.apache.maven.plugins:maven-compiler-plugin:2.5.1:compile
 (default-compile) on project hadoop-common: Compilation failure:
 Compilation failure:
 [ERROR]
 /home/common/hadoop/hadoop-common/hadoop-common-project/hadoop-common/target/generated-sources/java/org/apache/hadoop/ipc/protobuf/RpcHeaderProtos.java:[278,37]
 error: cannot find symbol
 [ERROR] symbol:   class Parser
 [ERROR] location: package com.google.protobuf

 Per the BUILDING.txt instructions, I was using a command line of mvn
 install -DskipTests from the top level directory.

 Any suggestions? I assume I must have some path incorrectly set or need to
 build the sub-projects manually in some order, but I'm unsure of the nature
 of the problem.

 Thanks
 Ralph




[jira] [Created] (HADOOP-9592) libhdfs append test fails

2013-05-22 Thread Giridharan Kesavan (JIRA)
Giridharan Kesavan created HADOOP-9592:
--

 Summary: libhdfs append test fails
 Key: HADOOP-9592
 URL: https://issues.apache.org/jira/browse/HADOOP-9592
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.3.0
Reporter: Giridharan Kesavan




 [exec] Wrote 6 bytes
 [exec] Flushed /tmp/appends successfully!
 [exec] Exception in thread main org.apache.hadoop.ipc.RemoteException: 
java.io.IOException: Append is not supported. Please see the dfs.support.append 
configuration parameter
 [exec] at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:1781)
 [exec] at 
org.apache.hadoop.hdfs.server.namenode.NameNode.append(NameNode.java:725)
 [exec] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 [exec] at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 [exec] at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 [exec] at java.lang.reflect.Method.invoke(Method.java:597)
 [exec] at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
 [exec] at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
 [exec] at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
 [exec] at java.security.AccessController.doPrivileged(Native Method)
 [exec] at javax.security.auth.Subject.doAs(Subject.java:396)
 [exec] at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
 [exec] at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
 [exec] 
 [exec] at org.apache.hadoop.ipc.Client.call(Client.java:1107)
 [exec] at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
 [exec] at $Proxy1.append(Unknown Source)
 [exec] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 [exec] at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 [exec] at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 [exec] at java.lang.reflect.Method.invoke(Method.java:597)
 [exec] at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
 [exec] at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
 [exec] at $Proxy1.append(Unknown Source)
 [exec] at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:933)
 [exec] at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:922)
 [exec] at 
org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:196)
 [exec] at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:650)
 [exec] Call to 
org.apache.hadoop.conf.FileSystem::append((Lorg/apache/hadoop/fs/Path;)Lorg/apache/hadoop/fs/FSDataOutputStream;)
 failed!
 [exec] Failed to open /tmp/appends for writing!
 [exec] Warning: $HADOOP_HOME is deprecated.
 [exec] 
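
The failure matches the error text: on branch-1, append is rejected unless
dfs.support.append is enabled. A sketch of the property the test configuration
would need; it belongs inside the <configuration> element of
conf/hdfs-site.xml:

    # print the hdfs-site.xml fragment that enables append (placement is manual)
    printf '%s\n' \
      '<property>' \
      '  <name>dfs.support.append</name>' \
      '  <value>true</value>' \
      '</property>'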






[jira] [Created] (HADOOP-9584) fix findbugs warnings

2013-05-21 Thread Giridharan Kesavan (JIRA)
Giridharan Kesavan created HADOOP-9584:
--

 Summary: fix findbugs warnings
 Key: HADOOP-9584
 URL: https://issues.apache.org/jira/browse/HADOOP-9584
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.3.0
Reporter: Giridharan Kesavan


https://builds.apache.org/job/Hadoop-branch1/94/findbugsResult/

this url shows about 203 findbugs warnings. 



[jira] [Created] (HADOOP-9585) unit test failure :org.apache.hadoop.fs.TestFsShellReturnCode.testChgrp

2013-05-21 Thread Giridharan Kesavan (JIRA)
Giridharan Kesavan created HADOOP-9585:
--

 Summary: unit test failure 
:org.apache.hadoop.fs.TestFsShellReturnCode.testChgrp
 Key: HADOOP-9585
 URL: https://issues.apache.org/jira/browse/HADOOP-9585
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 1.3.0
 Environment: 
https://builds.apache.org/job/Hadoop-branch1/lastCompletedBuild/testReport/org.apache.hadoop.fs/TestFsShellReturnCode/testChgrp/
Reporter: Giridharan Kesavan



Standard Error

chmod: could not get status for '/home/jenkins/jenkins-slave/workspace/Hadoop-branch1/trunk/build/test/data/testChmod/fileDoesNotExist': File /home/jenkins/jenkins-slave/workspace/Hadoop-branch1/trunk/build/test/data/testChmod/fileDoesNotExist does not exist.
chmod: could not get status for '/home/jenkins/jenkins-slave/workspace/Hadoop-branch1/trunk/build/test/data/testChmod/nonExistingfiles*'
chown: could not get status for '/home/jenkins/jenkins-slave/workspace/Hadoop-branch1/trunk/build/test/data/testChown/fileDoesNotExist': File /home/jenkins/jenkins-slave/workspace/Hadoop-branch1/trunk/build/test/data/testChown/fileDoesNotExist does not exist.
chown: could not get status for '/home/jenkins/jenkins-slave/workspace/Hadoop-branch1/trunk/build/test/data/testChown/nonExistingfiles*'
chgrp: failed on 'file:/home/jenkins/jenkins-slave/workspace/Hadoop-branch1/trunk/build/test/data/testChgrp/fileExists': chgrp: changing group of `/home/jenkins/jenkins-slave/workspace/Hadoop-branch1/trunk/build/test/data/testChgrp/fileExists': Operation not permitted



[jira] [Created] (HADOOP-9586) unit test failure: org.apache.hadoop.hdfs.TestFileCreation.testFileCreationSetLocalInterface

2013-05-21 Thread Giridharan Kesavan (JIRA)
Giridharan Kesavan created HADOOP-9586:
--

 Summary: unit test failure: 
org.apache.hadoop.hdfs.TestFileCreation.testFileCreationSetLocalInterface
 Key: HADOOP-9586
 URL: https://issues.apache.org/jira/browse/HADOOP-9586
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 1.3.0
Reporter: Giridharan Kesavan


https://builds.apache.org/job/Hadoop-branch1/lastCompletedBuild/testReport/org.apache.hadoop.hdfs/TestFileCreation/testFileCreationSetLocalInterface/

org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/jenkins/filestatus.dat could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)

at org.apache.hadoop.ipc.Client.call(Client.java:1107)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
at $Proxy5.addBlock(Unknown Source)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
at $Proxy5.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3720)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3580)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2783)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3023)



[jira] [Created] (HADOOP-9587) unit test failure: org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup.testBalancerWithRackLocality

2013-05-21 Thread Giridharan Kesavan (JIRA)
Giridharan Kesavan created HADOOP-9587:
--

 Summary: unit test failure: 
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup.testBalancerWithRackLocality
 Key: HADOOP-9587
 URL: https://issues.apache.org/jira/browse/HADOOP-9587
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 1.3.0
 Environment: 
https://builds.apache.org/job/Hadoop-branch1/lastCompletedBuild/testReport/org.apache.hadoop.hdfs.server.balancer/TestBalancerWithNodeGroup/testBalancerWithRackLocality/
Reporter: Giridharan Kesavan


Error Message

Rebalancing expected avg utilization to become 0.2, but on datanode 127.0.0.1:34261 it remains at 0.08 after more than 2 msec.
Stacktrace

at org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup.waitForBalancer(TestBalancerWithNodeGroup.java:165)
at org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup.runBalancer(TestBalancerWithNodeGroup.java:195)
at org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup.testBalancerWithRackLocality(TestBalancerWithNodeGroup.java:297)



Re: [PROPOSAL] change in bylaws to remove Release Plan vote

2013-05-21 Thread Giridharan Kesavan
+1

-Giri


On Tue, May 21, 2013 at 2:10 PM, Matt Foley ma...@apache.org wrote:

 Hi all,
 This has been a side topic in several email threads recently.  Currently we
 have an ambiguity.  We have a tradition in the dev community that any
 committer can create a branch, and propose release candidates from it.  Yet
 the Hadoop bylaws say that releases have to be planned in advance, the plan
 needs to be voted on, and presumably can be denied.

 Apache policies (primarily http://www.apache.org/dev/release.html and
 http://www.apache.org/foundation/voting.html, with non-normative commentary at
 http://incubator.apache.org/guides/releasemanagement.html#best-practice)
 are very clear on how Releases have to be approved, and our bylaws are
 consistent with those policies.  But Apache policies don't say anything
 I've found about Release Plans, nor about voting on Release Plans.

 I propose the following change, to remove Release Plan votes, and give a
 simple definition of Release Manager role.  I'm opening discussion with
 this proposal, and will put it to a vote if we seem to be getting
 consensus.  Here are the changes I suggest in the Bylaws
 (http://hadoop.apache.org/bylaws.html) document:

 ===

 1. In the Decision Making : Actions section of the Bylaws, the
 following text is removed:

 * Release Plan

 Defines the timetable and actions for a release. The plan also nominates a
 Release Manager.

 Lazy majority of active committers


 2. In the Roles and Responsibilities section of the Bylaws, an additional
 role is defined:

 * Release Manager

 A Release Manager (RM) is a committer who volunteers to produce a Release
 Candidate according to HowToRelease
 (https://wiki.apache.org/hadoop/HowToRelease).
 The RM shall publish a Release Plan on the *common-dev@* list stating the
 branch from which they intend to make a Release Candidate, at least one
 week before they do so. The RM is responsible for building consensus around
 the content of the Release Candidate, in order to achieve a successful
 Product Release vote.

 ===

 Please share your views.
 Best regards,
 --Matt (long-time release manager)



[jira] [Created] (HADOOP-9572) Enhance Pre-Commit Admin job to test-patch multiple branches

2013-05-17 Thread Giridharan Kesavan (JIRA)
Giridharan Kesavan created HADOOP-9572:
--

 Summary: Enhance Pre-Commit Admin job to test-patch multiple 
branches
 Key: HADOOP-9572
 URL: https://issues.apache.org/jira/browse/HADOOP-9572
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan


Currently the PreCommit-Admin job supports triggering PreCommit test jobs on 
trunk for a given project. This jira is to enhance the admin job to support 
running test-patch on any branch for a given project, based on the uploaded 
patch name.



[jira] [Created] (HADOOP-9573) Fix test-patch script to work with the enhanced PreCommit-Admin script.

2013-05-17 Thread Giridharan Kesavan (JIRA)
Giridharan Kesavan created HADOOP-9573:
--

 Summary: Fix test-patch script to work with the enhanced 
PreCommit-Admin script.
 Key: HADOOP-9573
 URL: https://issues.apache.org/jira/browse/HADOOP-9573
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan


The test-patch script currently takes the latest available patch for a given 
jira and performs the tests. This jira is to enhance the test-patch script to 
take the attachment-id of a patch as an input and perform the tests using that 
attachment-id.



Re: Building branch-1-win, Starting Datanode on Windows

2013-03-04 Thread Giridharan Kesavan
Daniel,

Could you please check if you have the 64-bit VC runtime and .NET Framework
installed?

The 64-bit VC runtime is critical.

-Giri


On Mon, Mar 4, 2013 at 12:40 PM, Ramya Nimmagadda 
ramya.nimmaga...@microsoft.com wrote:

 Hi Daniel,

 This can happen when data directory permissions are different from
 default (755). Possible reasons could be:

 1. This directory already exists but was created with different permissions.
 One scenario I can think of: the directory got created as part of the build
 workflow, and the permissions were inherited from the parent (C:\Hadoop)
 2. The datanode service tried to create this (if it does not exist) and
 failed while setting permissions using winutils. This case can be
 verified by looking at the DN log for FSShell exceptions

 To further investigate the root cause, it would help to find out who
 exactly  created this directory. As Bikas mentioned, can you go through the
 logs for any exceptions related to permissions?


 Thanks,
 Ramya

 -Original Message-
 From: Bikas Saha [mailto:bi...@hortonworks.com]
 Sent: Monday, March 04, 2013 10:44 AM
 To: common-dev@hadoop.apache.org
 Subject: RE: Building branch-1-win, Starting Datanode on Windows

 I don't think you need to manually set permissions on Hadoop directories.
 Did you build the native windows executable in your build? Do you see any
 exceptions mentioning winutils?

 Bikas

 -Original Message-
 From: Daniel Jones [mailto:daniel.jo...@opencredo.com]
 Sent: Monday, March 04, 2013 6:56 AM
 To: common-dev@hadoop.apache.org
 Subject: Building branch-1-win, Starting Datanode on Windows

 Hi all,

 I've cloned the branch-1-win branch of Hadoop, and successfully managed to
 build it on my Windows 8 machine.
 When trying to start a datanode instance, I get the following error:

 13/03/04 14:42:47 WARN datanode.DataNode: Invalid directory in dfs.data.dir: Incorrect permission for C:/hadoop/data, expected: rwxr-xr-x, while actual: --rwx
 13/03/04 14:42:47 ERROR datanode.DataNode: All directories in dfs.data.dir are invalid.

 The directory exists, and I've set Allow all to Everyone.

 Is this an issue that should exist in the Windows branch? Is there a way
 to massage Windows' file permissions into something the datanode will
 accept without further modifications to the code?

 Many thanks in advance.
 --
 Daniel Jones - Consultant
 Open Credo Ltd - Excellence in Enterprise Application Development

 Mobile: +44 (0)79 8000 9153
 Main: +44 (0)20 3393 8242

 daniel.jo...@opencredo.com
 http://twitter.com/BinaryTweedNet
 www.opencredo.com

 Registered Office: 5-11 Lavington Street, London, SE1 0NZ Registered in
 UK. No 3943999

 If you have received this e-mail in error please accept our apologies,
 destroy it immediately and it would be greatly appreciated if you notified
 the sender.  It is your responsibility to protect your system from viruses
 and any other harmful code or device.  We try to eliminate them from
 e-mails and attachments; but we accept no liability for any that remain.
 We may monitor or access any or all e-mails sent to us.
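
A hedged sketch of one workaround for mismatches like the one above:
dfs.datanode.data.dir.perm (a real branch-1 property, also discussed under
HADOOP-7885 later in this archive) tells the datanode what permission to
expect on its data dirs. The value below is purely illustrative and would have
to match what the platform actually produces; whether Windows ACL translation
yields a stable octal value at all is exactly the open question in this thread.

    <!-- add inside <configuration> in hdfs-site.xml; value is illustrative -->
    <property>
      <name>dfs.datanode.data.dir.perm</name>
      <value>755</value>
    </property>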






Re: .m2 repo messed up on hadoop6

2013-01-24 Thread Giridharan Kesavan
I just cleaned the ~/.m2 cache on hadoop6


-Giri


On Thu, Jan 24, 2013 at 1:17 PM, Aaron T. Myers a...@cloudera.com wrote:

 A few pre-commit builds have been failing recently with compile errors
 which I think are due to a bad jar in the /home/jenkins/.m2 repo on
 hadoop6. For example, both of these builds:


 https://builds.apache.org/view/G-L/view/Hadoop/job/PreCommit-HDFS-Build/3878/

 https://builds.apache.org/view/G-L/view/Hadoop/job/PreCommit-HDFS-Build/3879/

 Failed with this error:

 [ERROR] Failed to execute goal
 org.apache.maven.plugins:maven-compiler-plugin:2.5.1:compile
 (default-compile) on project hadoop-yarn-api: Compilation failure:
 Compilation failure:
 [ERROR] error: error reading

 /home/jenkins/.m2/repository/org/glassfish/grizzly/grizzly-framework/2.1.1/grizzly-framework-2.1.1.jar;
 error in opening zip file
 [ERROR] error: error reading

 /home/jenkins/.m2/repository/org/glassfish/grizzly/grizzly-rcm/2.1.1/grizzly-rcm-2.1.1.jar;
 error in opening zip file
 [ERROR] error: error reading

 /home/jenkins/.m2/repository/org/glassfish/grizzly/grizzly-framework/2.1.1/grizzly-framework-2.1.1-tests.jar;
 error in opening zip file

 Could someone with access to the build slaves please clear out
 /home/jenkins/.m2 on hadoop6? Alternatively, could I be given access
 to the build slave machines so I can fix issues like this in the
 future myself?

 Thanks a lot.

 --
 Aaron T. Myers
 Software Engineer, Cloudera
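
For what it's worth, a hedged sketch of how the corrupt jars could be
pinpointed before (or instead of) wiping the whole cache; this is not what was
actually run on hadoop6:

    # scan every cached jar; 'unzip -tq' exits non-zero on exactly the kind of
    # truncated/corrupt archive that produces "error in opening zip file"
    find ~/.m2/repository -name '*.jar' -print0 |
      while IFS= read -r -d '' jar; do
        unzip -tq "$jar" >/dev/null 2>&1 || echo "corrupt: $jar"
      done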



Re: Hadoop build slaves software

2013-01-07 Thread Giridharan Kesavan
I did install protoc on hadoop9 and brought it back online after testing it a
couple of hours back.


-Giri


On Mon, Jan 7, 2013 at 3:35 PM, Todd Lipcon t...@cloudera.com wrote:

 I'll install the right protoc and libstdc++ dev on asf009 as well.

 -Todd

 On Mon, Jan 7, 2013 at 9:57 AM, Andrew Wang andrew.w...@cloudera.com
 wrote:
  I think hadoop9 has a similar problem as hadoop8, based on a recent
 build.
  The javac output has a compile-proto error:
 
  https://builds.apache.org/job/PreCommit-HDFS-Build/3755/
 
 https://builds.apache.org/job/PreCommit-HDFS-Build/3755/artifact/trunk/patchprocess/trunkJavacWarnings.txt
 
 
  On Sun, Jan 6, 2013 at 1:57 AM, Binglin Chang decst...@gmail.com
 wrote:
 
   HAServiceProtocol.proto:21:8: Option java_generic_services unknown.
   This is probably caused by an older version of protoc in the build env.
 
 
  On Sun, Jan 6, 2013 at 2:12 PM, Giridharan Kesavan 
  gkesa...@hortonworks.com
   wrote:
 
   by looking at the failure log :
  
  
 
 https://builds.apache.org/view/Hadoop/job/PreCommit-HADOOP-Build/1950/artifact/trunk/patchprocess/trunkJavacWarnings.txt
   build failed on
  
   [INFO] --- exec-maven-plugin:1.2:exec (compile-proto) @ hadoop-common
 ---
  
   HAServiceProtocol.proto:21:8: Option java_generic_services unknown.
  
   I'm not sure if this is something to do with the build env.
  
   -Giri
  
  
   On Sat, Jan 5, 2013 at 5:57 PM, Binglin Chang decst...@gmail.com
  wrote:
  
I am not sure if this problem is solved, the build still failed in
precommit-HADOOP
https://builds.apache.org/view/Hadoop/job/PreCommit-HADOOP-Build/
   
   
On Sat, Jan 5, 2013 at 6:46 AM, Giridharan Kesavan 
gkesa...@hortonworks.com
 wrote:
   
 Marking the slave offline would do. I've marked the hadoop8 slave offline
 while I test it for builds, and will bring it back online later when it's good.


 -Giri


 On Fri, Jan 4, 2013 at 2:26 PM, Todd Lipcon t...@cloudera.com
  wrote:

  Turns out I had to both kill -9 it and chmod 000
  /home/jenkins/jenkins-slave in order to keep it from
  auto-respawning.
  Just a note so that once the toolchain is fixed, someone knows
 to
  re-chmod back to 755.
 
  -Todd
 
  On Fri, Jan 4, 2013 at 2:11 PM, Todd Lipcon t...@cloudera.com
   wrote:
   I'm going to kill -9 the jenkins slave on hadoop8 for now cuz
  it's
   causing havoc on the precommit builds. I can't see another
 way to
   administratively disable it from the Jenkins interface.
  
   Rajiv, Giri -- mind if I build/install protoc into /usr/local
 to
match
   the other slaves? We can continue the conversation about
   provisioning
   after, but would like to unblock the builds in the meantime.
  
   As for CentOS vs Ubuntu, I've got no preference. RHEL6 is
  probably
   preferable since it's a more common install platform, anyway.
  But,
   we'll still need to have a custom toolchain for things like
  protoc
2.4
   which don't have new enough versions in the package repos.
  
   -Todd
  
   On Fri, Jan 4, 2013 at 2:03 PM, Colin McCabe 
   cmcc...@alumni.cmu.edu

  wrote:
   In addition to protoc, can someone please also install a
 32-bit
   C++
  compiler?
  
   The builds are all failing on this machine because of that.
  
   regards,
   Colin
  
  
   On Fri, Jan 4, 2013 at 11:37 AM, Giridharan Kesavan
   gkesa...@hortonworks.com wrote:
   When I configured the other machines I used the source to
  compile
and
   install the protoc, as the 2.4.1 wasn't available in the
 ubuntu
repo.
  
   BTW installed 2.4.1 on asf008.
   gkesavan@asf008:~$ protoc --version
   libprotoc 2.4.1
  
  
   -Giri
  
  
   On Thu, Jan 3, 2013 at 11:24 PM, Todd Lipcon 
  t...@cloudera.com
  wrote:
  
   Hey folks,
  
   It looks like hadoop8 has recently come back online as a
 build
 slave,
   but is failing all the builds because it has an ancient
  version
   of
   protobuf (2.2.0):
   todd@asf008:~$ protoc  --version
   libprotoc 2.2.0
  
   In contrast, other slaves have 2.4.1:
   todd@asf001:~$ protoc --version
   libprotoc 2.4.1
  
   asf001 has the newer protoc in /usr/local/bin but asf008
 does
   not.
   Does anyone know how software is meant to be deployed on
 these
build
   slaves? I'm happy to download and install protobuf 2.4.1
 into
   /usr/local on asf008 if manual installation is the name of
 the
game,
   but it seems like we should be doing something a little
 more
   reproducible than one-off builds by rando developers to
 manage
   our
   toolchain on the Jenkins slaves.
  
   -Todd
   --
   Todd Lipcon
   Software Engineer, Cloudera
  
  
  
  
   --
   Todd Lipcon
   Software Engineer, Cloudera
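
For reference, a hedged sketch of the source install described in this thread.
The 2.4.1 version and the /usr/local prefix come from the messages above; the
download URL is an assumption about where the 2.4.1 tarball was hosted at the
time:

    wget http://protobuf.googlecode.com/files/protobuf-2.4.1.tar.gz
    tar xzf protobuf-2.4.1.tar.gz
    cd protobuf-2.4.1
    ./configure --prefix=/usr/local
    make && sudo make install
    protoc --version   # should now print: libprotoc 2.4.1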

Re: Hadoop build slaves software

2013-01-04 Thread Giridharan Kesavan
I think I installed protoc in /usr/local and this is what I see

gkesavan@asf008:~$ which protoc
/usr/local/bin/protoc


-Giri


On Fri, Jan 4, 2013 at 2:11 PM, Todd Lipcon t...@cloudera.com wrote:

 I'm going to kill -9 the jenkins slave on hadoop8 for now cuz it's
 causing havoc on the precommit builds. I can't see another way to
 administratively disable it from the Jenkins interface.

 Rajiv, Giri -- mind if I build/install protoc into /usr/local to match
 the other slaves? We can continue the conversation about provisioning
 after, but would like to unblock the builds in the meantime.

 As for CentOS vs Ubuntu, I've got no preference. RHEL6 is probably
 preferable since it's a more common install platform, anyway. But,
 we'll still need to have a custom toolchain for things like protoc 2.4
 which don't have new enough versions in the package repos.

 -Todd

 On Fri, Jan 4, 2013 at 2:03 PM, Colin McCabe cmcc...@alumni.cmu.edu
 wrote:
  In addition to protoc, can someone please also install a 32-bit C++
 compiler?
 
  The builds are all failing on this machine because of that.
 
  regards,
  Colin
 
 
  On Fri, Jan 4, 2013 at 11:37 AM, Giridharan Kesavan
  gkesa...@hortonworks.com wrote:
  When I configured the other machines I used the source to compile and
  install the protoc, as the 2.4.1 wasn't available in the ubuntu repo.
 
  BTW installed 2.4.1 on asf008.
  gkesavan@asf008:~$ protoc --version
  libprotoc 2.4.1
 
 
  -Giri
 
 
  On Thu, Jan 3, 2013 at 11:24 PM, Todd Lipcon t...@cloudera.com wrote:
 
  Hey folks,
 
  It looks like hadoop8 has recently come back online as a build slave,
  but is failing all the builds because it has an ancient version of
  protobuf (2.2.0):
  todd@asf008:~$ protoc  --version
  libprotoc 2.2.0
 
  In contrast, other slaves have 2.4.1:
  todd@asf001:~$ protoc --version
  libprotoc 2.4.1
 
  asf001 has the newer protoc in /usr/local/bin but asf008 does not.
  Does anyone know how software is meant to be deployed on these build
  slaves? I'm happy to download and install protobuf 2.4.1 into
  /usr/local on asf008 if manual installation is the name of the game,
  but it seems like we should be doing something a little more
  reproducible than one-off builds by rando developers to manage our
  toolchain on the Jenkins slaves.
 
  -Todd
  --
  Todd Lipcon
  Software Engineer, Cloudera
 



 --
 Todd Lipcon
 Software Engineer, Cloudera



Re: Hadoop build slaves software

2013-01-04 Thread Giridharan Kesavan
Marking the slave offline would do. I've marked the hadoop8 slave offline
while I test it for builds, and will bring it back online later when it's good.


-Giri


On Fri, Jan 4, 2013 at 2:26 PM, Todd Lipcon t...@cloudera.com wrote:

 Turns out I had to both kill -9 it and chmod 000
 /home/jenkins/jenkins-slave in order to keep it from auto-respawning.
 Just a note so that once the toolchain is fixed, someone knows to
 re-chmod back to 755.

 -Todd

 On Fri, Jan 4, 2013 at 2:11 PM, Todd Lipcon t...@cloudera.com wrote:
  I'm going to kill -9 the jenkins slave on hadoop8 for now cuz it's
  causing havoc on the precommit builds. I can't see another way to
  administratively disable it from the Jenkins interface.
 
  Rajiv, Giri -- mind if I build/install protoc into /usr/local to match
  the other slaves? We can continue the conversation about provisioning
  after, but would like to unblock the builds in the meantime.
 
  As for CentOS vs Ubuntu, I've got no preference. RHEL6 is probably
  preferable since it's a more common install platform, anyway. But,
  we'll still need to have a custom toolchain for things like protoc 2.4
  which don't have new enough versions in the package repos.
 
  -Todd
 
  On Fri, Jan 4, 2013 at 2:03 PM, Colin McCabe cmcc...@alumni.cmu.edu
 wrote:
  In addition to protoc, can someone please also install a 32-bit C++
 compiler?
 
  The builds are all failing on this machine because of that.
 
  regards,
  Colin
 
 
  On Fri, Jan 4, 2013 at 11:37 AM, Giridharan Kesavan
  gkesa...@hortonworks.com wrote:
  When I configured the other machines I used the source to compile and
  install the protoc, as the 2.4.1 wasn't available in the ubuntu repo.
 
  BTW installed 2.4.1 on asf008.
  gkesavan@asf008:~$ protoc --version
  libprotoc 2.4.1
 
 
  -Giri
 
 
  On Thu, Jan 3, 2013 at 11:24 PM, Todd Lipcon t...@cloudera.com
 wrote:
 
  Hey folks,
 
  It looks like hadoop8 has recently come back online as a build slave,
  but is failing all the builds because it has an ancient version of
  protobuf (2.2.0):
  todd@asf008:~$ protoc  --version
  libprotoc 2.2.0
 
  In contrast, other slaves have 2.4.1:
  todd@asf001:~$ protoc --version
  libprotoc 2.4.1
 
  asf001 has the newer protoc in /usr/local/bin but asf008 does not.
  Does anyone know how software is meant to be deployed on these build
  slaves? I'm happy to download and install protobuf 2.4.1 into
  /usr/local on asf008 if manual installation is the name of the game,
  but it seems like we should be doing something a little more
  reproducible than one-off builds by rando developers to manage our
  toolchain on the Jenkins slaves.
 
  -Todd
  --
  Todd Lipcon
  Software Engineer, Cloudera
 
 
 
 
  --
  Todd Lipcon
  Software Engineer, Cloudera



 --
 Todd Lipcon
 Software Engineer, Cloudera



Re: [VOTE] introduce Python as build-time and run-time dependency for Hadoop and throughout Hadoop stack

2012-11-26 Thread Giridharan Kesavan
+1, +1, +1

-Giri


On Sat, Nov 24, 2012 at 12:13 PM, Matt Foley ma...@apache.org wrote:

 For discussion, please see previous thread [PROPOSAL] introduce Python as
 build-time and run-time dependency for Hadoop and throughout Hadoop stack.

 This vote consists of three separate items:

 1. Contributors shall be allowed to use Python as a platform-independent
 scripting language for build-time tasks, and add Python as a build-time
 dependency.
 Please vote +1, 0, -1.

 2. Contributors shall be encouraged to use Maven tasks in combination with
 either plug-ins or Groovy scripts to do cross-platform build-time tasks,
 even under ant in Hadoop-1.
 Please vote +1, 0, -1.

 3. Contributors shall be allowed to use Python as a platform-independent
 scripting language for run-time tasks, and add Python as a run-time
 dependency.
 Please vote +1, 0, -1.

 Note that voting -1 on #1 and +1 on #2 essentially REQUIRES contributors to
 use Maven plug-ins or Groovy as the only means of cross-platform build-time
 tasks, or to simply continue using platform-dependent scripts as is being
 done today.

 Vote closes at 12:30pm PST on Saturday 1 December.
 -
 Personally, my vote is +1, +1, +1.
 I think #2 is preferable to #1, but still has many unknowns in it, and
 until those are worked out I don't want to delay moving to cross-platform
 scripts for build-time tasks.

 Best regards,
 --Matt



[jira] [Resolved] (HADOOP-9055) POM files for hadoop-core 1.x should depend on Jackson 1.8.8

2012-11-20 Thread Giridharan Kesavan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giridharan Kesavan resolved HADOOP-9055.


Resolution: Duplicate

duplicate of HADOOP-8745

 POM files for hadoop-core 1.x should depend on Jackson 1.8.8
 

 Key: HADOOP-9055
 URL: https://issues.apache.org/jira/browse/HADOOP-9055
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.0, 1.1.0
Reporter: Gunther Hagleitner
Assignee: Giridharan Kesavan

 According to this: https://issues.apache.org/jira/browse/HADOOP-7470
 Jackson has been upgraded to 1.8.8, but the POMs on the apache maven repo for 
 the hadoop 1.x line still specify 1.0.1 for the library. That's causing build 
 problems for hive (which uses 1.0.0 to build its 20S shim).
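
A hedged sketch of the kind of pin being asked for in the published pom; the
coordinates below are the standard Jackson 1.x ones, not copied from the
actual hadoop-core template:

    <dependency>
      <groupId>org.codehaus.jackson</groupId>
      <artifactId>jackson-mapper-asl</artifactId>
      <version>1.8.8</version>
    </dependency>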



[jira] [Created] (HADOOP-9071) configure ivy log levels for resolve/retrieve

2012-11-20 Thread Giridharan Kesavan (JIRA)
Giridharan Kesavan created HADOOP-9071:
--

 Summary: configure ivy log levels for resolve/retrieve
 Key: HADOOP-9071
 URL: https://issues.apache.org/jira/browse/HADOOP-9071
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 1.1.0
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan






Re: building hadoop under windows

2012-11-19 Thread Giridharan Kesavan
saveVersion.py is made available with
https://issues.apache.org/jira/browse/HADOOP-8924

-Giri


-- Forwarded message --
From: Radim Kolar h...@filez.com
Date: Mon, Nov 19, 2012 at 7:44 AM
Subject: building hadoop under windows
To: common-dev@hadoop.apache.org


saveVersion.sh prevents hadoop from building on windows

what about to rewrite this script into groovy and run it with maven groovy
plugin?


[jira] [Created] (HADOOP-9040) build task-controller binary unless windows

2012-11-14 Thread Giridharan Kesavan (JIRA)
Giridharan Kesavan created HADOOP-9040:
--

 Summary: build task-controller binary unless windows
 Key: HADOOP-9040
 URL: https://issues.apache.org/jira/browse/HADOOP-9040
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1-win
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan






[jira] [Resolved] (HADOOP-9017) fix hadoop-client-pom-template.xml and hadoop-client-pom-template.xml for version

2012-11-12 Thread Giridharan Kesavan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giridharan Kesavan resolved HADOOP-9017.


Resolution: Fixed

Thanks Matt, committed to branch-1.1 and merged to branch-1

 fix hadoop-client-pom-template.xml and hadoop-client-pom-template.xml for 
 version 
 --

 Key: HADOOP-9017
 URL: https://issues.apache.org/jira/browse/HADOOP-9017
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.0.4, 1.1.0
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan
 Fix For: 1.1.1, 1.2.0

 Attachments: HADOOP-9017.patch


 hadoop-client-pom-template.xml and hadoop-client-pom-template.xml reference 
 the project.version variable; instead they should refer to the @version token.



[jira] [Created] (HADOOP-9017) fix hadoop-client-pom-template.xml and hadoop-client-pom-template.xml for version

2012-11-07 Thread Giridharan Kesavan (JIRA)
Giridharan Kesavan created HADOOP-9017:
--

 Summary: fix hadoop-client-pom-template.xml and 
hadoop-client-pom-template.xml for version 
 Key: HADOOP-9017
 URL: https://issues.apache.org/jira/browse/HADOOP-9017
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.1.0, 1.0.4
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan


hadoop-client-pom-template.xml and hadoop-client-pom-template.xml reference the 
project.version variable; instead they should refer to the @version token.
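
Concretely, the template fix amounts to something like this (a hedged sketch;
the surrounding pom structure is omitted):

    <!-- before: a live maven expression, resolved at the wrong time -->
    <version>${project.version}</version>

    <!-- after: an inert token, substituted when the pom is published -->
    <version>@version</version>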



[jira] [Created] (HADOOP-8880) Missing jersey jars as dependency in the pom causes hive tests to fail

2012-10-03 Thread Giridharan Kesavan (JIRA)
Giridharan Kesavan created HADOOP-8880:
--

 Summary: Missing jersey jars as dependency in the pom causes hive 
tests to fail
 Key: HADOOP-8880
 URL: https://issues.apache.org/jira/browse/HADOOP-8880
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan


ivy.xml has the dependency included, whereas the same dependency is not updated 
in the pom template.



Re: Supporting cross-project Jenkins builds

2012-04-17 Thread Giridharan Kesavan
I agree with Aaron. It's going to increase the test-patch build timings
significantly, which may not be very helpful.

I'm -1 on this.

-Giri



On Mon, Apr 16, 2012 at 2:22 PM, Aaron T. Myers a...@cloudera.com wrote:
 On Mon, Apr 16, 2012 at 2:14 PM, Alejandro Abdelnur t...@cloudera.com wrote:

 * all testcases should always be run (else a change in hdfs could
 affect yarn/tools but not be detected, or one in yarn affect tools)


 I'm -0 on this suggestion. Yes, it's a nice benefit to check all of the
 dependent Hadoop sub-projects for every patch, but it will also
 dramatically increase the time test-patch takes to run for any given patch.
 In my experience, the vast majority of patches stand little chance of
 breaking the dependent sub-projects, making this largely unnecessary and
 thus a waste of time and Jenkins build slave resources.

 --
 Aaron T. Myers
 Software Engineer, Cloudera


Re: Supporting cross-project Jenkins builds

2012-04-17 Thread Giridharan Kesavan
Alejandro,

On Tue, Apr 17, 2012 at 4:52 PM, Alejandro Abdelnur t...@cloudera.com wrote:
 Giri,

 I agree that running ALL tests all the time takes a lot of time
 (personally I'd prefer we do this at the penalty of longer runs).

 Still we have a problem to solve: we need test-patch working for ALL maven
 modules. Currently it does not work for changes outside of common/hdfs/mapred
 or for cross-project changes.

 So, how about the following approach:

 * All patches must be at trunk/ level
 * All patches do a full clean TARBALL creation without running testcases
 * From the patch file we find out the maven modules and for those
 modules we do javac-warns/javadoc-warns/findbugs/testcases

I like this approach of doing a clean tarball.
and doing the other checks ( javac warnings, javadoc warnings, findbug
warnings and release audit.)
for that specific module.


 This would speed up test-patch runs, and together with a nightly
 jenkins job running ALL testcases it would give complete coverage.


test-patch and nightly jenkins jobs running ALL testcases?
Could you please explain this?

 Does this seem reasonable?

 Thxs.

 Alejandro

 On Tue, Apr 17, 2012 at 3:31 PM, Tom White t...@cloudera.com wrote:
 Giri,

 I think Aaron was talking about not running all test cases for changes
 to any project (e.g. HDFS and MapReduce). My proposal was to run all
 the tests for any Common change. An HDFS change would only run HDFS
 tests, and any MapReduce change would only run MapReduce tests.

 Another thing I didn't mention was that currently Jenkins doesn't run
 tests or apply patches for any changes in hadoop-tools, which would be
 fixed by the change I'm suggesting.

 Tom

 On Tue, Apr 17, 2012 at 3:17 PM, Giridharan Kesavan
 gkesa...@hortonworks.com wrote:
 I agree with Aaron. It's going to increase the test-patch build timings
 significantly, which may not be very helpful.

 I'm -1 on this.

 -Giri



 On Mon, Apr 16, 2012 at 2:22 PM, Aaron T. Myers a...@cloudera.com wrote:
 On Mon, Apr 16, 2012 at 2:14 PM, Alejandro Abdelnur t...@cloudera.com wrote:

 * all testcases should always be run (else a change in hdfs could
 affect yarn/tools but not be detected, or one in yarn affect tools)


 I'm -0 on this suggestion. Yes, it's a nice benefit to check all of the
 dependent Hadoop sub-projects for every patch, but it will also
 dramatically increase the time test-patch takes to run for any given patch.
 In my experience, the vast majority of patches stand little chance of
 breaking the dependent sub-projects, making this largely unnecessary and
 thus a waste of time and Jenkins build slave resources.

 --
 Aaron T. Myers
 Software Engineer, Cloudera



 --
 Alejandro
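
A crude, hedged sketch of the module-mapping step in this proposal; the real
test-patch logic is more involved, HADOOP-XXXX.patch is a placeholder name,
and svn-style diff paths (no a/ b/ prefixes) are assumed:

    # map the files touched by the patch to their top-level maven modules
    modules=$(grep '^+++ ' HADOOP-XXXX.patch | awk '{print $2}' |
              cut -d/ -f1-2 | sort -u)   # e.g. hadoop-hdfs-project/hadoop-hdfs
    # run the per-module checks only where the patch actually landed
    for m in $modules; do
      (cd "$m" && mvn test findbugs:findbugs)
    done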


Re: Unblocked Precommit build

2012-03-23 Thread Giridharan Kesavan
thanks for taking care of this Todd.
-Giri



On Fri, Mar 23, 2012 at 3:22 PM, Todd Lipcon t...@cloudera.com wrote:
 Recently a lot of the precommit builds haven't been triggering. I just
 figured out the reason why: the Precommit-Admin build uses a saved
 JIRA filter to look up the last 300 Patch Available tickets across
 all of the projects for which the precommit bot runs. But, that filter
 was set to order by key desc instead of order by updated desc.
 Since the ZooKeeper, MapReduce, etc projects have gotten a lot of
 patch available recently, the latter half of the issues under HADOOP
 fell below the paging threshold on the saved filter.

 I made a new copy of the filter, ordered by updated DESC instead, and
 switched over the Precommit-Admin to point to it. I triggered a new
 Precommit-Admin build and it looks like it fired off a bunch of
 precommits.

 You should expect a bunch of precommit results in the next few hours
 as it chugs through the backlog, and then back to normal.

 -Todd
 --
 Todd Lipcon
 Software Engineer, Cloudera
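
The fix boils down to a single clause in the saved filter's JQL; roughly the
following, where the status clause is an assumption and only the ORDER BY
change comes from this message:

    old:  status = "Patch Available" ORDER BY key DESC
    new:  status = "Patch Available" ORDER BY updated DESC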


[jira] [Created] (HADOOP-8201) create the configure script for native compilation as part of the build

2012-03-22 Thread Giridharan Kesavan (Created) (JIRA)
create the configure script for native compilation as part of the build
---

 Key: HADOOP-8201
 URL: https://issues.apache.org/jira/browse/HADOOP-8201
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 1.0.1, 1.0.0
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan


The configure script is checked into svn and is not regenerated during the 
build. Ideally the configure script should not be checked into svn; it should 
be generated during the build using autoreconf.
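
A hedged sketch of the regeneration step being proposed; the native source
directory is an assumption about the branch-1 layout:

    cd src/native
    autoreconf -i -f   # regenerate ./configure from the autoconf inputs
    ./configure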






[jira] [Created] (HADOOP-8090) rename hadoop 64 bit rpm/deb package name

2012-02-17 Thread Giridharan Kesavan (Created) (JIRA)
rename hadoop 64 bit rpm/deb package name
-

 Key: HADOOP-8090
 URL: https://issues.apache.org/jira/browse/HADOOP-8090
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan


change hadoop rpm/deb name from hadoop-version.amd64.rpm/deb to 
hadoop-version.x86_64.rpm/deb





Re: Apache Jenkins is down

2012-02-13 Thread Giridharan Kesavan
Please use the bui...@apache.org mailing list for Apache Jenkins. That is
where the apache Jenkins admins live.
-Giri

On Sun, Feb 12, 2012 at 12:33 PM, Mikhail Bautin
bautin.mailing.li...@gmail.com wrote:
 Hello Everyone,

 Does anyone know why Apache Jenkins has been down for about a day? We rely
 on the continuous integration server extensively to test HBase patches. Any
 help with this would be appreciated.

 Sincerely,
 --Mikhail



-- 
-Giri


Re: Hadoop QA machine configuration / getting a Jenkins account

2012-02-10 Thread Giridharan Kesavan
Mikhali,

gkesavan@minerva:~$ free -g
             total       used       free     shared    buffers     cached
Mem:             7          3          4          0          0          1
-/+ buffers/cache:          1          5
Swap:           22          0         21

On Fri, Feb 10, 2012 at 1:31 PM, Mikhail Bautin
bautin.mailing.li...@gmail.com wrote:
 Hi Giri,

 Thank you for your prompt response. I am trying to debug recent HBase unit
 test failures that only appear in Hadoop QA and are not easily reproducible
 locally, such as the following:

 https://builds.apache.org/job/HBase-TRUNK/2658/testReport/

 This is not Java heap memory which is configurable in HBase's pom.xml but
 native memory, suggesting that the machine might be rather memory
 constrained (e.g. 2 GB or so).

 Thanks!
 --Mikhail

 On Fri, Feb 10, 2012 at 1:25 PM, Giridharan Kesavan 
 gkesa...@hortonworks.com wrote:

 Mikhali,

 I can get you the memory details, May I know the reason please?

 Giri
 Apache Hudson Admin


 On Fri, Feb 10, 2012 at 1:20 PM, Mikhail Bautin
 bautin.mailing.li...@gmail.com wrote:
  Hello Hadoop Developers,
 
  A couple of questions about Hadoop QA:
  (1) Does anyone know what the configuration of Hadoop QA boxes is?
  Specifically, I am interested in the amount of memory available on these
  boxes.
  (2) How to get an account for https://builds.apache.org/?
 
  Thanks,
  --Mikhail



 --
 -Giri




-- 
-Giri


[jira] [Created] (HADOOP-7960) svn revision should be used to verify the version difference between hadoop services

2012-01-05 Thread Giridharan Kesavan (Created) (JIRA)
svn revision should be used to verify the version difference between hadoop 
services


 Key: HADOOP-7960
 URL: https://issues.apache.org/jira/browse/HADOOP-7960
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Giridharan Kesavan


hadoop services should not be using the build timestamp to verify version 
differences in the cluster installation. Instead they should use the svn 
revision or the git hash.
  





[jira] [Created] (HADOOP-7942) enabling clover coverage reports fails hadoop unit test compilation

2011-12-28 Thread Giridharan Kesavan (Created) (JIRA)
enabling clover coverage reports fails hadoop unit test compilation
---

 Key: HADOOP-7942
 URL: https://issues.apache.org/jira/browse/HADOOP-7942
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 1.1.0
 Environment: 
https://builds.apache.org/view/G-L/view/Hadoop/job/Hadoop-1-Code-Coverage/13/console

Reporter: Giridharan Kesavan


enabling clover reports fails compiling the following junit tests.
link to the console output of jenkins:
https://builds.apache.org/view/G-L/view/Hadoop/job/Hadoop-1-Code-Coverage/13/console



{noformat}
[javac] /tmp/clover50695626838999169.tmp/org/apache/hadoop/security/TestUserGroupInformation.java:224: cannot find symbol
..
[javac] /tmp/clover50695626838999169.tmp/org/apache/hadoop/security/TestUserGroupInformation.java:225: cannot find symbol
..
[javac] /tmp/clover50695626838999169.tmp/org/apache/hadoop/security/TestJobCredentials.java:67: cannot find symbol
[javac] symbol  : class T
..
[javac] /tmp/clover50695626838999169.tmp/org/apache/hadoop/security/TestJobCredentials.java:68: cannot find symbol
[javac] symbol  : class T
.
[javac] /tmp/clover50695626838999169.tmp/org/apache/hadoop/fs/TestFileSystem.java:653: cannot find symbol
[javac] symbol  : class T
.
[javac] ^
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] 5 errors
[javac] 63 warnings

{noformat}







[jira] [Created] (HADOOP-7885) fix datanode dir permission in hadoop-conf-setup.sh

2011-12-05 Thread Giridharan Kesavan (Created) (JIRA)
fix datanode dir permission in hadoop-conf-setup.sh
---

 Key: HADOOP-7885
 URL: https://issues.apache.org/jira/browse/HADOOP-7885
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.0, 1.1.0
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan


datanode dirs are created by the hadoop setup script and their permission is 
set to 700 by default. When dfs.datanode.data.dir.perm is set to a different 
permission in the hdfs-site then the dn fails with the following error:

2011-12-05 23:50:53,579 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid directory in dfs.data.dir: Incorrect permission for /var/lib/hadoop/hdfs/datanode, expected: rwxr-x---, while actual: rwx--
2011-12-05 23:50:53,581 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid directory in dfs.data.dir: Incorrect permission for /var/lib/hadoop/hdfs/datanode1, expected: rwxr-x---, while actual: rwx--
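
As a stopgap, the mismatch above can also be corrected by hand; a hedged
sketch using the paths from the error text, where 750 is the octal form of the
expected rwxr-x---:

    chmod 750 /var/lib/hadoop/hdfs/datanode /var/lib/hadoop/hdfs/datanode1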





Re: Publishing maven snapshots for 0.23?

2011-10-14 Thread giridharan kesavan

Todd,
I like the idea of setting up commit builds for the 23 branch; I can set 
this up.

-Giri

On 10/14/11 12:23 PM, Todd Lipcon wrote:

Looks like the Hadoop-*-Commit builds for trunk do a mvn deploy, but
not the 0.23 builds.

It seems we should either (a) add mvn deploy to the
Hadoop-*-0.23-Build targets (which run nightly), or (b) add a
cross-project 0.23-commit build which is triggered on any commit to
0.23 and does a mvn deploy across all the projects, without running
any tests.

Any preferences?

On Fri, Oct 14, 2011 at 12:18 PM, Todd Lipcon t...@cloudera.com wrote:

It seems that we're not publishing maven snapshots correctly. I'm not
entirely sure why, but if you look at:
https://repository.apache.org/content/groups/snapshots/org/apache/hadoop/hadoop-common/
you'll see the latest snapshots are from 9/14 or so.

Anyone have any idea what's going on here?

-Todd
--
Todd Lipcon
Software Engineer, Cloudera







--
-Giri
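
For context, the job step under discussion is essentially the following; a
hedged sketch, since the exact profiles and flags on the Jenkins jobs may
differ:

    mvn deploy -DskipTests   # publish SNAPSHOT artifacts without running tests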



Re: Publishing maven snapshots for 0.23?

2011-10-14 Thread giridharan kesavan


Okay, I've done the 0.23 commit setup.
Builds are up and running for common/hdfs and mapred:

https://builds.apache.org/view/G-L/view/Hadoop/job/Hadoop-Common-0.23-Commit/
https://builds.apache.org/view/G-L/view/Hadoop/job/Hadoop-Hdfs-0.23-Commit/
https://builds.apache.org/view/G-L/view/Hadoop/job/Hadoop-Mapreduce-0.23-Commit/

-Giri


On 10/14/11 12:53 PM, Todd Lipcon wrote:

On Fri, Oct 14, 2011 at 12:39 PM, giridharan kesavan
gkesa...@hortonworks.com  wrote:

Todd,
I like the idea of setting up commit builds for 23 branch; I can set this
up.

OK. Mind if I manually mvn deploy for now? Some HBase work is blocked on it.

-Todd


On 10/14/11 12:23 PM, Todd Lipcon wrote:

Looks like the Hadoop-*-Commit builds for trunk do a mvn deploy, but
not the 0.23 builds.

It seems we should either (a) add mvn deploy to the
Hadoop-*-0.23-Build targets (which run nightly), or (b) add a
cross-project 0.23-commit build which is triggered on any commit to
0.23 and does a mvn deploy across all the projects, without running
any tests.

Any preferences?

On Fri, Oct 14, 2011 at 12:18 PM, Todd Lipcon t...@cloudera.com wrote:

It seems that we're not publishing maven snapshots correctly. I'm not
entirely sure why, but if you look at:

https://repository.apache.org/content/groups/snapshots/org/apache/hadoop/hadoop-common/
you'll see the latest snapshots are from 9/14 or so.

Anyone have any idea what's going on here?

-Todd
--
Todd Lipcon
Software Engineer, Cloudera





--
-Giri








--
-Giri



Re: 0.23 trunk tars, will we be publishing 1 tar per component or a single tar? What about source tar?

2011-10-12 Thread giridharan kesavan

+1 for option 4


On 10/12/11 9:50 AM, Eric Yang wrote:

Option #4 is the most practical use case for making a release.  Bleeding-edge 
developers would prefer to mix and match different versions of hdfs and 
mapreduce. Hence, it may be good to release the single tarball for a release, 
but continue to support component tarballs for developers and rpm/deb 
packaging, in case someone wants to run hdfs + hbase but not mapreduce for a 
specialized application. Component-separated tarballs should continue to work 
for rpm/deb packaging.

regards,
Eric

On Oct 12, 2011, at 9:30 AM, Prashant Sharma wrote:


I support the idea of having 4 as an additional option.

On Wed, Oct 12, 2011 at 9:37 PM, Alejandro Abdelnur t...@cloudera.com wrote:

Currently common, hdfs and mapred create partial tars which are not usable
unless they are stitched together into a single tar.

With HADOOP-7642 the stitching happens as part of the build.

The build currently produces the following tars:

1* common TAR
2* hdfs (partial) TAR
3* mapreduce (partial) TAR
4* hadoop (full, the stitched one) TAR

#1 on its own does not run anything; #2 and #3 on their own don't run. #4
runs hdfs & mapreduce.

Questions:

Q1. Does it make sense to publish #1, #2 & #3? Or is #4 sufficient, and you
start the services you want (i.e. HBase would just use HDFS)?

Q2. And what about a source TAR: does it make sense to have a source TAR per
component, or a single TAR for the whole?


For simplicity (for the build system and for users) I'd prefer a single
binary TAR and a single source TAR.

Thanks.

Alejandro




--

Prashant Sharma
Pramati Technologies
Begumpet, Hyderabad.





--
-Giri



[jira] [Created] (HADOOP-7724) proxy user info should go to the core-site.xml

2011-10-06 Thread Giridharan Kesavan (Created) (JIRA)
proxy user info should go to the core-site.xml 
---

 Key: HADOOP-7724
 URL: https://issues.apache.org/jira/browse/HADOOP-7724
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.20.205.0
Reporter: Giridharan Kesavan


proxy user info should go to the core-site.xml instead of the hdfs-site.xml





[jira] [Created] (HADOOP-7691) hadoop deb pkg should take a diff group id

2011-09-28 Thread Giridharan Kesavan (Created) (JIRA)
hadoop deb pkg should take a diff group id
--

 Key: HADOOP-7691
 URL: https://issues.apache.org/jira/browse/HADOOP-7691
 Project: Hadoop Common
  Issue Type: Bug
 Environment: ubuntu - 11.04
Reporter: Giridharan Kesavan


ubuntu - 11.04 is using group id 114 for gdm.
hadoop deb pkg should pick up a different groupid.





[jira] [Created] (HADOOP-7686) update hadoop rpm package to create symlink to libhadoop.so lib

2011-09-27 Thread Giridharan Kesavan (Created) (JIRA)
update hadoop rpm package to create symlink to libhadoop.so lib
---

 Key: HADOOP-7686
 URL: https://issues.apache.org/jira/browse/HADOOP-7686
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.20.205.0, 0.20.206.0
Reporter: Giridharan Kesavan


rpm installation of hadoop doesn't seem to create a symlink to libhadoop.so
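
A hedged sketch of the missing link being described; the native-lib path below
is an assumption based on the branch-1 layout, not taken from the report:

    ln -s /usr/lib/hadoop/lib/native/Linux-amd64-64/libhadoop.so.1.0.0 \
          /usr/lib/hadoop/lib/native/Linux-amd64-64/libhadoop.so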





[jira] [Created] (HADOOP-7676) add rules to the core-site.xml template

2011-09-23 Thread Giridharan Kesavan (JIRA)
add rules to the core-site.xml template
---

 Key: HADOOP-7676
 URL: https://issues.apache.org/jira/browse/HADOOP-7676
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.20.205.0
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan


add rules for master and region in core-site.xml template.





Re: Trunk and 0.23 builds.

2011-09-14 Thread Giridharan Kesavan
If you look further down, the build is also configured to run tests:
$MAVEN_HOME/bin/mvn test -Dmaven.test.failure.ignore=true -Pclover
-DcloverLicenseLocation=/home/jenkins/tools/clover/latest/lib/clover.license
 > clover.log 2>&1

mvn clean install -DskipTests is run at the root level to get the
latest hdfs dependencies installed in the mvn cache,
and mvn test is executed inside hadoop-hdfs-project.

-Giri



On Wed, Sep 14, 2011 at 8:14 AM, Eric Caspole eric.casp...@amd.com wrote:
 I noticed that even the jenkins build does -DskipTests. Is this because
 there are too many failures, or does it simply take too long?

 https://builds.apache.org/view/G-L/view/Hadoop/job/Hadoop-Hdfs-trunk/799/consoleFull

 /home/jenkins/tools/maven/latest/bin/mvn clean install -DskipTests

 When I tried it over the weekend, the hdfs tests in the normal maven build
 took over 3 hours. Is that normal, or do I maybe have something wrong in my
 setup?
 Thanks for any advice.
 Eric


 On Sep 13, 2011, at 4:41 PM, Mahadev Konar wrote:

 Hi all,
  After quite a bit of help from Giri, I was able to set up nightly builds
 on 0.23 and fix issues on trunk builds (looks like the hdfs trunk build was
 broken for a long time).  Note that I have enabled artifact publishing
 (tarballs) on the nighties for both trunk and 0.23 builds. In case you want
 to download the latest successful artifacts you can use:


 https://builds.apache.org/view/G-L/view/Hadoop/job/${BUILD_NAME}/lastSuccessfulBuild/artifact/trunk/${BUILD_ARTIFACT}

 eg:

 https://builds.apache.org/view/G-L/view/Hadoop/job/Hadoop-Common-trunk/lastSuccessfulBuild/artifact/trunk/hadoop-common-project/hadoop-common/target/hadoop-common-0.24.0-SNAPSHOT.tar.gz

 Here are the links for trunk and 0.23 builds:

 Common:
 https://builds.apache.org/view/G-L/view/Hadoop/job/Hadoop-Common-trunk/

 https://builds.apache.org/view/G-L/view/Hadoop/job/Hadoop-Common-0.23-Build/

 HDFS (still running, should be working by EOD):

 https://builds.apache.org/view/G-L/view/Hadoop/job/Hadoop-Hdfs-trunk/
 https://builds.apache.org/view/G-L/view/Hadoop/job/Hadoop-Hdfs-0.23-Build/

 MapReduce:
 https://builds.apache.org/view/G-L/view/Hadoop/job/Hadoop-Mapreduce-trunk/

 https://builds.apache.org/view/G-L/view/Hadoop/job/Hadoop-Mapreduce-0.23-Build/


 thanks
 mahadev







-- 
-Giri


[jira] [Created] (HADOOP-7517) hadoop common build fails creating docs

2011-08-05 Thread Giridharan Kesavan (JIRA)
hadoop common build fails creating docs
---

 Key: HADOOP-7517
 URL: https://issues.apache.org/jira/browse/HADOOP-7517
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Giridharan Kesavan


Post the hadoop-6671 merge, executing the following command fails on creating 
docs:
$MAVEN_HOME/bin/mvn clean verify checkstyle:checkstyle findbugs:findbugs 
-DskipTests -Pbintar -Psrc -Pnative -Pdocs

{noformat}
Main:
[mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Common-trunk-maven/trunk/hadoop-common/target/docs-src
 [copy] Copying 33 files to /home/jenkins/jenkins-slave/workspace/Hadoop-Common-trunk-maven/trunk/hadoop-common/target/docs-src
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 1:33.807s
[INFO] Finished at: Fri Aug 05 08:50:43 UTC 2011
[INFO] Final Memory: 35M/462M
[INFO] 
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (site) on project hadoop-common: An Ant BuildException has occured: Execute failed: java.io.IOException: Cannot run program /home/hudson/tools/forrest/latest/bin/forrest (in directory /home/jenkins/jenkins-slave/workspace/Hadoop-Common-trunk-maven/trunk/hadoop-common/target/docs-src): java.io.IOException: error=2, No such file or directory - [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[INFO] Scanning for projects...
{noformat}





Re: yahoo.net build machines

2011-07-31 Thread Giridharan Kesavan
Yahoo Ops is working on getting the machines online.

Thanks
Giri

On Sat, Jul 30, 2011 at 3:08 AM, Laxman lakshman...@huawei.com wrote:

 Hi all, any update on this?

 -Original Message-
 From: Thomas Graves [mailto:tgra...@yahoo-inc.com]
 Sent: Thursday, July 28, 2011 7:56 PM
 To: common-dev@hadoop.apache.org; Todd Lipcon
 Subject: Re: yahoo.net build machines

 It's being looked into.

 Tom


 On 7/28/11 12:14 AM, Todd Lipcon t...@cloudera.com wrote:

  Hi all,
 
  Looks like the Hudson build slaves in yahoo.net have gone down as of
 some
  time today. https://builds.apache.org/computer/
 
  Is someone working on getting these back online?
  -Todd




Re: [VOTE] Release 0.20.204.0-rc0

2011-07-28 Thread Giridharan Kesavan
Eric Yang and I are looking into this.
-Giri

On 7/28/11 12:04 PM, Allen Wittenauer awittena...@linkedin.com wrote:

 
 On Jul 25, 2011, at 7:05 PM, Owen O'Malley wrote:
 
 I've created a release candidate for 0.20.204.0 that I would like to release.
 
 It is available at: http://people.apache.org/~omalley/hadoop-0.20.204.0-rc0/
 
 0.20.204.0 has many fixes including disk fail in place and the new rpm and
 deb packages. Fail in place allows the DataNode and TaskTracker to continue
 after a hard drive fails.
 
 
 Is it still failing to build according to Jenkins?
 



[jira] [Created] (HADOOP-7400) HdfsProxyTests fails when the -Dtest.build.dir and -Dbuild.test is set

2011-06-16 Thread Giridharan Kesavan (JIRA)
HdfsProxyTests fails when the -Dtest.build.dir and -Dbuild.test is set 
---

 Key: HADOOP-7400
 URL: https://issues.apache.org/jira/browse/HADOOP-7400
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 0.20.206.0
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan


HdfsProxyTests fail when -Dtest.build.dir and -Dbuild.test are set to a dir 
other than the build dir

test-junit:
 [copy] Copying 1 file to /home/y/var/builds/thread2/workspace/Cloud-Hadoop-0.20.1xx-Secondary/src/contrib/hdfsproxy/src/test/resources/proxy-config
[junit] Running org.apache.hadoop.hdfsproxy.TestHdfsProxy
[junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0 sec
[junit] Test org.apache.hadoop.hdfsproxy.TestHdfsProxy FAILED
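
A hedged sketch of the failing invocation being described; test-junit is the
target shown in the log above, while the property values are purely
illustrative:

    ant test-junit -Dtest.build.dir=/tmp/alt-build -Dbuild.test=/tmp/alt-build/test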





[jira] [Resolved] (HADOOP-7400) HdfsProxyTests fails when the -Dtest.build.dir and -Dbuild.test is set

2011-06-16 Thread Giridharan Kesavan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giridharan Kesavan resolved HADOOP-7400.


   Resolution: Fixed
Fix Version/s: 0.20.206.0

 HdfsProxyTests fails when the -Dtest.build.dir and -Dbuild.test is set 
 ---

 Key: HADOOP-7400
 URL: https://issues.apache.org/jira/browse/HADOOP-7400
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 0.20.206.0
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan
 Fix For: 0.20.206.0

 Attachments: HADOOP-7400.patch, HADOOP-7400.patch


 HdfsProxyTests fails when the -Dtest.build.dir and -Dbuild.test properties are 
 set to a directory other than the build directory.
 test-junit:
  [copy] Copying 1 file to 
 /home/y/var/builds/thread2/workspace/Cloud-Hadoop-0.20.1xx-Secondary/src/contrib/hdfsproxy/src/test/resources/proxy-config
 [junit] Running org.apache.hadoop.hdfsproxy.TestHdfsProxy
 [junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0 sec
 [junit] Test org.apache.hadoop.hdfsproxy.TestHdfsProxy FAILED

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Resolved: (HADOOP-7152) patch command not found on Hudson slave machines

2011-02-25 Thread Giridharan Kesavan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giridharan Kesavan resolved HADOOP-7152.


Resolution: Fixed

 patch command not found on Hudson slave machines
 --

 Key: HADOOP-7152
 URL: https://issues.apache.org/jira/browse/HADOOP-7152
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ranjit Mathew
Assignee: Giridharan Kesavan

 As seen in HDFS-1418, at least one of the Hudson slave machines does not have 
 the patch command installed. The error is:
   [exec] 
 /grid/0/hudson/hudson-slave/workspace/PreCommit-HDFS-Build/trunk/src/test/bin/test-patch.sh:
  line 275: /usr/bin/patch: No such file or directory
 From https://hudson.apache.org/hudson/job/PreCommit-HDFS-Build/207//console
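
A minimal pre-flight check a slave could run before test-patch.sh, sketched 
here as an assumption rather than as part of the actual script (package names 
vary by platform):

  command -v patch || { echo "patch is not installed (e.g. yum install patch)"; exit 1; }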

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Created: (HADOOP-7007) update the hudson-test-patch target to work with the latest test-patch script.

2010-10-25 Thread Giridharan Kesavan (JIRA)
update the hudson-test-patch target to work with the latest test-patch script.
--

 Key: HADOOP-7007
 URL: https://issues.apache.org/jira/browse/HADOOP-7007
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Giridharan Kesavan


The hudson-test-patch target has to be updated to work with the current 
test-patch.sh script, since the callback logic in test-patch.sh was removed by 
HADOOP-7005.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HADOOP-7007) update the hudson-test-patch target to work with the latest test-patch script.

2010-10-25 Thread Giridharan Kesavan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giridharan Kesavan resolved HADOOP-7007.


   Resolution: Fixed
Fix Version/s: 0.22.0

Patch committed.

 update the hudson-test-patch target to work with the latest test-patch script.
 --

 Key: HADOOP-7007
 URL: https://issues.apache.org/jira/browse/HADOOP-7007
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 0.22.0
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan
 Fix For: 0.22.0

 Attachments: HADOOP-7007-build.xml.patch, 
 HADOOP-7007-test-patch.patch, HADOOP-7007.patch


 The hudson-test-patch target has to be updated to work with the current 
 test-patch.sh script, since the callback logic in test-patch.sh was removed 
 by HADOOP-7005.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HADOOP-7003) Fix hadoop patch testing using jira_cli tool

2010-10-20 Thread Giridharan Kesavan (JIRA)
Fix hadoop patch testing using jira_cli tool


 Key: HADOOP-7003
 URL: https://issues.apache.org/jira/browse/HADOOP-7003
 Project: Hadoop Common
  Issue Type: New Feature
  Components: build
Reporter: Giridharan Kesavan




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HADOOP-6705) jiracli fails to upload test-patch comments to jira

2010-04-16 Thread Giridharan Kesavan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giridharan Kesavan resolved HADOOP-6705.


Fix Version/s: 0.22.0
   Resolution: Fixed

 jiracli fails to upload test-patch comments to jira
 ---

 Key: HADOOP-6705
 URL: https://issues.apache.org/jira/browse/HADOOP-6705
 Project: Hadoop Common
  Issue Type: Test
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan
 Fix For: 0.22.0

 Attachments: HADOOP-6705.PATCH


  [exec] ==
  [exec] Adding comment to Jira.
  [exec] ==
  [exec] ==
  [exec] 
  [exec] 
  [exec] Failed to connect to: http://issues.apache.org/jira/rpc/soap/jirasoapservice-v2?wsdl
  [exec] Failed to connect to: http://issues.apache.org/jira/rpc/soap/jirasoapservice-v2?wsdl
  [exec] Failed to connect to: http://issues.apache.org/jira/rpc/soap/jirasoapservice-v2?wsdl
  [exec]   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
https://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Created: (HADOOP-6697) script that would let us check out code from different repos

2010-04-09 Thread Giridharan Kesavan (JIRA)
script that would let us check out code from different repos
--

 Key: HADOOP-6697
 URL: https://issues.apache.org/jira/browse/HADOOP-6697
 Project: Hadoop Common
  Issue Type: Test
  Components: build
Reporter: Giridharan Kesavan


To write a shell script that would let us check out code from two different 
repositories where we can't use svn:externals.
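
A rough sketch of the idea with illustrative repository URLs; the issue does 
not specify the actual script or layout:

  #!/bin/sh
  # check out two trees side by side where svn:externals cannot be used
  svn checkout https://svn.apache.org/repos/asf/hadoop/common/trunk common
  svn checkout https://svn.apache.org/repos/asf/hadoop/hdfs/trunk hdfs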

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HADOOP-6671) To use maven for hadoop common builds

2010-04-01 Thread Giridharan Kesavan (JIRA)
To use maven for hadoop common builds
-

 Key: HADOOP-6671
 URL: https://issues.apache.org/jira/browse/HADOOP-6671
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 0.22.0
Reporter: Giridharan Kesavan


We are now able to publish hadoop artifacts to the maven repo successfully 
[HADOOP-6382].
Drawbacks of the current approach:
* ivy is used for dependency management, via ivy.xml
* maven-ant-task is used for publishing artifacts to the maven repository
* pom files are not generated dynamically

To address this I propose we use maven to build hadoop-common, which would help 
us manage dependencies, publish artifacts, and have one single XML file (POM) 
for dependency management and artifact publishing.

I would like to have a branch created to work on mavenizing hadoop common.


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[VOTE] HADOOP-6671 - To use maven for hadoop common build

2010-04-01 Thread Giridharan Kesavan

I would like to call for a vote to create a development branch of common
trunk to work on mavenizing hadoop common.

-Giri


[jira] Resolved: (HADOOP-5792) to resolve jsp-2.1 jars through IVY

2010-03-04 Thread Giridharan Kesavan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giridharan Kesavan resolved HADOOP-5792.


Resolution: Invalid

this jira is not valid anymore

 to resolve jsp-2.1 jars through IVY
 ---

 Key: HADOOP-5792
 URL: https://issues.apache.org/jira/browse/HADOOP-5792
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan
 Fix For: 0.21.0

 Attachments: Hadoop-5792.patch, Hadoop-5792.patch, Hadoop-5792.patch




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: Deploying source/javadoc jars alongside artifacts?

2010-01-19 Thread Giridharan Kesavan

At the moment I'm working on publishing the signed binary artifacts to the 
staging repo.
There is no discussion yet on publishing javadoc or src artifacts to the 
Apache repo.

-Giri

On 20/01/10 2:33 AM, Drew Farris drew.far...@gmail.com wrote:

Hi All,

Is anyone working towards deploying source and javadoc jars for
hadoop-core alongside the binary snapshots currently deployed to
repository.apache.org? Is there any interest in a patch to provide
this capability?

Drew



Publishing hadoop artifacts - Apache Nexus Repo

2009-11-10 Thread Giridharan Kesavan

The Hadoop-Common-trunk-Commit and Hadoop-Hdfs-trunk-Commit jobs on hudson are 
configured to publish the core, core-test, hdfs and hdfs-test jars respectively 
to the apache nexus snapshot repository.

This means hdfs will always be built with the latest published common jars 
available in the apache nexus snapshot repo.

Thanks,
Giri


[jira] Created: (HADOOP-6362) parameterize mvn-deploy to publish artifacts to snapshots and staging

2009-11-03 Thread Giridharan Kesavan (JIRA)
parameterize mvn-deploy  to publish artifacts to snapshots and staging 
---

 Key: HADOOP-6362
 URL: https://issues.apache.org/jira/browse/HADOOP-6362
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 0.22.0
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan


Should be able to override the default snapshot repository through ant command 
line by passing -Drepo=staging 
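
A hedged sketch of the proposed usage; mvn-deploy is the existing ant target, 
and the -Drepo property is what this issue adds:

  ant mvn-deploy                 # publish to the default snapshot repository
  ant mvn-deploy -Drepo=staging  # override: publish to the staging repository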

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



RE: how to apply patch to hadoop

2009-09-03 Thread Giridharan Kesavan
It's the same as applying a patch on Linux; 
if you want to roll back, you can do svn revert.

tnx!
-G

From: Starry SHI [starr...@gmail.com]
Sent: Friday, September 04, 2009 7:41 AM
To: core-...@hadoop.apache.org
Subject: how to apply patch to hadoop

Hi all.

I am new to hadoop and I would like to ask how to apply patches to hadoop?
Is applying patches the same as diff and patch in Linux?

If not, could somebody tell me how to apply the patches to hadoop? I am not
very clear on how to do it.

Moreover, if I apply a patch, is it possible to roll back to the previous
version? How do I do that?

Thank you very much for your attention and help!

Best regards,
Starry
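
A minimal sketch of the workflow described in this thread, assuming a 
Subversion checkout of trunk and a patch generated from the source root; the 
patch file name is a placeholder:

  cd hadoop-trunk
  patch -p0 --dry-run < HADOOP-XXXX.patch  # test the patch without touching files
  patch -p0 < HADOOP-XXXX.patch            # apply, exactly as with diff/patch on Linux
  svn revert -R .                          # roll back to the pristine checkout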

[jira] Resolved: (HADOOP-6077) test-patch target does not validate that forrest docs are built correctly

2009-08-31 Thread Giridharan Kesavan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giridharan Kesavan resolved HADOOP-6077.


Resolution: Duplicate

marking it as duplicate.

 test-patch target does not validate that forrest docs are built correctly
 -

 Key: HADOOP-6077
 URL: https://issues.apache.org/jira/browse/HADOOP-6077
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Hemanth Yamijala
Assignee: Giridharan Kesavan

 The test-patch target does not explicitly check if the new patch breaks 
 forrest documentation. It actually runs the 'tar' target while checking for 
 javac warnings, but does not seem to validate if the target ran successfully 
 or not.
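
A hedged sketch of the missing validation, not the actual test-patch code: 
check the exit status of the docs-building target instead of only scanning its 
output for javac warnings.

  ant tar > /tmp/tar-build.log 2>&1
  if [ $? -ne 0 ]; then
    echo "-1 the patch appears to break the forrest documentation build"
  fi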

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



trunk commit build

2009-08-30 Thread Giridharan Kesavan

We have a hudson job which is configured to watch common svn-trunk to run 
commit builds.
Every commit to trunk triggers a build
http://hudson.zones.apache.org/hudson/view/Common/job/Hadoop-Common-trunk-Commit/

This also means that we don't have to wait until the nightly build to see the 
commit changes.

Thanks,
Giri




[jira] Created: (HADOOP-6195) checkstyle target fails common trunk build.

2009-08-14 Thread Giridharan Kesavan (JIRA)
checkstyle target fails common trunk build.
---

 Key: HADOOP-6195
 URL: https://issues.apache.org/jira/browse/HADOOP-6195
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Giridharan Kesavan




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HADOOP-6182) Adding Apache License Headers and reduce releaseaudit warnings to zero

2009-08-09 Thread Giridharan Kesavan (JIRA)
Adding Apache License Headers and reduce releaseaudit warnings to zero
--

 Key: HADOOP-6182
 URL: https://issues.apache.org/jira/browse/HADOOP-6182
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Giridharan Kesavan


As of now the rat tool shows 111 release audit warnings:

[rat:report] Summary
[rat:report] ---
[rat:report] Notes: 18
[rat:report] Binaries: 118
[rat:report] Archives: 33
[rat:report] Standards: 942
[rat:report] 
[rat:report] Apache Licensed: 820
[rat:report] Generated Documents: 11
[rat:report] 
[rat:report] JavaDocs are generated and so license header is optional
[rat:report] Generated files do not require license headers
[rat:report] 
[rat:report] 111 Unknown Licenses
[rat:report] 
[rat:report] ***
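
For reference, a hedged sketch of regenerating this report, assuming the 
releaseaudit ant target of that era wraps the rat tool:

  ant releaseaudit | grep "Unknown Licenses"   # the goal is for this to report 0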

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



build failures on hudson zones

2009-08-05 Thread Giridharan Kesavan
Builds on hudson.zones are failing as the zone storage for hudson is full.
I've sent an email to the ASF infra team about the space issues on the hudson 
zones.

Once the issue is resolved I will restart hudson for builds.

Thanks,
Giri




RE: Developing cross-component patches post-split

2009-07-16 Thread Giridharan Kesavan

Based on the discussions, the first version of the patch is uploaded to 
jira HADOOP-5107.
This patch can be used for publishing hadoop artifacts to, and resolving them 
from, a repository.


1) Publishing/resolving common/hdfs/mapred artifacts to/from the local 
filesystem.

ant ivy-publish-local would publish the jars locally to ${ivy.repo.dir}, which 
defaults to ${user.home}/ivyrepo.
ant -Dresolver=local would resolve artifacts from the local filesystem, i.e. 
from ${user.home}/ivyrepo.

2) Publishing artifacts to people.apache.org 

An ssh resolver is configured that publishes common/hdfs/mapred artifacts to my 
home folder /home/gkesavan/ivyrepo.

Publishing requires authentication, whereas resolving requires passing a 
-Dusername argument with a value.

The reason I'm using my home folder is that I'm not sure if we can publish the 
ivy artifacts to 
http://people.apache.org/repository or http://people.apache.org/repo/ (used 
mostly for maven artifacts)

If someone can tell me about using the people's repository, I can recreate the 
patch to publish ivy artifacts to the people server's standard repository.

Thanks,
Giri
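
A minimal sketch of the publish/resolve cycle described above; the target and 
property names come from the patch description, while the checkout layout and 
the jar target are assumed:

  cd common && ant ivy-publish-local      # publish common jars to ${user.home}/ivyrepo
  cd ../hdfs && ant -Dresolver=local jar  # build hdfs against the locally published common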

 -Original Message-
 From: Scott Carey [mailto:sc...@richrelevance.com]
 Sent: Thursday, July 02, 2009 10:32 PM
 To: common-dev@hadoop.apache.org
 Subject: Re: Developing cross-component patches post-split
 
 
 On 7/1/09 11:58 PM, Nigel Daley nda...@yahoo-inc.com wrote:
 
 
 
  On Jul 1, 2009, at 10:16 PM, Todd Lipcon wrote:
 
  On Wed, Jul 1, 2009 at 10:10 PM, Raghu Angadi rang...@yahoo-
  inc.com wrote:
 
 
  -1 for committing the jar.
 
  Most of the various options proposed sound certainly better.
 
  Can build.xml be updated such that Ivy fetches recent (nightly)
  build?
 
  +1.  Using ant command line parameters for Ivy, the hdfs and
 mapreduce
  builds can depend on the latest Common build from one of:
  a) a local filesystem ivy repo/directory (ie. a developer build of
  Common that is published automatically to local fs ivy directory)
  b) a maven repo (ie. a stable published signed release of Common)
  c) a URL
 
 
 The standard approach to this problem is the above -- a local file system
 repository, with local developer build output, and a shared repository with
 build-system blessed content.
 A developer can choose which to use based on their needs.
 
 For ease of use, there is always a way to trigger the dependency chain for a
 full build.  Typically with Java this is a master ant script or a maven
 POM.  The build system must either know to build all at once with the proper
 dependency order, or versions are decoupled and dependency changes happen
 only when manually triggered (e.g. Hdfs at revision  uses common 9000,
 and then a check-in pushes hdfs 1 to use a new common version).
 Checking in Jars is usually very frowned upon.  Rather, metadata is checked
 in -- the revision number and branch that can create the jar, and the jar
 can be fetched from a repository or built with that metadata.
 
 AFAICS those are the only two options -- tight coupling, or strict
 separation.  The latter means that changes to common aren't picked up by
 hdfs or mapreduce until the dependent version is incremented in the metadata
 (harder and more restrictive to devs), and the former means that all are
 essentially the same coupled version (more complicated on the build system
 side but easy for devs).
 Developers can span both worlds, but the build system has to pick only one.
 
 
  Option c can be a stable URL to that last successful Hudson build and
  is in fact what all the Hudson hdfs and mapreduce builds could be
  configured to use.  An example URL would be something like:
 
  http://hudson.zones.apache.org/hudson/job/Hadoop-Common-trunk/lastSuccessfulBuild/artifact/
  ...
 
  Giri is creating a patch for this and will respond with more insight
  on how this might work.
 
  This seems slightly better than actually committing the jars. However, what
  should we do when the nightly build has failed hudson tests? We seem to
  sometimes go weeks at a time without a green build out of Hudson.
 
  Hudson creates a lastSuccessfulBuild link that should be used in
  most cases (see my example above).  If Common builds are failing we
  need to respond immediately.  Same for other sub-projects.  We've got
  to drop this culture that allows failing/flaky unit tests to persist.
 
 
  HDFS could have a build target that builds common jar from a
  specified
  source location for common.
 
 
  This is still my preferred option. Whether it does this with a
  javac task
  or with some kind of subant or even exec, I think having the
  source
  trees loosely tied together for developers is a must.
 
  -1.  If folks really want this, then let's revert the project split.
 :-o
 
  Nige
 
 



[jira] Created: (HADOOP-6137) to fix project specific test-patch requirements

2009-07-09 Thread Giridharan Kesavan (JIRA)
to fix project specific test-patch requirements 


 Key: HADOOP-6137
 URL: https://issues.apache.org/jira/browse/HADOOP-6137
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Giridharan Kesavan
Priority: Critical


Only the mapreduce project needs the create-c++-configure target, which needs 
to be executed as part of the test-core ant target.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.