[jira] [Resolved] (HDFS-9553) unit tests are leaving files undeletable by jenkins in target dir

2015-12-12 Thread Giridharan Kesavan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giridharan Kesavan resolved HDFS-9553.
--
Resolution: Fixed

> unit tests are leaving files undeletable by jenkins in target dir
> -
>
> Key: HDFS-9553
> URL: https://issues.apache.org/jira/browse/HDFS-9553
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>        Assignee: Giridharan Kesavan
>Priority: Blocker
>
> Once again we have a 'stuck' jenkins slave because a unit test is leaving 
> files around that git clean can't remove:
> From https://builds.apache.org/job/PreCommit-HDFS-Build/13851/console:
> {code}
> stderr: warning: failed to remove 
> hadoop-hdfs-project/hadoop-hdfs/target/test/data/2
> {code}
> The last time this happened: INFRA-10785
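A minimal local reproduction of this failure mode, assuming the usual cause (a test strips write permission from a directory it creates, so the jenkins user cannot delete it afterwards):

```shell
# Sketch: reproduce the "undeletable" state locally, as a non-root user.
# The assumed cause is a test removing write permission from a directory
# it created, which makes 'git clean -xdf' / 'rm -rf' fail to remove it.
mkdir -p target/test/data/2
touch target/test/data/2/leftover
chmod a-w target/test/data/2            # parent dir no longer writable
rm -rf target 2>/dev/null || true       # fails for non-root users
# The unblocking step: restore write permission, then remove.
if [ -d target ]; then chmod -R u+w target; fi
rm -rf target
echo "target removed: $([ ! -d target ] && echo yes || echo no)"
```

The real fix belongs in the test itself, but restoring permissions before cleanup is what unsticks the slave.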



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: builds failing on H9 with cannot access java.lang.Runnable

2014-10-03 Thread Giridharan Kesavan
All the slaves are getting rebooted; give it some more time.

-giri

On Fri, Oct 3, 2014 at 1:13 PM, Ted Yu yuzhih...@gmail.com wrote:

 Adding builds@

 On Fri, Oct 3, 2014 at 1:07 PM, Colin McCabe cmcc...@alumni.cmu.edu
 wrote:

  It looks like builds are failing on the H9 host with "cannot access
  java.lang.Runnable".
 
  Example from
 
 https://builds.apache.org/job/PreCommit-HDFS-Build/8313/artifact/patchprocess/trunkJavacWarnings.txt
  :
 
  [INFO]
  
  [INFO] BUILD FAILURE
  [INFO]
  
  [INFO] Total time: 03:13 min
  [INFO] Finished at: 2014-10-03T18:04:35+00:00
  [INFO] Final Memory: 57M/839M
  [INFO]
  
  [ERROR] Failed to execute goal
  org.apache.maven.plugins:maven-compiler-plugin:2.5.1:testCompile
  (default-testCompile) on project hadoop-mapreduce-client-app:
  Compilation failure
  [ERROR]
 
 /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/commit/TestCommitterEventHandler.java:[189,-1]
  cannot access java.lang.Runnable
  [ERROR] bad class file:
 java/lang/Runnable.class(java/lang:Runnable.class)
 
   I don't have shell access to this; does anyone know what's going on on
  H9?
 
  best,
  Colin
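One hedged way to check whether a slave's JDK itself is damaged; the "bad class file" error usually points at a corrupt or truncated core class archive, and the paths below are assumptions, not from the thread:

```shell
# Hypothetical diagnostic for "bad class file: java/lang/Runnable.class":
# check that the JDK's core class archive (rt.jar on JDK 8 and earlier)
# is present and not truncated. JAVA_HOME default is an assumption.
JAVA_HOME=${JAVA_HOME:-/usr/lib/jvm/default-java}
core_jar="$JAVA_HOME/jre/lib/rt.jar"
if [ ! -f "$core_jar" ]; then
  status="missing (JDK 9+ layout, or no JDK at $JAVA_HOME)"
elif unzip -t "$core_jar" >/dev/null 2>&1; then
  status="ok"
else
  status="corrupt or unreadable"
fi
echo "rt.jar check: $status"
```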
 


-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.


Re: Updates on migration to git

2014-08-23 Thread Giridharan Kesavan
I can take a look at this on Monday.

-giri


On Sat, Aug 23, 2014 at 6:31 PM, Karthik Kambatla ka...@cloudera.com
wrote:

 Couple of things:

 1. Since no one expressed any reservations against doing this on Sunday or
 renaming trunk to master, I'll go ahead and confirm that. I think that
 serves us better in the long run.

 2. Arpit brought up the precommit builds - we should definitely fix them as
 soon as we can. I understand Giri maintains those builds, do we have anyone
 else who has access in case Giri is not reachable? Giri - please shout out
 if you can help us with this either on Sunday or Monday.

 Thanks
 Karthik




 On Fri, Aug 22, 2014 at 3:50 PM, Karthik Kambatla ka...@cloudera.com
 wrote:

   Also, does anyone know what we use for integration between JIRA and svn?
   I am assuming svn2jira.
 
 
  On Fri, Aug 22, 2014 at 3:48 PM, Karthik Kambatla ka...@cloudera.com
  wrote:
 
  Hi folks,
 
  For the SCM migration, feel free to follow
  https://issues.apache.org/jira/browse/INFRA-8195
 
   Most of this is planned to be handled this Sunday. As a result, the
   subversion repository would be read-only. If this is a major issue for
   you, please shout out.
 
   Daniel Gruno, who is helping us with the migration, was asking if we are
   open to renaming "trunk" to "master" to better conform to git lingo. I am
   tempted to say yes, but wanted to check.
 
  Would greatly appreciate any help with checking the git repo has
  everything.
 
  Thanks
  Karthik
 
 
 




Re: Jenkins Build Slaves

2014-07-10 Thread Giridharan Kesavan
Chris,

The newer hosts are on Ubuntu 14.04, and I'm not aware of any ln-related
change there. Please let me know if you need any help with debugging the
hosts.

-giri


On Thu, Jul 10, 2014 at 9:53 AM, Chris Nauroth cnaur...@hortonworks.com
wrote:

 Thanks, Giri, for taking care of pkgconfig.

 It looks like most (all?) pre-commit builds have some new failing tests:

 https://builds.apache.org/job/PreCommit-HADOOP-Build/4247/testReport/

 On the symlink tests, is there any chance that the new hosts have a
 different version/different behavior for the ln command?

 The TestIPC failure is in a stress test that checks behavior after
 spamming a lot of connections at an RPC server.  Maybe the new hosts have
 something different in the TCP stack, such as TCP backlog?

 I likely won't get a chance to investigate any more today, but I wanted to
 raise the issue in case someone else gets an opportunity to look.

 Chris Nauroth
 Hortonworks
 http://hortonworks.com/
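A quick sketch for comparing the TCP settings Chris suspects between an old and a new host; the sysctl keys are the standard Linux listen-queue knobs:

```shell
# Sketch: dump the kernel settings that bound a server's connection
# backlog, to diff between old and new build hosts.
for key in net.core.somaxconn net.ipv4.tcp_max_syn_backlog; do
  val=$(sysctl -n "$key" 2>/dev/null || echo "unavailable")
  echo "$key = $val"
done
```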



 On Wed, Jul 9, 2014 at 10:33 AM, Giridharan Kesavan 
 gkesa...@hortonworks.com wrote:

 I don't think so; let me fix that. Thanks, Chris, for pointing that out.


 -giri


 On Wed, Jul 9, 2014 at 9:50 AM, Chris Nauroth cnaur...@hortonworks.com
 wrote:

 Hi Giri,

 Is pkgconfig deployed on the new Jenkins slaves?  I noticed this build
 failed:

 https://builds.apache.org/job/PreCommit-HADOOP-Build/4237/

 Looking in the console output, it appears the HDFS native code failed to
 build due to missing pkgconfig.

  [exec] CMake Error at
 /usr/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:108
 (message):
  [exec]   Could NOT find PkgConfig (missing: PKG_CONFIG_EXECUTABLE)
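A sketch for checking the tool CMake's FindPkgConfig module is looking for; the apt package name assumed here is the Ubuntu 14.04 one:

```shell
# Sketch: verify the binary behind CMake's PKG_CONFIG_EXECUTABLE check.
if command -v pkg-config >/dev/null 2>&1; then
  result="present ($(pkg-config --version))"
else
  result="missing; on the slaves: sudo apt-get install -y pkg-config"
fi
echo "pkg-config: $result"
```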

 Chris Nauroth
 Hortonworks
 http://hortonworks.com/



 On Wed, Jul 9, 2014 at 7:08 AM, Giridharan Kesavan 
 gkesa...@hortonworks.com wrote:

 Build jobs are now configured to run on the newer set of slaves.



 -giri


 On Mon, Jul 7, 2014 at 4:12 PM, Giridharan Kesavan 
 gkesa...@hortonworks.com
  wrote:

  All
 
   Yahoo is in the process of retiring all the Hadoop Jenkins build slaves,
   hadoop[1-9], and replacing them with a newer set of beefier hosts. These
   new machines are configured with Ubuntu 14.04.
  
   Over the next couple of days I will be configuring the build jobs to run
   on these newly configured build slaves. To automate the installation of
   tools and build libraries I have put together Ansible scripts; the
   toolchain repo is at https://github.com/apache/toolchain.
  
   During the transition, the old build slaves will remain accessible, and
   are expected to be shut down by 07/15.
  
   I will send out an update later this week when this transition is
   complete.
  
   Meanwhile, I would like to request that project owners remove/clean up
   any stale Jenkins jobs for their respective projects and help with any
   build issues to make this transition seamless.
 
  Thanks
 
  -
  Giri
 









Re: Jenkins build fails

2014-07-09 Thread Giridharan Kesavan
I'm looking into this.

-giri


On Tue, Jul 8, 2014 at 8:15 PM, Akira AJISAKA ajisa...@oss.nttdata.co.jp
wrote:

 Filed https://issues.apache.org/jira/browse/HADOOP-10804
 Please correct me if I am wrong..

 Thanks,
 Akira

 (2014/07/09 11:24), Akira AJISAKA wrote:
  Hi Hadoop developers,
 
  Now Jenkins is failing with the below message.
  I'm thinking this is caused by the upgrade of Jenkins server.
  After the upgrade, the version of svn client was also upgraded,
  so the following errors occurred.
 
  It will be fixed by executing 'svn upgrade' before executing
  other svn commands. I'll file a JIRA and create a patch shortly.
 
  Regards,
  Akira
 
  ==
  ==
   Testing patch for HADOOP-10661.
  ==
  ==
 
 
  svn: E155036: Please see the 'svn upgrade' command
  svn: E155036: The working copy at
  '/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk'
  is too old (format 10) to work with client version '1.8.8 (r1568071)'
  (expects format 31). You need to upgrade the working copy first.
 
  svn: E155036: Please see the 'svn upgrade' command
  svn: E155036: The working copy at
  '/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk'
  is too old (format 10) to work with client version '1.8.8 (r1568071)'
  (expects format 31). You need to upgrade the working copy first.
 
  svn: E155036: Please see the 'svn upgrade' command
  svn: E155036: The working copy at
  '/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk'
  is too old (format 10) to work with client version '1.8.8 (r1568071)'
  (expects format 31). You need to upgrade the working copy first.
 
  svn: E155036: Please see the 'svn upgrade' command
  svn: E155036: The working copy at
  '/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk'
  is too old (format 10) to work with client version '1.8.8 (r1568071)'
  (expects format 31). You need to upgrade the working copy first.
 
  Build step 'Execute shell' marked build as failure
  Archiving artifacts
  Description set: HADOOP-10661
  Recording test results
  Finished: FAILURE
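A sketch of the workaround as a Jenkins pre-build shell step, assuming the workspace path from the log; 'svn upgrade' converts the format-10 working copy so the 1.8.8 client can use it:

```shell
# Sketch of the fix Akira describes: run 'svn upgrade' on the workspace
# before any other svn command. The path is the one from the console log.
WC="${WORKSPACE:-/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build}/trunk"
if command -v svn >/dev/null 2>&1 && [ -d "$WC/.svn" ]; then
  svn upgrade "$WC" || true   # harmless if the copy is already upgraded
fi
echo "svn pre-step finished for $WC"
```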
 





Re: Jenkins build fails

2014-07-09 Thread Giridharan Kesavan
I took care of the svn upgrade issue.


-giri


On Wed, Jul 9, 2014 at 5:05 AM, Giridharan Kesavan gkesa...@hortonworks.com
 wrote:


 I'm looking into this.

 -giri


 On Tue, Jul 8, 2014 at 8:15 PM, Akira AJISAKA ajisa...@oss.nttdata.co.jp
 wrote:

 Filed https://issues.apache.org/jira/browse/HADOOP-10804
 Please correct me if I am wrong..

 Thanks,
 Akira

 (2014/07/09 11:24), Akira AJISAKA wrote:
  Hi Hadoop developers,
 
  Now Jenkins is failing with the below message.
  I'm thinking this is caused by the upgrade of Jenkins server.
  After the upgrade, the version of svn client was also upgraded,
  so the following errors occurred.
 
  It will be fixed by executing 'svn upgrade' before executing
  other svn commands. I'll file a JIRA and create a patch shortly.
 
  Regards,
  Akira
 
  ==
  ==
   Testing patch for HADOOP-10661.
  ==
  ==
 
 
  svn: E155036: Please see the 'svn upgrade' command
  svn: E155036: The working copy at
  '/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk'
  is too old (format 10) to work with client version '1.8.8 (r1568071)'
  (expects format 31). You need to upgrade the working copy first.
 
  svn: E155036: Please see the 'svn upgrade' command
  svn: E155036: The working copy at
  '/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk'
  is too old (format 10) to work with client version '1.8.8 (r1568071)'
  (expects format 31). You need to upgrade the working copy first.
 
  svn: E155036: Please see the 'svn upgrade' command
  svn: E155036: The working copy at
  '/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk'
  is too old (format 10) to work with client version '1.8.8 (r1568071)'
  (expects format 31). You need to upgrade the working copy first.
 
  svn: E155036: Please see the 'svn upgrade' command
  svn: E155036: The working copy at
  '/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk'
  is too old (format 10) to work with client version '1.8.8 (r1568071)'
  (expects format 31). You need to upgrade the working copy first.
 
  Build step 'Execute shell' marked build as failure
  Archiving artifacts
  Description set: HADOOP-10661
  Recording test results
  Finished: FAILURE
 






Re: Jenkins Build Slaves

2014-07-09 Thread Giridharan Kesavan
Build jobs are now configured to run on the newer set of slaves.



-giri





Re: Jenkins Build Slaves

2014-07-09 Thread Giridharan Kesavan
I don't think so; let me fix that. Thanks, Chris, for pointing that out.


-giri









Jenkins Build Slaves

2014-07-07 Thread Giridharan Kesavan
All

Yahoo is in the process of retiring all the Hadoop Jenkins build slaves,
hadoop[1-9], and replacing them with a newer set of beefier hosts. These new
machines are configured with Ubuntu 14.04.

Over the next couple of days I will be configuring the build jobs to run on
these newly configured build slaves. To automate the installation of tools
and build libraries I have put together Ansible scripts; the toolchain repo
is at https://github.com/apache/toolchain.

During the transition, the old build slaves will remain accessible, and are
expected to be shut down by 07/15.

I will send out an update later this week when this transition is complete.

Meanwhile, I would like to request that project owners remove/clean up any
stale Jenkins jobs for their respective projects and help with any build
issues to make this transition seamless.

Thanks

-
Giri
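For illustration, a task in such a playbook might look like the following; the host group, package list, and module syntax here are assumptions for the sketch, and the real scripts live in the toolchain repo above:

```yaml
# Sketch of a toolchain-style playbook task (illustrative names only).
- hosts: build-slaves
  become: yes
  tasks:
    - name: Install build tools and libraries
      apt:
        name: [openjdk-7-jdk, maven, cmake, pkg-config, protobuf-compiler]
        state: present
        update_cache: yes
```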



Re: [VOTE] Change by-laws on release votes: 5 days instead of 7

2014-06-25 Thread Giridharan Kesavan
+1

-giri


On Wed, Jun 25, 2014 at 12:02 PM, Arpit Agarwal aagar...@hortonworks.com
wrote:

 +1 Arpit


 On Tue, Jun 24, 2014 at 1:53 AM, Arun C Murthy a...@hortonworks.com
 wrote:

  Folks,
 
   As discussed, I'd like to call a vote on changing our by-laws to change
  release votes from 7 days to 5.
 
   I've attached the change to by-laws I'm proposing.
 
   Please vote, the vote will the usual period of 7 days.
 
  thanks,
  Arun
 
  
 
  [main]$ svn diff
  Index: author/src/documentation/content/xdocs/bylaws.xml
  ===
  --- author/src/documentation/content/xdocs/bylaws.xml   (revision
 1605015)
  +++ author/src/documentation/content/xdocs/bylaws.xml   (working copy)
  @@ -344,7 +344,16 @@
   pVotes are open for a period of 7 days to allow all active
   voters time to consider the vote. Votes relating to code
   changes are not subject to a strict timetable but should be
  -made as timely as possible./p/li
  +made as timely as possible./p
  +
  + ul
  + li strongProduct Release - Vote Timeframe/strong
  +   pRelease votes, alone, run for a period of 5 days. All
 other
  + votes are subject to the above timeframe of 7 days./p
  + /li
  +   /ul
  +   /li
  +
  /ul
  /section
   /body
 





Re: Upgrade to protobuf 2.5.0 for the 2.1.0 release, HADOOP-9845

2013-08-10 Thread Giridharan Kesavan
Build slaves hadoop1-hadoop9 now have libprotoc 2.5.0.



-Giri
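A small sketch to confirm each slave's protoc; the version-string parsing assumes the usual "libprotoc 2.5.0" output format:

```shell
# Sketch: confirm a slave has the protoc that HADOOP-9845 requires.
required="2.5.0"
if command -v protoc >/dev/null 2>&1; then
  found="$(protoc --version 2>/dev/null | awk '{print $2}')"
else
  found="not installed"
fi
echo "protoc: $found (required: $required)"
```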


On Fri, Aug 9, 2013 at 10:56 PM, Giridharan Kesavan 
gkesa...@hortonworks.com wrote:

 Alejandro,

 I'm upgrading protobuf on slaves hadoop1-hadoop9.

 -Giri


 On Fri, Aug 9, 2013 at 1:15 PM, Alejandro Abdelnur t...@cloudera.com wrote:

 Pinging again: I need help from somebody with sudo access to the Hadoop
 Jenkins boxes to do this, or to get sudo access for a couple of hours to
 set it up myself.

 Please!!!

 thx


 On Thu, Aug 8, 2013 at 2:29 PM, Alejandro Abdelnur t...@cloudera.com
 wrote:

  To move forward with this we need protoc 2.5.0 in the apache hadoop
  jenkins boxes.
 
  Who can help with this? I assume somebody at Y!, right?
 
  Thx
 
 
  On Thu, Aug 8, 2013 at 2:24 PM, Elliott Clark ecl...@apache.org
 wrote:
 
  In HBase land we've pretty well discovered that we'll need to have the
  same version of protobuf that the HDFS/Yarn/MR servers are running.
  That is to say there are issues with ever having 2.4.x and 2.5.x on
  the same class path.
 
  Upgrading to 2.5.x would be great, as it brings some new classes we
  could use.  With that said HBase is getting pretty close to a rather
  large release (0.96.0 aka The Singularity) so getting this in sooner
  rather than later would be great.  If we could get this into 2.1.0 it
  would be great as that would allow us to have a pretty easy story to
  users with regards to protobuf version.
 
  On Thu, Aug 8, 2013 at 8:18 AM, Kihwal Lee kih...@yahoo-inc.com
 wrote:
    Sorry to hijack the thread, but I also wanted to mention Avro. See
  HADOOP-9672.
   The version we are using has memory leak and inefficiency issues.
 We've
  seen users running into it.
  
   Kihwal
  
  
   
From: Tsuyoshi OZAWA ozawa.tsuyo...@gmail.com
   To: common-...@hadoop.apache.org common-...@hadoop.apache.org
   Cc: hdfs-dev@hadoop.apache.org hdfs-dev@hadoop.apache.org; 
  yarn-...@hadoop.apache.org yarn-...@hadoop.apache.org; 
  mapreduce-...@hadoop.apache.org mapreduce-...@hadoop.apache.org
   Sent: Thursday, August 8, 2013 1:59 AM
   Subject: Re: Upgrade to protobuf 2.5.0 for the 2.1.0 release,
  HADOOP-9845
  
  
   Hi,
  
   About Hadoop, Harsh is dealing with this problem in HADOOP-9346.
   For more detail, please see the JIRA ticket:
   https://issues.apache.org/jira/browse/HADOOP-9346
  
   - Tsuyoshi
  
   On Thu, Aug 8, 2013 at 1:49 AM, Alejandro Abdelnur 
 t...@cloudera.com
  wrote:
    I'd like to upgrade to protobuf 2.5.0 for the 2.1.0 release.
  
   As mentioned in HADOOP-9845, Protobuf 2.5 has significant benefits
 to
   justify the upgrade.
  
   Doing the upgrade now, with the first beta, will make things easier
 for
   downstream projects (like HBase) using protobuf and adopting Hadoop
 2.
  If
   we do the upgrade later, downstream projects will have to support 2
    different versions and they may get into nasty waters due to classpath
  issues.
  
   I've locally tested the patch in a pseudo deployment of 2.1.0-beta
  branch
   and it works fine (something is broken in trunk in the RPC layer
  YARN-885).
  
   Now, to do this it will require a few things:
  
   * Make sure protobuf 2.5.0 is available in the jenkins box
    * A follow-up email to the dev@ aliases indicating developers should
    install protobuf 2.5.0 locally
  
   Thanks.
  
   --
   Alejandro
 
 
 
 
  --
  Alejandro
 



 --
 Alejandro





Re: Upgrade to protobuf 2.5.0 for the 2.1.0 release, HADOOP-9845

2013-08-09 Thread Giridharan Kesavan
Alejandro,

I'm upgrading protobuf on slaves hadoop1-hadoop9.

-Giri



[jira] [Resolved] (HDFS-4005) Missing jersey jars as dependency causes hive tests to fail

2012-10-03 Thread Giridharan Kesavan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giridharan Kesavan resolved HDFS-4005.
--

Resolution: Duplicate

Duplicate of HADOOP-8880.

 Missing jersey jars as dependency causes hive tests to fail
 ---

 Key: HDFS-4005
 URL: https://issues.apache.org/jira/browse/HDFS-4005
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 1-win
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas

 Jersey dependency needs to be added to ivy/hadoop-core-pom-template.xml.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: Contribution to hadoop

2012-02-22 Thread Giridharan Kesavan
Hope this helps
http://wiki.apache.org/hadoop/HowToContribute
-Giri



On Wed, Feb 22, 2012 at 11:27 PM,  shreya@cognizant.com wrote:
 Hi



 I am interested in contributing to the Hadoop code.

 Please let me know what is the process and how can it be done.



 Thanks and Regards,

 Shreya Pal




 This e-mail and any files transmitted with it are for the sole use of the 
 intended recipient(s) and may contain confidential and privileged information.
 If you are not the intended recipient, please contact the sender by reply 
 e-mail and destroy all copies of the original message.
 Any unauthorized review, use, disclosure, dissemination, forwarding, printing 
 or copying of this email or any action taken in reliance on this e-mail is 
 strictly prohibited and may be unlawful.


Re: Fwd: Hadoop-Hdfs-trunk-Commit - Build # 1449 - Still Failing

2011-12-07 Thread giridharan kesavan
The Maven settings.xml, which has the authentication details, was missing on
the Jenkins node.

I just copied it.

-Giri

On 12/7/11 2:36 PM, Todd Lipcon wrote:

Anyone understand what's up with the builds? Looks like some issue
publishing the mvn snapshot?


-- Forwarded message --
From: Apache Jenkins Serverjenk...@builds.apache.org
Date: Wed, Dec 7, 2011 at 2:28 PM
Subject: Hadoop-Hdfs-trunk-Commit - Build # 1449 - Still Failing
To: hdfs-dev@hadoop.apache.org


See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1449/

###
## LAST 60 LINES OF THE CONSOLE
###
[...truncated 8987 lines...]
2668 KB
[...repetitive upload-progress lines truncated...]
2773 KB

Uploading: 
https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-hdfs/0.24.0-SNAPSHOT/hadoop-hdfs-0.24.0-20111207.222825-.pom
4 KB
8 KB
12 KB
16 KB
18 KB

[INFO] 
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop HDFS  FAILURE [8.646s]
[INFO] Apache Hadoop HDFS Project  SKIPPED
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 8.878s
[INFO] Finished at: Wed Dec 07 22:28:26 UTC 2011
[INFO] Final Memory: 23M/460M
[INFO] 
[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-deploy-plugin:2.5:deploy
(default-deploy) on project hadoop-hdfs: Failed to deploy artifacts:
Could not transfer artifact
org.apache.hadoop:hadoop-hdfs:jar:0.24.0-20111207.222825- from/to
apache.snapshots.https
(https://repository.apache.org/content/repositories/snapshots): Failed
to transfer file:
https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-hdfs/0.24.0-SNAPSHOT/hadoop-hdfs-0.24.0-20111207.222825-.jar.
Return code is: 401 -  [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with
the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions,
please read the following articles:
[ERROR] [Help 1]
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Updating HADOOP-7887
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any)
##
No tests ran.





--
-Giri



Re: 0.23 trunk tars, we'll we publishing 1 tar per component or a single tar? What about source tar?

2011-10-12 Thread giridharan kesavan

+1 for option 4


On 10/12/11 9:50 AM, Eric Yang wrote:

Option #4 is the most practical use case for making a release.  For bleeding 
edge developers, they would prefer to mix and match different versions of hdfs 
and mapreduce.  Hence, it may be good to release the single tarball for 
release, but continue to support component tarballs for developers and rpm/deb 
packaging.  In case, someone wants to run hdfs + hbase, but not mapreduce for 
specialized application.  Component separation tarball should continue to work 
for rpm/deb packaging.

regards,
Eric

On Oct 12, 2011, at 9:30 AM, Prashant Sharma wrote:


I support the idea of having 4 as additional option.

On Wed, Oct 12, 2011 at 9:37 PM, Alejandro Abdelnurt...@cloudera.com  wrote:

Currently common, hdfs and mapred create partial tars which are not usable
unless they are stitched together into a single tar.

With HADOOP-7642 the stitching happens as part of the build.

The build currently produces the following tars:

1* common TAR
2* hdfs (partial) TAR
3* mapreduce (partial) TAR
4* hadoop (full, the stitched one) TAR

#1 on its own does not run anything, #2 and #3 on their own don't run. #4
runs hdfs & mapreduce.

Questions:

Q1. Does it make sense to publish #1, #2 & #3? Or #4 is sufficient and you
start the services you want (i.e. Hbase would just use HDFS)?

Q2. And what about a source TAR, does it make sense to have source TAR per
component or a single TAR for the whole?


For simplicity (for the build system and for users) I'd prefer a single
binary TAR and a single source TAR.

Thanks.

Alejandro




--

Prashant Sharma
Pramati Technologies
Begumpet, Hyderabad.





--
-Giri



[jira] [Created] (HDFS-2278) hdfs unit test failures on trunk

2011-08-22 Thread Giridharan Kesavan (JIRA)
hdfs unit test failures on trunk


 Key: HDFS-2278
 URL: https://issues.apache.org/jira/browse/HDFS-2278
 Project: Hadoop HDFS
  Issue Type: Test
Affects Versions: 0.23.0
 Environment: 
https://builds.apache.org/job/Hadoop-Hdfs-trunk/lastCompletedBuild/testReport/
Reporter: Giridharan Kesavan


Following unit tests fail on hdfs trunk
org.apache.hadoop.hdfs.TestDfsOverAvroRpc.testWorkingDirectory
org.apache.hadoop.hdfs.server.blockmanagement.TestHost2NodesMap.testGetDatanodeByHost
  
org.apache.hadoop.hdfs.server.blockmanagement.TestHost2NodesMap.testGetDatanodeByName
 
org.apache.hadoop.hdfs.server.datanode.TestReplicasMap.testGet 
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer.testStored
 
Console output: 
https://builds.apache.org/job/Hadoop-Hdfs-trunk/lastCompletedBuild/consoleFull





[jira] [Resolved] (HDFS-2270) AOP system test framework is broken

2011-08-17 Thread Giridharan Kesavan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giridharan Kesavan resolved HDFS-2270.
--

Resolution: Duplicate

Resolving this as a duplicate of 
https://issues.apache.org/jira/browse/HDFS-2261

 AOP system test framework is broken
 ---

 Key: HDFS-2270
 URL: https://issues.apache.org/jira/browse/HDFS-2270
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Tsz Wo (Nicholas), SZE
 Fix For: 0.23.0


 {noformat}
 $ant test-system
 ...
 -compile-fault-inject:
  [echo] Start weaving aspects in place
  [iajc] .../hdfs/src/java/org/apache/hadoop/hdfs/HftpFileSystem.java:269 
 [error] The method encodeQueryValue(String) is undefined for the type 
 ServletUtil
  [iajc] ServletUtil.encodeQueryValue(ugi.getShortUserName()));
  [iajc] ^^^
  [iajc] .../hdfs/src/java/org/apache/hadoop/hdfs/HftpFileSystem.java:272 
 [error] The method encodeQueryValue(String) is undefined for the type 
 ServletUtil
  [iajc] ugiParamenter.append(ServletUtil.encodeQueryValue(g));
  [iajc]  ^
  [iajc] .../hdfs/src/java/org/apache/hadoop/hdfs/HftpFileSystem.java:320 
 [error] The method encodePath(String) is undefined for the type ServletUtil
  [iajc] String path = /data + 
 ServletUtil.encodePath(f.toUri().getPath());
  [iajc] ^
 ...
  [iajc] 18 errors, 4 warnings
 BUILD FAILED
 {noformat}





Re: Failing trunk builds for HDFS.

2011-08-16 Thread Giridharan Kesavan
      [copy] Copying 1 file to /home/todd/git/hadoop-common/hdfs/conf
      [copy] Copying
  /home/todd/git/hadoop-common/hdfs/conf/hdfs-site.xml.template to
  /home/todd/git/hadoop-common/hdfs/conf/hdfs-site.xml
     [mkdir] Created dir: /home/todd/git/hadoop-common/hdfs/build/test/conf
      [copy] Copying 1 file to
 /home/todd/git/hadoop-common/hdfs/build/test/conf
      [copy] Copying
  /home/todd/git/hadoop-common/hdfs/conf/hdfs-site.xml.template to
  /home/todd/git/hadoop-common/hdfs/build/test/conf/hdfs-site.xml
      [copy] Copying 2 files to
 /home/todd/git/hadoop-common/hdfs/build/test/conf
      [copy] Copying
  /home/todd/git/hadoop-common/hdfs/conf/hadoop-metrics2.properties to
 
 /home/todd/git/hadoop-common/hdfs/build/test/conf/hadoop-metrics2.properties
      [copy] Copying
  /home/todd/git/hadoop-common/hdfs/conf/log4j.properties to
  /home/todd/git/hadoop-common/hdfs/build/test/conf/log4j.properties
 
  check-libhdfs-makefile:
 
  create-libhdfs-makefile:
 
  compile-c++-libhdfs:
 
  clover.setup:
 
  clover.info:
      [echo]
      [echo]      Clover not found. Code coverage reports disabled.
      [echo]
 
  clover:
 
  compile-hdfs-classes:
     [javac] /home/todd/git/hadoop-common/hdfs/build.xml:370: warning:
  'includeantruntime' was not set, defaulting to
  build.sysclasspath=last; set to false for repeatable builds
     [javac] Compiling 282 source files to
  /home/todd/git/hadoop-common/hdfs/build/classes
     [javac]
 /home/todd/git/hadoop-common/hdfs/src/java/org/apache/hadoop/fs/Hdfs.java:33:
  package org.apache.hadoop.conf does not exist
     [javac] import org.apache.hadoop.conf.Configuration;
 
  ... [lots more errors where o.a.h.* from common can't be found ...
 
  and yet:
 
  todd@todd-w510:~/git/hadoop-common/hdfs$ find . -name \*common\*23\*jar
  ./build/ivy/lib/hadoop-hdfs/common/hadoop-common-0.23.0-SNAPSHOT.jar
  ./build/ivy/lib/hadoop-hdfs/test/hadoop-common-0.23.0-SNAPSHOT-tests.jar
 
  and then I run the exact build command again, I see:
 
  ivy-resolve-common:
  [ivy:resolve] downloading
 
 https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.23.0-SNAPSHOT/hadoop-common-0.23.0-20110815.225725-267.jar
  ...
  [ivy:resolve]
 ...
  [ivy:resolve]
 .
  (1667kB)
  [ivy:resolve] .. (0kB)
  [ivy:resolve]   [SUCCESSFUL ]
  org.apache.hadoop#hadoop-common;0.23.0-SNAPSHOT!hadoop-common.jar
  (2661ms)
 
  and it builds fine.
 
 
  On Mon, Aug 15, 2011 at 4:08 PM, Giridharan Kesavan
  gkesa...@hortonworks.com wrote:
  On Mon, Aug 15, 2011 at 3:52 PM, Eli Collins e...@cloudera.com wrote:
  On Mon, Aug 15, 2011 at 3:45 PM, Giridharan Kesavan
  gkesa...@hortonworks.com wrote:
  Hi Eli,
 
  your are right Im talking about the apache jenkins hdfs build
 failures;
 
  Im pretty sure hdfs is picking the latest hadoop common jars. I
  verified this with the apache repo as well.
 
  How are you building? The method that its claiming doesn't exist
  definitely does.
 
  target doesn't exist in the build.xml; its part of the fault injection
  framework and is imported from trunk/hdfs/src/test/aop/build/aop.xml
 
  you can see this import in the build.xml file
   <import file="${test.src.dir}/aop/build/aop.xml"/>
 
 
  The following works on trunk so I think it's  an issue with how
  Jenkins is running it.
 
  hadoop-common $ mvn clean
  hadoop-common $ mvn install -DskipTests
  hadoop-common $ pushd ../hdfs
  hdfs $ ant clean
  hdfs $ ant -Dresolvers=internal jar
  hdfs $ ant run-test-hdfs-fault-inject
 
   I think you should pass -Dresolver=internal to the
  run-test-hdfs-fault-inject target as well
 
  Thanks,
  giri
 
 
  Thanks,
  Eli
 
 
 
 https://repository.apache.org/content/groups/snapshots/org/apache/hadoop/hadoop-common/0.23.0-SNAPSHOT
 
  hadoop-common-0.23.0-20110815.215733-266-tests.jar
 
  hadoop-common-0.23.0-20110815.215733-266.jar
 
 
 https://builds.apache.org/view/G-L/view/Hadoop/job/Hadoop-Hdfs-trunk-Commit/837/console
 
  [ivy:resolve] .. (0kB)
 
  [ivy:resolve]   [SUCCESSFUL ] org.apache.hadoop#avro;1.3.2!avro.jar
 (1011ms)
  [ivy:resolve] downloading
 
 https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.23.0-SNAPSHOT/hadoop-common-0.23.0-20110815.215733-266.jar
  ...
 
  [ivy:resolve]
 
  (1667kB)
  [ivy:resolve] .. (0kB)
  [ivy:resolve]   [SUCCESSFUL ]
  org.apache.hadoop#hadoop-common;0.23.0-SNAPSHOT!hadoop-common.jar
  (1549ms)
 
  ivy-retrieve-common:
  [ivy:cachepath] DEPRECATED: 'ivy.conf.file' is deprecated, use
  'ivy.settings.file' instead
  [ivy:cachepath

[jira] [Created] (HDFS-2261) hdfs trunk is broken with -compile-fault-inject ant target

2011-08-15 Thread Giridharan Kesavan (JIRA)
hdfs trunk is broken with -compile-fault-inject ant target
--

 Key: HDFS-2261
 URL: https://issues.apache.org/jira/browse/HDFS-2261
 Project: Hadoop HDFS
  Issue Type: Bug
 Environment: 
https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/834/console
-compile-fault-inject ant target 

Reporter: Giridharan Kesavan



-compile-fault-inject:
 [echo] Start weaving aspects in place
 [iajc] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/java/org/apache/hadoop/hdfs/HftpFileSystem.java:269
 [error] The method encodeQueryValue(String) is undefined for the type 
ServletUtil
 [iajc] ServletUtil.encodeQueryValue(ugi.getShortUserName()));
..

  [iajc] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/test/system/aop/org/apache/hadoop/hdfs/server/namenode/NameNodeAspect.aj:50
 [warning] advice defined in 
org.apache.hadoop.hdfs.server.namenode.NameNodeAspect has not been applied 
[Xlint:adviceDidNotMatch]
 [iajc] 
 [iajc] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/test/system/aop/org/apache/hadoop/hdfs/server/datanode/DataNodeAspect.aj:43
 [warning] advice defined in 
org.apache.hadoop.hdfs.server.datanode.DataNodeAspect has not been applied 
[Xlint:adviceDidNotMatch]
 [iajc] 
 [iajc] 
 [iajc] 18 errors, 4 warnings

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/test/aop/build/aop.xml:222:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/test/aop/build/aop.xml:203:
 The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/src/test/aop/build/aop.xml:90:
 compile errors: 18





Re: Failing trunk builds for HDFS.

2011-08-15 Thread Giridharan Kesavan
Todd,

Could you please take a look at this ?

https://issues.apache.org/jira/browse/HDFS-2261


-Giri
On Mon, Aug 15, 2011 at 3:24 PM, Todd Lipcon t...@cloudera.com wrote:

 Seems like some of it is a build issue where it can't find ant.

 The other is the following:
 https://issues.apache.org/jira/browse/HADOOP-7545
 Please review.

 Thanks
 -Todd

 On Mon, Aug 15, 2011 at 2:54 PM, Mahadev Konar maha...@hortonworks.com
 wrote:
  Hi folks,
   Can anyone take a look at the hdfs builds? Seems to be failing:
 
  https://builds.apache.org/job/Hadoop-Hdfs-trunk/
 
  thanks
  mahadev
 



 --
 Todd Lipcon
 Software Engineer, Cloudera



Re: Failing trunk builds for HDFS.

2011-08-15 Thread Giridharan Kesavan
Hi Eli,

you are right, I'm talking about the Apache Jenkins HDFS build failures;

I'm pretty sure HDFS is picking up the latest Hadoop common jars. I
verified this with the Apache repo as well.

https://repository.apache.org/content/groups/snapshots/org/apache/hadoop/hadoop-common/0.23.0-SNAPSHOT

hadoop-common-0.23.0-20110815.215733-266-tests.jar

hadoop-common-0.23.0-20110815.215733-266.jar

https://builds.apache.org/view/G-L/view/Hadoop/job/Hadoop-Hdfs-trunk-Commit/837/console

[ivy:resolve] .. (0kB)

[ivy:resolve]   [SUCCESSFUL ] org.apache.hadoop#avro;1.3.2!avro.jar (1011ms)
[ivy:resolve] downloading
https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.23.0-SNAPSHOT/hadoop-common-0.23.0-20110815.215733-266.jar
...

[ivy:resolve] 

(1667kB)
[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ]
org.apache.hadoop#hadoop-common;0.23.0-SNAPSHOT!hadoop-common.jar
(1549ms)

ivy-retrieve-common:
[ivy:cachepath] DEPRECATED: 'ivy.conf.file' is deprecated, use
'ivy.settings.file' instead
[ivy:cachepath] :: loading settings :: file =
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy/ivysettings.xml

ivy-resolve-hdfs:

ivy-retrieve-hdfs:

ivy-resolve-test:

[ivy:resolve] downloading
https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.23.0-SNAPSHOT/hadoop-common-0.23.0-20110815.215733-266-tests.jar
...
[ivy:resolve] 

(876kB)

[ivy:resolve] .. (0kB)
[ivy:resolve]   [SUCCESSFUL ]
org.apache.hadoop#hadoop-common;0.23.0-SNAPSHOT!hadoop-common.jar(tests)
(875ms)


On Mon, Aug 15, 2011 at 3:33 PM, Eli Collins e...@cloudera.com wrote:

 Hey Giri,

 This looks like a similar issue to what was hitting the main Jenkins
 job, the Hdfs job isn't picking up the latest bits from common.

 Thanks,
 Eli

 On Mon, Aug 15, 2011 at 3:27 PM, Giridharan Kesavan
 gkesa...@hortonworks.com wrote:
  Todd,
 
  Could you please take a look at this ?
 
  https://issues.apache.org/jira/browse/HDFS-2261
 
 
  -Giri
  On Mon, Aug 15, 2011 at 3:24 PM, Todd Lipcon t...@cloudera.com wrote:
 
  Seems like some of it is a build issue where it can't find ant.
 
  The other is the following:
  https://issues.apache.org/jira/browse/HADOOP-7545
  Please review.
 
  Thanks
  -Todd
 
  On Mon, Aug 15, 2011 at 2:54 PM, Mahadev Konar maha...@hortonworks.com
  wrote:
   Hi folks,
    Can anyone take a look at the hdfs builds? Seems to be failing:
  
   https://builds.apache.org/job/Hadoop-Hdfs-trunk/
  
   thanks
   mahadev
  
 
 
 
  --
  Todd Lipcon
  Software Engineer, Cloudera
 
 


Re: Failing trunk builds for HDFS.

2011-08-15 Thread Giridharan Kesavan
On Mon, Aug 15, 2011 at 3:52 PM, Eli Collins e...@cloudera.com wrote:
 On Mon, Aug 15, 2011 at 3:45 PM, Giridharan Kesavan
 gkesa...@hortonworks.com wrote:
 Hi Eli,

 your are right Im talking about the apache jenkins hdfs build failures;

 Im pretty sure hdfs is picking the latest hadoop common jars. I
 verified this with the apache repo as well.

 How are you building? The method that its claiming doesn't exist
 definitely does.

The target doesn't exist in build.xml; it's part of the fault injection
framework and is imported from trunk/hdfs/src/test/aop/build/aop.xml

you can see this import in the build.xml file
  <import file="${test.src.dir}/aop/build/aop.xml"/>


 The following works on trunk so I think it's  an issue with how
 Jenkins is running it.

 hadoop-common $ mvn clean
 hadoop-common $ mvn install -DskipTests
 hadoop-common $ pushd ../hdfs
 hdfs $ ant clean
 hdfs $ ant -Dresolvers=internal jar
 hdfs $ ant run-test-hdfs-fault-inject

 I think you should pass -Dresolver=internal to the
run-test-hdfs-fault-inject target as well

Thanks,
giri


 Thanks,
 Eli


 https://repository.apache.org/content/groups/snapshots/org/apache/hadoop/hadoop-common/0.23.0-SNAPSHOT

 hadoop-common-0.23.0-20110815.215733-266-tests.jar

 hadoop-common-0.23.0-20110815.215733-266.jar

 https://builds.apache.org/view/G-L/view/Hadoop/job/Hadoop-Hdfs-trunk-Commit/837/console

 [ivy:resolve] .. (0kB)

 [ivy:resolve]   [SUCCESSFUL ] org.apache.hadoop#avro;1.3.2!avro.jar (1011ms)
 [ivy:resolve] downloading
 https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.23.0-SNAPSHOT/hadoop-common-0.23.0-20110815.215733-266.jar
 ...

 [ivy:resolve] 
 
 (1667kB)
 [ivy:resolve] .. (0kB)
 [ivy:resolve]   [SUCCESSFUL ]
 org.apache.hadoop#hadoop-common;0.23.0-SNAPSHOT!hadoop-common.jar
 (1549ms)

 ivy-retrieve-common:
 [ivy:cachepath] DEPRECATED: 'ivy.conf.file' is deprecated, use
 'ivy.settings.file' instead
 [ivy:cachepath] :: loading settings :: file =
 /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Commit/trunk/ivy/ivysettings.xml

 ivy-resolve-hdfs:

 ivy-retrieve-hdfs:

 ivy-resolve-test:

 [ivy:resolve] downloading
 https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.23.0-SNAPSHOT/hadoop-common-0.23.0-20110815.215733-266-tests.jar
 ...
 [ivy:resolve] 
 
 (876kB)

 [ivy:resolve] .. (0kB)
 [ivy:resolve]   [SUCCESSFUL ]
 org.apache.hadoop#hadoop-common;0.23.0-SNAPSHOT!hadoop-common.jar(tests)
 (875ms)


 On Mon, Aug 15, 2011 at 3:33 PM, Eli Collins e...@cloudera.com wrote:

 Hey Giri,

 This looks like a similar issue to what was hitting the main Jenkins
 job, the Hdfs job isn't picking up the latest bits from common.

 Thanks,
 Eli

 On Mon, Aug 15, 2011 at 3:27 PM, Giridharan Kesavan
 gkesa...@hortonworks.com wrote:
  Todd,
 
  Could you please take a look at this ?
 
  https://issues.apache.org/jira/browse/HDFS-2261
 
 
  -Giri
  On Mon, Aug 15, 2011 at 3:24 PM, Todd Lipcon t...@cloudera.com wrote:
 
  Seems like some of it is a build issue where it can't find ant.
 
  The other is the following:
  https://issues.apache.org/jira/browse/HADOOP-7545
  Please review.
 
  Thanks
  -Todd
 
  On Mon, Aug 15, 2011 at 2:54 PM, Mahadev Konar maha...@hortonworks.com
  wrote:
   Hi folks,
    Can anyone take a look at the hdfs builds? Seems to be failing:
  
   https://builds.apache.org/job/Hadoop-Hdfs-trunk/
  
   thanks
   mahadev
  
 
 
 
  --
  Todd Lipcon
  Software Engineer, Cloudera
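
For reference, the local reproduction Eli describes in this thread, with the resolvers flag also passed to the fault-injection target as Giri suggests, can be collected into one script. This is a sketch only: the `-f` paths assume hadoop-common and hdfs are sibling checkouts, and DRY_RUN defaults to on so the sequence can be inspected without actually running Maven and Ant.

```shell
# Rebuild common, then build and fault-inject-test hdfs against the
# freshly installed common artifacts (resolvers=internal).
DRY_RUN="${DRY_RUN-1}"   # set DRY_RUN= (empty) to really execute the steps

run() {
  echo "+ $*"
  if [ -z "$DRY_RUN" ]; then "$@"; fi
}

run mvn -f hadoop-common/pom.xml clean install -DskipTests
run ant -f hdfs/build.xml clean
run ant -f hdfs/build.xml -Dresolvers=internal jar
run ant -f hdfs/build.xml -Dresolvers=internal run-test-hdfs-fault-inject
```

The dry-run wrapper is just a convenience for echoing each step before committing to a long build.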
 
 




[jira] Resolved: (HDFS-1551) fix the pom template's version

2010-12-21 Thread Giridharan Kesavan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giridharan Kesavan resolved HDFS-1551.
--

   Resolution: Fixed
Fix Version/s: 0.23.0

Thanks Nigel.

 fix the pom template's version
 --

 Key: HDFS-1551
 URL: https://issues.apache.org/jira/browse/HDFS-1551
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.0
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan
 Fix For: 0.23.0

 Attachments: hdfs-1551.patch


 pom templates in the ivy folder should be updated to the latest version 
 hadoo-common dependencies.




[jira] Created: (HDFS-1193) -mvn-system-deploy target is broken, which in turn fails the mvn-deploy task leading to unstable mapreduce build.

2010-06-08 Thread Giridharan Kesavan (JIRA)
-mvn-system-deploy target is broken, which in turn fails the mvn-deploy task 
leading to unstable mapreduce build.
---

 Key: HDFS-1193
 URL: https://issues.apache.org/jira/browse/HDFS-1193
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Giridharan Kesavan



-mvn-system-deploy:
[artifact:install-provider] Installing provider: 
org.apache.maven.wagon:wagon-http:jar:1.0-beta-2:runtime
[artifact:deploy] Deploying to 
https://repository.apache.org/content/repositories/snapshots
[artifact:deploy] [INFO] Retrieving previous build number from 
apache.snapshots.https
[artifact:deploy] Uploading: 
org/apache/hadoop/hadoop-hdfs-instrumented/0.22.0-SNAPSHOT/hadoop-hdfs-instrumented-0.22.0-20100608.071421-8.jar
 to apache.snapshots.https
[artifact:deploy] Uploaded 990K
[artifact:deploy] [INFO] Retrieving previous metadata from 
apache.snapshots.https
[artifact:deploy] [INFO] Uploading repository metadata for: 'snapshot 
org.apache.hadoop:hadoop-hdfs-instrumented:0.22.0-SNAPSHOT'
[artifact:deploy] [INFO] Retrieving previous metadata from 
apache.snapshots.https
[artifact:deploy] [INFO] Uploading repository metadata for: 'artifact 
org.apache.hadoop:hadoop-hdfs-instrumented'
[artifact:deploy] [INFO] Uploading project information for 
hadoop-hdfs-instrumented 0.22.0-20100608.071421-8
[artifact:deploy] [INFO] Retrieving previous build number from 
apache.snapshots.https
[artifact:deploy] Uploading: 
org/apache/hadoop/hadoop-hdfs-instrumented/0.22.0-SNAPSHOT/hadoop-hdfs-instrumented-0.22.0-20100608.071421-8-sources.jar
 to apache.snapshots.https
[artifact:deploy] Uploaded 610K
[artifact:deploy] An error has occurred while processing the Maven artifact 
tasks.
[artifact:deploy]  Diagnosis:
[artifact:deploy] 
[artifact:deploy] Invalid reference: 'hadoop.hdfs.instrumented.test'





Re: Sendmail restarted for patch process

2010-04-14 Thread Giridharan Kesavan
The jira_cli is still not able to post comments onto JIRA.
ASF Infra has been notified and is working on it.

Thanks,
Giri


On 13/04/10 11:42 AM, Giridharan Kesavan gkesa...@yahoo-inc.com wrote:

Folks,

It looks like sendmail process on hudson.zones crashed and all the test patch 
jobs submitted in the last two days didn't get through.

Please resubmit your patch to the test-patch job.

-Giri


Sendmail restarted for patch process

2010-04-13 Thread Giridharan Kesavan
Folks,

It looks like sendmail process on hudson.zones crashed and all the test patch 
jobs submitted in the last two days didn't get through.

Please resubmit your patch to the test-patch job.

-Giri


Apache hudson build machine

2010-03-26 Thread Giridharan Kesavan
Hello,

There seems to be an issue with the NFS mount on the Apache build servers from 
h1.grid.sp2.yahoo.net.
Most of the Hudson slaves went offline. I'm working with the ops to bring the 
Hudson slaves h[1-9] back online.

Thanks,
Giri


Re: 0.21.0-snapshot depends on hadoop-core 0.22.0

2010-01-04 Thread Giridharan Kesavan


I guess it's a typo in the patch backported from trunk to the 0.21 branch.
I've uploaded a patch for this JIRA.

Thanks

-G

On 04/01/10 5:14 AM, Kay Kay kaykay.uni...@gmail.com wrote:

 reason why 0.21.0 of hdfs needs hadoop-core 0.22.0 . Thanks.



Publishing hadoop artifacts - Apache Nexus Repo

2009-11-10 Thread Giridharan Kesavan

The Hadoop-Common-trunk-Commit and Hadoop-Hdfs-trunk-Commit jobs on Hudson are 
configured to publish the core, core-test, hdfs, and hdfs-test jars, 
respectively, to the Apache Nexus snapshot repository.

This means hdfs will always be built with the latest published common jars 
available in the Apache Nexus snapshot repo.

Thanks,
Giri
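
The timestamped snapshot names seen throughout this archive (e.g. hadoop-common-0.23.0-20110815.215733-266.jar) come from Maven's snapshot metadata. As an illustration only, a small helper that extracts the latest timestamp-buildNumber pair from a maven-metadata.xml might look like this (the metadata layout assumed is the standard Maven `<timestamp>`/`<buildNumber>` snapshot format; the repository URL in the comment is taken from the messages above):

```shell
# Read maven-metadata.xml on stdin and print "TIMESTAMP-BUILDNUMBER",
# the suffix Nexus appends to deployed snapshot jars.
latest_snapshot_build() {
  meta=$(cat)
  ts=$(printf '%s\n' "$meta" | sed -n 's:.*<timestamp>\([^<]*\)</timestamp>.*:\1:p' | head -n 1)
  bn=$(printf '%s\n' "$meta" | sed -n 's:.*<buildNumber>\([^<]*\)</buildNumber>.*:\1:p' | head -n 1)
  echo "$ts-$bn"
}

# e.g.:
#   curl -s https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-hdfs/0.24.0-SNAPSHOT/maven-metadata.xml \
#     | latest_snapshot_build
```

This can be handy when checking whether a downstream build actually resolved the most recently published snapshot.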


[jira] Created: (HDFS-623) hdfs jar-test ant target fails with the latest common jars from the common trunk

2009-09-15 Thread Giridharan Kesavan (JIRA)
hdfs jar-test ant target fails with the latest common jars from the common 
trunk
--

 Key: HDFS-623
 URL: https://issues.apache.org/jira/browse/HDFS-623
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Giridharan Kesavan


[javac]
somelocation/src/test/hdfs/org/apache/hadoop/hdfs/server/namenode/TestReplicationPolicy.java:67:
 incompatible types
[javac] found   : 
org.apache.hadoop.hdfs.server.namenode.ReplicationTargetChooser
[javac] required: 
org.apache.hadoop.hdfs.server.namenode.BlockPlacementPolicy
[javac] replicator = fsNamesystem.blockManager.replicator;
[javac]   ^
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] 5 errors





hdfs build fails with latest trunks hadoop-core.jar

2009-07-16 Thread Giridharan Kesavan
The hdfs build fails as we compile it with the latest hadoop-core-0.21.0-dev.jar 
built from the latest common/trunk:

compile-hdfs-classes:
[javac] Compiling 151 source files to 
/home/gkesavan/hdfs-trunk/build/classes
[javac] 
/home/gkesavan/hdfs-trunk/src/java/org/apache/hadoop/hdfs/DFSClient.java:177: 
cannot find symbol
[javac] symbol  : method getTimeout(org.apache.hadoop.conf.Configuration)
[javac] location: class org.apache.hadoop.ipc.Client
[javac] this.hdfsTimeout = Client.getTimeout(conf);
[javac]  ^
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] 1 error

Can someone take a look?

Thanks,
Giri
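
A missing-symbol error like the Client.getTimeout failure above typically means the build resolved a stale hadoop-core snapshot from the local Ivy cache rather than the jar just published from common/trunk. One hedged workaround, assuming the default Ivy cache layout under $HOME/.ivy2/cache, is to purge the cached module so the next resolve downloads a fresh snapshot:

```shell
# Remove a cached Ivy module so the next resolve re-downloads it.
IVY_CACHE="${IVY_CACHE:-$HOME/.ivy2/cache}"

purge_cached_artifact() {
  org="$1"; module="$2"
  dir="$IVY_CACHE/$org/$module"
  if [ -d "$dir" ]; then
    rm -rf "$dir"
    echo "purged $dir"
  else
    echo "nothing cached at $dir"
  fi
}

# e.g.:
#   purge_cached_artifact org.apache.hadoop hadoop-core
```

Purging only the one module keeps the rest of the cache intact, so unrelated dependencies are not re-downloaded.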


[jira] Created: (HDFS-484) bin-package and package don't seem to package any jar file

2009-07-10 Thread Giridharan Kesavan (JIRA)
bin-package and package don't seem to package any jar file
---

 Key: HDFS-484
 URL: https://issues.apache.org/jira/browse/HDFS-484
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.