Build failed in Jenkins: Hadoop-Common-0.23-Build #259

2012-05-21 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-0.23-Build/259/changes

Changes:

[tgraves] merge -r 1340880:1340881 from branch-2. FIXES: MAPREDUCE-4269

--
[...truncated 12429 lines...]
  [javadoc] Loading source files for package org.apache.hadoop.fs.ftp...
  [javadoc] Loading source files for package org.apache.hadoop.fs.kfs...
  [javadoc] Loading source files for package org.apache.hadoop.fs.local...
  [javadoc] Loading source files for package org.apache.hadoop.fs.permission...
  [javadoc] Loading source files for package org.apache.hadoop.fs.s3...
  [javadoc] Loading source files for package org.apache.hadoop.fs.s3native...
  [javadoc] Loading source files for package org.apache.hadoop.fs.shell...
  [javadoc] Loading source files for package org.apache.hadoop.fs.viewfs...
  [javadoc] Loading source files for package org.apache.hadoop.http...
  [javadoc] Loading source files for package org.apache.hadoop.http.lib...
  [javadoc] Loading source files for package org.apache.hadoop.io...
  [javadoc] Loading source files for package org.apache.hadoop.io.compress...
  [javadoc] Loading source files for package org.apache.hadoop.io.compress.bzip2...
  [javadoc] Loading source files for package org.apache.hadoop.io.compress.lz4...
  [javadoc] Loading source files for package org.apache.hadoop.io.compress.snappy...
  [javadoc] Loading source files for package org.apache.hadoop.io.compress.zlib...
  [javadoc] Loading source files for package org.apache.hadoop.io.file.tfile...
  [javadoc] Loading source files for package org.apache.hadoop.io.nativeio...
  [javadoc] Loading source files for package org.apache.hadoop.io.retry...
  [javadoc] Loading source files for package org.apache.hadoop.io.serializer...
  [javadoc] Loading source files for package org.apache.hadoop.io.serializer.avro...
  [javadoc] Loading source files for package org.apache.hadoop.ipc...
  [javadoc] Loading source files for package org.apache.hadoop.ipc.metrics...
  [javadoc] Loading source files for package org.apache.hadoop.jmx...
  [javadoc] Loading source files for package org.apache.hadoop.log...
  [javadoc] Loading source files for package org.apache.hadoop.log.metrics...
  [javadoc] Loading source files for package org.apache.hadoop.metrics...
  [javadoc] Loading source files for package org.apache.hadoop.metrics.file...
  [javadoc] Loading source files for package org.apache.hadoop.metrics.ganglia...
  [javadoc] Loading source files for package org.apache.hadoop.metrics.jvm...
  [javadoc] Loading source files for package org.apache.hadoop.metrics.spi...
  [javadoc] Loading source files for package org.apache.hadoop.metrics.util...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2.annotation...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2.filter...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2.impl...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2.lib...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2.sink...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2.sink.ganglia...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2.source...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2.util...
  [javadoc] Loading source files for package org.apache.hadoop.net...
  [javadoc] Loading source files for package org.apache.hadoop.record...
  [javadoc] Loading source files for package org.apache.hadoop.record.compiler...
  [javadoc] Loading source files for package org.apache.hadoop.record.compiler.ant...
  [javadoc] Loading source files for package org.apache.hadoop.record.compiler.generated...
  [javadoc] Loading source files for package org.apache.hadoop.record.meta...
  [javadoc] Loading source files for package org.apache.hadoop.security...
  [javadoc] Loading source files for package org.apache.hadoop.security.authorize...
  [javadoc] Loading source files for package org.apache.hadoop.security.token...
  [javadoc] Loading source files for package org.apache.hadoop.security.token.delegation...
  [javadoc] Loading source files for package org.apache.hadoop.tools...
  [javadoc] Loading source files for package org.apache.hadoop.util...
  [javadoc] Loading source files for package org.apache.hadoop.util.bloom...
  [javadoc] Loading source files for package org.apache.hadoop.util.hash...
  [javadoc] 2 errors
 [xslt] Processing https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-common/target/findbugsXml.xml to https://builds.apache.org/job/Hadoop-Common-0.23-Build/ws/trunk/hadoop-common-project/hadoop-common/target/site/findbugs.html
 [xslt] Loading stylesheet /home/jenkins/tools/findbugs/latest/src/xsl/default.xsl
[INFO] Executed tasks
[INFO] 
[INFO] --- 

Re: Need Urgent Help on Architecture

2012-05-21 Thread Robert Evans
All attachments are stripped when sent to the mailing list.  You will need to 
use another service if you want us to see the diagram.


On 5/18/12 12:50 PM, samir das mohapatra samir.help...@gmail.com wrote:

Hi Harsh,

   I want to implement a workflow within the mapper. I am sharing my
concept through the attached architecture diagram; please correct me if I
am wrong, and suggest a good approach.

   Many thanks in advance :)

  Thanks,
samir



[jira] [Resolved] (HADOOP-7069) Replace forrest with supported framework

2012-05-21 Thread Eli Collins (JIRA)

 [ https://issues.apache.org/jira/browse/HADOOP-7069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eli Collins resolved HADOOP-7069.
-

   Resolution: Fixed
Fix Version/s: (was: 0.24.0)
   2.0.0

 Replace forrest with supported framework
 

 Key: HADOOP-7069
 URL: https://issues.apache.org/jira/browse/HADOOP-7069
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Jakob Homan
 Fix For: 2.0.0


 It's time to burn down the forrest.  Apache Forrest, which is used to 
 generate the documentation for all three subprojects, has not had a release 
 in several years (0.8, the version we use, was released April 18, 2007), and 
 requires JDK5, which was EOL'ed in November 2009.  Since it seems unlikely 
 that Forrest will be developed any further, and JDK5 is neither shipped with 
 recent OS X versions nor included by default in most Linux distros, we should 
 find a new documentation system and convert the current docs to it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Reopened] (HADOOP-8408) MR doesn't work with a non-default ViewFS mount table and security enabled

2012-05-21 Thread Aaron T. Myers (JIRA)

 [ https://issues.apache.org/jira/browse/HADOOP-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aaron T. Myers reopened HADOOP-8408:



Re-opening to amend the patch per Daryn's feedback.

 MR doesn't work with a non-default ViewFS mount table and security enabled
 --

 Key: HADOOP-8408
 URL: https://issues.apache.org/jira/browse/HADOOP-8408
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 2.0.1

 Attachments: HDFS-8408.patch


 With security enabled, if one sets up a ViewFS mount table using the default 
 mount table name, everything works as expected. However, if you try to create 
 a ViewFS mount table with a non-default name, you'll end up getting an error 
 like the following (in this case "vfs-cluster" was the name of the mount 
 table) when running an MR job:
 {noformat}
 java.lang.IllegalArgumentException: java.net.UnknownHostException: vfs-cluster
 {noformat}
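
 As a minimal sketch of the kind of client-side setup involved (an 
 illustration only, not from the report: the NameNode address and the /user 
 link below are hypothetical, while the fs.viewfs.mounttable.<name>.link.<path> 
 property convention is standard ViewFS configuration):
 {noformat}
 import java.io.IOException;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;

 public class ViewFsExample {
   public static void main(String[] args) throws IOException {
     // Configure a ViewFS mount table with the non-default name "vfs-cluster".
     Configuration conf = new Configuration();
     conf.set("fs.defaultFS", "viewfs://vfs-cluster/");
     // Map /user in the view to a directory on a hypothetical NameNode.
     conf.set("fs.viewfs.mounttable.vfs-cluster.link./user",
         "hdfs://nn1.example.com:8020/user");
     FileSystem fs = FileSystem.get(conf); // paths resolve via the mount table
     System.out.println(fs.getUri());      // viewfs://vfs-cluster/
   }
 }
 {noformat}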

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8420) saveVersions.sh not working on Windows

2012-05-21 Thread Bikas Saha (JIRA)
Bikas Saha created HADOOP-8420:
--

 Summary: saveVersions.sh not working on Windows
 Key: HADOOP-8420
 URL: https://issues.apache.org/jira/browse/HADOOP-8420
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Bikas Saha
 Fix For: 1.1.0


This script is executed at build time to generate version number information 
for Hadoop core. This version number is consumed via APIs by Hive and other 
projects to determine compatibility with particular Hadoop versions. Currently, 
because of its dependency on Unix utilities such as awk and cut, the script 
does not run successfully on Windows, and version information is not available.
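
As a hedged sketch of how downstream code reads this generated version 
information (the org.apache.hadoop.util.VersionInfo accessors below exist in 
Hadoop core; the compatibility check at the end is a hypothetical example, not 
Hive's actual logic):
{noformat}
import org.apache.hadoop.util.VersionInfo;

public class VersionCheck {
  public static void main(String[] args) {
    // These values come from the build-time info written by saveVersions.sh;
    // if that info is missing, VersionInfo falls back to "Unknown".
    System.out.println("version  = " + VersionInfo.getVersion());
    System.out.println("revision = " + VersionInfo.getRevision());
    System.out.println("built on = " + VersionInfo.getDate());
    // Hypothetical compatibility gate of the kind a client might apply:
    if ("Unknown".equals(VersionInfo.getVersion())) {
      throw new IllegalStateException("Hadoop version information unavailable");
    }
  }
}
{noformat}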

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8421) Verify and fix build of c++ targets in Hadoop on Windows

2012-05-21 Thread Bikas Saha (JIRA)
Bikas Saha created HADOOP-8421:
--

 Summary: Verify and fix build of c++ targets in Hadoop on Windows
 Key: HADOOP-8421
 URL: https://issues.apache.org/jira/browse/HADOOP-8421
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Bikas Saha
 Fix For: 1.1.0


There are a number of C++ files that are not compiled by default for legacy 
reasons. They represent important functionality, so we need to make sure they 
build on Windows. There is some dependency on autoconf/autoreconf; the ideas 
from HADOOP-8368 could be applied here.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8422) FileSystem#getDefaultBlockSize

2012-05-21 Thread Eli Collins (JIRA)
Eli Collins created HADOOP-8422:
---

 Summary: FileSystem#getDefaultBlockSize 
 Key: HADOOP-8422
 URL: https://issues.apache.org/jira/browse/HADOOP-8422
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Eli Collins




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




cmake

2012-05-21 Thread Colin McCabe
Hi all,

We'd like to use CMake instead of autotools to build native (C/C++) code in
Hadoop.  There are a lot of reasons to want to do this.  For one thing, it is
not feasible to use autotools on the Windows platform, because it depends on
UNIX shell scripts, the m4 macro processor, and some other pieces of
infrastructure which are not present on Windows.

For another thing, CMake builds are substantially simpler and faster, because
there is only one layer of generated code.  With autotools, you have automake
generating m4 code which autoconf reads, which it uses to generate a UNIX shell
script, which then generates another UNIX shell script, which eventually
generates Makefiles.  CMake simply generates Makefiles from CMakeLists.txt
files, which is much simpler to understand and debug, and much faster.
CMake is also a lot easier to learn.

automake error messages can be very, very confusing.  This is because you are
essentially debugging a pile of shell scripts and macros rather than a
coherent whole.  So you see error messages like "autoreconf: cannot empty
/tmp/ar0.4849: Is a directory" or "Can't locate object method "path" via
package "Autom4te..." and so forth.  CMake error messages come from the CMake
application itself, and they almost always point you straight at the problem.

From a build point of view, the net result of adopting CMake would be that you
would no longer need automake and related programs installed to build the
native parts of Hadoop.  Instead, you would need CMake installed.  CMake is
packaged by Red Hat, even in RHEL5, so it shouldn't be difficult to install
locally.  It's also available for Mac OS X and Windows, as I mentioned earlier.

The JIRA for this work is at https://issues.apache.org/jira/browse/HADOOP-8368
Thanks for reading.

sincerely,
Colin


Re: cmake

2012-05-21 Thread Eli Collins
+1.  Having a build tool that supports multiple platforms is worth the
dependency. I've also had good experiences with CMake.

