[jira] [Updated] (HADOOP-8415) getDouble() and setDouble() in org.apache.hadoop.conf.Configuration

2012-05-25 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-8415:


   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Many thanks for contributing, Jan van der Lugt! :)

 getDouble() and setDouble() in org.apache.hadoop.conf.Configuration
 ---

 Key: HADOOP-8415
 URL: https://issues.apache.org/jira/browse/HADOOP-8415
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 1.0.2
Reporter: Jan van der Lugt
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-8415.patch

   Original Estimate: 0.25h
  Remaining Estimate: 0.25h

 In the org.apache.hadoop.conf.Configuration class, methods exist to set 
 Integers, Longs, Booleans, Floats and Strings, but methods for Doubles are 
 absent. Are they not there for a reason or should they be added? In the 
 latter case, the attached patch contains the missing functions.
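
For illustration, a minimal sketch of what the missing pair could look like, mirroring the existing getFloat()/setFloat() accessors in Configuration (the attached patch may differ in details):

{code}
// Hedged sketch, not the committed patch: these would live inside
// org.apache.hadoop.conf.Configuration, following the getFloat()/setFloat() pattern.
public double getDouble(String name, double defaultValue) {
  String valueString = get(name);
  return (valueString == null) ? defaultValue : Double.parseDouble(valueString);
}

public void setDouble(String name, double value) {
  set(name, Double.toString(value));
}
{code}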

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8434) TestConfiguration currently has no tests for direct setter methods

2012-05-25 Thread Harsh J (JIRA)
Harsh J created HADOOP-8434:
---

 Summary: TestConfiguration currently has no tests for direct 
setter methods
 Key: HADOOP-8434
 URL: https://issues.apache.org/jira/browse/HADOOP-8434
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Harsh J


Jan van der Lugt noticed this on HADOOP-8415.

bq. Just FYI, there are no tests for setFloat, setInt, setLong, etc. Might be 
better to add all of those at the same time.

It would be good to have (coverage-wise first, regression-wise second) explicit 
tests for each of the setter methods, although other projects' tests do 
exercise this extensively.
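
For instance, a round-trip sketch of the kind of coverage meant here (the setters are the real Configuration API; the test body itself is hypothetical):

{code}
// Hypothetical TestConfiguration sketch: round-trip each direct setter.
Configuration conf = new Configuration(false);
conf.setInt("test.int", 42);
assertEquals(42, conf.getInt("test.int", 0));
conf.setLong("test.long", 42L);
assertEquals(42L, conf.getLong("test.long", 0L));
conf.setFloat("test.float", 3.14f);
assertEquals(3.14f, conf.getFloat("test.float", 0.0f), 1e-6f);
conf.setBoolean("test.bool", true);
assertTrue(conf.getBoolean("test.bool", false));
{code}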





[jira] [Commented] (HADOOP-8415) getDouble() and setDouble() in org.apache.hadoop.conf.Configuration

2012-05-25 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283149#comment-13283149
 ] 

Harsh J commented on HADOOP-8415:
-

I filed HADOOP-8434 for the setter test coverage. Thanks again.

 getDouble() and setDouble() in org.apache.hadoop.conf.Configuration
 ---

 Key: HADOOP-8415
 URL: https://issues.apache.org/jira/browse/HADOOP-8415
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 1.0.2
Reporter: Jan van der Lugt
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-8415.patch

   Original Estimate: 0.25h
  Remaining Estimate: 0.25h

 In the org.apache.hadoop.conf.Configuration class, methods exist to set 
 Integers, Longs, Booleans, Floats and Strings, but methods for Doubles are 
 absent. Are they not there for a reason or should they be added? In the 
 latter case, the attached patch contains the missing functions.





[jira] [Updated] (HADOOP-8323) Revert HADOOP-7940 and improve javadocs and test for Text.clear()

2012-05-25 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-8323:


Issue Type: Improvement  (was: Bug)

This is no longer a bug (since the offending change was reverted), so marking 
it as an improvement.
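
For context, a sketch of the Text reuse pattern the revert protects (an assumed example, not code from the patch):

{code}
// Reusing one Text instance per record, MR-style:
Text text = new Text();
for (byte[] chunk : chunks) {
  text.clear();                         // length drops to 0; the backing array is kept
  text.append(chunk, 0, chunk.length);  // no per-record reallocation: the perf win
  // Callers must bound reads by getLength(); getBytes() can be longer than
  // the valid data, which is the "current length" API the description mentions.
}
{code}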

 Revert HADOOP-7940 and improve javadocs and test for Text.clear()
 -

 Key: HADOOP-8323
 URL: https://issues.apache.org/jira/browse/HADOOP-8323
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Critical
  Labels: performance
 Attachments: HADOOP-8323.patch, HADOOP-8323.patch, HADOOP-8323.patch


 Per [~jdonofrio]'s comments on HADOOP-7940, we should revert it, as it has 
 caused a performance regression (for scenarios where Text is reused, popular 
 in MR).
 The clear() works as intended, since the API also offers a way to query the 
 current length.





[jira] [Updated] (HADOOP-8323) Revert HADOOP-7940 and improve javadocs and test for Text.clear()

2012-05-25 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-8323:


Target Version/s: 2.0.1-alpha, 3.0.0  (was: 2.0.0-alpha, 3.0.0)

 Revert HADOOP-7940 and improve javadocs and test for Text.clear()
 -

 Key: HADOOP-8323
 URL: https://issues.apache.org/jira/browse/HADOOP-8323
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Critical
  Labels: performance
 Attachments: HADOOP-8323.patch, HADOOP-8323.patch, HADOOP-8323.patch


 Per [~jdonofrio]'s comments on HADOOP-7940, we should revert it, as it has 
 caused a performance regression (for scenarios where Text is reused, popular 
 in MR).
 The clear() works as intended, since the API also offers a way to query the 
 current length.





[jira] [Created] (HADOOP-8435) Propdel all svn:mergeinfo

2012-05-25 Thread Harsh J (JIRA)
Harsh J created HADOOP-8435:
---

 Summary: Propdel all svn:mergeinfo
 Key: HADOOP-8435
 URL: https://issues.apache.org/jira/browse/HADOOP-8435
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Harsh J
Assignee: Harsh J


TortoiseSVN/some versions of svn have added several mergeinfo props to Hadoop's 
svn files/dirs (list below).

We should propdel the unneeded property and clean this up; it otherwise causes 
pain for those who backport with a simple root-dir-down command (svn merge -c 
num url/path).

We should also update the HowToCommit page to advise against mergeinfo 
additions, to prevent this from recurring.

Files affected are, from my propdel revert output earlier today:
{code}
Reverted '.'
Reverted 'hadoop-hdfs-project'
Reverted 'hadoop-hdfs-project/hadoop-hdfs'
Reverted 'hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs'
Reverted 'hadoop-hdfs-project/hadoop-hdfs/src/main/java'
Reverted 'hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode'
Reverted 'hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs'
Reverted 'hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary'
Reverted 'hadoop-hdfs-project/hadoop-hdfs/src/main/native'
Reverted 'hadoop-mapreduce-project'
Reverted 'hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site'
Reverted 'hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt'
Reverted 'hadoop-mapreduce-project/conf'
Reverted 'hadoop-mapreduce-project/CHANGES.txt'
Reverted 'hadoop-mapreduce-project/src/test/mapred'
Reverted 'hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/hdfs'
Reverted 'hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/fs'
Reverted 'hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/ipc'
Reverted 'hadoop-mapreduce-project/src/contrib'
Reverted 'hadoop-mapreduce-project/src/contrib/eclipse-plugin'
Reverted 'hadoop-mapreduce-project/src/contrib/block_forensics'
Reverted 'hadoop-mapreduce-project/src/contrib/index'
Reverted 'hadoop-mapreduce-project/src/contrib/data_join'
Reverted 'hadoop-mapreduce-project/src/contrib/build-contrib.xml'
Reverted 'hadoop-mapreduce-project/src/contrib/vaidya'
Reverted 'hadoop-mapreduce-project/src/contrib/build.xml'
Reverted 'hadoop-mapreduce-project/src/java'
Reverted 'hadoop-mapreduce-project/src/webapps/job'
Reverted 'hadoop-mapreduce-project/src/c++'
Reverted 'hadoop-mapreduce-project/src/examples'
Reverted 'hadoop-mapreduce-project/hadoop-mapreduce-examples'
Reverted 
'hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml'
Reverted 'hadoop-mapreduce-project/bin'
Reverted 'hadoop-common-project'
Reverted 'hadoop-common-project/hadoop-common'
Reverted 'hadoop-common-project/hadoop-common/src/test/core'
Reverted 'hadoop-common-project/hadoop-common/src/main/java'
Reverted 'hadoop-common-project/hadoop-common/src/main/docs'
Reverted 'hadoop-common-project/hadoop-auth'
Reverted 'hadoop-project'
Reverted 'hadoop-project/src/site'
{code}

Proposed fix (from 
http://stackoverflow.com/questions/767418/remove-unnecessary-svnmergeinfo-properties):
{code}
svn propdel svn:mergeinfo -R      # strip the property recursively
svn revert .                      # non-recursive: restores the root dir's own mergeinfo
svn commit -m "appropriate message"
{code}

(To be done on both branch-2 and trunk.)





[jira] [Updated] (HADOOP-8323) Revert HADOOP-7940 and improve javadocs and test for Text.clear()

2012-05-25 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-8323:


  Resolution: Fixed
   Fix Version/s: 2.0.1-alpha
Target Version/s:   (was: 2.0.1-alpha, 3.0.0)
  Status: Resolved  (was: Patch Available)

Committed these trivial additions to branch-2 and trunk.

 Revert HADOOP-7940 and improve javadocs and test for Text.clear()
 -

 Key: HADOOP-8323
 URL: https://issues.apache.org/jira/browse/HADOOP-8323
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Critical
  Labels: performance
 Fix For: 2.0.1-alpha

 Attachments: HADOOP-8323.patch, HADOOP-8323.patch, HADOOP-8323.patch


 Per [~jdonofrio]'s comments on HADOOP-7940, we should revert it, as it has 
 caused a performance regression (for scenarios where Text is reused, popular 
 in MR).
 The clear() works as intended, since the API also offers a way to query the 
 current length.





[jira] [Created] (HADOOP-8436) NPE In getLocalPathForWrite ( path, conf ) when dfs.client.buffer.dir not configured

2012-05-25 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-8436:


 Summary: NPE In getLocalPathForWrite ( path, conf ) when 
dfs.client.buffer.dir not configured
 Key: HADOOP-8436
 URL: https://issues.apache.org/jira/browse/HADOOP-8436
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Brahma Reddy Battula


Call dirAllocator.getLocalPathForWrite(path, conf) without configuring 
dfs.client.buffer.dir:
{noformat}
java.lang.NullPointerException
at 
org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:261)
at 
org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:365)
at 
org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:134)
at 
org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:113)
{noformat}
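
A minimal sketch of the failing call (assumed usage, not an attached test):

{code}
// Hedged repro sketch. Classes: org.apache.hadoop.conf.Configuration,
// org.apache.hadoop.fs.LocalDirAllocator, org.apache.hadoop.fs.Path.
// With dfs.client.buffer.dir unset, confChanged() presumably trips over the
// missing dirs value and throws the NPE above.
Configuration conf = new Configuration();
LocalDirAllocator dirAllocator = new LocalDirAllocator("dfs.client.buffer.dir");
Path result = dirAllocator.getLocalPathForWrite("/tmp/somefile", conf);  // NPE here
{code}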





[jira] [Assigned] (HADOOP-8436) NPE In getLocalPathForWrite ( path, conf ) when dfs.client.buffer.dir not configured

2012-05-25 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula reassigned HADOOP-8436:


Assignee: Brahma Reddy Battula

 NPE In getLocalPathForWrite ( path, conf ) when dfs.client.buffer.dir not 
 configured
 

 Key: HADOOP-8436
 URL: https://issues.apache.org/jira/browse/HADOOP-8436
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula

 Call dirAllocator.getLocalPathForWrite(path, conf) without configuring 
 dfs.client.buffer.dir:
 {noformat}
 java.lang.NullPointerException
   at 
 org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:261)
   at 
 org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:365)
   at 
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:134)
   at 
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:113)
 {noformat}





[jira] [Resolved] (HADOOP-8432) SH script syntax errors

2012-05-25 Thread sergio kosik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sergio kosik resolved HADOOP-8432.
--

  Resolution: Not A Problem
Release Note: Sure, with bash it works.

 SH script syntax errors
 ---

 Key: HADOOP-8432
 URL: https://issues.apache.org/jira/browse/HADOOP-8432
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.0.0-alpha
 Environment: Ubuntu 12, Oracle JDK 1.7.0_03,
 env. variables set to:
 export JAVA_HOME=/mnt/dataStorage/storage/local/glassfish3/jdk7
 export PATH=$PATH:$JAVA_HOME/bin
 export HADOOP_INSTALL=/mnt/dataStorage/storage/local/hadoop-2.0.0-alpha
 export PATH=$PATH:$HADOOP_INSTALL/bin
 export HADOOP_CONF_DIR=$HADOOP_INSTALL/etc/hadoop
Reporter: sergio kosik
Priority: Blocker
 Fix For: 2.0.0-alpha


 Hi, everyone.
 I just can't start the new binary release of Hadoop with the following CLI 
 command:
 sh $HADOOP_INSTALL/sbin/start-dfs.sh
 ... /hadoop-2.0.0-alpha/sbin/start-dfs.sh: 78: ... 
 /hadoop-2.0.0-alpha/sbin/../libexec/hadoop-config.sh: Syntax error: word 
 unexpected (expecting ")")
 Inside the start-dfs.sh script there are multiple syntax errors. Could you 
 fix them?
 Regards





[jira] [Updated] (HADOOP-8432) SH script syntax errors

2012-05-25 Thread sergio kosik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sergio kosik updated HADOOP-8432:
-

Release Note: Truly, with bash it works.  (was: Sure, with bash it works.)

 SH script syntax errors
 ---

 Key: HADOOP-8432
 URL: https://issues.apache.org/jira/browse/HADOOP-8432
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.0.0-alpha
 Environment: Ubuntu 12, Oracle JDK 1.7.0_03,
 env. variables set to:
 export JAVA_HOME=/mnt/dataStorage/storage/local/glassfish3/jdk7
 export PATH=$PATH:$JAVA_HOME/bin
 export HADOOP_INSTALL=/mnt/dataStorage/storage/local/hadoop-2.0.0-alpha
 export PATH=$PATH:$HADOOP_INSTALL/bin
 export HADOOP_CONF_DIR=$HADOOP_INSTALL/etc/hadoop
Reporter: sergio kosik
Priority: Blocker
 Fix For: 2.0.0-alpha


 Hi, everyone.
 I just can't start the new binary release of Hadoop with the following CLI 
 command:
 sh $HADOOP_INSTALL/sbin/start-dfs.sh
 ... /hadoop-2.0.0-alpha/sbin/start-dfs.sh: 78: ... 
 /hadoop-2.0.0-alpha/sbin/../libexec/hadoop-config.sh: Syntax error: word 
 unexpected (expecting ")")
 Inside the start-dfs.sh script there are multiple syntax errors. Could you 
 fix them?
 Regards





[jira] [Created] (HADOOP-8437) getLocalPathForWrite is not throwing any exception for invalid paths

2012-05-25 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HADOOP-8437:


 Summary: getLocalPathForWrite is not throwing any exception for 
invalid paths
 Key: HADOOP-8437
 URL: https://issues.apache.org/jira/browse/HADOOP-8437
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.1-alpha
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


Call dirAllocator.getLocalPathForWrite("/InvalidPath", conf);
Here it does not throw any exception, but in earlier versions it used to throw.





[jira] [Commented] (HADOOP-8437) getLocalPathForWrite is not throwing any exception for invalid paths

2012-05-25 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283223#comment-13283223
 ] 

Brahma Reddy Battula commented on HADOOP-8437:
--

In the earlier version:
{code}
localDirs = dirs.toArray(new String[dirs.size()]);
dirDF = dfList.toArray(new DF[dirs.size()]);
savedLocalDirs = newLocalDirs;
if (0 == dirs.size()) {
  throw new IOException("No dirs to select.Total dirs size is 0");
}
// randomize the first disk picked in the round-robin selection
dirNumLastAccessed = dirIndexRandomizer.nextInt(dirs.size());
dirNumLastAccessedforKnownSize = dirNumLastAccessed;
{code}

Here it throws an IOException saying "No dirs to select.Total dirs size is 0".

But in branch-2 and trunk, the code is like the following:

{code}
localDirs = dirs.toArray(new String[dirs.size()]);
dirDF = dfList.toArray(new DF[dirs.size()]);
savedLocalDirs = newLocalDirs;
  
// randomize the first disk picked in the round-robin selection 
dirNumLastAccessed = dirIndexRandomizer.nextInt(dirs.size());
{code}


Here the following check was removed:

{code}
if (0 == dirs.size()) {
  throw new IOException("No dirs to select.Total dirs size is 0");
}
{code}
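
A regression test along these lines would pin the old behavior down (a hypothetical sketch, not an attached patch):

{code}
// Hypothetical test: an invalid path should surface as an IOException,
// as it did before the size check above was removed.
try {
  dirAllocator.getLocalPathForWrite("/InvalidPath", conf);
  fail("Expected IOException for an invalid path");
} catch (IOException expected) {
  // earlier versions threw "No dirs to select.Total dirs size is 0"
}
{code}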

 getLocalPathForWrite is not throwing any exception for invalid paths
 

 Key: HADOOP-8437
 URL: https://issues.apache.org/jira/browse/HADOOP-8437
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.1-alpha
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula

 Call dirAllocator.getLocalPathForWrite("/InvalidPath", conf);
 Here it does not throw any exception, but in earlier versions it used to throw.





[jira] [Created] (HADOOP-8438) hadoop-validate-setup.sh refers to examples jar file which doesn't exist

2012-05-25 Thread Devaraj K (JIRA)
Devaraj K created HADOOP-8438:
-

 Summary: hadoop-validate-setup.sh refers to examples jar file 
which doesn't exist
 Key: HADOOP-8438
 URL: https://issues.apache.org/jira/browse/HADOOP-8438
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Devaraj K
Assignee: Devaraj K


hadoop-validate-setup.sh is trying to find a file named 
hadoop-examples-*.jar and fails to find it because the examples jar has been 
renamed to hadoop-mapreduce-examples-*.jar.

{code:xml}
linux-rj72:/home/hadoop/hadoop-3.0.0-SNAPSHOT/sbin # ./hadoop-validate-setup.sh
find: `/usr/share/hadoop': No such file or directory
Did not find hadoop-examples-*.jar under '/home/hadoop-3.0.0-SNAPSHOT or 
'/usr/share/hadoop'
linux-rj72:/home/hadoop-3.0.0-SNAPSHOT/sbin #
{code}






[jira] [Updated] (HADOOP-8438) hadoop-validate-setup.sh refers to examples jar file which doesn't exist

2012-05-25 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated HADOOP-8438:
--

Description: 
hadoop-validate-setup.sh is trying to find a file named 
hadoop-examples-\*.jar and fails to find it because the examples jar has been 
renamed to hadoop-mapreduce-examples-\*.jar.

{code:xml}
linux-rj72:/home/hadoop/hadoop-3.0.0-SNAPSHOT/sbin # ./hadoop-validate-setup.sh
find: `/usr/share/hadoop': No such file or directory
Did not find hadoop-examples-*.jar under '/home/hadoop-3.0.0-SNAPSHOT or 
'/usr/share/hadoop'
linux-rj72:/home/hadoop-3.0.0-SNAPSHOT/sbin #
{code}


  was:
hadoop-validate-setup.sh is trying to find a file named 
hadoop-examples-*.jar and fails to find it because the examples jar has been 
renamed to hadoop-mapreduce-examples-*.jar.

{code:xml}
linux-rj72:/home/hadoop/hadoop-3.0.0-SNAPSHOT/sbin # ./hadoop-validate-setup.sh
find: `/usr/share/hadoop': No such file or directory
Did not find hadoop-examples-*.jar under '/home/hadoop-3.0.0-SNAPSHOT or 
'/usr/share/hadoop'
linux-rj72:/home/hadoop-3.0.0-SNAPSHOT/sbin #
{code}



 hadoop-validate-setup.sh refers to examples jar file which doesn't exist
 

 Key: HADOOP-8438
 URL: https://issues.apache.org/jira/browse/HADOOP-8438
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Devaraj K
Assignee: Devaraj K

 hadoop-validate-setup.sh is trying to find a file named 
 hadoop-examples-\*.jar and fails to find it because the examples jar has been 
 renamed to hadoop-mapreduce-examples-\*.jar.
 {code:xml}
 linux-rj72:/home/hadoop/hadoop-3.0.0-SNAPSHOT/sbin # 
 ./hadoop-validate-setup.sh
 find: `/usr/share/hadoop': No such file or directory
 Did not find hadoop-examples-*.jar under '/home/hadoop-3.0.0-SNAPSHOT or 
 '/usr/share/hadoop'
 linux-rj72:/home/hadoop-3.0.0-SNAPSHOT/sbin #
 {code}





[jira] [Updated] (HADOOP-8438) hadoop-validate-setup.sh refers to examples jar file which doesn't exist

2012-05-25 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated HADOOP-8438:
--

Attachment: HADOOP-8438.patch

 hadoop-validate-setup.sh refers to examples jar file which doesn't exist
 

 Key: HADOOP-8438
 URL: https://issues.apache.org/jira/browse/HADOOP-8438
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Devaraj K
Assignee: Devaraj K
 Attachments: HADOOP-8438.patch


 hadoop-validate-setup.sh is trying to find a file named 
 hadoop-examples-\*.jar and fails to find it because the examples jar has been 
 renamed to hadoop-mapreduce-examples-\*.jar.
 {code:xml}
 linux-rj72:/home/hadoop/hadoop-3.0.0-SNAPSHOT/sbin # 
 ./hadoop-validate-setup.sh
 find: `/usr/share/hadoop': No such file or directory
 Did not find hadoop-examples-*.jar under '/home/hadoop-3.0.0-SNAPSHOT or 
 '/usr/share/hadoop'
 linux-rj72:/home/hadoop-3.0.0-SNAPSHOT/sbin #
 {code}





[jira] [Updated] (HADOOP-8438) hadoop-validate-setup.sh refers to examples jar file which doesn't exist

2012-05-25 Thread Devaraj K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj K updated HADOOP-8438:
--

Affects Version/s: 3.0.0
   2.0.1-alpha
   Status: Patch Available  (was: Open)

I have attached a patch to fix the issue.

 hadoop-validate-setup.sh refers to examples jar file which doesn't exist
 

 Key: HADOOP-8438
 URL: https://issues.apache.org/jira/browse/HADOOP-8438
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.1-alpha, 3.0.0
Reporter: Devaraj K
Assignee: Devaraj K
 Attachments: HADOOP-8438.patch


 hadoop-validate-setup.sh is trying to find a file named 
 hadoop-examples-\*.jar and fails to find it because the examples jar has been 
 renamed to hadoop-mapreduce-examples-\*.jar.
 {code:xml}
 linux-rj72:/home/hadoop/hadoop-3.0.0-SNAPSHOT/sbin # 
 ./hadoop-validate-setup.sh
 find: `/usr/share/hadoop': No such file or directory
 Did not find hadoop-examples-*.jar under '/home/hadoop-3.0.0-SNAPSHOT or 
 '/usr/share/hadoop'
 linux-rj72:/home/hadoop-3.0.0-SNAPSHOT/sbin #
 {code}





[jira] [Commented] (HADOOP-8438) hadoop-validate-setup.sh refers to examples jar file which doesn't exist

2012-05-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283297#comment-13283297
 ] 

Hadoop QA commented on HADOOP-8438:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12529697/HADOOP-8438.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.fs.viewfs.TestViewFsTrash

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1035//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1035//console

This message is automatically generated.

 hadoop-validate-setup.sh refers to examples jar file which doesn't exist
 

 Key: HADOOP-8438
 URL: https://issues.apache.org/jira/browse/HADOOP-8438
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.1-alpha, 3.0.0
Reporter: Devaraj K
Assignee: Devaraj K
 Attachments: HADOOP-8438.patch


 hadoop-validate-setup.sh is trying to find a file named 
 hadoop-examples-\*.jar and fails to find it because the examples jar has been 
 renamed to hadoop-mapreduce-examples-\*.jar.
 {code:xml}
 linux-rj72:/home/hadoop/hadoop-3.0.0-SNAPSHOT/sbin # 
 ./hadoop-validate-setup.sh
 find: `/usr/share/hadoop': No such file or directory
 Did not find hadoop-examples-*.jar under '/home/hadoop-3.0.0-SNAPSHOT or 
 '/usr/share/hadoop'
 linux-rj72:/home/hadoop-3.0.0-SNAPSHOT/sbin #
 {code}





[jira] [Commented] (HADOOP-8415) getDouble() and setDouble() in org.apache.hadoop.conf.Configuration

2012-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283313#comment-13283313
 ] 

Hudson commented on HADOOP-8415:


Integrated in Hadoop-Hdfs-trunk #1056 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1056/])
HADOOP-8415. Add getDouble() and setDouble() in 
org.apache.hadoop.conf.Configuration. Contributed by Jan van der Lugt. (harsh) 
(Revision 1342501)

 Result = SUCCESS
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1342501
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java


 getDouble() and setDouble() in org.apache.hadoop.conf.Configuration
 ---

 Key: HADOOP-8415
 URL: https://issues.apache.org/jira/browse/HADOOP-8415
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 1.0.2
Reporter: Jan van der Lugt
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-8415.patch

   Original Estimate: 0.25h
  Remaining Estimate: 0.25h

 In the org.apache.hadoop.conf.Configuration class, methods exist to set 
 Integers, Longs, Booleans, Floats and Strings, but methods for Doubles are 
 absent. Are they not there for a reason or should they be added? In the 
 latter case, the attached patch contains the missing functions.





[jira] [Commented] (HADOOP-8323) Revert HADOOP-7940 and improve javadocs and test for Text.clear()

2012-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283315#comment-13283315
 ] 

Hudson commented on HADOOP-8323:


Integrated in Hadoop-Hdfs-trunk #1056 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1056/])
HADOOP-8323. Add javadoc and tests for Text.clear() behavior (harsh) 
(Revision 1342514)

 Result = SUCCESS
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1342514
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestText.java


 Revert HADOOP-7940 and improve javadocs and test for Text.clear()
 -

 Key: HADOOP-8323
 URL: https://issues.apache.org/jira/browse/HADOOP-8323
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Critical
  Labels: performance
 Fix For: 2.0.1-alpha

 Attachments: HADOOP-8323.patch, HADOOP-8323.patch, HADOOP-8323.patch


 Per [~jdonofrio]'s comments on HADOOP-7940, we should revert it, as it has 
 caused a performance regression (for scenarios where Text is reused, popular 
 in MR).
 The clear() works as intended, since the API also offers a way to query the 
 current length.





[jira] [Commented] (HADOOP-8422) Deprecate FileSystem#getDefault* and getServerDefault methods that don't take a Path argument

2012-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283316#comment-13283316
 ] 

Hudson commented on HADOOP-8422:


Integrated in Hadoop-Hdfs-trunk #1056 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1056/])
HADOOP-8422. Deprecate FileSystem#getDefault* and getServerDefault methods 
that don't take a Path argument. Contributed by Eli Collins (Revision 1342495)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1342495
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemTestHelper.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/s3/S3FileSystemContractBaseTest.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/s3native/NativeS3FileSystemContractBaseTest.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSPermission.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestPermission.java


 Deprecate FileSystem#getDefault* and getServerDefault methods that don't take 
 a Path argument 
 --

 Key: HADOOP-8422
 URL: https://issues.apache.org/jira/browse/HADOOP-8422
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Fix For: 2.0.1-alpha

 Attachments: hadoop-8422.txt


 The javadocs for FileSystem#getDefaultBlockSize and 
 FileSystem#getDefaultReplication claim that "The given path will be used to 
 locate the actual filesystem"; however, they both ignore the path.
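
In other words, callers should prefer the Path-taking variants; a small sketch of the non-deprecated usage (assuming a Path {{path}} and Configuration {{conf}} in scope):

{code}
// Sketch: use the Path-aware variants so the defaults come from the
// filesystem that actually owns the path.
FileSystem fs = path.getFileSystem(conf);
long blockSize = fs.getDefaultBlockSize(path);       // instead of getDefaultBlockSize()
short replication = fs.getDefaultReplication(path);  // instead of getDefaultReplication()
{code}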





[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

2012-05-25 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283321#comment-13283321
 ] 

Thomas Graves commented on HADOOP-8368:
---

Sorry for the delay, I couldn't post this since JIRA was down.

I'm on a rhel5 box - 64 bit. We build with both 32- and 64-bit Java because we 
want both 32- and 64-bit versions of the native code.  I'm currently using 
Java 1.6.0_22.

The Pipes stuff does now build.
However, when I now try to build with 32-bit Java, it gives the following 
error:

{noformat}
[exec] /8368-test/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c:68: warning: ‘userListHead’ may be used uninitialized in this function
[exec] Building C object CMakeFiles/hadoop.dir/main/native/src/org/apache/hadoop/util/NativeCrc32.c.o
[exec] [100%] Building C object CMakeFiles/hadoop.dir/main/native/src/org/apache/hadoop/util/bulk_crc32.c.o
[exec] /8368-test/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32.c:44:8: warning: extra tokens at end of #endif directive
[exec] Linking C shared library libhadoop.so
[exec] /java_jdk/java/jre/lib/i386/client/libjvm.so: could not read symbols: File in wrong format
{noformat}


I also see that libhadoop.a went away. I'm not positive whether any of our 
customers are using it, but it is an incompatibility. Perhaps others have 
comments on that.


 Use CMake rather than autotools to build native code
 

 Key: HADOOP-8368
 URL: https://issues.apache.org/jira/browse/HADOOP-8368
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, 
 HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, 
 HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, 
 HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, 
 HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, 
 HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, 
 HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch


 It would be good to use cmake rather than autotools to build the native 
 (C/C++) code in Hadoop.
 Rationale:
 1. automake depends on shell scripts, which often have problems running on 
 different operating systems.  It would be extremely difficult, and perhaps 
 impossible, to use autotools under Windows.  Even if it were possible, it 
 might require horrible workarounds like installing cygwin.  Even on Linux 
 variants like Ubuntu 12.04, there are major build issues because /bin/sh is 
 the Dash shell, rather than the Bash shell as it is in other Linux versions.  
 It is currently impossible to build the native code under Ubuntu 12.04 
 because of this problem.
 CMake has robust cross-platform support, including Windows.  It does not use 
 shell scripts.
 2. automake error messages are very confusing.  For example, "autoreconf: 
 cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method 
 path via package Autom4te..." are common error messages.  In order to even 
 start debugging automake problems you need to learn shell, m4, sed, and a 
 bunch of other things.  With CMake, all you have to learn is the syntax of 
 CMakeLists.txt, which is simple.
 CMake can do all the stuff autotools can, such as making sure that required 
 libraries are installed.  There is a Maven plugin for CMake as well.
 3. Different versions of autotools can have very different behaviors.  For 
 example, the version installed under openSUSE defaults to putting libraries 
 in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults 
 to installing the same libraries under /usr/local/lib.  (This is why the FUSE 
 build is currently broken when using OpenSUSE.)  This is another source of 
 build failures and complexity.  If things go wrong, you will often get an 
 error message which is incomprehensible to normal humans (see point #2).
 CMake allows you to specify the minimum_required_version of CMake that a 
 particular CMakeLists.txt will accept.  In addition, CMake maintains strict 
 backwards compatibility between different versions.  This prevents build bugs 
 due to version skew.
 4. autoconf, automake, and libtool are large and rather slow.  This adds to 
 build time.
 For all these reasons, I think we should switch to CMake for compiling native 
 (C/C++) code in Hadoop.


[jira] [Commented] (HADOOP-8438) hadoop-validate-setup.sh refers to examples jar file which doesn't exist

2012-05-25 Thread Devaraj K (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283332#comment-13283332
 ] 

Devaraj K commented on HADOOP-8438:
---

{code:xml}
-1 tests included. The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.
{code}
It doesn't need new tests since it is a script change. I have verified it 
manually.


{code:xml}
-1 core tests. The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

org.apache.hadoop.fs.viewfs.TestViewFsTrash
{code}
This test failure is not related to the patch.



 hadoop-validate-setup.sh refers to examples jar file which doesn't exist
 

 Key: HADOOP-8438
 URL: https://issues.apache.org/jira/browse/HADOOP-8438
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.1-alpha, 3.0.0
Reporter: Devaraj K
Assignee: Devaraj K
 Attachments: HADOOP-8438.patch


 hadoop-validate-setup.sh is trying to find a file named 
 hadoop-examples-\*.jar and fails to find it because the examples jar has been 
 renamed to hadoop-mapreduce-examples-\*.jar.
 {code:xml}
 linux-rj72:/home/hadoop/hadoop-3.0.0-SNAPSHOT/sbin # 
 ./hadoop-validate-setup.sh
 find: `/usr/share/hadoop': No such file or directory
 Did not find hadoop-examples-*.jar under '/home/hadoop-3.0.0-SNAPSHOT or 
 '/usr/share/hadoop'
 linux-rj72:/home/hadoop-3.0.0-SNAPSHOT/sbin #
 {code}





[jira] [Commented] (HADOOP-8358) Config-related WARN for dfs.web.ugi can be avoided.

2012-05-25 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283341#comment-13283341
 ] 

Harsh J commented on HADOOP-8358:
-

Given this patch's triviality (a simple, non-breaking, long-overdue change of 
old-property reliance across projects), I'll commit it by Monday EOD if no one 
has any further objections.

 Config-related WARN for dfs.web.ugi can be avoided.
 ---

 Key: HADOOP-8358
 URL: https://issues.apache.org/jira/browse/HADOOP-8358
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Trivial
 Attachments: HADOOP-8358.patch


 {code}
 2012-05-04 11:55:13,367 WARN org.apache.hadoop.http.lib.StaticUserWebFilter: 
 dfs.web.ugi should not be used. Instead, use hadoop.http.staticuser.user.
 {code}
 Looks easy to fix, and we should avoid using old config params that we 
 ourselves deprecated.
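
The deprecation mapping means code can simply move to the new key (a sketch; the key names are taken from the WARN above, and "dr.who" is the usual default static user):

{code}
// Sketch: prefer the new property over the deprecated one.
Configuration conf = new Configuration();
conf.set("hadoop.http.staticuser.user", "dr.who");
// Setting the old key still works through the deprecation mapping,
// but it logs the WARN quoted above:
conf.set("dfs.web.ugi", "dr.who");
{code}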





[jira] [Updated] (HADOOP-7659) fs -getmerge isn't guaranteed to work well over non-HDFS filesystems

2012-05-25 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-7659:


Issue Type: Improvement  (was: Bug)

In hindsight, this isn't a bug but rather an improvement (i.e., we can document 
what to expect).

 fs -getmerge isn't guaranteed to work well over non-HDFS filesystems
 

 Key: HADOOP-7659
 URL: https://issues.apache.org/jira/browse/HADOOP-7659
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.20.204.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
 Attachments: HADOOP-7659.patch


 When you use {{fs -getmerge}} with HDFS, you are guaranteed file list sorting 
 (part-0, part-1, onwards). When you use the same with the other FSes we 
 bundle, the ordering of the listing is not guaranteed at all. This is because of 
 http://download.oracle.com/javase/6/docs/api/java/io/File.html#list() which 
 we use internally for native file listing.
 This should either be documented as a known issue on the -getmerge help 
 pages/mans, or a consistent ordering (similar to HDFS) must be applied atop 
 the listing. I suspect the latter is only worthwhile for the FSes we include, 
 while other FSes out there still have to deal with this issue. Perhaps we 
 need a recommendation doc note added to our API?
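
As a sketch of the consistent-ordering option (the committed change ultimately documents the behavior instead, per the release note further down):

{code}
// Sketch only: impose a stable order on the native listing inside
// RawLocalFileSystem#listStatus (pathToFile as in the patch further down),
// since java.io.File#list() guarantees none.
String[] names = pathToFile(f).list();
if (names != null) {
  java.util.Arrays.sort(names);
}
{code}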





[jira] [Commented] (HADOOP-7659) fs -getmerge isn't guaranteed to work well over non-HDFS filesystems

2012-05-25 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283348#comment-13283348
 ] 

Harsh J commented on HADOOP-7659:
-

bq. -1 javadoc. The javadoc tool appears to have generated 5 warning messages.

Was probably something else in trunk at the time. See command log below for 
{{mvn javadoc:javadoc}}, which I made sure to do again now before committing:

{code}

➜  trunk  svn diff
Index: hadoop-common-project/hadoop-common/CHANGES.txt
===
--- hadoop-common-project/hadoop-common/CHANGES.txt (revision 1342586)
+++ hadoop-common-project/hadoop-common/CHANGES.txt (working copy)
@@ -76,6 +76,9 @@
 HADOOP-8415. Add getDouble() and setDouble() in
 org.apache.hadoop.conf.Configuration (Jan van der Lugt via harsh)
 
+HADOOP-7659. fs -getmerge isn't guaranteed to work well over non-HDFS
+filesystems (harsh)
+
   BUG FIXES
 
 HADOOP-8177. MBeans shouldn't try to register when it fails to create 
MBeanName.
Index: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
===
--- 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java(revision
 1342586)
+++ 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java(working
 copy)
@@ -307,6 +307,12 @@
 return FileUtil.fullyDelete(f);
   }
  
+  /**
+   * {@inheritDoc}
+   *
+   * (<b>Note</b>: Returned list is not sorted in any given order,
+   * due to reliance on Java's {@link File#list()} API.)
+   */
   public FileStatus[] listStatus(Path f) throws IOException {
 File localf = pathToFile(f);
 FileStatus[] results;
➜  trunk  cd hadoop-common-project/hadoop-common 
➜  hadoop-common  mvn javadoc:javadoc
[INFO] Scanning for projects...
[INFO] 
[INFO] 
[INFO] Building Apache Hadoop Common 3.0.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] >>> maven-javadoc-plugin:2.8.1:javadoc (default-cli) @ hadoop-common >>>
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-common ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- build-helper-maven-plugin:1.5:add-source (add-source) @ 
hadoop-common ---
[INFO] Source directory: 
/Users/harshchouraria/Work/code/apache/root-hadoop/trunk/hadoop-common-project/hadoop-common/target/generated-sources/java
 added.
[INFO] 
[INFO] --- build-helper-maven-plugin:1.5:add-test-source (add-test-source) @ 
hadoop-common ---
[INFO] Test Source directory: 
/Users/harshchouraria/Work/code/apache/root-hadoop/trunk/hadoop-common-project/hadoop-common/target/generated-test-sources/java
 added.
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (compile-proto) @ hadoop-common ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (save-version) @ hadoop-common ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-dependency-plugin:2.1:build-classpath (build-classpath) @ 
hadoop-common ---
[INFO] Skipped writing classpath file 
'/Users/harshchouraria/Work/code/apache/root-hadoop/trunk/hadoop-common-project/hadoop-common/target/classes/mrapp-generated-classpath'.
  No changes found.
[INFO] 
[INFO] <<< maven-javadoc-plugin:2.8.1:javadoc (default-cli) @ hadoop-common <<<
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:javadoc (default-cli) @ hadoop-common ---
[INFO] 
ExcludePrivateAnnotationsStandardDoclet
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 23.042s
[INFO] Finished at: Fri May 25 18:14:53 GMT+05:30 2012
[INFO] Final Memory: 11M/81M
[INFO] 
{code}

 fs -getmerge isn't guaranteed to work well over non-HDFS filesystems
 

 Key: HADOOP-7659
 URL: https://issues.apache.org/jira/browse/HADOOP-7659
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.20.204.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
 Attachments: HADOOP-7659.patch


 When you use {{fs -getmerge}} with HDFS, you are guaranteed file list sorting 
 (part-0, part-1, onwards). When you use the same with the other FSes we 
 bundle, the ordering of the listing is not guaranteed at all. This is because of 
 

[jira] [Updated] (HADOOP-7659) fs -getmerge isn't guaranteed to work well over non-HDFS filesystems

2012-05-25 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-7659:


  Resolution: Fixed
   Fix Version/s: 3.0.0
Target Version/s:   (was: 3.0.0)
Release Note: Documented that the fs -getmerge shell command may not 
work properly over non-HDFS filesystem implementations due to platform-varying 
file list ordering.
  Status: Resolved  (was: Patch Available)

Committed revision 1342600 to trunk.

 fs -getmerge isn't guaranteed to work well over non-HDFS filesystems
 

 Key: HADOOP-7659
 URL: https://issues.apache.org/jira/browse/HADOOP-7659
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.20.204.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-7659.patch


 When you use {{fs -getmerge}} with HDFS, you are guaranteed file list sorting 
 (part-0, part-1, onwards). When you use the same with the other FSes we 
 bundle, the ordering of the listing is not guaranteed at all. This is because of 
 http://download.oracle.com/javase/6/docs/api/java/io/File.html#list() which 
 we use internally for native file listing.
 This should either be documented as a known issue on the -getmerge help 
 pages/mans, or a consistent ordering (similar to HDFS) must be applied atop 
 the listing. I suspect the latter is only worthwhile for the FSes we include, 
 while other FSes out there still have to deal with this issue. Perhaps we 
 need a recommendation doc note added to our API?





[jira] [Created] (HADOOP-8439) Update hadoop-setup-conf.sh to support yarn configurations

2012-05-25 Thread Devaraj K (JIRA)
Devaraj K created HADOOP-8439:
-

 Summary: Update hadoop-setup-conf.sh to support yarn configurations
 Key: HADOOP-8439
 URL: https://issues.apache.org/jira/browse/HADOOP-8439
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Devaraj K
Assignee: Devaraj K


At present hadoop-setup-conf.sh refers to the classic mapred properties. It can 
be updated to support YARN configurations.





[jira] [Updated] (HADOOP-8268) A few pom.xml across Hadoop project may fail XML validation

2012-05-25 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-8268:


Attachment: HADOOP-8628.patch

The patch had a few issues: newlines were Windows-style (with trailing CRs), and 
it wasn't -p0 applicable (although that is not an issue these days with 
test-patch, as it tries up to three levels).

Here's a more compatible svn patch. Let's hit the QA bot with this.

Locally, mvn 3.x passes an install run (without tests) for me, so I am +1. I 
will commit this by Monday regardless of QA (it would be good to have it run, 
though) unless someone else objects. The same kind of lines are already used in 
HBase, and that's another basis for my +1 here.

Thanks Radim.

(Here it goes…)

 A few pom.xml across Hadoop project may fail XML validation
 ---

 Key: HADOOP-8268
 URL: https://issues.apache.org/jira/browse/HADOOP-8268
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.0-alpha
 Environment: FreeBSD 8.2 / AMD64
Reporter: Radim Kolar
Assignee: Radim Kolar
  Labels: maven, patch
 Attachments: HADOOP-8268.patch, HADOOP-8268.patch, hadoop-pom.txt, 
 hadoop-pom.txt, hadoop-pom.txt, poms-patch.txt, poms-patch.txt


 In a few pom files there are embedded Ant commands which contain '>' 
 redirection. This makes the XML file invalid, and the POM file cannot be 
 deployed into validating Maven repository managers such as Artifactory.





[jira] [Updated] (HADOOP-8268) A few pom.xml across Hadoop project may fail XML validation

2012-05-25 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-8268:


Attachment: (was: HADOOP-8628.patch)

 A few pom.xml across Hadoop project may fail XML validation
 ---

 Key: HADOOP-8268
 URL: https://issues.apache.org/jira/browse/HADOOP-8268
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.0-alpha
 Environment: FreeBSD 8.2 / AMD64
Reporter: Radim Kolar
Assignee: Radim Kolar
  Labels: maven, patch
 Attachments: HADOOP-8268.patch, HADOOP-8268.patch, hadoop-pom.txt, 
 hadoop-pom.txt, hadoop-pom.txt, poms-patch.txt, poms-patch.txt


 In a few pom files there are embedded Ant commands which contain '>' 
 redirection. This makes the XML file invalid, and the POM file cannot be 
 deployed into validating Maven repository managers such as Artifactory.





[jira] [Updated] (HADOOP-8268) A few pom.xml across Hadoop project may fail XML validation

2012-05-25 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-8268:


Attachment: HADOOP-8268.patch

Fixed filename of patch.

 A few pom.xml across Hadoop project may fail XML validation
 ---

 Key: HADOOP-8268
 URL: https://issues.apache.org/jira/browse/HADOOP-8268
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.0-alpha
 Environment: FreeBSD 8.2 / AMD64
Reporter: Radim Kolar
Assignee: Radim Kolar
  Labels: maven, patch
 Attachments: HADOOP-8268.patch, HADOOP-8268.patch, hadoop-pom.txt, 
 hadoop-pom.txt, hadoop-pom.txt, poms-patch.txt, poms-patch.txt


 In a few pom files there are embedded ant commands which contain '>' 
 redirection. This makes the XML file invalid, and such a POM file cannot be 
 deployed into validating Maven repository managers such as Artifactory.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8415) getDouble() and setDouble() in org.apache.hadoop.conf.Configuration

2012-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283400#comment-13283400
 ] 

Hudson commented on HADOOP-8415:


Integrated in Hadoop-Mapreduce-trunk #1090 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1090/])
HADOOP-8415. Add getDouble() and setDouble() in 
org.apache.hadoop.conf.Configuration. Contributed by Jan van der Lugt. (harsh) 
(Revision 1342501)

 Result = ABORTED
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1342501
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java


 getDouble() and setDouble() in org.apache.hadoop.conf.Configuration
 ---

 Key: HADOOP-8415
 URL: https://issues.apache.org/jira/browse/HADOOP-8415
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 1.0.2
Reporter: Jan van der Lugt
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-8415.patch

   Original Estimate: 0.25h
  Remaining Estimate: 0.25h

 In the org.apache.hadoop.conf.Configuration class, methods exist to set 
 Integers, Longs, Booleans, Floats and Strings, but methods for Doubles are 
 absent. Are they not there for a reason or should they be added? In the 
 latter case, the attached patch contains the missing functions.
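
For illustration, a minimal sketch of how the new accessors would be used, 
assuming they mirror the existing getFloat()/setFloat() pattern (the key name 
below is hypothetical):

{code}
Configuration conf = new Configuration();
conf.setDouble("example.spill.ratio", 0.85);  // hypothetical key
// The second argument is the default, returned when the key is unset.
double ratio = conf.getDouble("example.spill.ratio", 0.80);
{code}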

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7659) fs -getmerge isn't guaranteed to work well over non-HDFS filesystems

2012-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283405#comment-13283405
 ] 

Hudson commented on HADOOP-7659:


Integrated in Hadoop-Mapreduce-trunk #1090 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1090/])
HADOOP-7659. fs -getmerge isn't guaranteed to work well over non-HDFS 
filesystems (harsh) (Revision 1342600)

 Result = ABORTED
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1342600
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java


 fs -getmerge isn't guaranteed to work well over non-HDFS filesystems
 

 Key: HADOOP-7659
 URL: https://issues.apache.org/jira/browse/HADOOP-7659
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.20.204.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-7659.patch


 When you use {{fs -getmerge}} with HDFS, you are guaranteed file list sorting 
 (part-0, part-1, onwards). When you use the same with the other FSes we 
 bundle, the ordering of the listing is not guaranteed at all. This is because 
 of http://download.oracle.com/javase/6/docs/api/java/io/File.html#list(), 
 which we use internally for native file listing.
 This should either be documented as a known issue on the -getmerge help 
 pages/mans, or a consistent ordering (similar to HDFS) must be applied atop 
 the listing. I suspect the latter only makes it worthwhile for what we 
 include, while other FSes out there still have to deal with this issue. 
 Perhaps we need a recommendation doc note added to our API?
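
For illustration, a minimal sketch of the consistent-ordering idea, assuming 
the fix simply sorts the java.io.File listing before it is consumed (the 
directory name below is made up):

{code}
// File#list() makes no ordering guarantee, so sort explicitly to get
// HDFS-style lexicographic part-file ordering (part-0, part-1, onwards).
String[] names = new java.io.File("/tmp/parts").list();
if (names != null) {
  java.util.Arrays.sort(names);
}
{code}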

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8323) Revert HADOOP-7940 and improve javadocs and test for Text.clear()

2012-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283402#comment-13283402
 ] 

Hudson commented on HADOOP-8323:


Integrated in Hadoop-Mapreduce-trunk #1090 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1090/])
HADOOP-8323. Add javadoc and tests for Text.clear() behavior (harsh) 
(Revision 1342514)

 Result = ABORTED
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1342514
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestText.java


 Revert HADOOP-7940 and improve javadocs and test for Text.clear()
 -

 Key: HADOOP-8323
 URL: https://issues.apache.org/jira/browse/HADOOP-8323
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Critical
  Labels: performance
 Fix For: 2.0.1-alpha

 Attachments: HADOOP-8323.patch, HADOOP-8323.patch, HADOOP-8323.patch


 Per [~jdonofrio]'s comments on HADOOP-7940, we should revert it as it has 
 caused a performance regression (for scenarios where Text is reused, popular 
 in MR).
 The clear() works as intended, as the API also offers a current length API.
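
For illustration, a minimal sketch of the contract the improved javadocs spell 
out, assuming the point is that getBytes() is only valid up to getLength():

{code}
Text text = new Text("hello world");
text.clear();
// clear() resets the logical length but, for reuse performance, does not
// shrink or zero the backing array; read getBytes() only up to getLength().
byte[] raw = text.getBytes();  // may still hold stale bytes past the length
int valid = text.getLength();  // 0 after clear()
{code}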

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8422) Deprecate FileSystem#getDefault* and getServerDefault methods that don't take a Path argument

2012-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283403#comment-13283403
 ] 

Hudson commented on HADOOP-8422:


Integrated in Hadoop-Mapreduce-trunk #1090 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1090/])
HADOOP-8422. Deprecate FileSystem#getDefault* and getServerDefault methods 
that don't take a Path argument. Contributed by Eli Collins (Revision 1342495)

 Result = ABORTED
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1342495
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemTestHelper.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/s3/S3FileSystemContractBaseTest.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/s3native/NativeS3FileSystemContractBaseTest.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSPermission.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestPermission.java


 Deprecate FileSystem#getDefault* and getServerDefault methods that don't take 
 a Path argument 
 --

 Key: HADOOP-8422
 URL: https://issues.apache.org/jira/browse/HADOOP-8422
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Fix For: 2.0.1-alpha

 Attachments: hadoop-8422.txt


 The javadocs for FileSystem#getDefaultBlockSize and 
 FileSystem#getDefaultReplication claim that "The given path will be used to 
 locate the actual filesystem", however they both ignore the path.
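
For illustration, a minimal sketch of the direction the deprecation points 
callers in, assuming the Path-taking variants are the intended replacements 
(the path below is made up):

{code}
FileSystem fs = FileSystem.get(new Configuration());
Path file = new Path("/user/example/data.txt");  // hypothetical path
// Prefer the Path-taking variants so the defaults come from the
// filesystem that actually owns the path in question.
long blockSize = fs.getDefaultBlockSize(file);
short replication = fs.getDefaultReplication(file);
{code}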

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-6871) When the value of a configuration key is set to its unresolved form, it causes the IllegalStateException in Configuration.get() stating that substitution depth is too

2012-05-25 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283422#comment-13283422
 ] 

Harsh J commented on HADOOP-6871:
-

{quote}
-1 core tests. The patch failed these unit tests:
org.apache.hadoop.ha.TestHealthMonitor
{quote}

This was perhaps an issue on trunk back then. Passes for me locally:

{code}
➜  hadoop-common  svn diff  
Index: src/test/java/org/apache/hadoop/conf/TestConfiguration.java
===================================================================
--- src/test/java/org/apache/hadoop/conf/TestConfiguration.java (revision 1342610)
+++ src/test/java/org/apache/hadoop/conf/TestConfiguration.java (working copy)
@@ -999,6 +999,15 @@
         "Not returning expected number of classes. Number of returned classes ="
         + classes.length, 0, classes.length);
   }
+
+  public void testInvalidSubstitutation() {
+    String key = "test.random.key";
+    String keyExpression = "${" + key + "}";
+    Configuration configuration = new Configuration();
+    configuration.set(key, keyExpression);
+    String value = configuration.get(key);
+    assertTrue("Unexpected value " + value, value.equals(keyExpression));
+  }
   
   public static void main(String[] argv) throws Exception {
     junit.textui.TestRunner.main(new String[]{
Index: src/main/java/org/apache/hadoop/conf/Configuration.java
===================================================================
--- src/main/java/org/apache/hadoop/conf/Configuration.java (revision 1342610)
+++ src/main/java/org/apache/hadoop/conf/Configuration.java (working copy)
@@ -617,7 +617,13 @@
     }
     Matcher match = varPat.matcher("");
     String eval = expr;
+    Set<String> evalSet = new HashSet<String>();
     for(int s=0; s<MAX_SUBST; s++) {
+      if (evalSet.contains(eval)) {
+        // Cyclic resolution pattern detected. Return current expression.
+        return eval;
+      }
+      evalSet.add(eval);
       match.reset(eval);
       if (!match.find()) {
         return eval;
➜  hadoop-common  mvn clean install -Dtest=TestConfiguration,TestHealthMonitor
[INFO] Scanning for projects...
[INFO] 
[INFO] 
[INFO] Building Apache Hadoop Common 3.0.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-common ---
[INFO] Deleting 
/Users/harshchouraria/Work/code/apache/root-hadoop/trunk/hadoop-common-project/hadoop-common/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-common ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/Users/harshchouraria/Work/code/apache/root-hadoop/trunk/hadoop-common-project/hadoop-common/target/test-dir
[mkdir] Created dir: 
/Users/harshchouraria/Work/code/apache/root-hadoop/trunk/hadoop-common-project/hadoop-common/target/test/data
[INFO] Executed tasks
[INFO] 
[INFO] --- build-helper-maven-plugin:1.5:add-source (add-source) @ 
hadoop-common ---
[INFO] Source directory: 
/Users/harshchouraria/Work/code/apache/root-hadoop/trunk/hadoop-common-project/hadoop-common/target/generated-sources/java
 added.
[INFO] 
[INFO] --- build-helper-maven-plugin:1.5:add-test-source (add-test-source) @ 
hadoop-common ---
[INFO] Test Source directory: 
/Users/harshchouraria/Work/code/apache/root-hadoop/trunk/hadoop-common-project/hadoop-common/target/generated-test-sources/java
 added.
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (compile-proto) @ hadoop-common ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (save-version) @ hadoop-common ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-dependency-plugin:2.1:build-classpath (build-classpath) @ 
hadoop-common ---
[INFO] Wrote classpath file 
'/Users/harshchouraria/Work/code/apache/root-hadoop/trunk/hadoop-common-project/hadoop-common/target/classes/mrapp-generated-classpath'.
[INFO] 
[INFO] --- maven-resources-plugin:2.2:resources (default-resources) @ 
hadoop-common ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.3.2:compile (default-compile) @ 
hadoop-common ---
[INFO] Compiling 629 source files to 
/Users/harshchouraria/Work/code/apache/root-hadoop/trunk/hadoop-common-project/hadoop-common/target/classes
[INFO] 
[INFO] --- avro-maven-plugin:1.5.3:schema (generate-avro-test-sources) @ 
hadoop-common ---
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (compile-test-proto) @ hadoop-common ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (generate-test-sources) @ hadoop-common 
---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- 

[jira] [Commented] (HADOOP-8415) getDouble() and setDouble() in org.apache.hadoop.conf.Configuration

2012-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283426#comment-13283426
 ] 

Hudson commented on HADOOP-8415:


Integrated in Hadoop-Hdfs-trunk-Commit #2361 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2361/])
HADOOP-8415. Add getDouble() and setDouble() in 
org.apache.hadoop.conf.Configuration. Contributed by Jan van der Lugt. (harsh) 
(Revision 1342501)

 Result = SUCCESS
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1342501
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java


 getDouble() and setDouble() in org.apache.hadoop.conf.Configuration
 ---

 Key: HADOOP-8415
 URL: https://issues.apache.org/jira/browse/HADOOP-8415
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 1.0.2
Reporter: Jan van der Lugt
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-8415.patch

   Original Estimate: 0.25h
  Remaining Estimate: 0.25h

 In the org.apache.hadoop.conf.Configuration class, methods exist to set 
 Integers, Longs, Booleans, Floats and Strings, but methods for Doubles are 
 absent. Are they not there for a reason or should they be added? In the 
 latter case, the attached patch contains the missing functions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8323) Revert HADOOP-7940 and improve javadocs and test for Text.clear()

2012-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283429#comment-13283429
 ] 

Hudson commented on HADOOP-8323:


Integrated in Hadoop-Hdfs-trunk-Commit #2361 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2361/])
HADOOP-8323. Add javadoc and tests for Text.clear() behavior (harsh) 
(Revision 1342514)

 Result = SUCCESS
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1342514
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestText.java


 Revert HADOOP-7940 and improve javadocs and test for Text.clear()
 -

 Key: HADOOP-8323
 URL: https://issues.apache.org/jira/browse/HADOOP-8323
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Critical
  Labels: performance
 Fix For: 2.0.1-alpha

 Attachments: HADOOP-8323.patch, HADOOP-8323.patch, HADOOP-8323.patch


 Per [~jdonofrio]'s comments on HADOOP-7940, we should revert it as it has 
 caused a performance regression (for scenarios where Text is reused, popular 
 in MR).
 The clear() works as intended, as the API also offers a current length API.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8422) Deprecate FileSystem#getDefault* and getServerDefault methods that don't take a Path argument

2012-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283430#comment-13283430
 ] 

Hudson commented on HADOOP-8422:


Integrated in Hadoop-Hdfs-trunk-Commit #2361 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2361/])
HADOOP-8422. Deprecate FileSystem#getDefault* and getServerDefault methods 
that don't take a Path argument. Contributed by Eli Collins (Revision 1342495)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1342495
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemTestHelper.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/s3/S3FileSystemContractBaseTest.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/s3native/NativeS3FileSystemContractBaseTest.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSPermission.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestPermission.java


 Deprecate FileSystem#getDefault* and getServerDefault methods that don't take 
 a Path argument 
 --

 Key: HADOOP-8422
 URL: https://issues.apache.org/jira/browse/HADOOP-8422
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Fix For: 2.0.1-alpha

 Attachments: hadoop-8422.txt


 The javadocs for FileSystem#getDefaultBlockSize and 
 FileSystem#getDefaultReplication claim that "The given path will be used to 
 locate the actual filesystem", however they both ignore the path.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7659) fs -getmerge isn't guaranteed to work well over non-HDFS filesystems

2012-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283432#comment-13283432
 ] 

Hudson commented on HADOOP-7659:


Integrated in Hadoop-Hdfs-trunk-Commit #2361 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2361/])
HADOOP-7659. fs -getmerge isn't guaranteed to work well over non-HDFS 
filesystems (harsh) (Revision 1342600)

 Result = SUCCESS
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1342600
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java


 fs -getmerge isn't guaranteed to work well over non-HDFS filesystems
 

 Key: HADOOP-7659
 URL: https://issues.apache.org/jira/browse/HADOOP-7659
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.20.204.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-7659.patch


 When you use {{fs -getmerge}} with HDFS, you are guaranteed file list sorting 
 (part-0, part-1, onwards). When you use the same with the other FSes we 
 bundle, the ordering of the listing is not guaranteed at all. This is because 
 of http://download.oracle.com/javase/6/docs/api/java/io/File.html#list(), 
 which we use internally for native file listing.
 This should either be documented as a known issue on the -getmerge help 
 pages/mans, or a consistent ordering (similar to HDFS) must be applied atop 
 the listing. I suspect the latter only makes it worthwhile for what we 
 include, while other FSes out there still have to deal with this issue. 
 Perhaps we need a recommendation doc note added to our API?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8415) getDouble() and setDouble() in org.apache.hadoop.conf.Configuration

2012-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283438#comment-13283438
 ] 

Hudson commented on HADOOP-8415:


Integrated in Hadoop-Common-trunk-Commit #2288 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2288/])
HADOOP-8415. Add getDouble() and setDouble() in 
org.apache.hadoop.conf.Configuration. Contributed by Jan van der Lugt. (harsh) 
(Revision 1342501)

 Result = SUCCESS
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1342501
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java


 getDouble() and setDouble() in org.apache.hadoop.conf.Configuration
 ---

 Key: HADOOP-8415
 URL: https://issues.apache.org/jira/browse/HADOOP-8415
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 1.0.2
Reporter: Jan van der Lugt
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-8415.patch

   Original Estimate: 0.25h
  Remaining Estimate: 0.25h

 In the org.apache.hadoop.conf.Configuration class, methods exist to set 
 Integers, Longs, Booleans, Floats and Strings, but methods for Doubles are 
 absent. Are they not there for a reason or should they be added? In the 
 latter case, the attached patch contains the missing functions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8323) Revert HADOOP-7940 and improve javadocs and test for Text.clear()

2012-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283441#comment-13283441
 ] 

Hudson commented on HADOOP-8323:


Integrated in Hadoop-Common-trunk-Commit #2288 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2288/])
HADOOP-8323. Add javadoc and tests for Text.clear() behavior (harsh) 
(Revision 1342514)

 Result = SUCCESS
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1342514
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestText.java


 Revert HADOOP-7940 and improve javadocs and test for Text.clear()
 -

 Key: HADOOP-8323
 URL: https://issues.apache.org/jira/browse/HADOOP-8323
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Critical
  Labels: performance
 Fix For: 2.0.1-alpha

 Attachments: HADOOP-8323.patch, HADOOP-8323.patch, HADOOP-8323.patch


 Per [~jdonofrio]'s comments on HADOOP-7940, we should revert it as it has 
 caused a performance regression (for scenarios where Text is reused, popular 
 in MR).
 The clear() works as intended, as the API also offers a current length API.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7659) fs -getmerge isn't guaranteed to work well over non-HDFS filesystems

2012-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283444#comment-13283444
 ] 

Hudson commented on HADOOP-7659:


Integrated in Hadoop-Common-trunk-Commit #2288 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2288/])
HADOOP-7659. fs -getmerge isn't guaranteed to work well over non-HDFS 
filesystems (harsh) (Revision 1342600)

 Result = SUCCESS
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1342600
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java


 fs -getmerge isn't guaranteed to work well over non-HDFS filesystems
 

 Key: HADOOP-7659
 URL: https://issues.apache.org/jira/browse/HADOOP-7659
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.20.204.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-7659.patch


 When you use {{fs -getmerge}} with HDFS, you are guaranteed file list sorting 
 (part-0, part-1, onwards). When you use the same with the other FSes we 
 bundle, the ordering of the listing is not guaranteed at all. This is because 
 of http://download.oracle.com/javase/6/docs/api/java/io/File.html#list(), 
 which we use internally for native file listing.
 This should either be documented as a known issue on the -getmerge help 
 pages/mans, or a consistent ordering (similar to HDFS) must be applied atop 
 the listing. I suspect the latter only makes it worthwhile for what we 
 include, while other FSes out there still have to deal with this issue. 
 Perhaps we need a recommendation doc note added to our API?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8422) Deprecate FileSystem#getDefault* and getServerDefault methods that don't take a Path argument

2012-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283442#comment-13283442
 ] 

Hudson commented on HADOOP-8422:


Integrated in Hadoop-Common-trunk-Commit #2288 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2288/])
HADOOP-8422. Deprecate FileSystem#getDefault* and getServerDefault methods 
that don't take a Path argument. Contributed by Eli Collins (Revision 1342495)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1342495
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemTestHelper.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/s3/S3FileSystemContractBaseTest.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/s3native/NativeS3FileSystemContractBaseTest.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSPermission.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestPermission.java


 Deprecate FileSystem#getDefault* and getServerDefault methods that don't take 
 a Path argument 
 --

 Key: HADOOP-8422
 URL: https://issues.apache.org/jira/browse/HADOOP-8422
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Fix For: 2.0.1-alpha

 Attachments: hadoop-8422.txt


 The javadocs for FileSystem#getDefaultBlockSize and 
 FileSystem#getDefaultReplication claim that "The given path will be used to 
 locate the actual filesystem", however they both ignore the path.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-6871) When the value of a configuration key is set to its unresolved form, it causes the IllegalStateException in Configuration.get() stating that substitution depth is too la

2012-05-25 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-6871:


 Target Version/s: 3.0.0
Affects Version/s: 3.0.0
 Hadoop Flags: Reviewed

 When the value of a configuration key is set to its unresolved form, it 
 causes the IllegalStateException in Configuration.get() stating that 
 substitution depth is too large.
 -

 Key: HADOOP-6871
 URL: https://issues.apache.org/jira/browse/HADOOP-6871
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 3.0.0
Reporter: Arvind Prabhakar
 Attachments: HADOOP-6871-1.patch, HADOOP-6871-2.patch, 
 HADOOP-6871-3.patch, HADOOP-6871.patch


 When a configuration value is set to its unresolved expression string, it 
 leads to recursive substitution attempts in the 
 {{Configuration.substituteVars(String)}} method until the max substitution 
 check kicks in and raises an IllegalStateException indicating that the 
 substitution depth is too large. For example, the configuration key 
 {{foobar}} with a value set to {{$\{foobar\}}} will cause this behavior. 
 While this is not a usual use case, it can happen in build environments where 
 a property value is not specified and yet is passed into the test mechanism, 
 leading to failures due to this limitation.
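
For illustration, a minimal sketch of the triggering scenario described above:

{code}
Configuration conf = new Configuration();
conf.set("foobar", "${foobar}");  // value is its own unresolved form
// Before the fix: substituteVars() keeps substituting until MAX_SUBST is
// exhausted and an IllegalStateException is raised. After the fix (see the
// patch in the earlier comment): the cycle is detected and the expression
// is returned unresolved.
String value = conf.get("foobar");
{code}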

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-6871) When the value of a configuration key is set to its unresolved form, it causes the IllegalStateException in Configuration.get() stating that substitution depth is too la

2012-05-25 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-6871:


  Resolution: Fixed
   Fix Version/s: 3.0.0
Target Version/s:   (was: 3.0.0)
  Status: Resolved  (was: Patch Available)

Committed revision 1342626 to trunk. Thanks for your contribution, Arvind! 
Thanks for the review as well, Ahmed!

 When the value of a configuration key is set to its unresolved form, it 
 causes the IllegalStateException in Configuration.get() stating that 
 substitution depth is too large.
 -

 Key: HADOOP-6871
 URL: https://issues.apache.org/jira/browse/HADOOP-6871
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 3.0.0
Reporter: Arvind Prabhakar
 Fix For: 3.0.0

 Attachments: HADOOP-6871-1.patch, HADOOP-6871-2.patch, 
 HADOOP-6871-3.patch, HADOOP-6871.patch


 When a configuration value is set to its unresolved expression string, it 
 leads to recursive substitution attempts in the 
 {{Configuration.substituteVars(String)}} method until the max substitution 
 check kicks in and raises an IllegalStateException indicating that the 
 substitution depth is too large. For example, the configuration key 
 {{foobar}} with a value set to {{$\{foobar\}}} will cause this behavior. 
 While this is not a usual use case, it can happen in build environments where 
 a property value is not specified and yet is passed into the test mechanism, 
 leading to failures due to this limitation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-6871) When the value of a configuration key is set to its unresolved form, it causes the IllegalStateException in Configuration.get() stating that substitution depth is too

2012-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283459#comment-13283459
 ] 

Hudson commented on HADOOP-6871:


Integrated in Hadoop-Hdfs-trunk-Commit #2362 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2362/])
HADOOP-6871. When the value of a configuration key is set to its unresolved 
form, it causes an IllegalStateException in Configuration.get() stating that 
substitution depth is too large. Contributed by Arvind Prabhakar (harsh) 
(Revision 1342626)

 Result = SUCCESS
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1342626
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java


 When the value of a configuration key is set to its unresolved form, it 
 causes the IllegalStateException in Configuration.get() stating that 
 substitution depth is too large.
 -

 Key: HADOOP-6871
 URL: https://issues.apache.org/jira/browse/HADOOP-6871
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 3.0.0
Reporter: Arvind Prabhakar
 Fix For: 3.0.0

 Attachments: HADOOP-6871-1.patch, HADOOP-6871-2.patch, 
 HADOOP-6871-3.patch, HADOOP-6871.patch


 When a configuration value is set to its unresolved expression string, it 
 leads to recursive substitution attempts in the 
 {{Configuration.substituteVars(String)}} method until the max substitution 
 check kicks in and raises an IllegalStateException indicating that the 
 substitution depth is too large. For example, the configuration key 
 {{foobar}} with a value set to {{$\{foobar\}}} will cause this behavior. 
 While this is not a usual use case, it can happen in build environments where 
 a property value is not specified and yet is passed into the test mechanism, 
 leading to failures due to this limitation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-6871) When the value of a configuration key is set to its unresolved form, it causes the IllegalStateException in Configuration.get() stating that substitution depth is too

2012-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283475#comment-13283475
 ] 

Hudson commented on HADOOP-6871:


Integrated in Hadoop-Common-trunk-Commit #2289 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2289/])
HADOOP-6871. When the value of a configuration key is set to its unresolved 
form, it causes an IllegalStateException in Configuration.get() stating that 
substitution depth is too large. Contributed by Arvind Prabhakar (harsh) 
(Revision 1342626)

 Result = SUCCESS
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1342626
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java


 When the value of a configuration key is set to its unresolved form, it 
 causes the IllegalStateException in Configuration.get() stating that 
 substitution depth is too large.
 -

 Key: HADOOP-6871
 URL: https://issues.apache.org/jira/browse/HADOOP-6871
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 3.0.0
Reporter: Arvind Prabhakar
 Fix For: 3.0.0

 Attachments: HADOOP-6871-1.patch, HADOOP-6871-2.patch, 
 HADOOP-6871-3.patch, HADOOP-6871.patch


 When a configuration value is set to its unresolved expression string, it 
 leads to recursive substitution attempts in the 
 {{Configuration.substituteVars(String)}} method until the max substitution 
 check kicks in and raises an IllegalStateException indicating that the 
 substitution depth is too large. For example, the configuration key 
 {{foobar}} with a value set to {{$\{foobar\}}} will cause this behavior. 
 While this is not a usual use case, it can happen in build environments where 
 a property value is not specified and yet is passed into the test mechanism, 
 leading to failures due to this limitation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8415) getDouble() and setDouble() in org.apache.hadoop.conf.Configuration

2012-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283498#comment-13283498
 ] 

Hudson commented on HADOOP-8415:


Integrated in Hadoop-Mapreduce-trunk-Commit #2307 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2307/])
HADOOP-8415. Add getDouble() and setDouble() in 
org.apache.hadoop.conf.Configuration. Contributed by Jan van der Lugt. (harsh) 
(Revision 1342501)

 Result = FAILURE
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1342501
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java


 getDouble() and setDouble() in org.apache.hadoop.conf.Configuration
 ---

 Key: HADOOP-8415
 URL: https://issues.apache.org/jira/browse/HADOOP-8415
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 1.0.2
Reporter: Jan van der Lugt
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-8415.patch

   Original Estimate: 0.25h
  Remaining Estimate: 0.25h

 In the org.apache.hadoop.conf.Configuration class, methods exist to set 
 Integers, Longs, Booleans, Floats and Strings, but methods for Doubles are 
 absent. Are they not there for a reason or should they be added? In the 
 latter case, the attached patch contains the missing functions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8323) Revert HADOOP-7940 and improve javadocs and test for Text.clear()

2012-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283501#comment-13283501
 ] 

Hudson commented on HADOOP-8323:


Integrated in Hadoop-Mapreduce-trunk-Commit #2307 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2307/])
HADOOP-8323. Add javadoc and tests for Text.clear() behavior (harsh) 
(Revision 1342514)

 Result = FAILURE
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1342514
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestText.java


 Revert HADOOP-7940 and improve javadocs and test for Text.clear()
 -

 Key: HADOOP-8323
 URL: https://issues.apache.org/jira/browse/HADOOP-8323
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 2.0.0-alpha
Reporter: Harsh J
Assignee: Harsh J
Priority: Critical
  Labels: performance
 Fix For: 2.0.1-alpha

 Attachments: HADOOP-8323.patch, HADOOP-8323.patch, HADOOP-8323.patch


 Per [~jdonofrio]'s comments on HADOOP-7940, we should revert it as it has 
 caused a performance regression (for scenarios where Text is reused, popular 
 in MR).
 The clear() works as intended, as the API also offers a current length API.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8422) Deprecate FileSystem#getDefault* and getServerDefault methods that don't take a Path argument

2012-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283502#comment-13283502
 ] 

Hudson commented on HADOOP-8422:


Integrated in Hadoop-Mapreduce-trunk-Commit #2307 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2307/])
HADOOP-8422. Deprecate FileSystem#getDefault* and getServerDefault methods 
that don't take a Path argument. Contributed by Eli Collins (Revision 1342495)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1342495
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemTestHelper.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/s3/S3FileSystemContractBaseTest.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/s3native/NativeS3FileSystemContractBaseTest.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSPermission.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/security/TestPermission.java


 Deprecate FileSystem#getDefault* and getServerDefault methods that don't take 
 a Path argument 
 --

 Key: HADOOP-8422
 URL: https://issues.apache.org/jira/browse/HADOOP-8422
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Fix For: 2.0.1-alpha

 Attachments: hadoop-8422.txt


 The javadocs for FileSystem#getDefaultBlockSize and 
 FileSystem#getDefaultReplication claim that "The given path will be used to 
 locate the actual filesystem", however they both ignore the path.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7659) fs -getmerge isn't guaranteed to work well over non-HDFS filesystems

2012-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283504#comment-13283504
 ] 

Hudson commented on HADOOP-7659:


Integrated in Hadoop-Mapreduce-trunk-Commit #2307 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2307/])
HADOOP-7659. fs -getmerge isn't guaranteed to work well over non-HDFS 
filesystems (harsh) (Revision 1342600)

 Result = FAILURE
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1342600
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java


 fs -getmerge isn't guaranteed to work well over non-HDFS filesystems
 

 Key: HADOOP-7659
 URL: https://issues.apache.org/jira/browse/HADOOP-7659
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.20.204.0
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-7659.patch


 When you use {{fs -getmerge}} with HDFS, you are guaranteed file list sorting 
 (part-0, part-1, onwards). When you use the same with the other FSes we 
 bundle, the ordering of the listing is not guaranteed at all. This is because 
 of http://download.oracle.com/javase/6/docs/api/java/io/File.html#list(), 
 which we use internally for native file listing.
 This should either be documented as a known issue on the -getmerge help 
 pages/mans, or a consistent ordering (similar to HDFS) must be applied atop 
 the listing. I suspect the latter only makes it worthwhile for what we 
 include, while other FSes out there still have to deal with this issue. 
 Perhaps we need a recommendation doc note added to our API?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-6871) When the value of a configuration key is set to its unresolved form, it causes the IllegalStateException in Configuration.get() stating that substitution depth is too

2012-05-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283553#comment-13283553
 ] 

Hudson commented on HADOOP-6871:


Integrated in Hadoop-Mapreduce-trunk-Commit #2308 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2308/])
HADOOP-6871. When the value of a configuration key is set to its unresolved 
form, it causes an IllegalStateException in Configuration.get() stating that 
substitution depth is too large. Contributed by Arvind Prabhakar (harsh) 
(Revision 1342626)

 Result = FAILURE
harsh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1342626
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java


 When the value of a configuration key is set to its unresolved form, it 
 causes the IllegalStateException in Configuration.get() stating that 
 substitution depth is too large.
 -

 Key: HADOOP-6871
 URL: https://issues.apache.org/jira/browse/HADOOP-6871
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 3.0.0
Reporter: Arvind Prabhakar
 Fix For: 3.0.0

 Attachments: HADOOP-6871-1.patch, HADOOP-6871-2.patch, 
 HADOOP-6871-3.patch, HADOOP-6871.patch


 When a configuration value is set to its unresolved expression string, it 
 leads to recursive substitution attempts in the 
 {{Configuration.substituteVars(String)}} method until the max substitution 
 check kicks in and raises an IllegalStateException indicating that the 
 substitution depth is too large. For example, the configuration key 
 {{foobar}} with a value set to {{$\{foobar\}}} will cause this behavior. 
 While this is not a usual use case, it can happen in build environments where 
 a property value is not specified and yet is passed into the test mechanism, 
 leading to failures due to this limitation.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

2012-05-25 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-8368:
-

Hadoop Flags: Incompatible change

Marking this as an incompatible change since it breaks the building of the 
native code on platforms where it currently worked out of the box.

 Use CMake rather than autotools to build native code
 

 Key: HADOOP-8368
 URL: https://issues.apache.org/jira/browse/HADOOP-8368
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, 
 HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, 
 HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, 
 HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, 
 HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, 
 HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, 
 HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch


 It would be good to use cmake rather than autotools to build the native 
 (C/C++) code in Hadoop.
 Rationale:
 1. automake depends on shell scripts, which often have problems running on 
 different operating systems.  It would be extremely difficult, and perhaps 
 impossible, to use autotools under Windows.  Even if it were possible, it 
 might require horrible workarounds like installing cygwin.  Even on Linux 
 variants like Ubuntu 12.04, there are major build issues because /bin/sh is 
 the Dash shell, rather than the Bash shell as it is in other Linux versions.  
 It is currently impossible to build the native code under Ubuntu 12.04 
 because of this problem.
 CMake has robust cross-platform support, including Windows.  It does not use 
 shell scripts.
 2. automake error messages are very confusing.  For example, "autoreconf: 
 cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method 
 path via package Autom4te..." are common error messages.  In order to even 
 start debugging automake problems you need to learn shell, m4, sed, and a 
 bunch of other things.  With CMake, all you have to learn is the syntax of 
 CMakeLists.txt, which is simple.
 CMake can do all the stuff autotools can, such as making sure that required 
 libraries are installed.  There is a Maven plugin for CMake as well.
 3. Different versions of autotools can have very different behaviors.  For 
 example, the version installed under openSUSE defaults to putting libraries 
 in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults 
 to installing the same libraries under /usr/local/lib.  (This is why the FUSE 
 build is currently broken when using OpenSUSE.)  This is another source of 
 build failures and complexity.  If things go wrong, you will often get an 
 error message which is incomprehensible to normal humans (see point #2).
 CMake allows you to specify the minimum_required_version of CMake that a 
 particular CMakeLists.txt will accept.  In addition, CMake maintains strict 
 backwards compatibility between different versions.  This prevents build bugs 
 due to version skew.
 4. autoconf, automake, and libtool are large and rather slow.  This adds to 
 build time.
 For all these reasons, I think we should switch to CMake for compiling native 
 (C/C++) code in Hadoop.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8415) getDouble() and setDouble() in org.apache.hadoop.conf.Configuration

2012-05-25 Thread Jan van der Lugt (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283597#comment-13283597
 ] 

Jan van der Lugt commented on HADOOP-8415:
--

Hurray! I'll file a separate JIRA for the set-functions on Monday. Thanks for 
your help, Harsh!

 getDouble() and setDouble() in org.apache.hadoop.conf.Configuration
 ---

 Key: HADOOP-8415
 URL: https://issues.apache.org/jira/browse/HADOOP-8415
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 1.0.2
Reporter: Jan van der Lugt
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-8415.patch

   Original Estimate: 0.25h
  Remaining Estimate: 0.25h

 In the org.apache.hadoop.conf.Configuration class, methods exist to set 
 Integers, Longs, Booleans, Floats and Strings, but methods for Doubles are 
 absent. Are they not there for a reason or should they be added? In the 
 latter case, the attached patch contains the missing functions.
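 For illustration, a minimal sketch of how the proposed pair would be used, 
 mirroring the existing getFloat/setFloat signatures (the property key below is 
 made up for the example):
 {code}
import org.apache.hadoop.conf.Configuration;

public class DoubleConfSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Set a double-valued property (the key name is hypothetical).
    conf.setDouble("example.sampling.rate", 0.05);
    // Read it back, falling back to the supplied default when the key is unset.
    double rate = conf.getDouble("example.sampling.rate", 1.0);
    System.out.println(rate);
  }
}
 {code}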

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

2012-05-25 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8368:
-

Hadoop Flags:   (was: Incompatible change)

This isn't intended to be an incompatible change.  I will look into the 32-bit 
JVM issue.

 Use CMake rather than autotools to build native code
 

 Key: HADOOP-8368
 URL: https://issues.apache.org/jira/browse/HADOOP-8368
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, 
 HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, 
 HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, 
 HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, 
 HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, 
 HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, 
 HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch


 It would be good to use cmake rather than autotools to build the native 
 (C/C++) code in Hadoop.
 Rationale:
 1. automake depends on shell scripts, which often have problems running on 
 different operating systems.  It would be extremely difficult, and perhaps 
 impossible, to use autotools under Windows.  Even if it were possible, it 
 might require horrible workarounds like installing cygwin.  Even on Linux 
 variants like Ubuntu 12.04, there are major build issues because /bin/sh is 
 the Dash shell, rather than the Bash shell as it is in other Linux versions.  
 It is currently impossible to build the native code under Ubuntu 12.04 
 because of this problem.
 CMake has robust cross-platform support, including Windows.  It does not use 
 shell scripts.
 2. automake error messages are very confusing.  For example, "autoreconf: 
 cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method 
 path via package Autom4te..." are common error messages.  In order to even 
 start debugging automake problems you need to learn shell, m4, sed, and a 
 bunch of other things.  With CMake, all you have to learn is the syntax of 
 CMakeLists.txt, which is simple.
 CMake can do all the stuff autotools can, such as making sure that required 
 libraries are installed.  There is a Maven plugin for CMake as well.
 3. Different versions of autotools can have very different behaviors.  For 
 example, the version installed under openSUSE defaults to putting libraries 
 in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults 
 to installing the same libraries under /usr/local/lib.  (This is why the FUSE 
 build is currently broken when using OpenSUSE.)  This is another source of 
 build failures and complexity.  If things go wrong, you will often get an 
 error message which is incomprehensible to normal humans (see point #2).
 CMake allows you to specify the minimum_required_version of CMake that a 
 particular CMakeLists.txt will accept.  In addition, CMake maintains strict 
 backwards compatibility between different versions.  This prevents build bugs 
 due to version skew.
 4. autoconf, automake, and libtool are large and rather slow.  This adds to 
 build time.
 For all these reasons, I think we should switch to CMake for compiling native 
 (C/C++) code in Hadoop.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8415) getDouble() and setDouble() in org.apache.hadoop.conf.Configuration

2012-05-25 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283676#comment-13283676
 ] 

Harsh J commented on HADOOP-8415:
-

I've already filed HADOOP-8434 for the setter methods.

 getDouble() and setDouble() in org.apache.hadoop.conf.Configuration
 ---

 Key: HADOOP-8415
 URL: https://issues.apache.org/jira/browse/HADOOP-8415
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 1.0.2
Reporter: Jan van der Lugt
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-8415.patch

   Original Estimate: 0.25h
  Remaining Estimate: 0.25h

 In the org.apache.hadoop.conf.Configuration class, methods exist to set 
 Integers, Longs, Booleans, Floats and Strings, but methods for Doubles are 
 absent. Are they not there for a reason or should they be added? In the 
 latter case, the attached patch contains the missing functions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

2012-05-25 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283682#comment-13283682
 ] 

Allen Wittenauer commented on HADOOP-8368:
--

Other platforms now require cmake to be installed whereas before they didn't.  
That's an incompatible change in my book.

 Use CMake rather than autotools to build native code
 

 Key: HADOOP-8368
 URL: https://issues.apache.org/jira/browse/HADOOP-8368
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, 
 HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, 
 HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, 
 HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, 
 HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, 
 HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, 
 HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch


 It would be good to use cmake rather than autotools to build the native 
 (C/C++) code in Hadoop.
 Rationale:
 1. automake depends on shell scripts, which often have problems running on 
 different operating systems.  It would be extremely difficult, and perhaps 
 impossible, to use autotools under Windows.  Even if it were possible, it 
 might require horrible workarounds like installing cygwin.  Even on Linux 
 variants like Ubuntu 12.04, there are major build issues because /bin/sh is 
 the Dash shell, rather than the Bash shell as it is in other Linux versions.  
 It is currently impossible to build the native code under Ubuntu 12.04 
 because of this problem.
 CMake has robust cross-platform support, including Windows.  It does not use 
 shell scripts.
 2. automake error messages are very confusing.  For example, "autoreconf: 
 cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method 
 path via package Autom4te..." are common error messages.  In order to even 
 start debugging automake problems you need to learn shell, m4, sed, and a 
 bunch of other things.  With CMake, all you have to learn is the syntax of 
 CMakeLists.txt, which is simple.
 CMake can do all the stuff autotools can, such as making sure that required 
 libraries are installed.  There is a Maven plugin for CMake as well.
 3. Different versions of autotools can have very different behaviors.  For 
 example, the version installed under openSUSE defaults to putting libraries 
 in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults 
 to installing the same libraries under /usr/local/lib.  (This is why the FUSE 
 build is currently broken when using OpenSUSE.)  This is another source of 
 build failures and complexity.  If things go wrong, you will often get an 
 error message which is incomprehensible to normal humans (see point #2).
 CMake allows you to specify the minimum_required_version of CMake that a 
 particular CMakeLists.txt will accept.  In addition, CMake maintains strict 
 backwards compatibility between different versions.  This prevents build bugs 
 due to version skew.
 4. autoconf, automake, and libtool are large and rather slow.  This adds to 
 build time.
 For all these reasons, I think we should switch to CMake for compiling native 
 (C/C++) code in Hadoop.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

2012-05-25 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-8368:
-

Hadoop Flags: Incompatible change

 Use CMake rather than autotools to build native code
 

 Key: HADOOP-8368
 URL: https://issues.apache.org/jira/browse/HADOOP-8368
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, 
 HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, 
 HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, 
 HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, 
 HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, 
 HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, 
 HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch


 It would be good to use cmake rather than autotools to build the native 
 (C/C++) code in Hadoop.
 Rationale:
 1. automake depends on shell scripts, which often have problems running on 
 different operating systems.  It would be extremely difficult, and perhaps 
 impossible, to use autotools under Windows.  Even if it were possible, it 
 might require horrible workarounds like installing cygwin.  Even on Linux 
 variants like Ubuntu 12.04, there are major build issues because /bin/sh is 
 the Dash shell, rather than the Bash shell as it is in other Linux versions.  
 It is currently impossible to build the native code under Ubuntu 12.04 
 because of this problem.
 CMake has robust cross-platform support, including Windows.  It does not use 
 shell scripts.
 2. automake error messages are very confusing.  For example, "autoreconf: 
 cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method 
 path via package Autom4te..." are common error messages.  In order to even 
 start debugging automake problems you need to learn shell, m4, sed, and a 
 bunch of other things.  With CMake, all you have to learn is the syntax of 
 CMakeLists.txt, which is simple.
 CMake can do all the stuff autotools can, such as making sure that required 
 libraries are installed.  There is a Maven plugin for CMake as well.
 3. Different versions of autotools can have very different behaviors.  For 
 example, the version installed under openSUSE defaults to putting libraries 
 in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults 
 to installing the same libraries under /usr/local/lib.  (This is why the FUSE 
 build is currently broken when using OpenSUSE.)  This is another source of 
 build failures and complexity.  If things go wrong, you will often get an 
 error message which is incomprehensible to normal humans (see point #2).
 CMake allows you to specify the minimum_required_version of CMake that a 
 particular CMakeLists.txt will accept.  In addition, CMake maintains strict 
 backwards compatibility between different versions.  This prevents build bugs 
 due to version skew.
 4. autoconf, automake, and libtool are large and rather slow.  This adds to 
 build time.
 For all these reasons, I think we should switch to CMake for compiling native 
 (C/C++) code in Hadoop.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7426) User Guide for how to use viewfs with federation

2012-05-25 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-7426:


Component/s: (was: documentation)
 viewfs

 User Guide for how to use viewfs with federation
 

 Key: HADOOP-7426
 URL: https://issues.apache.org/jira/browse/HADOOP-7426
 Project: Hadoop Common
  Issue Type: Improvement
  Components: viewfs
Affects Versions: 2.0.0-alpha
Reporter: Sanjay Radia
Assignee: Sanjay Radia
Priority: Minor
 Attachments: Viewfs Guide.pdf, c7426_20111214.patch, 
 c7426_20111215.patch, c7426_20111215b.patch, c7426_20111218.patch, 
 c7426_20111220.patch, c7426_20111220_site.tar.gz, viewfs_TypicalMountTable.png




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8349) ViewFS doesn't work when the root of a file system is mounted

2012-05-25 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8349:


Component/s: (was: fs)
 viewfs

 ViewFS doesn't work when the root of a file system is mounted
 -

 Key: HADOOP-8349
 URL: https://issues.apache.org/jira/browse/HADOOP-8349
 Project: Hadoop Common
  Issue Type: Bug
  Components: viewfs
Affects Versions: 2.0.0-alpha
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 2.0.0-alpha

 Attachments: HADOOP-8349.patch, HADOOP-8349.patch, HADOOP-8349.patch


 Viewing files under a ViewFS mount which mounts the root of a file system 
 shows trimmed paths. Trying to perform operations on files or directories 
 under the root-mounted file system doesn't work. More info in the first 
 comment of this JIRA.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8299) ViewFs doesn't work with a slash mount point

2012-05-25 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8299:


Component/s: (was: fs)
 viewfs

 ViewFs doesn't work with a slash mount point
 

 Key: HADOOP-8299
 URL: https://issues.apache.org/jira/browse/HADOOP-8299
 Project: Hadoop Common
  Issue Type: Bug
  Components: viewfs
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins

 We currently assume [a typical viewfs client 
 configuration|https://issues.apache.org/jira/secure/attachment/12507504/viewfs_TypicalMountTable.png]
  is a set of non-overlapping mounts. This means every time you want to add a 
 new top-level directory you need to update the client-side mount table config. 
 If users could specify a slash mount, and then add additional mounts as 
 necessary, they could add a new top-level directory without updating all 
 client configs (as long as the new top-level directory was being created on 
 the NN the slash mount points to). This could be achieved by HADOOP-8298 
 (merge mounts, since we're effectively merging all new mount points with 
 slash) or by having the notion of a default NN for a mount table.
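 For context, a minimal sketch of the kind of client-side mount table 
 configuration being described, using the fs.viewfs.mounttable link properties 
 (the namenode URIs and link targets are made up):
 {code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MountTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // One link per top-level directory: adding a new top-level directory
    // later means updating this table on every client.
    conf.set("fs.viewfs.mounttable.default.link./user", "hdfs://nn1:8020/user");
    conf.set("fs.viewfs.mounttable.default.link./tmp", "hdfs://nn2:8020/tmp");
    FileSystem fs = FileSystem.get(URI.create("viewfs:///"), conf);
    System.out.println(fs.getFileStatus(new Path("/user")));
  }
}
 {code}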

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7951) viewfs fails unless all mount points are available

2012-05-25 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-7951:


Component/s: (was: security)
 (was: fs)
 viewfs

 viewfs fails unless all mount points are available
 --

 Key: HADOOP-7951
 URL: https://issues.apache.org/jira/browse/HADOOP-7951
 Project: Hadoop Common
  Issue Type: Bug
  Components: viewfs
Affects Versions: 0.23.1, 0.24.0
Reporter: Daryn Sharp
Priority: Critical

 Obtaining a delegation token via viewfs will attempt to acquire tokens from 
 all filesystems in the mount table.  All clients that obtain tokens, 
 including job submissions, will fail if any of the mount points are 
 unavailable -- even if paths in the unavailable mount will not be accessed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7933) Viewfs changes for MAPREDUCE-3529

2012-05-25 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-7933:


Component/s: viewfs

 Viewfs changes for MAPREDUCE-3529
 -

 Key: HADOOP-7933
 URL: https://issues.apache.org/jira/browse/HADOOP-7933
 Project: Hadoop Common
  Issue Type: Bug
  Components: viewfs
Affects Versions: 0.23.0
Reporter: Siddharth Seth
Assignee: Siddharth Seth
Priority: Critical
 Fix For: 0.23.1

 Attachments: HADOOP-7933.txt, HADOOP7933_v1.txt, HADOOP7933_v2.txt, 
 HDFS2665_v1.txt, HDFS2665_v1.txt


 ViewFs.getDelegationTokens returns a list of tokens for the associated 
 namenodes. Credentials serializes these tokens using the service name for the 
 actual namenodes. Effectively, tokens are not cached for viewfs (some more 
 details in MR 3529). Affects any job which uses the TokenCache in tasks along 
 with viewfs (some Pig jobs).
 Talked to Jitendra about this; some options:
 1. Change Credentials.getAllTokens to return the key, instead of just a token 
 list (associate the viewfs canonical name with a token in credentials).
 2. Have viewfs issue a fake token.
 Both of these would allow for a single viewfs configuration only.
 3. An additional API in FileSystem - something like 
 getDelegationTokens(String renewer, Credentials credentials) - which would 
 check the credentials object before making token requests to the actual 
 namenode (see the sketch after this list).
 4. An additional API in FileSystem - getCanonicalServiceNames - similar to 
 getDelegationTokens, which would return service names for the actual 
 namenodes. TokenCache/Credentials can work using this list.
 5. Have getDelegationTokens check the current UGI, and fetch tokens only if 
 they don't already exist.
 I have a quick patch for option 3, along with associated MR changes.
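 A rough sketch of the API shape option 3 describes - illustrative only; the 
 static helper below stands in for the FileSystem-side logic:
 {code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.token.Token;

public class Option3Sketch {
  /** Fetch tokens for each underlying fs, reusing any already in credentials. */
  public static List<Token<?>> getDelegationTokens(
      List<FileSystem> targets, String renewer, Credentials credentials)
      throws IOException {
    List<Token<?>> tokens = new ArrayList<Token<?>>();
    for (FileSystem target : targets) {
      Text service = new Text(target.getCanonicalServiceName());
      Token<?> cached = credentials.getToken(service);
      if (cached != null) {
        tokens.add(cached);                      // already have one: skip the RPC
      } else {
        tokens.addAll(target.getDelegationTokens(renewer)); // ask the namenode
      }
    }
    return tokens;
  }
}
 {code}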

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7770) ViewFS getFileChecksum throws FileNotFoundException for files in /tmp and /user

2012-05-25 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-7770:


Component/s: (was: fs)
 viewfs

 ViewFS getFileChecksum throws FileNotFoundException for files in /tmp and 
 /user
 ---

 Key: HADOOP-7770
 URL: https://issues.apache.org/jira/browse/HADOOP-7770
 Project: Hadoop Common
  Issue Type: Bug
  Components: viewfs
Affects Versions: 0.23.0
Reporter: Ravi Prakash
Assignee: Ravi Prakash
Priority: Blocker
 Fix For: 0.23.0

 Attachments: HADOOP-7770.patch, HADOOP-7770.patch, HADOOP-7770.patch, 
 HADOOP-7770.patch, HADOOP-7770.unitTest.patch


 Thanks to Rohini Palaniswamy for discovering this bug. To quote
 bq. When doing getFileChecksum for path /user/hadoopqa/somefile, it is trying 
 to fetch checksum for /user/user/hadoopqa/somefile. If /tmp/file, it is 
 trying /tmp/tmp/file. Works fine for other FS operations.
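 A minimal repro sketch of the quoted behavior (this assumes a viewfs mount 
 table is configured and fs.defaultFS points at viewfs):
 {code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ChecksumRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration(); // viewfs mount table assumed configured
    FileSystem fs = FileSystem.get(conf);     // fs.defaultFS = viewfs:///
    Path p = new Path("/user/hadoopqa/somefile");
    // Per the report, this resolved to /user/user/hadoopqa/somefile and
    // threw FileNotFoundException, while other operations resolved correctly.
    System.out.println(fs.getFileChecksum(p));
  }
}
 {code}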

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8194) viewfs: quota command does not report remaining quotas

2012-05-25 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8194:


Component/s: viewfs

 viewfs: quota command does not report remaining quotas
 --

 Key: HADOOP-8194
 URL: https://issues.apache.org/jira/browse/HADOOP-8194
 Project: Hadoop Common
  Issue Type: Bug
  Components: viewfs
Affects Versions: 0.23.2, 0.23.3, 0.24.0
Reporter: John George
Assignee: John George
 Attachments: HADOOP-8194.patch, HADOOP-8194.patch


 The space and namespace quotas and the remaining quota values are not reported.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8408) MR doesn't work with a non-default ViewFS mount table and security enabled

2012-05-25 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8408:


Component/s: (was: fs)
 viewfs

 MR doesn't work with a non-default ViewFS mount table and security enabled
 --

 Key: HADOOP-8408
 URL: https://issues.apache.org/jira/browse/HADOOP-8408
 Project: Hadoop Common
  Issue Type: Bug
  Components: viewfs
Affects Versions: 2.0.0-alpha
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 2.0.1-alpha

 Attachments: HADOOP-8408-amendment.patch, 
 HADOOP-8408-amendment.patch, HDFS-8408.patch


 With security enabled, if one sets up a ViewFS mount table using the default 
 mount table name, everything works as expected. However, if you try to create 
 a ViewFS mount table with a non-default name, you'll end up getting an error 
 like the following (in this case "vfs-cluster" was the name of the mount 
 table) when running an MR job:
 {noformat}
 java.lang.IllegalArgumentException: java.net.UnknownHostException: vfs-cluster
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7950) trunk test failure at org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs

2012-05-25 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-7950:


Component/s: viewfs

 trunk test failure at org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
 

 Key: HADOOP-7950
 URL: https://issues.apache.org/jira/browse/HADOOP-7950
 Project: Hadoop Common
  Issue Type: Bug
  Components: viewfs
Reporter: Sho Shimauchi
Priority: Critical

 I ran mvn test -Dtest=TestViewFileSystemHdfs on trunk and it failed with the 
 following error:
 {code}
 Failed tests:   
 testGetDelegationTokensWithCredentials(org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs):
  expected:<0> but was:<1>
 {code}
 This test was added in HADOOP-7933 .

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8298) ViewFs merge mounts

2012-05-25 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8298:


Component/s: (was: fs)
 viewfs

 ViewFs merge mounts 
 

 Key: HADOOP-8298
 URL: https://issues.apache.org/jira/browse/HADOOP-8298
 Project: Hadoop Common
  Issue Type: New Feature
  Components: viewfs
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins

 A merge mount is a single mount represented by the union of two namespaces. 
 See the viewfs docs (HADOOP-7426) and [ViewFs 
 javadoc|http://hadoop.apache.org/common/docs/r0.23.0/api/org/apache/hadoop/fs/viewfs/ViewFs.html]
  for details.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7953) Viewfs needs documentation

2012-05-25 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-7953:


Component/s: (was: documentation)
 (was: fs)
 viewfs

 Viewfs needs documentation
 --

 Key: HADOOP-7953
 URL: https://issues.apache.org/jira/browse/HADOOP-7953
 Project: Hadoop Common
  Issue Type: Task
  Components: viewfs
Affects Versions: 0.23.0
Reporter: Eli Collins
Assignee: Sanjay Radia

 Currently the only documentation on how to use viewfs lives in the javadoc. 
 Let's add some basic documentation under common (or perhaps federation since 
 that's the context). 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7284) Trash and shell's rm does not work for viewfs

2012-05-25 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-7284:


Component/s: viewfs

 Trash and shell's rm does not work for viewfs
 -

 Key: HADOOP-7284
 URL: https://issues.apache.org/jira/browse/HADOOP-7284
 Project: Hadoop Common
  Issue Type: Bug
  Components: viewfs
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Fix For: 0.23.0

 Attachments: trash1.patch, trash10.patch, trash11.patch, 
 trash2.patch, trash3.patch, trash4.patch, trash5.patch, trash6.patch, 
 trash7.patch, trash8.patch, trash9.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8305) distcp over viewfs is broken

2012-05-25 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8305:


Component/s: viewfs

 distcp over viewfs is broken
 

 Key: HADOOP-8305
 URL: https://issues.apache.org/jira/browse/HADOOP-8305
 Project: Hadoop Common
  Issue Type: Bug
  Components: viewfs
Affects Versions: 0.23.3, 2.0.0-alpha, 3.0.0
Reporter: John George
Assignee: John George
 Fix For: 0.23.3, 2.0.0-alpha, 3.0.0

 Attachments: HADOOP-8305.patch, HADOOP-8305.patch


 This is similar to MAPREDUCE-4133. distcp over viewfs is broken because 
 getDefaultReplication/BlockSize are being requested with no arguments.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

2012-05-25 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283690#comment-13283690
 ] 

Alejandro Abdelnur commented on HADOOP-8368:


We are talking about build environment requirement changes, from autoconf to 
cmake; this does not affect the end user. AFAIK we don't flag this kind of 
thing as an incompatible change. We didn't do it when introducing Maven or 
protoc.

 Use CMake rather than autotools to build native code
 

 Key: HADOOP-8368
 URL: https://issues.apache.org/jira/browse/HADOOP-8368
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, 
 HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, 
 HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, 
 HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, 
 HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, 
 HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, 
 HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch


 It would be good to use cmake rather than autotools to build the native 
 (C/C++) code in Hadoop.
 Rationale:
 1. automake depends on shell scripts, which often have problems running on 
 different operating systems.  It would be extremely difficult, and perhaps 
 impossible, to use autotools under Windows.  Even if it were possible, it 
 might require horrible workarounds like installing cygwin.  Even on Linux 
 variants like Ubuntu 12.04, there are major build issues because /bin/sh is 
 the Dash shell, rather than the Bash shell as it is in other Linux versions.  
 It is currently impossible to build the native code under Ubuntu 12.04 
 because of this problem.
 CMake has robust cross-platform support, including Windows.  It does not use 
 shell scripts.
 2. automake error messages are very confusing.  For example, "autoreconf: 
 cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method 
 path via package Autom4te..." are common error messages.  In order to even 
 start debugging automake problems you need to learn shell, m4, sed, and a 
 bunch of other things.  With CMake, all you have to learn is the syntax of 
 CMakeLists.txt, which is simple.
 CMake can do all the stuff autotools can, such as making sure that required 
 libraries are installed.  There is a Maven plugin for CMake as well.
 3. Different versions of autotools can have very different behaviors.  For 
 example, the version installed under openSUSE defaults to putting libraries 
 in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults 
 to installing the same libraries under /usr/local/lib.  (This is why the FUSE 
 build is currently broken when using OpenSUSE.)  This is another source of 
 build failures and complexity.  If things go wrong, you will often get an 
 error message which is incomprehensible to normal humans (see point #2).
 CMake allows you to specify the minimum_required_version of CMake that a 
 particular CMakeLists.txt will accept.  In addition, CMake maintains strict 
 backwards compatibility between different versions.  This prevents build bugs 
 due to version skew.
 4. autoconf, automake, and libtool are large and rather slow.  This adds to 
 build time.
 For all these reasons, I think we should switch to CMake for compiling native 
 (C/C++) code in Hadoop.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

2012-05-25 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8368:
-

Attachment: HADOOP-8368.021.trimmed.patch

* Build both shared AND static versions of libhadoop and libhdfs (thanks for 
pointing this out, Thomas)

Still looking at 32-bit issues...

 Use CMake rather than autotools to build native code
 

 Key: HADOOP-8368
 URL: https://issues.apache.org/jira/browse/HADOOP-8368
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, 
 HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, 
 HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, 
 HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, 
 HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, 
 HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, 
 HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, 
 HADOOP-8368.021.trimmed.patch


 It would be good to use cmake rather than autotools to build the native 
 (C/C++) code in Hadoop.
 Rationale:
 1. automake depends on shell scripts, which often have problems running on 
 different operating systems.  It would be extremely difficult, and perhaps 
 impossible, to use autotools under Windows.  Even if it were possible, it 
 might require horrible workarounds like installing cygwin.  Even on Linux 
 variants like Ubuntu 12.04, there are major build issues because /bin/sh is 
 the Dash shell, rather than the Bash shell as it is in other Linux versions.  
 It is currently impossible to build the native code under Ubuntu 12.04 
 because of this problem.
 CMake has robust cross-platform support, including Windows.  It does not use 
 shell scripts.
 2. automake error messages are very confusing.  For example, "autoreconf: 
 cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method 
 path via package Autom4te..." are common error messages.  In order to even 
 start debugging automake problems you need to learn shell, m4, sed, and a 
 bunch of other things.  With CMake, all you have to learn is the syntax of 
 CMakeLists.txt, which is simple.
 CMake can do all the stuff autotools can, such as making sure that required 
 libraries are installed.  There is a Maven plugin for CMake as well.
 3. Different versions of autotools can have very different behaviors.  For 
 example, the version installed under openSUSE defaults to putting libraries 
 in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults 
 to installing the same libraries under /usr/local/lib.  (This is why the FUSE 
 build is currently broken when using OpenSUSE.)  This is another source of 
 build failures and complexity.  If things go wrong, you will often get an 
 error message which is incomprehensible to normal humans (see point #2).
 CMake allows you to specify the minimum_required_version of CMake that a 
 particular CMakeLists.txt will accept.  In addition, CMake maintains strict 
 backwards compatibility between different versions.  This prevents build bugs 
 due to version skew.
 4. autoconf, automake, and libtool are large and rather slow.  This adds to 
 build time.
 For all these reasons, I think we should switch to CMake for compiling native 
 (C/C++) code in Hadoop.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

2012-05-25 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283694#comment-13283694
 ] 

Thomas Graves commented on HADOOP-8368:
---

The incompatibility that I would flag is removing the .a files.  I would 
suggest putting them back unless there is a reason to remove them.

I agree that the build environment change alone doesn't qualify as an 
incompatible change.  It would still be nice to document it though - perhaps in 
the twikis and BUILDING.txt. It is fairly obvious when the build fails due to 
cmake being missing.

 Use CMake rather than autotools to build native code
 

 Key: HADOOP-8368
 URL: https://issues.apache.org/jira/browse/HADOOP-8368
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, 
 HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, 
 HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, 
 HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, 
 HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, 
 HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, 
 HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, 
 HADOOP-8368.021.trimmed.patch


 It would be good to use cmake rather than autotools to build the native 
 (C/C++) code in Hadoop.
 Rationale:
 1. automake depends on shell scripts, which often have problems running on 
 different operating systems.  It would be extremely difficult, and perhaps 
 impossible, to use autotools under Windows.  Even if it were possible, it 
 might require horrible workarounds like installing cygwin.  Even on Linux 
 variants like Ubuntu 12.04, there are major build issues because /bin/sh is 
 the Dash shell, rather than the Bash shell as it is in other Linux versions.  
 It is currently impossible to build the native code under Ubuntu 12.04 
 because of this problem.
 CMake has robust cross-platform support, including Windows.  It does not use 
 shell scripts.
 2. automake error messages are very confusing.  For example, "autoreconf: 
 cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method 
 path via package Autom4te..." are common error messages.  In order to even 
 start debugging automake problems you need to learn shell, m4, sed, and a 
 bunch of other things.  With CMake, all you have to learn is the syntax of 
 CMakeLists.txt, which is simple.
 CMake can do all the stuff autotools can, such as making sure that required 
 libraries are installed.  There is a Maven plugin for CMake as well.
 3. Different versions of autotools can have very different behaviors.  For 
 example, the version installed under openSUSE defaults to putting libraries 
 in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults 
 to installing the same libraries under /usr/local/lib.  (This is why the FUSE 
 build is currently broken when using OpenSUSE.)  This is another source of 
 build failures and complexity.  If things go wrong, you will often get an 
 error message which is incomprehensible to normal humans (see point #2).
 CMake allows you to specify the minimum_required_version of CMake that a 
 particular CMakeLists.txt will accept.  In addition, CMake maintains strict 
 backwards compatibility between different versions.  This prevents build bugs 
 due to version skew.
 4. autoconf, automake, and libtool are large and rather slow.  This adds to 
 build time.
 For all these reasons, I think we should switch to CMake for compiling native 
 (C/C++) code in Hadoop.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

2012-05-25 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283696#comment-13283696
 ] 

Thomas Graves commented on HADOOP-8368:
---

Oops, looks like I should have refreshed - ignore my comment about the 
incompatibility of removing the .a files, since Colin has already addressed it.

 Use CMake rather than autotools to build native code
 

 Key: HADOOP-8368
 URL: https://issues.apache.org/jira/browse/HADOOP-8368
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, 
 HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, 
 HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, 
 HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, 
 HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, 
 HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, 
 HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, 
 HADOOP-8368.021.trimmed.patch


 It would be good to use cmake rather than autotools to build the native 
 (C/C++) code in Hadoop.
 Rationale:
 1. automake depends on shell scripts, which often have problems running on 
 different operating systems.  It would be extremely difficult, and perhaps 
 impossible, to use autotools under Windows.  Even if it were possible, it 
 might require horrible workarounds like installing cygwin.  Even on Linux 
 variants like Ubuntu 12.04, there are major build issues because /bin/sh is 
 the Dash shell, rather than the Bash shell as it is in other Linux versions.  
 It is currently impossible to build the native code under Ubuntu 12.04 
 because of this problem.
 CMake has robust cross-platform support, including Windows.  It does not use 
 shell scripts.
 2. automake error messages are very confusing.  For example, "autoreconf: 
 cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method 
 path via package Autom4te..." are common error messages.  In order to even 
 start debugging automake problems you need to learn shell, m4, sed, and a 
 bunch of other things.  With CMake, all you have to learn is the syntax of 
 CMakeLists.txt, which is simple.
 CMake can do all the stuff autotools can, such as making sure that required 
 libraries are installed.  There is a Maven plugin for CMake as well.
 3. Different versions of autotools can have very different behaviors.  For 
 example, the version installed under openSUSE defaults to putting libraries 
 in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults 
 to installing the same libraries under /usr/local/lib.  (This is why the FUSE 
 build is currently broken when using OpenSUSE.)  This is another source of 
 build failures and complexity.  If things go wrong, you will often get an 
 error message which is incomprehensible to normal humans (see point #2).
 CMake allows you to specify the minimum_required_version of CMake that a 
 particular CMakeLists.txt will accept.  In addition, CMake maintains strict 
 backwards compatibility between different versions.  This prevents build bugs 
 due to version skew.
 4. autoconf, automake, and libtool are large and rather slow.  This adds to 
 build time.
 For all these reasons, I think we should switch to CMake for compiling native 
 (C/C++) code in Hadoop.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8357) Restore security in Hadoop 0.22 branch

2012-05-25 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283705#comment-13283705
 ] 

Konstantin Shvachko commented on HADOOP-8357:
-

Sounds like a pretty comprehensive test plan to me.
I see that the impersonation tests cover the Oozie scenarios. Did you test 
DistCp with hftp and without? I believe the WebUI cases should cover that, but 
it's worth asking. Please comment if you did try it.

The benchmarks look really good. I remember seeing similar numbers when 
security was first tested in the then-current 0.20 branch.

Good job fixing findbugs. I agree the remaining few are just the specific use 
cases.

I am +1 on the changes overall and will be glad to start committing soon if 
there are no objections.

 Restore security in Hadoop 0.22 branch
 --

 Key: HADOOP-8357
 URL: https://issues.apache.org/jira/browse/HADOOP-8357
 Project: Hadoop Common
  Issue Type: Task
  Components: security
Affects Versions: 0.22.0
Reporter: Konstantin Shvachko
Assignee: Benoy Antony
 Attachments: SecurityTestPlan_results.pdf, 
 performance_22_vs_22sec.pdf, performance_22_vs_22sec_vs_22secon.pdf, 
 test_patch_results


 This is to track changes for restoring security in 0.22 branch.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

2012-05-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283707#comment-13283707
 ] 

Hadoop QA commented on HADOOP-8368:
---

+1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12529772/HADOOP-8368.021.trimmed.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 2 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1037//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1037//console

This message is automatically generated.

 Use CMake rather than autotools to build native code
 

 Key: HADOOP-8368
 URL: https://issues.apache.org/jira/browse/HADOOP-8368
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, 
 HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, 
 HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, 
 HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, 
 HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, 
 HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, 
 HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, 
 HADOOP-8368.021.trimmed.patch


 It would be good to use cmake rather than autotools to build the native 
 (C/C++) code in Hadoop.
 Rationale:
 1. automake depends on shell scripts, which often have problems running on 
 different operating systems.  It would be extremely difficult, and perhaps 
 impossible, to use autotools under Windows.  Even if it were possible, it 
 might require horrible workarounds like installing cygwin.  Even on Linux 
 variants like Ubuntu 12.04, there are major build issues because /bin/sh is 
 the Dash shell, rather than the Bash shell as it is in other Linux versions.  
 It is currently impossible to build the native code under Ubuntu 12.04 
 because of this problem.
 CMake has robust cross-platform support, including Windows.  It does not use 
 shell scripts.
 2. automake error messages are very confusing.  For example, "autoreconf: 
 cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method 
 path via package Autom4te..." are common error messages.  In order to even 
 start debugging automake problems you need to learn shell, m4, sed, and a 
 bunch of other things.  With CMake, all you have to learn is the syntax of 
 CMakeLists.txt, which is simple.
 CMake can do all the stuff autotools can, such as making sure that required 
 libraries are installed.  There is a Maven plugin for CMake as well.
 3. Different versions of autotools can have very different behaviors.  For 
 example, the version installed under openSUSE defaults to putting libraries 
 in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults 
 to installing the same libraries under /usr/local/lib.  (This is why the FUSE 
 build is currently broken when using OpenSUSE.)  This is another source of 
 build failures and complexity.  If things go wrong, you will often get an 
 error message which is incomprehensible to normal humans (see point #2).
 CMake allows you to specify the minimum_required_version of CMake that a 
 particular CMakeLists.txt will accept.  In addition, CMake maintains strict 
 backwards compatibility between different versions.  This prevents build bugs 
 due to version skew.
 4. autoconf, automake, and libtool are large and rather slow.  This adds to 
 build time.
 For all these reasons, I think we should switch to CMake for compiling native 
 (C/C++) code in Hadoop.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Updated] (HADOOP-8278) Make sure components declare correct set of dependencies

2012-05-25 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8278:
---

Attachment: HADOOP-8278.patch

Tom, great job. I've rebased it to trunk (it was not applying cleanly) and I 
did a few tweaks to get a deployed cluster to work: added jersey-json back to 
hadoop-common, since it seems to be pulled in via reflection and 
dependency:analyze misses that, and set hadoop-auth as compile scope in httpfs.

I'd suggest that for the dependencies that dependency:analyze misses we should 
add comments in the POMs.

 Make sure components declare correct set of dependencies
 

 Key: HADOOP-8278
 URL: https://issues.apache.org/jira/browse/HADOOP-8278
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Tom White
 Attachments: HADOOP-8278.patch, HADOOP-8278.patch


 As mentioned by Scott Carey in 
 https://issues.apache.org/jira/browse/MAPREDUCE-3378?focusedCommentId=13173437&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13173437,
  we need to make sure that components are declaring the correct set of 
 dependencies. In current trunk there are errors of omission and commission 
 (as reported by 'mvn dependency:analyze'):
 * Used undeclared dependencies - these are dependencies that are being met 
 transitively. They should be added explicitly as "compile" or "provided" 
 scope.
 * Unused declared dependencies - these are dependencies that are not needed 
 for compilation, although they may be needed at runtime. They certainly 
 should not be "compile" scope - they should either be removed or marked as 
 "runtime" or "test" scope.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8278) Make sure components declare correct set of dependencies

2012-05-25 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283713#comment-13283713
 ] 

Alejandro Abdelnur commented on HADOOP-8278:


Forgot to mention: we should push this one and then do MR as a separate 
JIRA.

 Make sure components declare correct set of dependencies
 

 Key: HADOOP-8278
 URL: https://issues.apache.org/jira/browse/HADOOP-8278
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Tom White
 Attachments: HADOOP-8278.patch, HADOOP-8278.patch


 As mentioned by Scott Carey in 
 https://issues.apache.org/jira/browse/MAPREDUCE-3378?focusedCommentId=13173437&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13173437,
  we need to make sure that components are declaring the correct set of 
 dependencies. In current trunk there are errors of omission and commission 
 (as reported by 'mvn dependency:analyze'):
 * Used undeclared dependencies - these are dependencies that are being met 
 transitively. They should be added explicitly as "compile" or "provided" 
 scope.
 * Unused declared dependencies - these are dependencies that are not needed 
 for compilation, although they may be needed at runtime. They certainly 
 should not be "compile" scope - they should either be removed or marked as 
 "runtime" or "test" scope.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-5464) DFSClient does not treat write timeout of 0 properly

2012-05-25 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-5464:
---

Attachment: HADOOP-5464.branch1.patch

Backport the fix to branch 1.1


 DFSClient does not treat write timeout of 0 properly
 

 Key: HADOOP-5464
 URL: https://issues.apache.org/jira/browse/HADOOP-5464
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.17.0
Reporter: Raghu Angadi
Assignee: Raghu Angadi
 Fix For: 0.21.0

 Attachments: HADOOP-5464.branch1.patch, HADOOP-5464.patch


 {{dfs.datanode.socket.write.timeout}} is used for sockets to and from 
 datanodes. It is 8 minutes by default. Some users set this to 0, effectively 
 disabling the write timeout (for some specific reasons). 
 When this is set to 0, DFSClient sets the timeout to 5 seconds by mistake 
 while writing to DataNodes. This is exactly the opposite of the real 
 intention of setting it to 0, since 5 seconds is too short.
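
 A minimal sketch of the intended handling (the class, constant, and variable 
 names here are illustrative, not the actual DFSClient code):
 {code}
 // Illustrative sketch only -- not the actual DFSClient implementation.
 public class WriteTimeoutSketch {
   // 8 minutes, the default described in this issue.
   static final int DEFAULT_WRITE_TIMEOUT = 8 * 60 * 1000;
   // Hypothetical per-node extension so the timeout scales with the
   // length of the write pipeline.
   static final int WRITE_TIMEOUT_EXTENSION = 5 * 1000;

   static int effectiveWriteTimeout(
       org.apache.hadoop.conf.Configuration conf, int numNodes) {
     int confTimeout =
         conf.getInt("dfs.datanode.socket.write.timeout", DEFAULT_WRITE_TIMEOUT);
     // 0 means "no timeout" and must be preserved, not replaced by a
     // small positive value (the ~5 second mistake described above).
     if (confTimeout <= 0) {
       return 0;
     }
     return confTimeout + WRITE_TIMEOUT_EXTENSION * numNodes;
   }
 }
 {code}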

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8414) Address problems related to localhost resolving to 127.0.0.1 on Windows

2012-05-25 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283733#comment-13283733
 ] 

Ivan Mitic commented on HADOOP-8414:


Thanks for the pointers, Daryn! After looking more into this, I came to the 
following:

InetAddress.getByName("127.0.0.1").getHostName() == "localhost" (on Linux)
InetAddress.getByName("127.0.0.1").getHostName() == "127.0.0.1" (on Windows)

Going further, namenode's default filesystem URI is created using the above 
host name, and this URI is later used for DFS path resolution.


From looking at the documentation for InetAddress.getHostName(), it is 
supposed to perform a reverse DNS lookup of the address. For some reason, this 
does not work well for 127.0.0.1 in Java under Windows. I tried a few other IP 
addresses, and getByName(IP).getHostName() worked fine.

Do you maybe have other suggestions/alternatives we can use to address the 
problem?
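
A short standalone snippet that reproduces the lookup difference described 
above (results depend on the OS resolver and hosts file, so they may vary by 
machine):

{code}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class LoopbackLookup {
  public static void main(String[] args) throws UnknownHostException {
    InetAddress addr = InetAddress.getByName("127.0.0.1");
    // getHostName() attempts a reverse lookup of the address. On the
    // Linux machines discussed above it returns "localhost"; on the
    // Windows machines it returns the literal "127.0.0.1".
    System.out.println(addr.getHostName());
  }
}
{code}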

 Address problems related to localhost resolving to 127.0.0.1 on Windows
 ---

 Key: HADOOP-8414
 URL: https://issues.apache.org/jira/browse/HADOOP-8414
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, test
Affects Versions: 1.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-8414-branch-1-win.patch, 
 HADOOP-8414-branch-1-win.patch


 Localhost resolves to 127.0.0.1 on Windows and that causes the following 
 tests to fail:
  - TestHarFileSystem
  - TestCLI
  - TestSaslRPC
 This Jira tracks fixing these tests and other possible places that have a 
 similar issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8440) HarFileSystem.decodeHarURI fails for URIs whose host contains numbers

2012-05-25 Thread Ivan Mitic (JIRA)
Ivan Mitic created HADOOP-8440:
--

 Summary: HarFileSystem.decodeHarURI fails for URIs whose host 
contains numbers
 Key: HADOOP-8440
 URL: https://issues.apache.org/jira/browse/HADOOP-8440
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 1.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
Priority: Minor


For example, HarFileSystem.decodeHarURI will fail for the following URI:

har://hdfs-127.0.0.1:51040/user
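
The underlying java.net.URI behavior can be seen in isolation; a plausible 
minimal reproduction (assuming decodeHarURI relies on URI.getHost(), which is 
an assumption here, not confirmed from the code):

{code}
import java.net.URI;
import java.net.URISyntaxException;

public class HarHostParsing {
  public static void main(String[] args) throws URISyntaxException {
    URI uri = new URI("har://hdfs-127.0.0.1:51040/user");
    // "hdfs-127.0.0.1" is neither a valid hostname (its last label
    // starts with a digit) nor an IPv4 literal, so java.net.URI treats
    // the authority as registry-based: getHost() returns null while
    // getAuthority() still holds the raw text.
    System.out.println("host      = " + uri.getHost());       // null
    System.out.println("authority = " + uri.getAuthority());  // hdfs-127.0.0.1:51040
  }
}
{code}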

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8440) HarFileSystem.decodeHarURI fails for URIs whose host contains numbers

2012-05-25 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HADOOP-8440:
---

Attachment: HADOOP-8440-branch-1-win.patch

Attaching patch.

The patch was originally part of the fix for HADOOP-8414 but was separated 
into its own Jira since it can stand alone.


 HarFileSystem.decodeHarURI fails for URIs whose host contains numbers
 -

 Key: HADOOP-8440
 URL: https://issues.apache.org/jira/browse/HADOOP-8440
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 1.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
Priority: Minor
 Attachments: HADOOP-8440-branch-1-win.patch


 For example, HarFileSystem.decodeHarURI will fail for the following URI:
 har://hdfs-127.0.0.1:51040/user

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8411) TestStorageDirecotyFailure, TestTaskLogsTruncater, TestWebHdfsUrl and TestSecurityUtil fail on Windows

2012-05-25 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283748#comment-13283748
 ] 

Ivan Mitic commented on HADOOP-8411:


Daryn, Bikas, have I addressed all of your concerns in the latest patch?

 TestStorageDirecotyFailure, TestTaskLogsTruncater, TestWebHdfsUrl and 
 TestSecurityUtil fail on Windows
 --

 Key: HADOOP-8411
 URL: https://issues.apache.org/jira/browse/HADOOP-8411
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 1.1.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-8411-branch-1-win.patch, 
 HADOOP-8411-branch-1-win.patch, HADOOP-8411-branch-1-win.patch

   Original Estimate: 48h
  Remaining Estimate: 48h

 Jira tracking the test failures listed in the summary.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8278) Make sure components declare correct set of dependencies

2012-05-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283761#comment-13283761
 ] 

Hadoop QA commented on HADOOP-8278:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12529778/HADOOP-8278.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified test 
files.

-1 javac.  The applied patch generated 1999 javac compiler warnings (more 
than the trunk's current 1996 warnings).

-1 javadoc.  The javadoc tool appears to have generated 19 warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-annotations hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-httpfs 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal 
hadoop-mapreduce-project/hadoop-mapreduce-examples:

  org.apache.hadoop.http.TestHttpServer
  org.apache.hadoop.fs.viewfs.TestViewFsTrash

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1038//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1038//artifact/trunk/trunk/patchprocess/diffJavacWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1038//console

This message is automatically generated.

 Make sure components declare correct set of dependencies
 

 Key: HADOOP-8278
 URL: https://issues.apache.org/jira/browse/HADOOP-8278
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Tom White
 Attachments: HADOOP-8278.patch, HADOOP-8278.patch


 As mentioned by Scott Carey in 
 https://issues.apache.org/jira/browse/MAPREDUCE-3378?focusedCommentId=13173437&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13173437,
  we need to make sure that components are declaring the correct set of 
 dependencies. In current trunk there are errors of omission and commission 
 (as reported by 'mvn dependency:analyze'):
 * Used undeclared dependencies - these are dependencies that are being met 
 transitively. They should be added explicitly as compile or provided 
 scope.
 * Unused declared dependencies - these are dependencies that are not needed 
 for compilation, although they may be needed at runtime. They certainly 
 should not be compile scope - they should either be removed or marked as 
 runtime or test scope.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8417) HADOOP-6963 didn't update hadoop-core-pom-template.xml

2012-05-25 Thread Zhihong Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihong Yu updated HADOOP-8417:
---

Attachment: hadoop-8417-v2.txt

Just saw Ravi's comment.

Patch v2 attached.

 HADOOP-6963 didn't update hadoop-core-pom-template.xml
 --

 Key: HADOOP-8417
 URL: https://issues.apache.org/jira/browse/HADOOP-8417
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3
Reporter: Zhihong Yu
Assignee: Zhihong Yu
 Attachments: hadoop-8417-v2.txt, hadoop-8417.txt


 HADOOP-6963 introduced commons-io 2.1 in ivy.xml but forgot to update the 
 hadoop-core-pom-template.xml.
 This has caused map reduce jobs in downstream projects to fail with:
 {code}
 Caused by: java.lang.ClassNotFoundException: org.apache.commons.io.FileUtils
   at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
   at java.security.AccessController.doPrivileged(Native Method)
   at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
   ... 15 more
 {code}
 This caused a regression for 1.0.3 because downstream projects previously 
 did not depend directly on commons-io.
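
 To illustrate the failure mode, a hypothetical downstream helper: it 
 compiles against a full Hadoop build, but the task JVM fails with the 
 ClassNotFoundException above once the pom-driven classpath omits commons-io 
 (class and method names here are made up for illustration):
 {code}
 import java.io.File;
 import java.io.IOException;
 import org.apache.commons.io.FileUtils;

 public class DownstreamJobHelper {
   // The first call to this method forces resolution of FileUtils; if
   // the published pom omits commons-io, it fails with the
   // ClassNotFoundException shown above.
   public static String slurp(File f) throws IOException {
     return FileUtils.readFileToString(f, "UTF-8");
   }
 }
 {code}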

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

2012-05-25 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8368:
-

Attachment: HADOOP-8368.023.trimmed.patch

* fix 32-bit compile

* only generate one copy of each binary or library (get rid of make install 
step)

* make sure that dual shared / static library build works correctly in all cases

 Use CMake rather than autotools to build native code
 

 Key: HADOOP-8368
 URL: https://issues.apache.org/jira/browse/HADOOP-8368
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, 
 HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, 
 HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, 
 HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, 
 HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, 
 HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, 
 HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, 
 HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch


 It would be good to use cmake rather than autotools to build the native 
 (C/C++) code in Hadoop.
 Rationale:
 1. automake depends on shell scripts, which often have problems running on 
 different operating systems.  It would be extremely difficult, and perhaps 
 impossible, to use autotools under Windows.  Even if it were possible, it 
 might require horrible workarounds like installing cygwin.  Even on Linux 
 variants like Ubuntu 12.04, there are major build issues because /bin/sh is 
 the Dash shell, rather than the Bash shell as it is in other Linux versions.  
 It is currently impossible to build the native code under Ubuntu 12.04 
 because of this problem.
 CMake has robust cross-platform support, including Windows.  It does not use 
 shell scripts.
 2. automake error messages are very confusing.  For example, "autoreconf: 
 cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method 
 path via package Autom4te..." are common error messages.  In order to even 
 start debugging automake problems you need to learn shell, m4, sed, and a 
 bunch of other things.  With CMake, all you have to learn is the syntax of 
 CMakeLists.txt, which is simple.
 CMake can do all the stuff autotools can, such as making sure that required 
 libraries are installed.  There is a Maven plugin for CMake as well.
 3. Different versions of autotools can have very different behaviors.  For 
 example, the version installed under openSUSE defaults to putting libraries 
 in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults 
 to installing the same libraries under /usr/local/lib.  (This is why the FUSE 
 build is currently broken when using OpenSUSE.)  This is another source of 
 build failures and complexity.  If things go wrong, you will often get an 
 error message which is incomprehensible to normal humans (see point #2).
 CMake allows you to specify the minimum_required_version of CMake that a 
 particular CMakeLists.txt will accept.  In addition, CMake maintains strict 
 backwards compatibility between different versions.  This prevents build bugs 
 due to version skew.
 4. autoconf, automake, and libtool are large and rather slow.  This adds to 
 build time.
 For all these reasons, I think we should switch to CMake for compiling native 
 (C/C++) code in Hadoop.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

2012-05-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283872#comment-13283872
 ] 

Hadoop QA commented on HADOOP-8368:
---

+1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12529837/HADOOP-8368.023.trimmed.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 2 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1039//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1039//console

This message is automatically generated.

 Use CMake rather than autotools to build native code
 

 Key: HADOOP-8368
 URL: https://issues.apache.org/jira/browse/HADOOP-8368
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, 
 HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, 
 HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, 
 HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, 
 HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, 
 HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, 
 HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, 
 HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch


 It would be good to use cmake rather than autotools to build the native 
 (C/C++) code in Hadoop.
 Rationale:
 1. automake depends on shell scripts, which often have problems running on 
 different operating systems.  It would be extremely difficult, and perhaps 
 impossible, to use autotools under Windows.  Even if it were possible, it 
 might require horrible workarounds like installing cygwin.  Even on Linux 
 variants like Ubuntu 12.04, there are major build issues because /bin/sh is 
 the Dash shell, rather than the Bash shell as it is in other Linux versions.  
 It is currently impossible to build the native code under Ubuntu 12.04 
 because of this problem.
 CMake has robust cross-platform support, including Windows.  It does not use 
 shell scripts.
 2. automake error messages are very confusing.  For example, "autoreconf: 
 cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method 
 path via package Autom4te..." are common error messages.  In order to even 
 start debugging automake problems you need to learn shell, m4, sed, and a 
 bunch of other things.  With CMake, all you have to learn is the syntax of 
 CMakeLists.txt, which is simple.
 CMake can do all the stuff autotools can, such as making sure that required 
 libraries are installed.  There is a Maven plugin for CMake as well.
 3. Different versions of autotools can have very different behaviors.  For 
 example, the version installed under openSUSE defaults to putting libraries 
 in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults 
 to installing the same libraries under /usr/local/lib.  (This is why the FUSE 
 build is currently broken when using OpenSUSE.)  This is another source of 
 build failures and complexity.  If things go wrong, you will often get an 
 error message which is incomprehensible to normal humans (see point #2).
 CMake allows you to specify the minimum_required_version of CMake that a 
 particular CMakeLists.txt will accept.  In addition, CMake maintains strict 
 backwards compatibility between different versions.  This prevents build bugs 
 due to version skew.
 4. autoconf, automake, and libtool are large and rather slow.  This adds to 
 build time.
 For all these reasons, I think we should switch to CMake for compiling native 
 (C/C++) code in Hadoop.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



[jira] [Updated] (HADOOP-8368) Use CMake rather than autotools to build native code

2012-05-25 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8368:


Target Version/s: 2.0.1-alpha  (was: 2.0.0-alpha)

 Use CMake rather than autotools to build native code
 

 Key: HADOOP-8368
 URL: https://issues.apache.org/jira/browse/HADOOP-8368
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, 
 HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, 
 HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, 
 HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, 
 HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, 
 HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, 
 HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, 
 HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch


 It would be good to use cmake rather than autotools to build the native 
 (C/C++) code in Hadoop.
 Rationale:
 1. automake depends on shell scripts, which often have problems running on 
 different operating systems.  It would be extremely difficult, and perhaps 
 impossible, to use autotools under Windows.  Even if it were possible, it 
 might require horrible workarounds like installing cygwin.  Even on Linux 
 variants like Ubuntu 12.04, there are major build issues because /bin/sh is 
 the Dash shell, rather than the Bash shell as it is in other Linux versions.  
 It is currently impossible to build the native code under Ubuntu 12.04 
 because of this problem.
 CMake has robust cross-platform support, including Windows.  It does not use 
 shell scripts.
 2. automake error messages are very confusing.  For example, "autoreconf: 
 cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method 
 path via package Autom4te..." are common error messages.  In order to even 
 start debugging automake problems you need to learn shell, m4, sed, and a 
 bunch of other things.  With CMake, all you have to learn is the syntax of 
 CMakeLists.txt, which is simple.
 CMake can do all the stuff autotools can, such as making sure that required 
 libraries are installed.  There is a Maven plugin for CMake as well.
 3. Different versions of autotools can have very different behaviors.  For 
 example, the version installed under openSUSE defaults to putting libraries 
 in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults 
 to installing the same libraries under /usr/local/lib.  (This is why the FUSE 
 build is currently broken when using OpenSUSE.)  This is another source of 
 build failures and complexity.  If things go wrong, you will often get an 
 error message which is incomprehensible to normal humans (see point #2).
 CMake allows you to specify the minimum_required_version of CMake that a 
 particular CMakeLists.txt will accept.  In addition, CMake maintains strict 
 backwards compatibility between different versions.  This prevents build bugs 
 due to version skew.
 4. autoconf, automake, and libtool are large and rather slow.  This adds to 
 build time.
 For all these reasons, I think we should switch to CMake for compiling native 
 (C/C++) code in Hadoop.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HADOOP-8031) Configuration class fails to find embedded .jar resources; should use URL.openStream()

2012-05-25 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins reassigned HADOOP-8031:
---

Assignee: Elias Ross

Thanks for contributing, Elias. Can you update TestConfiguration with a case 
that will fail w/o your patch?

I've rebased your patch on trunk.

 Configuration class fails to find embedded .jar resources; should use 
 URL.openStream()
 --

 Key: HADOOP-8031
 URL: https://issues.apache.org/jira/browse/HADOOP-8031
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Elias Ross
Assignee: Elias Ross
 Attachments: 0001-fix-HADOOP-7982-class-loader.patch


 While running a hadoop client within RHQ (monitoring software) using its 
 classloader, I see this:
 2012-02-07 09:15:25,313 INFO  [ResourceContainer.invoker.daemon-2] 
 (org.apache.hadoop.conf.Configuration)- parsing 
 jar:file:/usr/local/rhq-agent/data/tmp/rhq-hadoop-plugin-4.3.0-SNAPSHOT.jar6856622641102893436.classloader/hadoop-core-0.20.2+737+1.jar7204287718482036191.tmp!/core-default.xml
 2012-02-07 09:15:25,318 ERROR [InventoryManager.discovery-1] 
 (rhq.core.pc.inventory.InventoryManager)- Failed to start component for 
 Resource[id=16290, type=NameNode, key=NameNode:/usr/lib/hadoop-0.20, 
 name=NameNode, parent=vg61l01ad-hadoop002.apple.com] from synchronized merge.
 org.rhq.core.clientapi.agent.PluginContainerException: Failed to start 
 component for resource Resource[id=16290, type=NameNode, 
 key=NameNode:/usr/lib/hadoop-0.20, name=NameNode, 
 parent=vg61l01ad-hadoop002.apple.com].
 Caused by: java.lang.RuntimeException: core-site.xml not found
   at 
 org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1308)
   at 
 org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1228)
   at 
 org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1169)
   at org.apache.hadoop.conf.Configuration.set(Configuration.java:438)
 This is because the URL
 jar:file:/usr/local/rhq-agent/data/tmp/rhq-hadoop-plugin-4.3.0-SNAPSHOT.jar6856622641102893436.classloader/hadoop-core-0.20.2+737+1.jar7204287718482036191.tmp!/core-default.xml
 cannot be found by DocumentBuilder (doesn't understand it). (Note: the logs 
 are for an old version of Configuration class, but the new version has the 
 same code.)
 The solution is to obtain the resource stream directly from the URL object 
 itself.
 That is to say:
 {code}
  URL url = getResource((String)name);
 -if (url != null) {
 -  if (!quiet) {
 -    LOG.info("parsing " + url);
 -  }
 -  doc = builder.parse(url.toString());
 -}
 +doc = builder.parse(url.openStream());
 {code}
 Note: I have a full patch pending approval at Apple for this change, 
 including some cleanup.
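
 For reference, a self-contained sketch of the two parse paths (the resource 
 name and the surrounding setup are assumed for illustration):
 {code}
 import java.net.URL;
 import javax.xml.parsers.DocumentBuilder;
 import javax.xml.parsers.DocumentBuilderFactory;
 import org.w3c.dom.Document;

 public class ParseResource {
   public static void main(String[] args) throws Exception {
     DocumentBuilder builder =
         DocumentBuilderFactory.newInstance().newDocumentBuilder();
     URL url = ParseResource.class.getClassLoader()
         .getResource("core-default.xml");  // assumed to be on the classpath
     // parse(String) hands the URL string to the XML parser, which
     // re-resolves it itself and can fail on nested jar: URLs like the
     // one in the log above:
     //   Document bad = builder.parse(url.toString());
     // parse(InputStream) lets the JDK's own URL handler open the
     // stream, which does understand jar: URLs:
     Document doc = builder.parse(url.openStream());
     System.out.println(doc.getDocumentElement().getTagName());
   }
 }
 {code}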

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8031) Configuration class fails to find embedded .jar resources; should use URL.openStream()

2012-05-25 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8031:


Attachment: hadoop-8031.txt

Patch attached.

 Configuration class fails to find embedded .jar resources; should use 
 URL.openStream()
 --

 Key: HADOOP-8031
 URL: https://issues.apache.org/jira/browse/HADOOP-8031
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Elias Ross
Assignee: Elias Ross
 Attachments: 0001-fix-HADOOP-7982-class-loader.patch, hadoop-8031.txt


 While running a hadoop client within RHQ (monitoring software) using its 
 classloader, I see this:
 2012-02-07 09:15:25,313 INFO  [ResourceContainer.invoker.daemon-2] 
 (org.apache.hadoop.conf.Configuration)- parsing 
 jar:file:/usr/local/rhq-agent/data/tmp/rhq-hadoop-plugin-4.3.0-SNAPSHOT.jar6856622641102893436.classloader/hadoop-core-0.20.2+737+1.jar7204287718482036191.tmp!/core-default.xml
 2012-02-07 09:15:25,318 ERROR [InventoryManager.discovery-1] 
 (rhq.core.pc.inventory.InventoryManager)- Failed to start component for 
 Resource[id=16290, type=NameNode, key=NameNode:/usr/lib/hadoop-0.20, 
 name=NameNode, parent=vg61l01ad-hadoop002.apple.com] from synchronized merge.
 org.rhq.core.clientapi.agent.PluginContainerException: Failed to start 
 component for resource Resource[id=16290, type=NameNode, 
 key=NameNode:/usr/lib/hadoop-0.20, name=NameNode, 
 parent=vg61l01ad-hadoop002.apple.com].
 Caused by: java.lang.RuntimeException: core-site.xml not found
   at 
 org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1308)
   at 
 org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1228)
   at 
 org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1169)
   at org.apache.hadoop.conf.Configuration.set(Configuration.java:438)
 This is because the URL
 jar:file:/usr/local/rhq-agent/data/tmp/rhq-hadoop-plugin-4.3.0-SNAPSHOT.jar6856622641102893436.classloader/hadoop-core-0.20.2+737+1.jar7204287718482036191.tmp!/core-default.xml
 cannot be found by DocumentBuilder (doesn't understand it). (Note: the logs 
 are for an old version of Configuration class, but the new version has the 
 same code.)
 The solution is to obtain the resource stream directly from the URL object 
 itself.
 That is to say:
 {code}
  URL url = getResource((String)name);
 -if (url != null) {
 -  if (!quiet) {
 -    LOG.info("parsing " + url);
 -  }
 -  doc = builder.parse(url.toString());
 -}
 +doc = builder.parse(url.openStream());
 {code}
 Note: I have a full patch pending approval at Apple for this change, 
 including some cleanup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8031) Configuration class fails to find embedded .jar resources; should use URL.openStream()

2012-05-25 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8031:


  Component/s: conf
 Target Version/s: 2.0.1-alpha
Affects Version/s: 2.0.0-alpha

 Configuration class fails to find embedded .jar resources; should use 
 URL.openStream()
 --

 Key: HADOOP-8031
 URL: https://issues.apache.org/jira/browse/HADOOP-8031
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Elias Ross
Assignee: Elias Ross
 Attachments: 0001-fix-HADOOP-7982-class-loader.patch, hadoop-8031.txt


 While running a hadoop client within RHQ (monitoring software) using its 
 classloader, I see this:
 2012-02-07 09:15:25,313 INFO  [ResourceContainer.invoker.daemon-2] 
 (org.apache.hadoop.conf.Configuration)- parsing 
 jar:file:/usr/local/rhq-agent/data/tmp/rhq-hadoop-plugin-4.3.0-SNAPSHOT.jar6856622641102893436.classloader/hadoop-core-0.20.2+737+1.jar7204287718482036191.tmp!/core-default.xml
 2012-02-07 09:15:25,318 ERROR [InventoryManager.discovery-1] 
 (rhq.core.pc.inventory.InventoryManager)- Failed to start component for 
 Resource[id=16290, type=NameNode, key=NameNode:/usr/lib/hadoop-0.20, 
 name=NameNode, parent=vg61l01ad-hadoop002.apple.com] from synchronized merge.
 org.rhq.core.clientapi.agent.PluginContainerException: Failed to start 
 component for resource Resource[id=16290, type=NameNode, 
 key=NameNode:/usr/lib/hadoop-0.20, name=NameNode, 
 parent=vg61l01ad-hadoop002.apple.com].
 Caused by: java.lang.RuntimeException: core-site.xml not found
   at 
 org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1308)
   at 
 org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1228)
   at 
 org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1169)
   at org.apache.hadoop.conf.Configuration.set(Configuration.java:438)
 This is because the URL
 jar:file:/usr/local/rhq-agent/data/tmp/rhq-hadoop-plugin-4.3.0-SNAPSHOT.jar6856622641102893436.classloader/hadoop-core-0.20.2+737+1.jar7204287718482036191.tmp!/core-default.xml
 cannot be found by DocumentBuilder (doesn't understand it). (Note: the logs 
 are for an old version of Configuration class, but the new version has the 
 same code.)
 The solution is to obtain the resource stream directly from the URL object 
 itself.
 That is to say:
 {code}
  URL url = getResource((String)name);
 -if (url != null) {
 -  if (!quiet) {
 -    LOG.info("parsing " + url);
 -  }
 -  doc = builder.parse(url.toString());
 -}
 +doc = builder.parse(url.openStream());
 {code}
 Note: I have a full patch pending approval at Apple for this change, 
 including some cleanup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8031) Configuration class fails to find embedded .jar resources; should use URL.openStream()

2012-05-25 Thread Elias Ross (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283897#comment-13283897
 ] 

Elias Ross commented on HADOOP-8031:


Eli,

Thanks for your response. I would like to reproduce the problem but I'd have to 
somehow embed the .xml file inside a .jar and adjust the test classpath to 
match. I'd likely have to isolate the test from the rest of the existing 
classpath as well. Maybe you could guide me through this?

Thanks.
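
One possible shape for such a test, sketched with standard JDK APIs only (the 
resource name and temp-file handling are illustrative, not an actual 
TestConfiguration case):

{code}
import java.io.File;
import java.io.FileOutputStream;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.jar.JarEntry;
import java.util.jar.JarOutputStream;

public class EmbeddedResourceJar {
  public static void main(String[] args) throws Exception {
    // Build a temporary jar containing a minimal XML resource.
    File jar = File.createTempFile("test-resource", ".jar");
    jar.deleteOnExit();
    try (JarOutputStream out = new JarOutputStream(new FileOutputStream(jar))) {
      out.putNextEntry(new JarEntry("embedded-site.xml"));
      out.write("<?xml version=\"1.0\"?><configuration/>".getBytes("UTF-8"));
      out.closeEntry();
    }
    // Load the resource through an isolated classloader so the lookup
    // yields a jar:file:...!/embedded-site.xml URL, as in the report.
    try (URLClassLoader loader =
             new URLClassLoader(new URL[] { jar.toURI().toURL() }, null)) {
      URL url = loader.getResource("embedded-site.xml");
      System.out.println(url);  // jar:file:/...!/embedded-site.xml
      // A Configuration resource loaded via this classloader must then
      // be parsed through url.openStream() rather than url.toString().
    }
  }
}
{code}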

 Configuration class fails to find embedded .jar resources; should use 
 URL.openStream()
 --

 Key: HADOOP-8031
 URL: https://issues.apache.org/jira/browse/HADOOP-8031
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Elias Ross
Assignee: Elias Ross
 Attachments: 0001-fix-HADOOP-7982-class-loader.patch, hadoop-8031.txt


 While running a hadoop client within RHQ (monitoring software) using its 
 classloader, I see this:
 2012-02-07 09:15:25,313 INFO  [ResourceContainer.invoker.daemon-2] 
 (org.apache.hadoop.conf.Configuration)- parsing 
 jar:file:/usr/local/rhq-agent/data/tmp/rhq-hadoop-plugin-4.3.0-SNAPSHOT.jar6856622641102893436.classloader/hadoop-core-0.20.2+737+1.jar7204287718482036191.tmp!/core-default.xml
 2012-02-07 09:15:25,318 ERROR [InventoryManager.discovery-1] 
 (rhq.core.pc.inventory.InventoryManager)- Failed to start component for 
 Resource[id=16290, type=NameNode, key=NameNode:/usr/lib/hadoop-0.20, 
 name=NameNode, parent=vg61l01ad-hadoop002.apple.com] from synchronized merge.
 org.rhq.core.clientapi.agent.PluginContainerException: Failed to start 
 component for resource Resource[id=16290, type=NameNode, 
 key=NameNode:/usr/lib/hadoop-0.20, name=NameNode, 
 parent=vg61l01ad-hadoop002.apple.com].
 Caused by: java.lang.RuntimeException: core-site.xml not found
   at 
 org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1308)
   at 
 org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1228)
   at 
 org.apache.hadoop.conf.Configuration.getProps(Configuration.java:1169)
   at org.apache.hadoop.conf.Configuration.set(Configuration.java:438)
 This is because the URL
 jar:file:/usr/local/rhq-agent/data/tmp/rhq-hadoop-plugin-4.3.0-SNAPSHOT.jar6856622641102893436.classloader/hadoop-core-0.20.2+737+1.jar7204287718482036191.tmp!/core-default.xml
 cannot be found by DocumentBuilder (doesn't understand it). (Note: the logs 
 are for an old version of Configuration class, but the new version has the 
 same code.)
 The solution is to obtain the resource stream directly from the URL object 
 itself.
 That is to say:
 {code}
  URL url = getResource((String)name);
 -if (url != null) {
 -  if (!quiet) {
 -    LOG.info("parsing " + url);
 -  }
 -  doc = builder.parse(url.toString());
 -}
 +doc = builder.parse(url.openStream());
 {code}
 Note: I have a full patch pending approval at Apple for this change, 
 including some cleanup.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira