[jira] [Commented] (HADOOP-12124) Add HTrace support for FsShell

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610562#comment-14610562
 ] 

Hudson commented on HADOOP-12124:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #243 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/243/])
HADOOP-12124. Add HTrace support for FsShell (cmccabe) (cmccabe: rev 
ad60807238c4f7779cb0685e7d39ca0c50e01b2f)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsShell.java


 Add HTrace support for FsShell
 --

 Key: HADOOP-12124
 URL: https://issues.apache.org/jira/browse/HADOOP-12124
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.8.0

 Attachments: HADOOP-12124.001.patch, HADOOP-12124.002.patch


 Add HTrace support for FsShell



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10798) globStatus() should always return a sorted list of files

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610564#comment-14610564
 ] 

Hudson commented on HADOOP-10798:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #243 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/243/])
HADOOP-10798. globStatus() should always return a sorted list of files 
(cmccabe) (cmccabe: rev 68e588cbee660d55dba518892d064bee3795a002)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestGlobPaths.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 globStatus() should always return a sorted list of files
 

 Key: HADOOP-10798
 URL: https://issues.apache.org/jira/browse/HADOOP-10798
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Felix Borchers
Assignee: Colin Patrick McCabe
Priority: Minor
  Labels: BB2015-05-TBR
 Fix For: 2.8.0

 Attachments: HADOOP-10798.001.patch


 (FileSystem) globStatus() does not return a sorted file list anymore.
 But the API says: "... Results are sorted by their names."
 The sorting seems to have been lost when the Globber object was introduced; 
 there is no sort in the current code.
 Code to check this behavior:
 {code}
 Configuration conf = new Configuration();
 FileSystem fs = FileSystem.get(conf);
 Path path = new Path("/tmp/" + System.currentTimeMillis());
 fs.mkdirs(path);
 fs.deleteOnExit(path);
 fs.createNewFile(new Path(path, "2"));
 fs.createNewFile(new Path(path, "3"));
 fs.createNewFile(new Path(path, "1"));
 FileStatus[] status = fs.globStatus(new Path(path, "*"));
 Collection<String> list = new ArrayList<String>();
 for (FileStatus f : status) {
   list.add(f.getPath().toString());
   // System.out.println(f.getPath().toString());
 }
 boolean sorted = Ordering.natural().isOrdered(list);
 Assert.assertTrue(sorted);
 {code}
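 A minimal call-site workaround until the fix lands, assuming the caller only 
 needs the entries ordered by path name (illustrative only, not the sort the 
 patch itself adds to Globber; uses java.util.Arrays and java.util.Comparator):
 {code}
 // Sort the globStatus() result by path name before relying on its order.
 FileStatus[] status = fs.globStatus(new Path(path, "*"));
 Arrays.sort(status, new Comparator<FileStatus>() {
   @Override
   public int compare(FileStatus a, FileStatus b) {
     return a.getPath().getName().compareTo(b.getPath().getName());
   }
 });
 {code}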



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12149) copy all of test-patch BINDIR prior to re-exec

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610572#comment-14610572
 ] 

Hudson commented on HADOOP-12149:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #243 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/243/])
HADOOP-12149. copy all of test-patch BINDIR prior to re-exec (aw) (aw: rev 
147e020c7aef3ba42eddcef3be1b4ae7c7910371)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/test-patch.sh


 copy all of test-patch BINDIR prior to re-exec
 --

 Key: HADOOP-12149
 URL: https://issues.apache.org/jira/browse/HADOOP-12149
 Project: Hadoop Common
  Issue Type: Improvement
  Components: yetus
Affects Versions: 3.0.0, HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Fix For: 3.0.0

 Attachments: HADOOP-12149.00.patch


 During some tests (e.g., 
 https://builds.apache.org/job/PreCommit-HADOOP-Build/7090 ), the initial mvn 
 install triggered a full test suite run when Jenkins switched from the old 
 test-patch to the new test-patch.  This is bad.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12164) Fix TestMove and TestFsShellReturnCode failed to get command name using reflection.

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610570#comment-14610570
 ] 

Hudson commented on HADOOP-12164:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #243 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/243/])
 HADOOP-12164. Fix TestMove and TestFsShellReturnCode failed to get command 
name using reflection. (Lei Xu) (lei: rev 
532e38cb7f70606c2c96d05259670e1e91d60ab3)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFsShellReturnCode.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestMove.java


 Fix TestMove and TestFsShellReturnCode failed to get command name using 
 reflection.
 ---

 Key: HADOOP-12164
 URL: https://issues.apache.org/jira/browse/HADOOP-12164
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Fix For: 3.0.0, 2.8.0

 Attachments: HADOOP-12164.000.patch


 When {{hadoop.shell.missing.defaultFs.warning}} is enabled, a few tests fail 
 as follows:
 {noformat}
 java.lang.RuntimeException: failed to get .NAME
   at java.lang.Class.getDeclaredField(Class.java:1948)
   at org.apache.hadoop.fs.shell.Command.getCommandField(Command.java:458)
   at org.apache.hadoop.fs.shell.Command.getName(Command.java:401)
   at 
 org.apache.hadoop.fs.shell.FsCommand.getCommandName(FsCommand.java:80)
   at 
 org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:111)
   at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
   at 
 org.apache.hadoop.fs.TestFsShellReturnCode.testChgrpGroupValidity(TestFsShellReturnCode.java:434)
 {noformat}
 The reason is that {{FsCommand#processRawArguments}} uses 
 {{getCommandName()}}, which uses reflection to find the {{static String NAME}} 
 field, to build the error message. But the commands built in these tests do 
 not declare a {{static String NAME}} field of their own, and the field is not 
 inherited, so the lookup fails. 
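 A minimal standalone sketch of why that reflection lookup fails, using 
 hypothetical class names (BaseCommand, TestOnlyCommand) rather than the real 
 shell command classes:
 {code}
 // Class#getDeclaredField only sees fields declared directly on the class,
 // so a subclass that does not declare its own NAME field makes the lookup
 // throw NoSuchFieldException even though the superclass has one.
 class BaseCommand {
   public static final String NAME = "base";
 }

 class TestOnlyCommand extends BaseCommand {
   // no NAME field declared here
 }

 public class NameLookupDemo {
   public static void main(String[] args) throws Exception {
     // works: NAME is declared on BaseCommand itself
     System.out.println(BaseCommand.class.getDeclaredField("NAME").get(null));
     // fails with NoSuchFieldException: NAME is inherited, not declared
     System.out.println(TestOnlyCommand.class.getDeclaredField("NAME").get(null));
   }
 }
 {code}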



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12116) Fix unrecommended syntax usages in hadoop/hdfs/yarn script for cygwin in branch-2

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610563#comment-14610563
 ] 

Hudson commented on HADOOP-12116:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #243 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/243/])
HADOOP-12116. Fix unrecommended syntax usages in hadoop/hdfs/yarn script for 
cygwin in branch-2. Contributed by Li Lu. (cnauroth: rev 
b8e792cba257fdb0ca266ecb2f60f3f10c3a0c3b)
* hadoop-common-project/hadoop-common/CHANGES.txt


 Fix unrecommended syntax usages in hadoop/hdfs/yarn script for cygwin in 
 branch-2
 -

 Key: HADOOP-12116
 URL: https://issues.apache.org/jira/browse/HADOOP-12116
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Li Lu
Assignee: Li Lu
 Fix For: 2.8.0

 Attachments: HADOOP-12116-branch-2.001.patch


 We're using syntax like "if $cygwin; then", which may be erroneously 
 evaluated to true if cygwin is unset. We need to fix this in branch-2. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12171) Shorten overly-long htrace span names for server

2015-07-01 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610734#comment-14610734
 ] 

Colin Patrick McCabe commented on HADOOP-12171:
---

This is sort of a continuation of HDFS-7223, which shortened most trace span 
names.  But there were a few we missed.

 Shorten overly-long htrace span names for server
 

 Key: HADOOP-12171
 URL: https://issues.apache.org/jira/browse/HADOOP-12171
 Project: Hadoop Common
  Issue Type: Bug
  Components: tracing
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-12171.001.patch


 Shorten overly-long htrace span names for the server.  For example, 
 {{org.apache.hadoop.hdfs.protocol.ClientProtocol.create}} should be 
 {{ClientProtocol#create}} instead.
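 A minimal sketch of the kind of shortening described above; the helper name 
 is hypothetical and not the method the patch adds:
 {code}
 // Turn "org.apache.hadoop.hdfs.protocol.ClientProtocol.create" into
 // "ClientProtocol#create" by keeping only the simple class name and method.
 static String shortenSpanName(String fullName) {
   int methodDot = fullName.lastIndexOf('.');
   if (methodDot <= 0) {
     return fullName;  // no package/class prefix to strip
   }
   int classDot = fullName.lastIndexOf('.', methodDot - 1);
   return fullName.substring(classDot + 1, methodDot)
       + "#" + fullName.substring(methodDot + 1);
 }
 {code}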



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12124) Add HTrace support for FsShell

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609982#comment-14609982
 ] 

Hudson commented on HADOOP-12124:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #975 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/975/])
HADOOP-12124. Add HTrace support for FsShell (cmccabe) (cmccabe: rev 
ad60807238c4f7779cb0685e7d39ca0c50e01b2f)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsShell.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Add HTrace support for FsShell
 --

 Key: HADOOP-12124
 URL: https://issues.apache.org/jira/browse/HADOOP-12124
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.8.0

 Attachments: HADOOP-12124.001.patch, HADOOP-12124.002.patch


 Add HTrace support for FsShell



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12158) Improve error message in TestCryptoStreamsWithOpensslAesCtrCryptoCodec when OpenSSL is not installed

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609996#comment-14609996
 ] 

Hudson commented on HADOOP-12158:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #975 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/975/])
HADOOP-12158. Improve error message in 
TestCryptoStreamsWithOpensslAesCtrCryptoCodec when OpenSSL is not installed. 
(wang: rev 9ee7b6e6c4ab6bee6304fa7904993c7cbd9a6cd2)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsWithOpensslAesCtrCryptoCodec.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Improve error message in TestCryptoStreamsWithOpensslAesCtrCryptoCodec when 
 OpenSSL is not installed
 

 Key: HADOOP-12158
 URL: https://issues.apache.org/jira/browse/HADOOP-12158
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 2.6.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Trivial
 Fix For: 2.8.0

 Attachments: hadoop-12158.001.patch


 Trivial: rather than throwing an NPE, let's print a nicer error message via 
 an assertNotNull.
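 A minimal sketch of the kind of check being described, with a hypothetical 
 variable name ({{codec}}) and message:
 {code}
 // Fail with a readable message instead of an NPE when the native codec is
 // unavailable (uses org.junit.Assert).
 Assert.assertNotNull(
     "OpenSSL codec is not available; check that OpenSSL is installed and the "
         + "native libraries were built against it.",
     codec);
 {code}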



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12149) copy all of test-patch BINDIR prior to re-exec

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609993#comment-14609993
 ] 

Hudson commented on HADOOP-12149:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #975 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/975/])
HADOOP-12149. copy all of test-patch BINDIR prior to re-exec (aw) (aw: rev 
147e020c7aef3ba42eddcef3be1b4ae7c7910371)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/test-patch.sh


 copy all of test-patch BINDIR prior to re-exec
 --

 Key: HADOOP-12149
 URL: https://issues.apache.org/jira/browse/HADOOP-12149
 Project: Hadoop Common
  Issue Type: Improvement
  Components: yetus
Affects Versions: 3.0.0, HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Fix For: 3.0.0

 Attachments: HADOOP-12149.00.patch


 During some tests (e.g., 
 https://builds.apache.org/job/PreCommit-HADOOP-Build/7090 ), the initial mvn 
 install triggered a full test suite run when Jenkins switched from the old 
 test-patch to the new test-patch.  This is bad.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10798) globStatus() should always return a sorted list of files

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609984#comment-14609984
 ] 

Hudson commented on HADOOP-10798:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #975 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/975/])
HADOOP-10798. globStatus() should always return a sorted list of files 
(cmccabe) (cmccabe: rev 68e588cbee660d55dba518892d064bee3795a002)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestGlobPaths.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java


 globStatus() should always return a sorted list of files
 

 Key: HADOOP-10798
 URL: https://issues.apache.org/jira/browse/HADOOP-10798
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Felix Borchers
Assignee: Colin Patrick McCabe
Priority: Minor
  Labels: BB2015-05-TBR
 Fix For: 2.8.0

 Attachments: HADOOP-10798.001.patch


 (FileSystem) globStatus() does not return a sorted file list anymore.
 But the API says: "... Results are sorted by their names."
 The sorting seems to have been lost when the Globber object was introduced; 
 there is no sort in the current code.
 Code to check this behavior:
 {code}
 Configuration conf = new Configuration();
 FileSystem fs = FileSystem.get(conf);
 Path path = new Path("/tmp/" + System.currentTimeMillis());
 fs.mkdirs(path);
 fs.deleteOnExit(path);
 fs.createNewFile(new Path(path, "2"));
 fs.createNewFile(new Path(path, "3"));
 fs.createNewFile(new Path(path, "1"));
 FileStatus[] status = fs.globStatus(new Path(path, "*"));
 Collection<String> list = new ArrayList<String>();
 for (FileStatus f : status) {
   list.add(f.getPath().toString());
   // System.out.println(f.getPath().toString());
 }
 boolean sorted = Ordering.natural().isOrdered(list);
 Assert.assertTrue(sorted);
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12009) Clarify FileSystem.listStatus() sorting order & fix FileSystemContractBaseTest:testListStatus

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609989#comment-14609989
 ] 

Hudson commented on HADOOP-12009:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #975 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/975/])
Revert HADOOP-12009 Clarify FileSystem.listStatus() sorting order & fix 
FileSystemContractBaseTest:testListStatus. (J.Andreina via stevel) (stevel: 
rev 076948d9a4053cc8be1927005c797273bae85e99)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
* hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Clarify FileSystem.listStatus() sorting order & fix 
 FileSystemContractBaseTest:testListStatus 
 --

 Key: HADOOP-12009
 URL: https://issues.apache.org/jira/browse/HADOOP-12009
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Reporter: Jakob Homan
Assignee: J.Andreina
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-12009-003.patch, HADOOP-12009-004.patch, 
 HADOOP-12009.1.patch


 FileSystem.listStatus does not guarantee that implementations will return 
 sorted entries:
 {code}
  /**
   * List the statuses of the files/directories in the given path if the path
   * is a directory.
   *
   * @param f given path
   * @return the statuses of the files/directories in the given patch
   * @throws FileNotFoundException when the path does not exist;
   *         IOException see specific implementation
   */
  public abstract FileStatus[] listStatus(Path f) throws FileNotFoundException,
                                                         IOException;
 {code}
 However, FileSystemContractBaseTest expects the elements to come back sorted:
 {code}
 Path[] testDirs = { path("/test/hadoop/a"),
                     path("/test/hadoop/b"),
                     path("/test/hadoop/c/1"), };
 
 // ...
 paths = fs.listStatus(path("/test/hadoop"));
 assertEquals(3, paths.length);
 assertEquals(path("/test/hadoop/a"), paths[0].getPath());
 assertEquals(path("/test/hadoop/b"), paths[1].getPath());
 assertEquals(path("/test/hadoop/c"), paths[2].getPath());
 {code}
 We should pass this test as long as all the paths are there, regardless of 
 their ordering.
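 A minimal sketch of an order-insensitive version of that assertion, assuming 
 the same path() helper and JUnit assertEquals as the test above (uses 
 java.util.Set, HashSet and Arrays):
 {code}
 // Compare as sets so the test passes for any ordering of listStatus() results.
 Set<Path> expected = new HashSet<Path>(Arrays.asList(
     path("/test/hadoop/a"), path("/test/hadoop/b"), path("/test/hadoop/c")));
 Set<Path> actual = new HashSet<Path>();
 for (FileStatus status : paths) {
   actual.add(status.getPath());
 }
 assertEquals(expected, actual);
 {code}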



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12159) Move DistCpUtils#compareFs() to org.apache.hadoop.fs.FileUtil and fix for HA namespaces

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609995#comment-14609995
 ] 

Hudson commented on HADOOP-12159:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #975 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/975/])
HADOOP-12159. Move DistCpUtils#compareFs() to org.apache.hadoop.fs.FileUtil and 
fix for HA namespaces (rchiang via rkanter) (rkanter: rev 
aaafa0b2ee64f6cfb7fdc717500e1c483b9df8cc)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
* hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java


 Move DistCpUtils#compareFs() to org.apache.hadoop.fs.FileUtil and fix for HA 
 namespaces
 ---

 Key: HADOOP-12159
 URL: https://issues.apache.org/jira/browse/HADOOP-12159
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Ray Chiang
Assignee: Ray Chiang
 Fix For: 2.8.0

 Attachments: HADOOP-12159.001.patch


 DistCpUtils#compareFs() duplicates functionality in 
 JobResourceUploader#compareFs().  These should be moved to a common area with 
 unit tests.
 The initial suggested place to move it to would be org.apache.hadoop.fs.FileUtil.
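 A simplified sketch of what such a shared helper might look like, comparing 
 only the scheme and authority of the two filesystems' URIs; the real change 
 also has to handle HA logical namespaces, which is the point of this JIRA:
 {code}
 // Two FileSystems are treated as "the same" if their URIs agree on scheme
 // and authority (simplified; HA namespace handling is omitted here).
 public static boolean sameFileSystem(FileSystem srcFs, FileSystem destFs) {
   java.net.URI srcUri = srcFs.getUri();
   java.net.URI destUri = destFs.getUri();
   if (!srcUri.getScheme().equalsIgnoreCase(destUri.getScheme())) {
     return false;
   }
   String srcAuth = srcUri.getAuthority();
   String destAuth = destUri.getAuthority();
   return srcAuth == null ? destAuth == null : srcAuth.equalsIgnoreCase(destAuth);
 }
 {code}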



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12116) Fix unrecommended syntax usages in hadoop/hdfs/yarn script for cygwin in branch-2

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609983#comment-14609983
 ] 

Hudson commented on HADOOP-12116:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #975 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/975/])
HADOOP-12116. Fix unrecommended syntax usages in hadoop/hdfs/yarn script for 
cygwin in branch-2. Contributed by Li Lu. (cnauroth: rev 
b8e792cba257fdb0ca266ecb2f60f3f10c3a0c3b)
* hadoop-common-project/hadoop-common/CHANGES.txt


 Fix unrecommended syntax usages in hadoop/hdfs/yarn script for cygwin in 
 branch-2
 -

 Key: HADOOP-12116
 URL: https://issues.apache.org/jira/browse/HADOOP-12116
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Li Lu
Assignee: Li Lu
 Fix For: 2.8.0

 Attachments: HADOOP-12116-branch-2.001.patch


 We're using syntax like "if $cygwin; then", which may be erroneously 
 evaluated to true if cygwin is unset. We need to fix this in branch-2. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12164) Fix TestMove and TestFsShellReturnCode failed to get command name using reflection.

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609991#comment-14609991
 ] 

Hudson commented on HADOOP-12164:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #975 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/975/])
 HADOOP-12164. Fix TestMove and TestFsShellReturnCode failed to get command 
name using reflection. (Lei Xu) (lei: rev 
532e38cb7f70606c2c96d05259670e1e91d60ab3)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestMove.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFsShellReturnCode.java


 Fix TestMove and TestFsShellReturnCode failed to get command name using 
 reflection.
 ---

 Key: HADOOP-12164
 URL: https://issues.apache.org/jira/browse/HADOOP-12164
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Fix For: 3.0.0, 2.8.0

 Attachments: HADOOP-12164.000.patch


 When {{hadoop.shell.missing.defaultFs.warning}} is enabled, a few tests fail 
 as follows:
 {noformat}
 java.lang.RuntimeException: failed to get .NAME
   at java.lang.Class.getDeclaredField(Class.java:1948)
   at org.apache.hadoop.fs.shell.Command.getCommandField(Command.java:458)
   at org.apache.hadoop.fs.shell.Command.getName(Command.java:401)
   at 
 org.apache.hadoop.fs.shell.FsCommand.getCommandName(FsCommand.java:80)
   at 
 org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:111)
   at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
   at 
 org.apache.hadoop.fs.TestFsShellReturnCode.testChgrpGroupValidity(TestFsShellReturnCode.java:434)
 {noformat}
 The reason is that {{FsCommand#processRawArguments}} uses 
 {{getCommandName()}}, which uses reflection to find the {{static String NAME}} 
 field, to build the error message. But the commands built in these tests do 
 not declare a {{static String NAME}} field of their own, and the field is not 
 inherited, so the lookup fails. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12114) Make hadoop-tools/hadoop-pipes Native code -Wall-clean

2015-07-01 Thread Alan Burlison (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Burlison updated HADOOP-12114:
---
Status: Patch Available  (was: Open)

 Make hadoop-tools/hadoop-pipes Native code -Wall-clean
 --

 Key: HADOOP-12114
 URL: https://issues.apache.org/jira/browse/HADOOP-12114
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 2.7.0
Reporter: Alan Burlison
Assignee: Alan Burlison
 Attachments: HADOOP-12114.001.patch, HADOOP-12114.002.patch


 As we specify -Wall as a default compilation flag, it would be helpful if the 
 Native code was -Wall-clean



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12114) Make hadoop-tools/hadoop-pipes Native code -Wall-clean

2015-07-01 Thread Alan Burlison (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Burlison updated HADOOP-12114:
---
Attachment: HADOOP-12114.002.patch

Updated patch with BIO_flush error handling

 Make hadoop-tools/hadoop-pipes Native code -Wall-clean
 --

 Key: HADOOP-12114
 URL: https://issues.apache.org/jira/browse/HADOOP-12114
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 2.7.0
Reporter: Alan Burlison
Assignee: Alan Burlison
 Attachments: HADOOP-12114.001.patch, HADOOP-12114.002.patch


 As we specify -Wall as a default compilation flag, it would be helpful if the 
 Native code was -Wall-clean



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11974) FIONREAD is not always in the same header file

2015-07-01 Thread Alan Burlison (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Burlison updated HADOOP-11974:
---
Status: Patch Available  (was: Open)

 FIONREAD is not always in the same header file
 --

 Key: HADOOP-11974
 URL: https://issues.apache.org/jira/browse/HADOOP-11974
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: net
Affects Versions: 2.7.0
 Environment: Solaris
Reporter: Alan Burlison
Assignee: Alan Burlison
Priority: Minor
 Attachments: HADOOP-11974.001.patch


 The FIONREAD macro is found in sys/ioctl.h on Linux and sys/filio.h on 
 Solaris. A conditional include block is required to make sure it is looked 
 for in the right place.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HADOOP-12169) ListStatus on empty dir in S3A lists itself instead of returning an empty list

2015-07-01 Thread Pieter Reuse (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-12169 started by Pieter Reuse.
-
 ListStatus on empty dir in S3A lists itself instead of returning an empty list
 --

 Key: HADOOP-12169
 URL: https://issues.apache.org/jira/browse/HADOOP-12169
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Reporter: Pieter Reuse
Assignee: Pieter Reuse

 Upon testing the patch for HADOOP-11918, I stumbled upon a weird behaviour 
 that it introduces to the S3AFileSystem class. Calling listStatus() on an 
 empty bucket returns an empty list, while doing the same on an empty 
 directory returns an array of length 1 containing only the directory itself.
 The bugfix is quite simple. In the line of code {code}...if 
 (keyPath.equals(f)...{code} (S3AFileSystem:758), keyPath is qualified wrt. 
 the fs and f is not. Therefore, this returns false while it shouldn't. The 
 bugfix is to make f qualified in this line of code.
 More formally: according to the formal definition of [The Hadoop FileSystem 
 API 
 Definition|https://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-common/filesystem/],
  more specifically FileSystem.listStatus, only child elements of a directory 
 should be returned upon a listStatus()-call.
 In detail: 
 {code}
 elif isDir(FS, p): result [getFileStatus(c) for c in children(FS, p) where 
 f(c) == True]
 {code}
 and
 {code}
 def children(FS, p) = {q for q in paths(FS) where parent(q) == p}
 {code}
 Which translates to the result of listStatus on an empty directory being an 
 empty list. This is the same behaviour as ls has in Unix, which is what 
 someone would expect from a FileSystem.
 Note: it seemed appropriate to add the test of this patch to the same file as 
 the test for HADOOP-11918, but as a result, one of the two will have to be 
 rebased wrt. the other before being applied to trunk.
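 A minimal sketch of the qualification fix described above; the variable names 
 (uri, workingDir) stand in for values S3AFileSystem already has at that point, 
 and the skip is only illustrative:
 {code}
 // Qualify the requested path the same way keyPath is qualified before
 // comparing, so an empty directory is no longer listed as its own child.
 Path qualifiedF = f.makeQualified(uri, workingDir);
 if (keyPath.equals(qualifiedF)) {
   continue;  // this entry is the directory being listed, not a child of it
 }
 {code}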



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12169) ListStatus on empty dir in S3A lists itself instead of returning an empty list

2015-07-01 Thread Pieter Reuse (JIRA)
Pieter Reuse created HADOOP-12169:
-

 Summary: ListStatus on empty dir in S3A lists itself instead of 
returning an empty list
 Key: HADOOP-12169
 URL: https://issues.apache.org/jira/browse/HADOOP-12169
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Reporter: Pieter Reuse
Assignee: Pieter Reuse


Upon testing the patch for HADOOP-11918, I stumbled upon a weird behaviour that 
it introduces to the S3AFileSystem class. Calling listStatus() on an empty bucket 
returns an empty list, while doing the same on an empty directory returns an 
array of length 1 containing only the directory itself.

The bugfix is quite simple. In the line of code {code}...if 
(keyPath.equals(f)...{code} (S3AFileSystem:758), keyPath is qualified wrt. the 
fs and f is not. Therefore, this returns false while it shouldn't. The bugfix 
is to make f qualified in this line of code.

More formally: according to the formal definition of [The Hadoop FileSystem API 
Definition|https://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-common/filesystem/],
 more specifically FileSystem.listStatus, only child elements of a directory 
should be returned upon a listStatus()-call.

In detail: 
{code}
elif isDir(FS, p): result [getFileStatus(c) for c in children(FS, p) where f(c) 
== True]
{code}
and
{code}
def children(FS, p) = {q for q in paths(FS) where parent(q) == p}
{code}

Which translates to the result of listStatus on an empty directory being an 
empty list. This is the same behaviour as ls has in Unix, which is what someone 
would expect from a FileSystem.

Note: it seemed appropriate to add the test of this patch to the same file as 
the test for HADOOP-11918, but as a result, one of the two will have to be 
rebased wrt. the other before being applied to trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12170) hadoop-common's JNIFlags.cmake is redundant and can be removed

2015-07-01 Thread Alan Burlison (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Burlison updated HADOOP-12170:
---
Priority: Minor  (was: Major)

 hadoop-common's JNIFlags.cmake is redundant and can be removed
 --

 Key: HADOOP-12170
 URL: https://issues.apache.org/jira/browse/HADOOP-12170
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Reporter: Alan Burlison
Assignee: Alan Burlison
Priority: Minor

 With the integration of:
 * HADOOP-12036 Consolidate all of the cmake extensions in one *directory
 * HADOOP-12104 Migrate Hadoop Pipes native build to new CMake
 * HDFS-8635 Migrate HDFS native build to new CMake framework
 * MAPREDUCE-6407 Migrate MAPREDUCE native build to new CMake
 * YARN-3827 Migrate YARN native build to new CMake framework
 hadoop-common-project/hadoop-common/src/JNIFlags.cmake is now redundant and 
 can be removed



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12170) hadoop-common's JNIFlags.cmake is redundant and can be removed

2015-07-01 Thread Alan Burlison (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Burlison updated HADOOP-12170:
---
Description: 
With the integration of:

* HADOOP-12036 Consolidate all of the cmake extensions in one *directory
* HADOOP-12104 Migrate Hadoop Pipes native build to new CMake
* HDFS-8635 Migrate HDFS native build to new CMake framework
* MAPREDUCE-6407 Migrate MAPREDUCE native build to new CMake
* YARN-3827 Migrate YARN native build to new CMake framework

hadoop-common-project/hadoop-common/src/JNIFlags.cmake is now redundant and can 
be removed

  was:
With the integration of:

* HADOOP-12036 Consolidate all of the cmake extensions in one *directory
* HADOOP-12104 Migrate Hadoop Pipes native build to new CMake
* HDFS-8635 Migrate HDFS native build to new CMake framework
* MAPREDUCE-6407 Migrate MAPREDUCE native build to new CMake
YARN-3827 Migrate YARN native build to new CMake framework

hadoop-common-project/hadoop-common/src/JNIFlags.cmake is now redundant and can 
be removed


 hadoop-common's JNIFlags.cmake is redundant and can be removed
 --

 Key: HADOOP-12170
 URL: https://issues.apache.org/jira/browse/HADOOP-12170
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Reporter: Alan Burlison
Assignee: Alan Burlison

 With the integration of:
 * HADOOP-12036 Consolidate all of the cmake extensions in one *directory
 * HADOOP-12104 Migrate Hadoop Pipes native build to new CMake
 * HDFS-8635 Migrate HDFS native build to new CMake framework
 * MAPREDUCE-6407 Migrate MAPREDUCE native build to new CMake
 * YARN-3827 Migrate YARN native build to new CMake framework
 hadoop-common-project/hadoop-common/src/JNIFlags.cmake is now redundant and 
 can be removed



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12164) Fix TestMove and TestFsShellReturnCode failed to get command name using reflection.

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610355#comment-14610355
 ] 

Hudson commented on HADOOP-12164:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #233 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/233/])
 HADOOP-12164. Fix TestMove and TestFsShellReturnCode failed to get command 
name using reflection. (Lei Xu) (lei: rev 
532e38cb7f70606c2c96d05259670e1e91d60ab3)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFsShellReturnCode.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestMove.java


 Fix TestMove and TestFsShellReturnCode failed to get command name using 
 reflection.
 ---

 Key: HADOOP-12164
 URL: https://issues.apache.org/jira/browse/HADOOP-12164
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Fix For: 3.0.0, 2.8.0

 Attachments: HADOOP-12164.000.patch


 When {{hadoop.shell.missing.defaultFs.warning}} is enabled, a few tests fail 
 as follows:
 {noformat}
 java.lang.RuntimeException: failed to get .NAME
   at java.lang.Class.getDeclaredField(Class.java:1948)
   at org.apache.hadoop.fs.shell.Command.getCommandField(Command.java:458)
   at org.apache.hadoop.fs.shell.Command.getName(Command.java:401)
   at 
 org.apache.hadoop.fs.shell.FsCommand.getCommandName(FsCommand.java:80)
   at 
 org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:111)
   at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
   at 
 org.apache.hadoop.fs.TestFsShellReturnCode.testChgrpGroupValidity(TestFsShellReturnCode.java:434)
 {noformat}
 The reason is that {{FsCommand#processRawArguments}} uses 
 {{getCommandName()}}, which uses reflection to find the {{static String NAME}} 
 field, to build the error message. But the commands built in these tests do 
 not declare a {{static String NAME}} field of their own, and the field is not 
 inherited, so the lookup fails. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12154) FileSystem#getUsed() returns the file length only from root '/'

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610372#comment-14610372
 ] 

Hudson commented on HADOOP-12154:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #233 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/233/])
HADOOP-12154. FileSystem#getUsed() returns the file length only from root '/' 
(Contributed by J.Andreina) (vinayakumarb: rev 
6d99017f38f5a158b5cb65c74688b4c833e4e35f)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java


 FileSystem#getUsed() returns the file length only from root '/'
 ---

 Key: HADOOP-12154
 URL: https://issues.apache.org/jira/browse/HADOOP-12154
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: tongshiquan
Assignee: J.Andreina
 Fix For: 2.8.0

 Attachments: HDFS-8525.1.patch


 getUsed() should return the total HDFS space used, as getStatus().getUsed() does.
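 A minimal sketch of the comparison being described, assuming an already 
 configured {{fs}} handle (both calls exist on FileSystem; getStatus() returns 
 an FsStatus):
 {code}
 // After the fix these should agree on the total bytes used in the filesystem,
 // instead of getUsed() only reporting the file length under '/'.
 long usedViaGetUsed = fs.getUsed();
 long usedViaStatus = fs.getStatus().getUsed();
 System.out.println(usedViaGetUsed + " vs " + usedViaStatus);
 {code}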



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12169) ListStatus on empty dir in S3A lists itself instead of returning an empty list

2015-07-01 Thread Pieter Reuse (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pieter Reuse updated HADOOP-12169:
--
Description: 
Upon testing the patch for HADOOP-11918, I stumbled upon a weird behaviour that 
it introduces to the S3AFileSystem class. Calling listStatus() on an empty bucket 
returns an empty list, while doing the same on an empty directory returns an 
array of length 1 containing only the directory itself.

The bugfix is quite simple. In the line of code {code}...if 
(keyPath.equals(f)...{code} (S3AFileSystem:758), keyPath is qualified wrt. the 
fs and f is not. Therefore, this returns false while it shouldn't. The bugfix 
is to make f qualified in this line of code.

More formally: according to the formal definition of [The Hadoop FileSystem API 
Definition|https://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-common/filesystem/],
 more specifically FileSystem.listStatus, only child elements of a directory 
should be returned upon a listStatus()-call.

In detail: 
{code}
elif isDir(FS, p): result [getFileStatus(c) for c in children(FS, p) where f(c) 
== True]
{code}
and
{code}
def children(FS, p) = {q for q in paths(FS) where parent(q) == p}
{code}

Which translates to the result of listStatus on an empty directory being an 
empty list. This is the same behaviour as ls has in Unix, which is what someone 
would expect from a FileSystem.

Note: it seemed appropriate to add the test of this patch to the same file as 
the test for HADOOP-11918, but as a result, one of the two will have to be 
rebased wrt. the other before being applied to trunk.

  was:
Upon testing the patch for HADOOP-11918, I stumbled upon a weird behaviour that 
it introduces to the S3AFileSystem class. Calling listStatus() on an empty bucket 
returns an empty list, while doing the same on an empty directory returns an 
array of length 1 containing only the directory itself.

The bugfix is quite simple. In the line of code {code}...if 
(keyPath.equals(f)...{code} (S3AFileSystem:758), keyPath is qualified wrt. the 
fs and f is not. Therefore, this returns false while it shouldn't. The bugfix 
is to make f qualified in this line of code.

More formally: according to the formal definition of [The Hadoop FileSystem API 
Definition|https://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-common/filesystem/],
 more specifically FileSystem.listStatus, only child elements of a directory 
should be returned upon a listStatus()-call.

In detail: 
{code}
elif isDir(FS, p): result [getFileStatus(c) for c in children(FS, p) where f(c) 
== True]
{code}
and
{code}
def children(FS, p) = {q for q in paths(FS) where parent(q) == p}
{code}

Which translates to the result of listStatus on an empty directory being an 
empty list. This is the same behaviour as ls has in Unix, which is what someone 
would expect from a FileSystem.

Note: it seemed appropriate to add the test of this patch to the same file as 
the test for HADOOP-11918, but as a result, one of the two will have to be 
rebased wrt. the other before being applied to trunk.


 ListStatus on empty dir in S3A lists itself instead of returning an empty list
 --

 Key: HADOOP-12169
 URL: https://issues.apache.org/jira/browse/HADOOP-12169
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Reporter: Pieter Reuse
Assignee: Pieter Reuse
 Attachments: HADOOP-12169-001.patch


 Upon testing the patch for HADOOP-11918, I stumbled upon a weird behaviour 
 that it introduces to the S3AFileSystem class. Calling listStatus() on an 
 empty bucket returns an empty list, while doing the same on an empty 
 directory returns an array of length 1 containing only the directory itself.
 The bugfix is quite simple. In the line of code {code}...if 
 (keyPath.equals(f)...{code} (S3AFileSystem:758), keyPath is qualified wrt. 
 the fs and f is not. Therefore, this returns false while it shouldn't. The 
 bugfix is to make f qualified in this line of code.
 More formally: according to the formal definition of [The Hadoop FileSystem 
 API 
 Definition|https://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-common/filesystem/],
  more specifically FileSystem.listStatus, only child elements of a directory 
 should be returned upon a listStatus()-call.
 In detail: 
 {code}
 elif isDir(FS, p): result [getFileStatus(c) for c in children(FS, p) where 
 f(c) == True]
 {code}
 and
 {code}
 def children(FS, p) = {q for q in paths(FS) where parent(q) == p}
 {code}
 Which translates to the result of listStatus on an empty directory being an 
 empty list. This is the same behaviour as ls has in Unix, which is what 
 someone would expect from a FileSystem.
 Note: it seemed appropriate to add the test of this patch to the same file as 
 the test for 

[jira] [Commented] (HADOOP-12107) long running apps may have a huge number of StatisticsData instances under FileSystem

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610306#comment-14610306
 ] 

Hudson commented on HADOOP-12107:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2172 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2172/])
HADOOP-12107. long running apps may have a huge number of StatisticsData 
instances under FileSystem (Sangjin Lee via Ming Ma) (mingma: rev 
8e1bdc17d9134e01115ae7c929503d8ac0325207)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FCStatisticsBaseTest.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java


 long running apps may have a huge number of StatisticsData instances under 
 FileSystem
 -

 Key: HADOOP-12107
 URL: https://issues.apache.org/jira/browse/HADOOP-12107
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
Priority: Critical
 Fix For: 2.8.0

 Attachments: HADOOP-12107.001.patch, HADOOP-12107.002.patch, 
 HADOOP-12107.003.patch, HADOOP-12107.004.patch, HADOOP-12107.005.patch


 We observed with some of our apps (non-mapreduce apps that use filesystems) 
 that they end up accumulating a huge memory footprint coming from 
 {{FileSystem$Statistics$StatisticsData}} (in the {{allData}} list of 
 {{Statistics}}).
 Although the thread reference from {{StatisticsData}} is a weak reference, 
 and thus can get cleared once a thread goes away, the actual 
 {{StatisticsData}} instances in the list won't get cleared until any of these 
 following methods is called on {{Statistics}}:
 - {{getBytesRead()}}
 - {{getBytesWritten()}}
 - {{getReadOps()}}
 - {{getLargeReadOps()}}
 - {{getWriteOps()}}
 - {{toString()}}
 It is quite possible to have an application that interacts with a filesystem 
 but does not call any of these methods on the {{Statistics}}. If such an 
 application runs for a long time and has a large amount of thread churn, the 
 memory footprint will grow significantly.
 The current workaround is either to limit the thread churn or to invoke these 
 operations occasionally to pare down the memory. However, this is still a 
 deficiency with {{FileSystem$Statistics}} itself in that the memory is 
 controlled only as a side effect of those operations.
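 A minimal sketch of the stop-gap described above (periodically invoking one of 
 the aggregating methods so dead-thread data gets pared down); the scheduler 
 and interval are arbitrary (uses java.util.concurrent):
 {code}
 // Periodically touch an aggregating method on every Statistics instance;
 // per the description above, that is currently what prunes the stale
 // per-thread StatisticsData entries.
 ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
 scheduler.scheduleAtFixedRate(new Runnable() {
   @Override
   public void run() {
     for (FileSystem.Statistics stats : FileSystem.getAllStatistics()) {
       stats.getBytesRead();  // side effect: aggregates and clears dead-thread data
     }
   }
 }, 10, 10, TimeUnit.MINUTES);
 {code}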



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12104) Migrate Hadoop Pipes native build to new CMake framework

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610307#comment-14610307
 ] 

Hudson commented on HADOOP-12104:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2172 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2172/])
HADOOP-12104. Migrate Hadoop Pipes native build to new CMake framework (Alan 
Burlison via Colin P. McCabe) (cmccabe: rev 
5a27c3fd7616215937264c2b1f015205e60f2d73)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-tools/hadoop-pipes/src/CMakeLists.txt


 Migrate Hadoop Pipes native build to new CMake framework
 

 Key: HADOOP-12104
 URL: https://issues.apache.org/jira/browse/HADOOP-12104
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 2.7.0
Reporter: Alan Burlison
Assignee: Alan Burlison
 Fix For: 2.8.0

 Attachments: HADOOP-12104.001.patch


 As per HADOOP-12036, the CMake infrastructure should be refactored and made 
 common across all Hadoop components. This bug covers the migration of Hadoop 
 Pipes to the new CMake infrastructure. This change will also add support for 
 building Hadoop Pipes Native components on Solaris.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12158) Improve error message in TestCryptoStreamsWithOpensslAesCtrCryptoCodec when OpenSSL is not installed

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610333#comment-14610333
 ] 

Hudson commented on HADOOP-12158:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2172 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2172/])
HADOOP-12158. Improve error message in 
TestCryptoStreamsWithOpensslAesCtrCryptoCodec when OpenSSL is not installed. 
(wang: rev 9ee7b6e6c4ab6bee6304fa7904993c7cbd9a6cd2)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsWithOpensslAesCtrCryptoCodec.java


 Improve error message in TestCryptoStreamsWithOpensslAesCtrCryptoCodec when 
 OpenSSL is not installed
 

 Key: HADOOP-12158
 URL: https://issues.apache.org/jira/browse/HADOOP-12158
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 2.6.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Trivial
 Fix For: 2.8.0

 Attachments: hadoop-12158.001.patch


 Trivial: rather than throwing an NPE, let's print a nicer error message via 
 an assertNotNull.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12164) Fix TestMove and TestFsShellReturnCode failed to get command name using reflection.

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610313#comment-14610313
 ] 

Hudson commented on HADOOP-12164:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2172 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2172/])
 HADOOP-12164. Fix TestMove and TestFsShellReturnCode failed to get command 
name using reflection. (Lei Xu) (lei: rev 
532e38cb7f70606c2c96d05259670e1e91d60ab3)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestMove.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFsShellReturnCode.java


 Fix TestMove and TestFsShellReturnCode failed to get command name using 
 reflection.
 ---

 Key: HADOOP-12164
 URL: https://issues.apache.org/jira/browse/HADOOP-12164
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Fix For: 3.0.0, 2.8.0

 Attachments: HADOOP-12164.000.patch


 When {{hadoop.shell.missing.defaultFs.warning}} is enabled, a few tests fail 
 as follows:
 {noformat}
 java.lang.RuntimeException: failed to get .NAME
   at java.lang.Class.getDeclaredField(Class.java:1948)
   at org.apache.hadoop.fs.shell.Command.getCommandField(Command.java:458)
   at org.apache.hadoop.fs.shell.Command.getName(Command.java:401)
   at 
 org.apache.hadoop.fs.shell.FsCommand.getCommandName(FsCommand.java:80)
   at 
 org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:111)
   at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
   at 
 org.apache.hadoop.fs.TestFsShellReturnCode.testChgrpGroupValidity(TestFsShellReturnCode.java:434)
 {noformat}
 The reason is that {{FsCommand#processRawArguments}} uses 
 {{getCommandName()}}, which uses reflection to find the {{static String NAME}} 
 field, to build the error message. But the commands built in these tests do 
 not declare a {{static String NAME}} field of their own, and the field is not 
 inherited, so the lookup fails. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12116) Fix unrecommended syntax usages in hadoop/hdfs/yarn script for cygwin in branch-2

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610321#comment-14610321
 ] 

Hudson commented on HADOOP-12116:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2172 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2172/])
HADOOP-12116. Fix unrecommended syntax usages in hadoop/hdfs/yarn script for 
cygwin in branch-2. Contributed by Li Lu. (cnauroth: rev 
b8e792cba257fdb0ca266ecb2f60f3f10c3a0c3b)
* hadoop-common-project/hadoop-common/CHANGES.txt


 Fix unrecommended syntax usages in hadoop/hdfs/yarn script for cygwin in 
 branch-2
 -

 Key: HADOOP-12116
 URL: https://issues.apache.org/jira/browse/HADOOP-12116
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Li Lu
Assignee: Li Lu
 Fix For: 2.8.0

 Attachments: HADOOP-12116-branch-2.001.patch


 We're using syntax like "if $cygwin; then", which may be erroneously 
 evaluated to true if cygwin is unset. We need to fix this in branch-2. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12089) StorageException complaining no lease ID when updating FolderLastModifiedTime in WASB

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610315#comment-14610315
 ] 

Hudson commented on HADOOP-12089:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2172 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2172/])
HADOOP-12089. StorageException complaining  no lease ID when updating 
FolderLastModifiedTime in WASB. Contributed by Duo Xu. (cnauroth: rev 
460e98f7b3ec84f3c5afcb2aad4f4e7031d16e3a)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java


 StorageException complaining  no lease ID when updating 
 FolderLastModifiedTime in WASB
 

 Key: HADOOP-12089
 URL: https://issues.apache.org/jira/browse/HADOOP-12089
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 2.7.0
Reporter: Duo Xu
Assignee: Duo Xu
 Fix For: 2.8.0

 Attachments: HADOOP-12089.01.patch, HADOOP-12089.02.patch


 This is a similar issue to HADOOP-11523. HADOOP-11523 happens when HBase is 
 doing distributed log splitting. This JIRA happens when HBase is deleting old 
 WALs and trying to update /hbase/oldWALs folder.
 The fix is the same as HADOOP-11523.
 {code}
 2015-06-10 08:11:40,636 WARN 
 org.apache.hadoop.hbase.master.cleaner.CleanerChore: Error while deleting: 
 wasb://basecus...@basestoragecus1.blob.core.windows.net/hbase/oldWALs/workernode10.dthbasecus1.g1.internal.cloudapp.net%2C60020%2C1433908062461.1433921692855
 org.apache.hadoop.fs.azure.AzureException: 
 com.microsoft.azure.storage.StorageException: There is currently a lease on 
 the blob and no lease ID was specified in the request.
   at 
 org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2602)
   at 
 org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2613)
   at 
 org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1505)
   at 
 org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1437)
   at 
 org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteFiles(CleanerChore.java:256)
   at 
 org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteEntries(CleanerChore.java:157)
   at 
 org.apache.hadoop.hbase.master.cleaner.CleanerChore.chore(CleanerChore.java:124)
   at org.apache.hadoop.hbase.Chore.run(Chore.java:80)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: com.microsoft.azure.storage.StorageException: There is currently a 
 lease on the blob and no lease ID was specified in the request.
   at 
 com.microsoft.azure.storage.StorageException.translateException(StorageException.java:162)
   at 
 com.microsoft.azure.storage.core.StorageRequest.materializeException(StorageRequest.java:307)
   at 
 com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:177)
   at 
 com.microsoft.azure.storage.blob.CloudBlob.uploadProperties(CloudBlob.java:2991)
   at 
 org.apache.hadoop.fs.azure.StorageInterfaceImpl$CloudBlobWrapperImpl.uploadProperties(StorageInterfaceImpl.java:372)
   at 
 org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2597)
   ... 8 more
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12169) ListStatus on empty dir in S3A lists itself instead of returning an empty list

2015-07-01 Thread Pieter Reuse (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pieter Reuse updated HADOOP-12169:
--
Status: Patch Available  (was: In Progress)

 ListStatus on empty dir in S3A lists itself instead of returning an empty list
 --

 Key: HADOOP-12169
 URL: https://issues.apache.org/jira/browse/HADOOP-12169
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Reporter: Pieter Reuse
Assignee: Pieter Reuse
 Attachments: HADOOP-12169-001.patch


 Upon testing the patch for HADOOP-11918, I stumbled upon a weird behaviour 
 that it introduces to the S3AFileSystem class. Calling listStatus() on an 
 empty bucket returns an empty list, while doing the same on an empty 
 directory returns an array of length 1 containing only the directory itself.
 The bugfix is quite simple. In the line of code {code}...if 
 (keyPath.equals(f)...{code} (S3AFileSystem:758), keyPath is qualified wrt. 
 the fs and f is not. Therefore, this returns false while it shouldn't. The 
 bugfix is to make f qualified in this line of code.
 More formally: according to the formal definition of [The Hadoop FileSystem 
 API 
 Definition|https://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-common/filesystem/],
  more specifically FileSystem.listStatus, only child elements of a directory 
 should be returned upon a listStatus()-call.
 In detail: 
 {code}
 elif isDir(FS, p): result [getFileStatus(c) for c in children(FS, p) where 
 f(c) == True]
 {code}
 and
 {code}
 def children(FS, p) = {q for q in paths(FS) where parent(q) == p}
 {code}
 Which translates to the result of listStatus on an empty directory being an 
 empty list. This is the same behaviour as ls has in Unix, which is what 
 someone would expect from a FileSystem.
 Note: it seemed appropriate to add the test of this patch to the same file as 
 the test for HADOOP-11918, but as a result, one of the two will have to be 
 rebased wrt. the other before being applied to trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12170) hadoop-common's JNIFlags.cmake is redundant and can be removed

2015-07-01 Thread Alan Burlison (JIRA)
Alan Burlison created HADOOP-12170:
--

 Summary: hadoop-common's JNIFlags.cmake is redundant and can be 
removed
 Key: HADOOP-12170
 URL: https://issues.apache.org/jira/browse/HADOOP-12170
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Reporter: Alan Burlison
Assignee: Alan Burlison


With the integration of:

* HADOOP-12036 Consolidate all of the cmake extensions in one *directory
* HADOOP-12104 Migrate Hadoop Pipes native build to new CMake
* HDFS-8635 Migrate HDFS native build to new CMake framework
* MAPREDUCE-6407 Migrate MAPREDUCE native build to new CMake
YARN-3827 Migrate YARN native build to new CMake framework

hadoop-common-project/hadoop-common/src/JNIFlags.cmake is now redundant and can 
be removed



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12112) Make hadoop-common-project Native code -Wall-clean

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610326#comment-14610326
 ] 

Hudson commented on HADOOP-12112:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2172 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2172/])
HADOOP-12112. Make hadoop-common-project Native code -Wall-clean (alanburlison 
via cmccabe) (cmccabe: rev fad291ea6dbe49782e33a32cd6608088951e2c58)
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocket.c
* hadoop-common-project/hadoop-common/CHANGES.txt


 Make hadoop-common-project Native code -Wall-clean
 --

 Key: HADOOP-12112
 URL: https://issues.apache.org/jira/browse/HADOOP-12112
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 2.7.0
Reporter: Alan Burlison
Assignee: Alan Burlison
 Fix For: 2.8.0

 Attachments: HADOOP-12112.001.patch


 As we specify -Wall as a default compilation flag, it would be helpful if the 
 Native code was -Wall-clean



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10798) globStatus() should always return a sorted list of files

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610324#comment-14610324
 ] 

Hudson commented on HADOOP-10798:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2172 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2172/])
HADOOP-10798. globStatus() should always return a sorted list of files 
(cmccabe) (cmccabe: rev 68e588cbee660d55dba518892d064bee3795a002)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestGlobPaths.java


 globStatus() should always return a sorted list of files
 

 Key: HADOOP-10798
 URL: https://issues.apache.org/jira/browse/HADOOP-10798
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Felix Borchers
Assignee: Colin Patrick McCabe
Priority: Minor
  Labels: BB2015-05-TBR
 Fix For: 2.8.0

 Attachments: HADOOP-10798.001.patch


 (FileSystem) globStatus() does not return a sorted file list anymore.
 But the API says: "... Results are sorted by their names."
 This seems to have been lost when the Globber object was introduced; there is 
 no sort in the actual code.
 Code to check this behavior:
 {code}
 Configuration conf = new Configuration();
 FileSystem fs = FileSystem.get(conf);
 Path path = new Path("/tmp/" + System.currentTimeMillis());
 fs.mkdirs(path);
 fs.deleteOnExit(path);
 fs.createNewFile(new Path(path, "2"));
 fs.createNewFile(new Path(path, "3"));
 fs.createNewFile(new Path(path, "1"));
 FileStatus[] status = fs.globStatus(new Path(path, "*"));
 Collection<String> list = new ArrayList<String>();
 for (FileStatus f : status) {
   list.add(f.getPath().toString());
   // System.out.println(f.getPath().toString());
 }
 boolean sorted = Ordering.natural().isOrdered(list);
 Assert.assertTrue(sorted);
 {code}
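
 A caller-side workaround sketch (assuming nothing beyond the public FileStatus 
 API): sort the glob results by path name explicitly rather than relying on the 
 FileSystem implementation to do it.
 {code}
 import java.util.Arrays;
 import java.util.Comparator;
 import org.apache.hadoop.fs.FileStatus;

 public class SortedGlob {
   // Sort glob results by the string form of their paths.
   static FileStatus[] sortByPath(FileStatus[] status) {
     Arrays.sort(status, new Comparator<FileStatus>() {
       @Override
       public int compare(FileStatus a, FileStatus b) {
         return a.getPath().toString().compareTo(b.getPath().toString());
       }
     });
     return status;
   }
 }
 {code}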



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12149) copy all of test-patch BINDIR prior to re-exec

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610316#comment-14610316
 ] 

Hudson commented on HADOOP-12149:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2172 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2172/])
HADOOP-12149. copy all of test-patch BINDIR prior to re-exec (aw) (aw: rev 
147e020c7aef3ba42eddcef3be1b4ae7c7910371)
* dev-support/test-patch.sh
* hadoop-common-project/hadoop-common/CHANGES.txt


 copy all of test-patch BINDIR prior to re-exec
 --

 Key: HADOOP-12149
 URL: https://issues.apache.org/jira/browse/HADOOP-12149
 Project: Hadoop Common
  Issue Type: Improvement
  Components: yetus
Affects Versions: 3.0.0, HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Fix For: 3.0.0

 Attachments: HADOOP-12149.00.patch


 During some tests (e.g., 
 https://builds.apache.org/job/PreCommit-HADOOP-Build/7090 ), the initial mvn 
 install triggered a full test suite run when Jenkins switched from the old 
 test-patch to the new test-patch.  This is bad.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12124) Add HTrace support for FsShell

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610305#comment-14610305
 ] 

Hudson commented on HADOOP-12124:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2172 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2172/])
HADOOP-12124. Add HTrace support for FsShell (cmccabe) (cmccabe: rev 
ad60807238c4f7779cb0685e7d39ca0c50e01b2f)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsShell.java


 Add HTrace support for FsShell
 --

 Key: HADOOP-12124
 URL: https://issues.apache.org/jira/browse/HADOOP-12124
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.8.0

 Attachments: HADOOP-12124.001.patch, HADOOP-12124.002.patch


 Add HTrace support for FsShell



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12154) FileSystem#getUsed() returns the file length only from root '/'

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610330#comment-14610330
 ] 

Hudson commented on HADOOP-12154:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2172 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2172/])
HADOOP-12154. FileSystem#getUsed() returns the file length only from root '/' 
(Contributed by J.Andreina) (vinayakumarb: rev 
6d99017f38f5a158b5cb65c74688b4c833e4e35f)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java


 FileSystem#getUsed() returns the file length only from root '/'
 ---

 Key: HADOOP-12154
 URL: https://issues.apache.org/jira/browse/HADOOP-12154
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: tongshiquan
Assignee: J.Andreina
 Fix For: 2.8.0

 Attachments: HDFS-8525.1.patch


 getUsed() should return the total HDFS space used (as getStatus().getUsed() 
 does), rather than only the lengths of the files directly under root '/'.
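
 A short usage sketch contrasting the two calls being compared here (standard 
 FileSystem APIs; the output formatting is illustrative):
 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;

 public class UsedSpaceCheck {
   public static void main(String[] args) throws Exception {
     FileSystem fs = FileSystem.get(new Configuration());
     // Before the fix, getUsed() only summed the entries directly under '/',
     // while FsStatus#getUsed() reports the space used by the whole filesystem.
     System.out.println("getUsed():             " + fs.getUsed());
     System.out.println("getStatus().getUsed(): " + fs.getStatus().getUsed());
   }
 }
 {code}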



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12009) Clarify FileSystem.listStatus() sorting order & fix FileSystemContractBaseTest:testListStatus

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610312#comment-14610312
 ] 

Hudson commented on HADOOP-12009:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2172 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2172/])
Revert HADOOP-12009 Clarify FileSystem.listStatus() sorting order & fix 
FileSystemContractBaseTest:testListStatus. (J.Andreina via stevel) (stevel: 
rev 076948d9a4053cc8be1927005c797273bae85e99)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
* hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md
* hadoop-common-project/hadoop-common/CHANGES.txt


 Clarify FileSystem.listStatus() sorting order & fix 
 FileSystemContractBaseTest:testListStatus 
 --

 Key: HADOOP-12009
 URL: https://issues.apache.org/jira/browse/HADOOP-12009
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Reporter: Jakob Homan
Assignee: J.Andreina
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-12009-003.patch, HADOOP-12009-004.patch, 
 HADOOP-12009.1.patch


 FileSystem.listStatus does not guarantee that implementations will return 
 sorted entries:
 {code}
   /**
    * List the statuses of the files/directories in the given path if the path
    * is a directory.
    *
    * @param f given path
    * @return the statuses of the files/directories in the given patch
    * @throws FileNotFoundException when the path does not exist;
    *         IOException see specific implementation
    */
   public abstract FileStatus[] listStatus(Path f) throws FileNotFoundException,
                                                          IOException;
 {code}
 However, FileSystemContractBaseTest expects the elements to come back sorted:
 {code}
 Path[] testDirs = { path("/test/hadoop/a"),
                     path("/test/hadoop/b"),
                     path("/test/hadoop/c/1"), };

 // ...
 paths = fs.listStatus(path("/test/hadoop"));
 assertEquals(3, paths.length);
 assertEquals(path("/test/hadoop/a"), paths[0].getPath());
 assertEquals(path("/test/hadoop/b"), paths[1].getPath());
 assertEquals(path("/test/hadoop/c"), paths[2].getPath());
 {code}
 We should pass this test as long as all the paths are there, regardless of 
 their ordering.
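
 A sketch of an order-independent version of that assertion (assuming JUnit 4 
 and that the expected paths are qualified the same way listStatus qualifies 
 its results); the committed test change may differ:
 {code}
 import static org.junit.Assert.assertEquals;

 import java.util.Arrays;
 import java.util.HashSet;
 import java.util.Set;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;

 public class ListStatusCheck {
   // Compare the returned paths as a set instead of asserting on positions.
   static void assertListing(FileSystem fs, Path dir, Path... expected)
       throws Exception {
     FileStatus[] paths = fs.listStatus(dir);
     Set<Path> actual = new HashSet<Path>();
     for (FileStatus s : paths) {
       actual.add(s.getPath());
     }
     assertEquals(new HashSet<Path>(Arrays.asList(expected)), actual);
   }
 }
 {code}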



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12159) Move DistCpUtils#compareFs() to org.apache.hadoop.fs.FileUtil and fix for HA namespaces

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610332#comment-14610332
 ] 

Hudson commented on HADOOP-12159:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2172 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2172/])
HADOOP-12159. Move DistCpUtils#compareFs() to org.apache.hadoop.fs.FileUtil and 
fix for HA namespaces (rchiang via rkanter) (rkanter: rev 
aaafa0b2ee64f6cfb7fdc717500e1c483b9df8cc)
* hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java


 Move DistCpUtils#compareFs() to org.apache.hadoop.fs.FileUtil and fix for HA 
 namespaces
 ---

 Key: HADOOP-12159
 URL: https://issues.apache.org/jira/browse/HADOOP-12159
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Ray Chiang
Assignee: Ray Chiang
 Fix For: 2.8.0

 Attachments: HADOOP-12159.001.patch


 DistCpUtils#compareFs() duplicates functionality with 
 JobResourceUploader#compareFs().  These should be moved to a common area with 
 unit testing.
 The initially suggested place to move them to is org.apache.hadoop.fs.FileUtil.
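
 A rough sketch of what a shared compareFs-style helper of this shape typically 
 checks (the scheme and authority of the two FileSystem URIs); the actual 
 FileUtil method and its HA-namespace handling may well differ:
 {code}
 import java.net.URI;
 import org.apache.hadoop.fs.FileSystem;

 public class CompareFsSketch {
   // Rough sketch only: a real implementation also has to treat the logical
   // nameservice of an HA namespace as one filesystem across its NameNodes.
   static boolean sameFileSystem(FileSystem a, FileSystem b) {
     URI ua = a.getUri();
     URI ub = b.getUri();
     return equalsIgnoreCase(ua.getScheme(), ub.getScheme())
         && equalsIgnoreCase(ua.getAuthority(), ub.getAuthority());
   }

   private static boolean equalsIgnoreCase(String x, String y) {
     return x == null ? y == null : x.equalsIgnoreCase(y);
   }
 }
 {code}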



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12104) Migrate Hadoop Pipes native build to new CMake framework

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610349#comment-14610349
 ] 

Hudson commented on HADOOP-12104:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #233 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/233/])
HADOOP-12104. Migrate Hadoop Pipes native build to new CMake framework (Alan 
Burlison via Colin P. McCabe) (cmccabe: rev 
5a27c3fd7616215937264c2b1f015205e60f2d73)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-tools/hadoop-pipes/src/CMakeLists.txt


 Migrate Hadoop Pipes native build to new CMake framework
 

 Key: HADOOP-12104
 URL: https://issues.apache.org/jira/browse/HADOOP-12104
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 2.7.0
Reporter: Alan Burlison
Assignee: Alan Burlison
 Fix For: 2.8.0

 Attachments: HADOOP-12104.001.patch


 As per HADOOP-12036, the CMake infrastructure should be refactored and made 
 common across all Hadoop components. This bug covers the migration of Hadoop 
 Pipes to the new CMake infrastructure. This change will also add support for 
 building Hadoop Pipes Native components on Solaris.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12124) Add HTrace support for FsShell

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610347#comment-14610347
 ] 

Hudson commented on HADOOP-12124:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #233 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/233/])
HADOOP-12124. Add HTrace support for FsShell (cmccabe) (cmccabe: rev 
ad60807238c4f7779cb0685e7d39ca0c50e01b2f)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsShell.java


 Add HTrace support for FsShell
 --

 Key: HADOOP-12124
 URL: https://issues.apache.org/jira/browse/HADOOP-12124
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.8.0

 Attachments: HADOOP-12124.001.patch, HADOOP-12124.002.patch


 Add HTrace support for FsShell



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12009) Clarify FileSystem.listStatus() sorting order & fix FileSystemContractBaseTest:testListStatus

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610354#comment-14610354
 ] 

Hudson commented on HADOOP-12009:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #233 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/233/])
Revert HADOOP-12009 Clarify FileSystem.listStatus() sorting order & fix 
FileSystemContractBaseTest:testListStatus. (J.Andreina via stevel) (stevel: 
rev 076948d9a4053cc8be1927005c797273bae85e99)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
* hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md


 Clarify FileSystem.listStatus() sorting order & fix 
 FileSystemContractBaseTest:testListStatus 
 --

 Key: HADOOP-12009
 URL: https://issues.apache.org/jira/browse/HADOOP-12009
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Reporter: Jakob Homan
Assignee: J.Andreina
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-12009-003.patch, HADOOP-12009-004.patch, 
 HADOOP-12009.1.patch


 FileSystem.listStatus does not guarantee that implementations will return 
 sorted entries:
 {code}
   /**
    * List the statuses of the files/directories in the given path if the path
    * is a directory.
    *
    * @param f given path
    * @return the statuses of the files/directories in the given patch
    * @throws FileNotFoundException when the path does not exist;
    *         IOException see specific implementation
    */
   public abstract FileStatus[] listStatus(Path f) throws FileNotFoundException,
                                                          IOException;
 {code}
 However, FileSystemContractBaseTest expects the elements to come back sorted:
 {code}
 Path[] testDirs = { path("/test/hadoop/a"),
                     path("/test/hadoop/b"),
                     path("/test/hadoop/c/1"), };

 // ...
 paths = fs.listStatus(path("/test/hadoop"));
 assertEquals(3, paths.length);
 assertEquals(path("/test/hadoop/a"), paths[0].getPath());
 assertEquals(path("/test/hadoop/b"), paths[1].getPath());
 assertEquals(path("/test/hadoop/c"), paths[2].getPath());
 {code}
 We should pass this test as long as all the paths are there, regardless of 
 their ordering.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10798) globStatus() should always return a sorted list of files

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610366#comment-14610366
 ] 

Hudson commented on HADOOP-10798:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #233 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/233/])
HADOOP-10798. globStatus() should always return a sorted list of files 
(cmccabe) (cmccabe: rev 68e588cbee660d55dba518892d064bee3795a002)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestGlobPaths.java


 globStatus() should always return a sorted list of files
 

 Key: HADOOP-10798
 URL: https://issues.apache.org/jira/browse/HADOOP-10798
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Felix Borchers
Assignee: Colin Patrick McCabe
Priority: Minor
  Labels: BB2015-05-TBR
 Fix For: 2.8.0

 Attachments: HADOOP-10798.001.patch


 (FileSystem) globStatus() does not return a sorted file list anymore.
 But the API says: "... Results are sorted by their names."
 This seems to have been lost when the Globber object was introduced; there is 
 no sort in the actual code.
 Code to check this behavior:
 {code}
 Configuration conf = new Configuration();
 FileSystem fs = FileSystem.get(conf);
 Path path = new Path("/tmp/" + System.currentTimeMillis());
 fs.mkdirs(path);
 fs.deleteOnExit(path);
 fs.createNewFile(new Path(path, "2"));
 fs.createNewFile(new Path(path, "3"));
 fs.createNewFile(new Path(path, "1"));
 FileStatus[] status = fs.globStatus(new Path(path, "*"));
 Collection<String> list = new ArrayList<String>();
 for (FileStatus f : status) {
   list.add(f.getPath().toString());
   // System.out.println(f.getPath().toString());
 }
 boolean sorted = Ordering.natural().isOrdered(list);
 Assert.assertTrue(sorted);
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12089) StorageException complaining no lease ID when updating FolderLastModifiedTime in WASB

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610357#comment-14610357
 ] 

Hudson commented on HADOOP-12089:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #233 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/233/])
HADOOP-12089. StorageException complaining  no lease ID when updating 
FolderLastModifiedTime in WASB. Contributed by Duo Xu. (cnauroth: rev 
460e98f7b3ec84f3c5afcb2aad4f4e7031d16e3a)
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 StorageException complaining  no lease ID when updating 
 FolderLastModifiedTime in WASB
 

 Key: HADOOP-12089
 URL: https://issues.apache.org/jira/browse/HADOOP-12089
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 2.7.0
Reporter: Duo Xu
Assignee: Duo Xu
 Fix For: 2.8.0

 Attachments: HADOOP-12089.01.patch, HADOOP-12089.02.patch


 This is a similar issue to HADOOP-11523. HADOOP-11523 happens when HBase is 
 doing distributed log splitting. This JIRA happens when HBase is deleting old 
 WALs and trying to update /hbase/oldWALs folder.
 The fix is the same as HADOOP-11523.
 {code}
 2015-06-10 08:11:40,636 WARN 
 org.apache.hadoop.hbase.master.cleaner.CleanerChore: Error while deleting: 
 wasb://basecus...@basestoragecus1.blob.core.windows.net/hbase/oldWALs/workernode10.dthbasecus1.g1.internal.cloudapp.net%2C60020%2C1433908062461.1433921692855
 org.apache.hadoop.fs.azure.AzureException: 
 com.microsoft.azure.storage.StorageException: There is currently a lease on 
 the blob and no lease ID was specified in the request.
   at 
 org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2602)
   at 
 org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2613)
   at 
 org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1505)
   at 
 org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1437)
   at 
 org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteFiles(CleanerChore.java:256)
   at 
 org.apache.hadoop.hbase.master.cleaner.CleanerChore.checkAndDeleteEntries(CleanerChore.java:157)
   at 
 org.apache.hadoop.hbase.master.cleaner.CleanerChore.chore(CleanerChore.java:124)
   at org.apache.hadoop.hbase.Chore.run(Chore.java:80)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: com.microsoft.azure.storage.StorageException: There is currently a 
 lease on the blob and no lease ID was specified in the request.
   at 
 com.microsoft.azure.storage.StorageException.translateException(StorageException.java:162)
   at 
 com.microsoft.azure.storage.core.StorageRequest.materializeException(StorageRequest.java:307)
   at 
 com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:177)
   at 
 com.microsoft.azure.storage.blob.CloudBlob.uploadProperties(CloudBlob.java:2991)
   at 
 org.apache.hadoop.fs.azure.StorageInterfaceImpl$CloudBlobWrapperImpl.uploadProperties(StorageInterfaceImpl.java:372)
   at 
 org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2597)
   ... 8 more
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12159) Move DistCpUtils#compareFs() to org.apache.hadoop.fs.FileUtil and fix for HA namespaces

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610374#comment-14610374
 ] 

Hudson commented on HADOOP-12159:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #233 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/233/])
HADOOP-12159. Move DistCpUtils#compareFs() to org.apache.hadoop.fs.FileUtil and 
fix for HA namespaces (rchiang via rkanter) (rkanter: rev 
aaafa0b2ee64f6cfb7fdc717500e1c483b9df8cc)
* hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Move DistCpUtils#compareFs() to org.apache.hadoop.fs.FileUtil and fix for HA 
 namespaces
 ---

 Key: HADOOP-12159
 URL: https://issues.apache.org/jira/browse/HADOOP-12159
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Ray Chiang
Assignee: Ray Chiang
 Fix For: 2.8.0

 Attachments: HADOOP-12159.001.patch


 DistCpUtils#compareFs() duplicates functionality with 
 JobResourceUploader#compareFs().  These should be moved to a common area with 
 unit testing.
 The initially suggested place to move them to is org.apache.hadoop.fs.FileUtil.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12149) copy all of test-patch BINDIR prior to re-exec

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610358#comment-14610358
 ] 

Hudson commented on HADOOP-12149:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #233 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/233/])
HADOOP-12149. copy all of test-patch BINDIR prior to re-exec (aw) (aw: rev 
147e020c7aef3ba42eddcef3be1b4ae7c7910371)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/test-patch.sh


 copy all of test-patch BINDIR prior to re-exec
 --

 Key: HADOOP-12149
 URL: https://issues.apache.org/jira/browse/HADOOP-12149
 Project: Hadoop Common
  Issue Type: Improvement
  Components: yetus
Affects Versions: 3.0.0, HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Fix For: 3.0.0

 Attachments: HADOOP-12149.00.patch


 During some tests (e.g., 
 https://builds.apache.org/job/PreCommit-HADOOP-Build/7090 ), the initial mvn 
 install triggered a full test suite run when Jenkins switched from the old 
 test-patch to the new test-patch.  This is bad.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12158) Improve error message in TestCryptoStreamsWithOpensslAesCtrCryptoCodec when OpenSSL is not installed

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610375#comment-14610375
 ] 

Hudson commented on HADOOP-12158:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #233 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/233/])
HADOOP-12158. Improve error message in 
TestCryptoStreamsWithOpensslAesCtrCryptoCodec when OpenSSL is not installed. 
(wang: rev 9ee7b6e6c4ab6bee6304fa7904993c7cbd9a6cd2)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsWithOpensslAesCtrCryptoCodec.java


 Improve error message in TestCryptoStreamsWithOpensslAesCtrCryptoCodec when 
 OpenSSL is not installed
 

 Key: HADOOP-12158
 URL: https://issues.apache.org/jira/browse/HADOOP-12158
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 2.6.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Trivial
 Fix For: 2.8.0

 Attachments: hadoop-12158.001.patch


 Trivial, rather than throwing an NPE, let's print a nicer error message via 
 an assertNotNull.
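
 A minimal sketch of the kind of check described (assuming JUnit 4); the 
 message text and method name are illustrative, not the committed patch:
 {code}
 import static org.junit.Assert.assertNotNull;

 import org.apache.hadoop.crypto.CryptoCodec;

 public class OpensslCodecPrecondition {
   // Fail with a readable message when the OpenSSL-backed codec is
   // unavailable, instead of letting a later call throw an NPE.
   static void requireOpensslCodec(CryptoCodec codec) {
     assertNotNull("OpenSSL codec is not available; check that the native "
         + "openssl library is installed and loadable", codec);
   }
 }
 {code}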



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12107) long running apps may have a huge number of StatisticsData instances under FileSystem

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610348#comment-14610348
 ] 

Hudson commented on HADOOP-12107:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #233 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/233/])
HADOOP-12107. long running apps may have a huge number of StatisticsData 
instances under FileSystem (Sangjin Lee via Ming Ma) (mingma: rev 
8e1bdc17d9134e01115ae7c929503d8ac0325207)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FCStatisticsBaseTest.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java


 long running apps may have a huge number of StatisticsData instances under 
 FileSystem
 -

 Key: HADOOP-12107
 URL: https://issues.apache.org/jira/browse/HADOOP-12107
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee
Priority: Critical
 Fix For: 2.8.0

 Attachments: HADOOP-12107.001.patch, HADOOP-12107.002.patch, 
 HADOOP-12107.003.patch, HADOOP-12107.004.patch, HADOOP-12107.005.patch


 We observed with some of our apps (non-mapreduce apps that use filesystems) 
 that they end up accumulating a huge memory footprint coming from 
 {{FileSystem$Statistics$StatisticsData}} (in the {{allData}} list of 
 {{Statistics}}).
 Although the thread reference from {{StatisticsData}} is a weak reference, 
 and thus can get cleared once a thread goes away, the actual 
 {{StatisticsData}} instances in the list won't get cleared until any of these 
 following methods is called on {{Statistics}}:
 - {{getBytesRead()}}
 - {{getBytesWritten()}}
 - {{getReadOps()}}
 - {{getLargeReadOps()}}
 - {{getWriteOps()}}
 - {{toString()}}
 It is quite possible to have an application that interacts with a filesystem 
 but does not call any of these methods on the {{Statistics}}. If such an 
 application runs for a long time and has a large amount of thread churn, the 
 memory footprint will grow significantly.
 The current workaround is either to limit the thread churn or to invoke these 
 operations occasionally to pare down the memory. However, this is still a 
 deficiency with {{FileSystem$Statistics}} itself in that the memory is 
 controlled only as a side effect of those operations.
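
 A sketch of the workaround mentioned above (using only the public 
 FileSystem.getAllStatistics() and Statistics APIs): periodically invoke one of 
 the aggregating getters so that entries for dead threads get pared down.
 {code}
 import java.util.List;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.FileSystem.Statistics;

 public class StatisticsPruner implements Runnable {
   // Calling any aggregating getter (here getBytesRead) walks the per-thread
   // data, which is the side effect that keeps the allData list from growing
   // without bound. Schedule this on a timer, e.g. once a minute.
   @Override
   public void run() {
     List<Statistics> all = FileSystem.getAllStatistics();
     for (Statistics stats : all) {
       stats.getBytesRead();
     }
   }
 }
 {code}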



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12116) Fix unrecommended syntax usages in hadoop/hdfs/yarn script for cygwin in branch-2

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610363#comment-14610363
 ] 

Hudson commented on HADOOP-12116:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #233 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/233/])
HADOOP-12116. Fix unrecommended syntax usages in hadoop/hdfs/yarn script for 
cygwin in branch-2. Contributed by Li Lu. (cnauroth: rev 
b8e792cba257fdb0ca266ecb2f60f3f10c3a0c3b)
* hadoop-common-project/hadoop-common/CHANGES.txt


 Fix unrecommended syntax usages in hadoop/hdfs/yarn script for cygwin in 
 branch-2
 -

 Key: HADOOP-12116
 URL: https://issues.apache.org/jira/browse/HADOOP-12116
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Li Lu
Assignee: Li Lu
 Fix For: 2.8.0

 Attachments: HADOOP-12116-branch-2.001.patch


 We're using syntax like "if $cygwin; then", which may be erroneously evaluated 
 as true if cygwin is unset. We need to fix this in branch-2. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12169) ListStatus on empty dir in S3A lists itself instead of returning an empty list

2015-07-01 Thread Thomas Demoor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Demoor updated HADOOP-12169:
---
Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-11694

 ListStatus on empty dir in S3A lists itself instead of returning an empty list
 --

 Key: HADOOP-12169
 URL: https://issues.apache.org/jira/browse/HADOOP-12169
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Pieter Reuse
Assignee: Pieter Reuse
 Attachments: HADOOP-12169-001.patch


 Upon testing the patch for HADOOP-11918, I stumbled upon a weird behaviour 
 it introduces to the S3AFileSystem class. Calling listStatus() on an empty 
 bucket returns an empty list, while doing the same on an empty directory 
 returns an array of length 1 containing only that directory itself.
 The bugfix is quite simple. In the line of code {code}...if 
 (keyPath.equals(f)...{code} (S3AFileSystem:758), keyPath is qualified wrt. 
 the fs and f is not, so the comparison returns false when it shouldn't. The 
 bugfix is to make f qualified in this line of code.
 More formally: according to the formal definition of [The Hadoop FileSystem 
 API 
 Definition|https://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-common/filesystem/],
  more specifically FileSystem.listStatus, only child elements of a directory 
 should be returned upon a listStatus() call.
 In detail: 
 {code}
 elif isDir(FS, p): result = [getFileStatus(c) for c in children(FS, p) where 
 f(c) == True]
 {code}
 and
 {code}
 def children(FS, p) = {q for q in paths(FS) where parent(q) == p}
 {code}
 Which translates to the result of listStatus on an empty directory being an 
 empty list. This is the same behaviour as ls has in Unix, which is what 
 someone would expect from a FileSystem.
 Note: it seemed appropriate to add the test of this patch to the same file as 
 the test for HADOOP-11918, but as a result, one of the two will have to be 
 rebased wrt. the other before being applied to trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12158) Improve error message in TestCryptoStreamsWithOpensslAesCtrCryptoCodec when OpenSSL is not installed

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610012#comment-14610012
 ] 

Hudson commented on HADOOP-12158:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #245 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/245/])
HADOOP-12158. Improve error message in 
TestCryptoStreamsWithOpensslAesCtrCryptoCodec when OpenSSL is not installed. 
(wang: rev 9ee7b6e6c4ab6bee6304fa7904993c7cbd9a6cd2)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsWithOpensslAesCtrCryptoCodec.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Improve error message in TestCryptoStreamsWithOpensslAesCtrCryptoCodec when 
 OpenSSL is not installed
 

 Key: HADOOP-12158
 URL: https://issues.apache.org/jira/browse/HADOOP-12158
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 2.6.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Trivial
 Fix For: 2.8.0

 Attachments: hadoop-12158.001.patch


 Trivial, rather than throwing an NPE, let's print a nicer error message via 
 an assertNotNull.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12164) Fix TestMove and TestFsShellReturnCode failed to get command name using reflection.

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610007#comment-14610007
 ] 

Hudson commented on HADOOP-12164:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #245 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/245/])
 HADOOP-12164. Fix TestMove and TestFsShellReturnCode failed to get command 
name using reflection. (Lei Xu) (lei: rev 
532e38cb7f70606c2c96d05259670e1e91d60ab3)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFsShellReturnCode.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestMove.java


 Fix TestMove and TestFsShellReturnCode failed to get command name using 
 reflection.
 ---

 Key: HADOOP-12164
 URL: https://issues.apache.org/jira/browse/HADOOP-12164
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Fix For: 3.0.0, 2.8.0

 Attachments: HADOOP-12164.000.patch


 When {{hadoop.shell.missing.defaultFs.warning}} is enabled, a few tests fail 
 as follows:
 {noformat}
 java.lang.RuntimeException: failed to get .NAME
   at java.lang.Class.getDeclaredField(Class.java:1948)
   at org.apache.hadoop.fs.shell.Command.getCommandField(Command.java:458)
   at org.apache.hadoop.fs.shell.Command.getName(Command.java:401)
   at 
 org.apache.hadoop.fs.shell.FsCommand.getCommandName(FsCommand.java:80)
   at 
 org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:111)
   at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
   at 
 org.apache.hadoop.fs.TestFsShellReturnCode.testChgrpGroupValidity(TestFsShellReturnCode.java:434)
 {noformat}
 The reason is that {{FsCommand#processRawArguments}} uses 
 {{getCommandName()}}, which uses reflection to find the {{static String NAME}} 
 field, in order to build the error message. But the commands built in the 
 tests do not declare their own {{static String NAME}} field, and the field is 
 not found through inheritance. 
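
 A standalone illustration of why the reflective lookup fails (not the Hadoop 
 code itself): Class#getDeclaredField does not search superclasses, so a 
 subclass that does not redeclare the static NAME field makes the lookup throw 
 NoSuchFieldException.
 {code}
 public class NameFieldLookup {
   static class BaseCommand {
     public static final String NAME = "base";
   }

   static class TestCommand extends BaseCommand {
     // no NAME field of its own
   }

   public static void main(String[] args) throws Exception {
     // Finds the field declared on BaseCommand itself.
     System.out.println(BaseCommand.class.getDeclaredField("NAME").get(null));
     try {
       // getDeclaredField() ignores inherited fields, so this throws.
       TestCommand.class.getDeclaredField("NAME");
     } catch (NoSuchFieldException e) {
       System.out.println("lookup fails for the subclass: " + e);
     }
   }
 }
 {code}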



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12159) Move DistCpUtils#compareFs() to org.apache.hadoop.fs.FileUtil and fix for HA namespaces

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610011#comment-14610011
 ] 

Hudson commented on HADOOP-12159:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #245 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/245/])
HADOOP-12159. Move DistCpUtils#compareFs() to org.apache.hadoop.fs.FileUtil and 
fix for HA namespaces (rchiang via rkanter) (rkanter: rev 
aaafa0b2ee64f6cfb7fdc717500e1c483b9df8cc)
* hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java


 Move DistCpUtils#compareFs() to org.apache.hadoop.fs.FileUtil and fix for HA 
 namespaces
 ---

 Key: HADOOP-12159
 URL: https://issues.apache.org/jira/browse/HADOOP-12159
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Ray Chiang
Assignee: Ray Chiang
 Fix For: 2.8.0

 Attachments: HADOOP-12159.001.patch


 DistCpUtils#compareFs() duplicates functionality with 
 JobResourceUploader#compareFs().  These should be moved to a common area with 
 unit testing.
 The initially suggested place to move them to is org.apache.hadoop.fs.FileUtil.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12009) Clarify FileSystem.listStatus() sorting order & fix FileSystemContractBaseTest:testListStatus

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610005#comment-14610005
 ] 

Hudson commented on HADOOP-12009:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #245 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/245/])
Revert HADOOP-12009 Clarify FileSystem.listStatus() sorting order & fix 
FileSystemContractBaseTest:testListStatus. (J.Andreina via stevel) (stevel: 
rev 076948d9a4053cc8be1927005c797273bae85e99)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
* hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md


 Clarify FileSystem.listStatus() sorting order & fix 
 FileSystemContractBaseTest:testListStatus 
 --

 Key: HADOOP-12009
 URL: https://issues.apache.org/jira/browse/HADOOP-12009
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Reporter: Jakob Homan
Assignee: J.Andreina
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-12009-003.patch, HADOOP-12009-004.patch, 
 HADOOP-12009.1.patch


 FileSystem.listStatus does not guarantee that implementations will return 
 sorted entries:
 {code}
   /**
    * List the statuses of the files/directories in the given path if the path
    * is a directory.
    *
    * @param f given path
    * @return the statuses of the files/directories in the given patch
    * @throws FileNotFoundException when the path does not exist;
    *         IOException see specific implementation
    */
   public abstract FileStatus[] listStatus(Path f) throws FileNotFoundException,
                                                          IOException;
 {code}
 However, FileSystemContractBaseTest expects the elements to come back sorted:
 {code}
 Path[] testDirs = { path("/test/hadoop/a"),
                     path("/test/hadoop/b"),
                     path("/test/hadoop/c/1"), };

 // ...
 paths = fs.listStatus(path("/test/hadoop"));
 assertEquals(3, paths.length);
 assertEquals(path("/test/hadoop/a"), paths[0].getPath());
 assertEquals(path("/test/hadoop/b"), paths[1].getPath());
 assertEquals(path("/test/hadoop/c"), paths[2].getPath());
 {code}
 We should pass this test as long as all the paths are there, regardless of 
 their ordering.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12149) copy all of test-patch BINDIR prior to re-exec

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610009#comment-14610009
 ] 

Hudson commented on HADOOP-12149:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #245 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/245/])
HADOOP-12149. copy all of test-patch BINDIR prior to re-exec (aw) (aw: rev 
147e020c7aef3ba42eddcef3be1b4ae7c7910371)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/test-patch.sh


 copy all of test-patch BINDIR prior to re-exec
 --

 Key: HADOOP-12149
 URL: https://issues.apache.org/jira/browse/HADOOP-12149
 Project: Hadoop Common
  Issue Type: Improvement
  Components: yetus
Affects Versions: 3.0.0, HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Fix For: 3.0.0

 Attachments: HADOOP-12149.00.patch


 During some tests (e.g., 
 https://builds.apache.org/job/PreCommit-HADOOP-Build/7090 ), the initial mvn 
 install triggered a full test suite run when Jenkins switched from the old 
 test-patch to the new test-patch.  This is bad.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12116) Fix unrecommended syntax usages in hadoop/hdfs/yarn script for cygwin in branch-2

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1460#comment-1460
 ] 

Hudson commented on HADOOP-12116:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #245 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/245/])
HADOOP-12116. Fix unrecommended syntax usages in hadoop/hdfs/yarn script for 
cygwin in branch-2. Contributed by Li Lu. (cnauroth: rev 
b8e792cba257fdb0ca266ecb2f60f3f10c3a0c3b)
* hadoop-common-project/hadoop-common/CHANGES.txt


 Fix unrecommended syntax usages in hadoop/hdfs/yarn script for cygwin in 
 branch-2
 -

 Key: HADOOP-12116
 URL: https://issues.apache.org/jira/browse/HADOOP-12116
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Li Lu
Assignee: Li Lu
 Fix For: 2.8.0

 Attachments: HADOOP-12116-branch-2.001.patch


 We're using syntax like "if $cygwin; then", which may be erroneously evaluated 
 as true if cygwin is unset. We need to fix this in branch-2. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10798) globStatus() should always return a sorted list of files

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1461#comment-1461
 ] 

Hudson commented on HADOOP-10798:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #245 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/245/])
HADOOP-10798. globStatus() should always return a sorted list of files 
(cmccabe) (cmccabe: rev 68e588cbee660d55dba518892d064bee3795a002)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestGlobPaths.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 globStatus() should always return a sorted list of files
 

 Key: HADOOP-10798
 URL: https://issues.apache.org/jira/browse/HADOOP-10798
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Felix Borchers
Assignee: Colin Patrick McCabe
Priority: Minor
  Labels: BB2015-05-TBR
 Fix For: 2.8.0

 Attachments: HADOOP-10798.001.patch


 (FileSystem) globStatus() does not return a sorted file list anymore.
 But the API says: "... Results are sorted by their names."
 This seems to have been lost when the Globber object was introduced; there is 
 no sort in the actual code.
 Code to check this behavior:
 {code}
 Configuration conf = new Configuration();
 FileSystem fs = FileSystem.get(conf);
 Path path = new Path("/tmp/" + System.currentTimeMillis());
 fs.mkdirs(path);
 fs.deleteOnExit(path);
 fs.createNewFile(new Path(path, "2"));
 fs.createNewFile(new Path(path, "3"));
 fs.createNewFile(new Path(path, "1"));
 FileStatus[] status = fs.globStatus(new Path(path, "*"));
 Collection<String> list = new ArrayList<String>();
 for (FileStatus f : status) {
   list.add(f.getPath().toString());
   // System.out.println(f.getPath().toString());
 }
 boolean sorted = Ordering.natural().isOrdered(list);
 Assert.assertTrue(sorted);
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12124) Add HTrace support for FsShell

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609998#comment-14609998
 ] 

Hudson commented on HADOOP-12124:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #245 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/245/])
HADOOP-12124. Add HTrace support for FsShell (cmccabe) (cmccabe: rev 
ad60807238c4f7779cb0685e7d39ca0c50e01b2f)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsShell.java


 Add HTrace support for FsShell
 --

 Key: HADOOP-12124
 URL: https://issues.apache.org/jira/browse/HADOOP-12124
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.8.0

 Attachments: HADOOP-12124.001.patch, HADOOP-12124.002.patch


 Add HTrace support for FsShell



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12145) Organize and update CodeReviewChecklist wiki

2015-07-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609903#comment-14609903
 ] 

Steve Loughran commented on HADOOP-12145:
-

..leave as related, because a checklist is part of what we need

 Organize and update CodeReviewChecklist wiki
 

 Key: HADOOP-12145
 URL: https://issues.apache.org/jira/browse/HADOOP-12145
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Minor
 Attachments: 2015_CodeReviewChecklistWiki.001.pdf


 I haven't done too many reviews yet, but I've definitely had a lot of good 
 review from others in the community.
 I've put together a preliminary update with the following things in mind:
 - In the spirit of trying to lower the barrier for new developers, 
 reorganized the document to be a bit more like a checklist
 - Added checklist items that other reviewers have caught in my earlier patch 
 submissions
 - Added more checklist items based on what I've read in past JIRAs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12155) NetUtils.wrapException to handle SSL exceptions

2015-07-01 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609948#comment-14609948
 ] 

Bibin A Chundatt commented on HADOOP-12155:
---

Hi [~ste...@apache.org], I have added a patch for the issue. Please provide 
your review comments. 

 NetUtils.wrapException to handle SSL exceptions
 --

 Key: HADOOP-12155
 URL: https://issues.apache.org/jira/browse/HADOOP-12155
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: net
Affects Versions: 2.7.1
Reporter: Steve Loughran
Assignee: Bibin A Chundatt
 Attachments: 0001-HADOOP-12155.patch

   Original Estimate: 0.5h
  Remaining Estimate: 0.5h

 {{NetUtils.wrapException}} downgrades SSL exceptions & subclasses to IOEs, 
 which surfaces when using it in REST APIs.
 We can look for them specifically and retain the type when wrapping.
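
 A sketch of the "retain the type" idea in the description (not the attached 
 patch): check for SSLException before falling back to the generic IOException 
 wrapper.
 {code}
 import java.io.IOException;
 import javax.net.ssl.SSLException;

 public class SslAwareWrapping {
   // SSLException extends IOException, so returning it keeps the method
   // signature while letting callers (e.g. REST clients) still distinguish
   // TLS failures from plain I/O errors.
   static IOException wrap(String destHost, int destPort, IOException e) {
     String detail = "on connection to " + destHost + ":" + destPort
         + ": " + e.getMessage();
     if (e instanceof SSLException) {
       return new SSLException(detail, e);
     }
     return new IOException(detail, e);
   }
 }
 {code}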



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11981) Add storage policy APIs to filesystem docs

2015-07-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609975#comment-14609975
 ] 

Steve Loughran commented on HADOOP-11981:
-

It's not yet ready to go in, because it's more javadocs than specification. 
These documents constitute an attempt to define what a filesystem is expected 
to do in a way from which we can derive tests and implementations. For that they 
have to be unambiguous and not omit things which were felt to be too obvious 
to mention. It's actually quite hard to do; for the files in there I had to go 
through all the implementations we had and see what they really did ... tagging 
this as newbie is actually quite optimistic. 

The storage policy APIs are going to have to be another aspect of the filesystem 
model covered in {{model.md}}. That is, currently the FS is
{code}
(Directories:set[Path], Files:[Path:List[byte]], Symlinks:set[Path])
{code}

It's going to have to become something like

{code}
(Directories:set[Path], Files:[Path:List[byte]], Symlinks:set[Path], 
storagePolicies: set[BlockStoragePolicySpi], storagePolicy[Path: String])
{code}

The xattrs can go in at the same time.

Operations on the storage policy then become actions which read/write these new 
sets & maps, things we can write down in the python-based notation:

{code}
   getStoragePolicies()

preconditions:
   if FS.storagePolicies=={} raise UnsupportedOperationException

postconditions:

   result = storagePolicies(FS)
{code}

A more interesting one is the setter:

{code}
   setStoragePolicy(src, policyName)

preconditions:
   if FS.storagePolicies=={} raise UnsupportedOperationException
   if [p in FS.storagePolicies where p.getName==policyName] ==[] raise 
HadoopIllegalArgumentException

postconditions:

   FS' = FS where FS'.storagePolicy(src)==policyName
{code}

What this does is try to make things less ambiguous, and so make implementation 
of filesystems and the tests easier. It also means that we can look at the HDFS 
implementation and say whether or not this is what it should be doing.

 For example:
# {{BlockStoragePolicySuite}} is actually using case-insensitive checks without 
specifying the locale ... that's exactly the kind of thing we need to be 
defining, and so we can use it to identify issues like HDFS-8705.
# {{FSDirAttrOp}} will raise an {{IOE}} if the storage policy is disabled. This 
is something which clearly needs specifying.
# HDFS implies the caller needs write access to the path. Again, this needs to 
be part of the specification.


Accordingly: not yet, and it's going to be harder than you expect. I will help 
review it for you, and help define that extended FS model that's needed.
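
A small usage sketch of the FileSystem storage-policy calls being specified 
above, as exposed by HDFS-8345 (getAllStoragePolicies, setStoragePolicy, 
getStoragePolicy); the path and the "COLD" policy name are illustrative and 
HDFS-specific:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockStoragePolicySpi;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StoragePolicyExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path dir = new Path("/data/cold");
    try {
      // Filesystems without storage policies throw, per the precondition above.
      for (BlockStoragePolicySpi p : fs.getAllStoragePolicies()) {
        System.out.println("available policy: " + p.getName());
      }
      fs.setStoragePolicy(dir, "COLD");
      System.out.println("policy on " + dir + ": " + fs.getStoragePolicy(dir));
    } catch (UnsupportedOperationException e) {
      System.out.println("this FileSystem does not support storage policies");
    }
  }
}
{code}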


 Add storage policy APIs to filesystem docs
 --

 Key: HADOOP-11981
 URL: https://issues.apache.org/jira/browse/HADOOP-11981
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
  Labels: newbie
 Attachments: HADOOP-11981.incomplete.01.patch


 HDFS-8345 exposed the storage policy APIs via the FileSystem.
 The FileSystem docs should be updated accordingly.
 https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12169) ListStatus on empty dir in S3A lists itself instead of returning an empty list

2015-07-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610498#comment-14610498
 ] 

Hadoop QA commented on HADOOP-12169:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m 26s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 33s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 43s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 19s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   0m 45s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | tools/hadoop tests |   0m 14s | Tests passed in 
hadoop-aws. |
| | |  36m 31s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12743074/HADOOP-12169-001.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 80a68d6 |
| hadoop-aws test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7125/artifact/patchprocess/testrun_hadoop-aws.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7125/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7125/console |


This message was automatically generated.

 ListStatus on empty dir in S3A lists itself instead of returning an empty list
 --

 Key: HADOOP-12169
 URL: https://issues.apache.org/jira/browse/HADOOP-12169
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Pieter Reuse
Assignee: Pieter Reuse
 Attachments: HADOOP-12169-001.patch


 Upon testing the patch for HADOOP-11918, I stumbled upon a weird behaviour 
 it introduces to the S3AFileSystem class. Calling listStatus() on an empty 
 bucket returns an empty list, while doing the same on an empty directory 
 returns an array of length 1 containing only that directory itself.
 The bugfix is quite simple. In the line of code {code}...if 
 (keyPath.equals(f)...{code} (S3AFileSystem:758), keyPath is qualified wrt. 
 the fs and f is not, so the comparison returns false when it shouldn't. The 
 bugfix is to make f qualified in this line of code.
 More formally: according to the formal definition of [The Hadoop FileSystem 
 API 
 Definition|https://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-common/filesystem/],
  more specifically FileSystem.listStatus, only child elements of a directory 
 should be returned upon a listStatus() call.
 In detail: 
 {code}
 elif isDir(FS, p): result = [getFileStatus(c) for c in children(FS, p) where 
 f(c) == True]
 {code}
 and
 {code}
 def children(FS, p) = {q for q in paths(FS) where parent(q) == p}
 {code}
 Which translates to the result of listStatus on an empty directory being an 
 empty list. This is the same behaviour as ls has in Unix, which is what 
 someone would expect from a FileSystem.
 Note: it seemed appropriate to add the test of this patch to the same file as 
 the test for HADOOP-11918, but as a result, one of the two will have to be 
 rebased wrt. the other before being applied to trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12158) Improve error message in TestCryptoStreamsWithOpensslAesCtrCryptoCodec when OpenSSL is not installed

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610544#comment-14610544
 ] 

Hudson commented on HADOOP-12158:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2191 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2191/])
HADOOP-12158. Improve error message in 
TestCryptoStreamsWithOpensslAesCtrCryptoCodec when OpenSSL is not installed. 
(wang: rev 9ee7b6e6c4ab6bee6304fa7904993c7cbd9a6cd2)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsWithOpensslAesCtrCryptoCodec.java


 Improve error message in TestCryptoStreamsWithOpensslAesCtrCryptoCodec when 
 OpenSSL is not installed
 

 Key: HADOOP-12158
 URL: https://issues.apache.org/jira/browse/HADOOP-12158
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 2.6.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Trivial
 Fix For: 2.8.0

 Attachments: hadoop-12158.001.patch


 Trivial, rather than throwing an NPE, let's print a nicer error message via 
 an assertNotNull.
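
For illustration, the guard could look something like this (names and the message text are illustrative, not the exact test code):

{code}
CryptoCodec codec = CryptoCodec.getInstance(conf);
assertNotNull("OpenSSL crypto codec could not be loaded; "
    + "check that the Hadoop native library and OpenSSL are installed", codec);
{code}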



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12124) Add HTrace support for FsShell

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610531#comment-14610531
 ] 

Hudson commented on HADOOP-12124:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2191 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2191/])
HADOOP-12124. Add HTrace support for FsShell (cmccabe) (cmccabe: rev 
ad60807238c4f7779cb0685e7d39ca0c50e01b2f)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsShell.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Add HTrace support for FsShell
 --

 Key: HADOOP-12124
 URL: https://issues.apache.org/jira/browse/HADOOP-12124
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.8.0

 Attachments: HADOOP-12124.001.patch, HADOOP-12124.002.patch


 Add HTrace support for FsShell



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12149) copy all of test-patch BINDIR prior to re-exec

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610541#comment-14610541
 ] 

Hudson commented on HADOOP-12149:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2191 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2191/])
HADOOP-12149. copy all of test-patch BINDIR prior to re-exec (aw) (aw: rev 
147e020c7aef3ba42eddcef3be1b4ae7c7910371)
* dev-support/test-patch.sh
* hadoop-common-project/hadoop-common/CHANGES.txt


 copy all of test-patch BINDIR prior to re-exec
 --

 Key: HADOOP-12149
 URL: https://issues.apache.org/jira/browse/HADOOP-12149
 Project: Hadoop Common
  Issue Type: Improvement
  Components: yetus
Affects Versions: 3.0.0, HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Fix For: 3.0.0

 Attachments: HADOOP-12149.00.patch


 During some tests (e.g., 
 https://builds.apache.org/job/PreCommit-HADOOP-Build/7090 ), the initial mvn 
 install triggered a full test suite run when Jenkins switched from the old 
 test-patch to the new test-patch.  This is bad.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12159) Move DistCpUtils#compareFs() to org.apache.hadoop.fs.FileUtil and fix for HA namespaces

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610543#comment-14610543
 ] 

Hudson commented on HADOOP-12159:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2191 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2191/])
HADOOP-12159. Move DistCpUtils#compareFs() to org.apache.hadoop.fs.FileUtil and 
fix for HA namespaces (rchiang via rkanter) (rkanter: rev 
aaafa0b2ee64f6cfb7fdc717500e1c483b9df8cc)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
* hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java


 Move DistCpUtils#compareFs() to org.apache.hadoop.fs.FileUtil and fix for HA 
 namespaces
 ---

 Key: HADOOP-12159
 URL: https://issues.apache.org/jira/browse/HADOOP-12159
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Ray Chiang
Assignee: Ray Chiang
 Fix For: 2.8.0

 Attachments: HADOOP-12159.001.patch


 DistCpUtils#compareFs() duplicates functionality with 
 JobResourceUploader#compareFs().  These should be moved to a common area with 
 unit testing.
 Initial suggested place to move it to would be org.apache.hadoop.fs.FileUtil.
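
For context, a rough sketch of the kind of check {{compareFs()}} performs (illustrative only; the real {{FileUtil.compareFs()}} from this patch also resolves hostnames and handles HA logical URIs that have no DNS entry):

{code}
public static boolean compareFs(FileSystem srcFs, FileSystem destFs) {
  URI srcUri = srcFs.getUri();
  URI dstUri = destFs.getUri();
  String srcScheme = srcUri.getScheme();
  String dstScheme = dstUri.getScheme();
  if (srcScheme == null || dstScheme == null || !srcScheme.equalsIgnoreCase(dstScheme)) {
    return false;
  }
  // HA namespaces use a logical authority (e.g. "mycluster") that does not resolve in DNS,
  // so compare the authority strings instead of resolved addresses.
  String srcAuth = srcUri.getAuthority();
  String dstAuth = dstUri.getAuthority();
  return srcAuth == null ? dstAuth == null : srcAuth.equalsIgnoreCase(dstAuth);
}
{code}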



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12164) Fix TestMove and TestFsShellReturnCode failed to get command name using reflection.

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610539#comment-14610539
 ] 

Hudson commented on HADOOP-12164:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2191 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2191/])
 HADOOP-12164. Fix TestMove and TestFsShellReturnCode failed to get command 
name using reflection. (Lei Xu) (lei: rev 
532e38cb7f70606c2c96d05259670e1e91d60ab3)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFsShellReturnCode.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestMove.java


 Fix TestMove and TestFsShellReturnCode failed to get command name using 
 reflection.
 ---

 Key: HADOOP-12164
 URL: https://issues.apache.org/jira/browse/HADOOP-12164
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Fix For: 3.0.0, 2.8.0

 Attachments: HADOOP-12164.000.patch


 When {{hadoop.shell.missing.defaultFs.warning}} is enabled, a few tests fail 
 as follows:
 {noformat}
 java.lang.RuntimeException: failed to get .NAME
   at java.lang.Class.getDeclaredField(Class.java:1948)
   at org.apache.hadoop.fs.shell.Command.getCommandField(Command.java:458)
   at org.apache.hadoop.fs.shell.Command.getName(Command.java:401)
   at 
 org.apache.hadoop.fs.shell.FsCommand.getCommandName(FsCommand.java:80)
   at 
 org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:111)
   at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
   at 
 org.apache.hadoop.fs.TestFsShellReturnCode.testChgrpGroupValidity(TestFsShellReturnCode.java:434)
 {noformat}
 The reason is that {{FsCommand#processRawArguments}} uses 
 {{getCommandName()}}, which uses reflection to find the {{static String NAME}} 
 field, to build the error message. But in the tests, the commands are built 
 without a {{static String NAME}} field, since that field is not inherited. 
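
Roughly, the failing lookup boils down to the following (illustration only, not the patch; {{cmd}} stands for one of the test commands, inside a method declaring the reflection exceptions):

{code}
// getDeclaredField only sees fields declared directly on the class, so for a test
// subclass that does not redeclare NAME this throws NoSuchFieldException, which
// Command#getCommandField rethrows as the RuntimeException shown in the trace above.
java.lang.reflect.Field nameField = cmd.getClass().getDeclaredField("NAME");
String name = (String) nameField.get(null);
{code}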



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11644) Contribute CMX compression

2015-07-01 Thread Xabriel J Collazo Mojica (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xabriel J Collazo Mojica updated HADOOP-11644:
--
Status: Patch Available  (was: In Progress)

 Contribute CMX compression
 --

 Key: HADOOP-11644
 URL: https://issues.apache.org/jira/browse/HADOOP-11644
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Reporter: Xabriel J Collazo Mojica
Assignee: Xabriel J Collazo Mojica
 Attachments: HADOOP-11644.001.patch

   Original Estimate: 336h
  Remaining Estimate: 336h

 Hadoop natively supports four main compression algorithms: BZIP2, LZ4, Snappy 
 and ZLIB.
 Each one of these algorithms fills a gap:
 bzip2 : Very high compression ratio, splittable
 LZ4 : Very fast, non splittable
 Snappy : Very fast, non splittable
 zLib : good balance of compression and speed.
 We think there is a gap for a compression algorithm that can perform fast 
 compression and decompression while also being splittable. This can help 
 significantly on jobs where the input file sizes are >= 1GB.
 For this, IBM has developed CMX. CMX is a dictionary-based, block-oriented, 
 splittable, concatenable compression algorithm developed specifically for 
 Hadoop workloads. Many of our customers use CMX, and we would love to be able 
 to contribute it to hadoop-common. 
 CMX is block oriented : We typically use 64k blocks. Blocks are independently 
 decompressable.
 CMX is splittable : We implement the SplittableCompressionCodec interface. 
 All CMX files are a multiple of 64k, so the splittability is achieved in a 
 simple way with no need for external indexes.
 CMX is concatenable : Two independent CMX files can be concatenated together. 
 We have seen that some projects like Apache Flume require this feature.
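
Since CMX itself is not yet in hadoop-common, the sketch below only shows the generic Hadoop codec-consumption pattern it would plug into; the {{.cmx}} path and the codec registration are hypothetical (imports assumed from {{org.apache.hadoop.io.compress}} and {{org.apache.hadoop.io.IOUtils}}):

{code}
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
Path input = new Path("/data/part-00000.cmx");          // hypothetical CMX-compressed file
CompressionCodecFactory factory = new CompressionCodecFactory(conf);
CompressionCodec codec = factory.getCodec(input);       // resolved by file extension, null if unregistered
try (InputStream in = codec.createInputStream(fs.open(input));
     OutputStream out = fs.create(new Path("/data/part-00000.txt"))) {
  IOUtils.copyBytes(in, out, conf);                     // decompress while copying
}
{code}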



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10798) globStatus() should always return a sorted list of files

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610533#comment-14610533
 ] 

Hudson commented on HADOOP-10798:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2191 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2191/])
HADOOP-10798. globStatus() should always return a sorted list of files 
(cmccabe) (cmccabe: rev 68e588cbee660d55dba518892d064bee3795a002)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/TestGlobPaths.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 globStatus() should always return a sorted list of files
 

 Key: HADOOP-10798
 URL: https://issues.apache.org/jira/browse/HADOOP-10798
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Felix Borchers
Assignee: Colin Patrick McCabe
Priority: Minor
  Labels: BB2015-05-TBR
 Fix For: 2.8.0

 Attachments: HADOOP-10798.001.patch


 (FileSystem) globStatus() does not return a sorted file list anymore.
 But the API says: "... Results are sorted by their names."
 Seems to be lost, when the Globber Object was introduced. Can't find a sort 
 in actual code.
 code to check this behavior:
 {code}
 Configuration conf = new Configuration();
 FileSystem fs = FileSystem.get(conf);
 Path path = new Path("/tmp/" + System.currentTimeMillis());
 fs.mkdirs(path);
 fs.deleteOnExit(path);
 fs.createNewFile(new Path(path, "2"));
 fs.createNewFile(new Path(path, "3"));
 fs.createNewFile(new Path(path, "1"));
 FileStatus[] status = fs.globStatus(new Path(path, "*"));
 Collection<String> list = new ArrayList<String>();
 for (FileStatus f : status) {
     list.add(f.getPath().toString());
     //System.out.println(f.getPath().toString());
 }
 boolean sorted = Ordering.natural().isOrdered(list);
 Assert.assertTrue(sorted);
 {code}
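
One plausible shape of the fix in Globber (hedged sketch; {{results}} stands for the collected matches and the committed change may differ): sort the gathered entries by path before returning them, restoring the ordering the API documents.

{code}
FileStatus[] ret = results.toArray(new FileStatus[results.size()]);
Arrays.sort(ret);   // FileStatus orders by path, so this yields the documented name ordering
return ret;
{code}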



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12116) Fix unrecommended syntax usages in hadoop/hdfs/yarn script for cygwin in branch-2

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610532#comment-14610532
 ] 

Hudson commented on HADOOP-12116:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2191 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2191/])
HADOOP-12116. Fix unrecommended syntax usages in hadoop/hdfs/yarn script for 
cygwin in branch-2. Contributed by Li Lu. (cnauroth: rev 
b8e792cba257fdb0ca266ecb2f60f3f10c3a0c3b)
* hadoop-common-project/hadoop-common/CHANGES.txt


 Fix unrecommended syntax usages in hadoop/hdfs/yarn script for cygwin in 
 branch-2
 -

 Key: HADOOP-12116
 URL: https://issues.apache.org/jira/browse/HADOOP-12116
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Reporter: Li Lu
Assignee: Li Lu
 Fix For: 2.8.0

 Attachments: HADOOP-12116-branch-2.001.patch


 We're using syntax like {{if $cygwin; then}}, which may be erroneously 
 evaluated to true if cygwin is unset. We need to fix this in branch-2. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12114) Make hadoop-tools/hadoop-pipes Native code -Wall-clean

2015-07-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610426#comment-14610426
 ] 

Hadoop QA commented on HADOOP-12114:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   5m 13s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 35s | There were no new javac warning 
messages. |
| {color:green}+1{color} | release audit |   0m 19s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 30s | The patch built with 
eclipse:eclipse. |
| | |  15m 13s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12743072/HADOOP-12114.002.patch 
|
| Optional Tests | javac unit |
| git revision | trunk / 80a68d6 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7124/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7124/console |


This message was automatically generated.

 Make hadoop-tools/hadoop-pipes Native code -Wall-clean
 --

 Key: HADOOP-12114
 URL: https://issues.apache.org/jira/browse/HADOOP-12114
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 2.7.0
Reporter: Alan Burlison
Assignee: Alan Burlison
 Attachments: HADOOP-12114.001.patch, HADOOP-12114.002.patch


 As we specify -Wall as a default compilation flag, it would be helpful if the 
 Native code was -Wall-clean



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12112) Make hadoop-common-project Native code -Wall-clean

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610368#comment-14610368
 ] 

Hudson commented on HADOOP-12112:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #233 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/233/])
HADOOP-12112. Make hadoop-common-project Native code -Wall-clean (alanburlison 
via cmccabe) (cmccabe: rev fad291ea6dbe49782e33a32cd6608088951e2c58)
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/io/nativeio/NativeIO.c
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/net/unix/DomainSocket.c
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/NativeCrc32.c
* hadoop-common-project/hadoop-common/CHANGES.txt


 Make hadoop-common-project Native code -Wall-clean
 --

 Key: HADOOP-12112
 URL: https://issues.apache.org/jira/browse/HADOOP-12112
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 2.7.0
Reporter: Alan Burlison
Assignee: Alan Burlison
 Fix For: 2.8.0

 Attachments: HADOOP-12112.001.patch


 As we specify -Wall as a default compilation flag, it would be helpful if the 
 Native code was -Wall-clean



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11918) Listing an empty s3a root directory throws FileNotFound.

2015-07-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610507#comment-14610507
 ] 

Hadoop QA commented on HADOOP-11918:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m 24s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 36s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 40s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 20s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   0m 46s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | tools/hadoop tests |   0m 14s | Tests passed in 
hadoop-aws. |
| | |  36m 32s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12740396/HADOOP-11918-002.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 80a68d6 |
| hadoop-aws test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7126/artifact/patchprocess/testrun_hadoop-aws.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7126/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7126/console |


This message was automatically generated.

 Listing an empty s3a root directory throws FileNotFound.
 

 Key: HADOOP-11918
 URL: https://issues.apache.org/jira/browse/HADOOP-11918
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
  Labels: BB2015-05-TBR, s3
 Attachments: HADOOP-11918-002.patch, HADOOP-11918.000.patch, 
 HADOOP-11918.001.patch


 With an empty s3 bucket and run
 {code}
 $ hadoop fs -D... -ls s3a://hdfs-s3a-test/
 15/05/04 15:21:34 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 ls: `s3a://hdfs-s3a-test/': No such file or directory
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11974) FIONREAD is not always in the same header file

2015-07-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610512#comment-14610512
 ] 

Hadoop QA commented on HADOOP-11974:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   5m 14s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 31s | There were no new javac warning 
messages. |
| {color:green}+1{color} | release audit |   0m 20s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | common tests |  22m  9s | Tests passed in 
hadoop-common. |
| | |  37m 20s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12741633/HADOOP-11974.001.patch 
|
| Optional Tests | javac unit |
| git revision | trunk / 80a68d6 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7127/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7127/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7127/console |


This message was automatically generated.

 FIONREAD is not always in the same header file
 --

 Key: HADOOP-11974
 URL: https://issues.apache.org/jira/browse/HADOOP-11974
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: net
Affects Versions: 2.7.0
 Environment: Solaris
Reporter: Alan Burlison
Assignee: Alan Burlison
Priority: Minor
 Attachments: HADOOP-11974.001.patch


 The FIONREAD macro is found in sys/ioctl.h on Linux and sys/filio.h on 
 Solaris. A conditional include block is required to make sure it is looked 
 for in the right place.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11644) Contribute CMX compression

2015-07-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610547#comment-14610547
 ] 

Hadoop QA commented on HADOOP-11644:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12741747/HADOOP-11644.001.patch 
|
| Optional Tests | shellcheck javadoc javac unit findbugs checkstyle |
| git revision | trunk / 2ac87df |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7128/console |


This message was automatically generated.

 Contribute CMX compression
 --

 Key: HADOOP-11644
 URL: https://issues.apache.org/jira/browse/HADOOP-11644
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Reporter: Xabriel J Collazo Mojica
Assignee: Xabriel J Collazo Mojica
 Attachments: HADOOP-11644.001.patch

   Original Estimate: 336h
  Remaining Estimate: 336h

 Hadoop natively supports four main compression algorithms: BZIP2, LZ4, Snappy 
 and ZLIB.
 Each one of these algorithms fills a gap:
 bzip2 : Very high compression ratio, splittable
 LZ4 : Very fast, non splittable
 Snappy : Very fast, non splittable
 zLib : good balance of compression and speed.
 We think there is a gap for a compression algorithm that can perform fast 
 compression and decompression while also being splittable. This can help 
 significantly on jobs where the input file sizes are >= 1GB.
 For this, IBM has developed CMX. CMX is a dictionary-based, block-oriented, 
 splittable, concatenable compression algorithm developed specifically for 
 Hadoop workloads. Many of our customers use CMX, and we would love to be able 
 to contribute it to hadoop-common. 
 CMX is block oriented : We typically use 64k blocks. Blocks are independently 
 decompressable.
 CMX is splittable : We implement the SplittableCompressionCodec interface. 
 All CMX files are a multiple of 64k, so the splittability is achieved in a 
 simple way with no need for external indexes.
 CMX is concatenable : Two independent CMX files can be concatenated together. 
 We have seen that some projects like Apache Flume require this feature.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12171) Shorten overly-long htrace span names for server

2015-07-01 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610853#comment-14610853
 ] 

Colin Patrick McCabe commented on HADOOP-12171:
---

bq. fullClassNameToTraceString looks like a utility that belongs in htrace rather 
than in the hadoop rpc util. Could add it here for now, deprecated, to be replaced 
with an htrace implementation.

yeah

bq. Otherwise, LGTM Colin Patrick McCabe

thanks, waiting for jenkins

 Shorten overly-long htrace span names for server
 

 Key: HADOOP-12171
 URL: https://issues.apache.org/jira/browse/HADOOP-12171
 Project: Hadoop Common
  Issue Type: Bug
  Components: tracing
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-12171.001.patch


 Shorten overly-long htrace span names for the server.  For example, 
 {{org.apache.hadoop.hdfs.protocol.ClientProtocol.create}} should be 
 {{ClientProtocol#create}} instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12171) Shorten overly-long htrace span names for server

2015-07-01 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610854#comment-14610854
 ] 

Colin Patrick McCabe commented on HADOOP-12171:
---

let me see if I can change fullClassNameToTraceString -> toTraceName
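
For reference, a minimal sketch of what such a helper might look like, assuming it only needs to turn {{org.apache.hadoop.hdfs.protocol.ClientProtocol.create}} into {{ClientProtocol#create}} (the actual patch may handle more cases):

{code}
static String toTraceName(String fullName) {
  int lastDot = fullName.lastIndexOf('.');
  if (lastDot <= 0) {
    return fullName;                       // nothing to shorten
  }
  int prevDot = fullName.lastIndexOf('.', lastDot - 1);
  // keep only the simple class name and the method name, joined with '#'
  return fullName.substring(prevDot + 1, lastDot) + "#" + fullName.substring(lastDot + 1);
}
{code}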

 Shorten overly-long htrace span names for server
 

 Key: HADOOP-12171
 URL: https://issues.apache.org/jira/browse/HADOOP-12171
 Project: Hadoop Common
  Issue Type: Bug
  Components: tracing
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-12171.001.patch


 Shorten overly-long htrace span names for the server.  For example, 
 {{org.apache.hadoop.hdfs.protocol.ClientProtocol.create}} should be 
 {{ClientProtocol#create}} instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12171) Shorten overly-long htrace span names for server

2015-07-01 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-12171:
--
Attachment: HADOOP-12171.002.patch

 Shorten overly-long htrace span names for server
 

 Key: HADOOP-12171
 URL: https://issues.apache.org/jira/browse/HADOOP-12171
 Project: Hadoop Common
  Issue Type: Bug
  Components: tracing
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-12171.001.patch, HADOOP-12171.002.patch


 Shorten overly-long htrace span names for the server.  For example, 
 {{org.apache.hadoop.hdfs.protocol.ClientProtocol.create}} should be 
 {{ClientProtocol#create}} instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12171) Shorten overly-long htrace span names for server

2015-07-01 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610828#comment-14610828
 ] 

stack commented on HADOOP-12171:


fullClassNameToTraceString looks like a utility that belongs in htrace rather 
than in the hadoop rpc util. Could add it here for now, deprecated, to be replaced 
with an htrace implementation.

Call it toTraceString or toTraceName or toTraceKey ... since what is passed in 
is not a classname, we do more than just shorten the passed String, and our 
output is used as the key for the trace.

Otherwise, LGTM [~cmccabe]







 Shorten overly-long htrace span names for server
 

 Key: HADOOP-12171
 URL: https://issues.apache.org/jira/browse/HADOOP-12171
 Project: Hadoop Common
  Issue Type: Bug
  Components: tracing
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-12171.001.patch


 Shorten overly-long htrace span names for the server.  For example, 
 {{org.apache.hadoop.hdfs.protocol.ClientProtocol.create}} should be 
 {{ClientProtocol#create}} instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12111) [Umbrella] Split test-patch off into its own TLP

2015-07-01 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610892#comment-14610892
 ] 

Allen Wittenauer commented on HADOOP-12111:
---

A super-complex test case:  
https://issues.apache.org/jira/browse/HADOOP-11984?focusedCommentId=14566293page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14566293

I don't know if I can list all the things that the patch actually tests because 
it hits so many subsystems.  Highlights are that it used multijdk, uses 
parallel test execution,  and extra native compile time flags that are specific 
to Hadoop.

Two interesting bugs highlighted:
* checkstyle output can be multi-line, which makes it tricky to handle when 
doing before/after comparisons like we want to do.  (cc: [~romani]) 
* findbugs convertXmlToText is only failing for one of the modules and we do a 
pretty bad job of giving any hints as to why.   Filed HADOOP-12166 to see if we 
can do better.
* I'm assuming the mvn install convergence errors are just typical maven 
stupidity w/multiple writers. [HADOOP-12146 should alleviate a lot of that.]
* Hadoop-specific: we might need to bump up the unit test heap for JDK8. :(

 [Umbrella] Split test-patch off into its own TLP
 

 Key: HADOOP-12111
 URL: https://issues.apache.org/jira/browse/HADOOP-12111
 Project: Hadoop Common
  Issue Type: New Feature
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer

 Given test-patch's tendency to get forked into a variety of different 
 projects, it makes a lot of sense to make an Apache TLP so that everyone can 
 benefit from a common code base.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-12111) [Umbrella] Split test-patch off into its own TLP

2015-07-01 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610892#comment-14610892
 ] 

Allen Wittenauer edited comment on HADOOP-12111 at 7/1/15 7:41 PM:
---

A super-complex test case:  
https://issues.apache.org/jira/browse/HADOOP-11984?focusedCommentId=14566293page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14566293

I don't know if I can list all the things that the patch actually tests because 
it hits so many subsystems.  Highlights are that it used multijdk, used 
parallel test execution,  and some extra native compile time flags that are 
specific to Hadoop.

Two interesting bugs and some other bits from the output highlighted:
* checkstyle output can be multi-line, which makes it tricky to handle when 
doing before/after comparisons like we want to do.  (cc: [~romani]) 
* findbugs convertXmlToText is only failing for one of the modules and we do a 
pretty bad job of giving any hints as to why.   Filed HADOOP-12166 to see if we 
can do better.
* I'm assuming the mvn install convergence errors are just typical maven 
stupidity w/multiple writers. [HADOOP-12146 should alleviate a lot of that.]
* Hadoop-specific: we might need to bump up the unit test heap for JDK8. :(


was (Author: aw):
A super-complex test case:  
https://issues.apache.org/jira/browse/HADOOP-11984?focusedCommentId=14566293page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14566293

I don't know if I can list all the things that the patch actually tests because 
it hits so many subsystems.  Highlights are that it used multijdk, uses 
parallel test execution,  and extra native compile time flags that are specific 
to Hadoop.

Two interesting bugs highlighted:
* checkstyle output can be multi-line, which makes it tricky to handle when 
doing before/after comparisons like we want to do.  (cc: [~romani]) 
* findbugs convertXmlToText is only failing for one of the modules and we do a 
pretty bad job of giving any hints as to why.   Filed HADOOP-12166 to see if we 
can do better.
* I'm assuming the mvn install convergence errors are just typical maven 
stupidity w/multiple writers. [HADOOP-12146 should alleviate a lot of that.]
* Hadoop-specific: we might need to bump up the unit test heap for JDK8. :(

 [Umbrella] Split test-patch off into its own TLP
 

 Key: HADOOP-12111
 URL: https://issues.apache.org/jira/browse/HADOOP-12111
 Project: Hadoop Common
  Issue Type: New Feature
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer

 Given test-patch's tendency to get forked into a variety of different 
 projects, it makes a lot of sense to make an Apache TLP so that everyone can 
 benefit from a common code base.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-12111) [Umbrella] Split test-patch off into its own TLP

2015-07-01 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610892#comment-14610892
 ] 

Allen Wittenauer edited comment on HADOOP-12111 at 7/1/15 7:43 PM:
---

A super-complex test case:  
https://issues.apache.org/jira/browse/HADOOP-11984?focusedCommentId=14609465page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14609465

I don't know if I can list all the things that the patch actually tests because 
it hits so many subsystems.  Highlights are that it used multijdk, used 
parallel test execution,  and some extra native compile time flags that are 
specific to Hadoop.

Two interesting bugs and some other bits from the output highlighted:
* checkstyle output can be multi-line, which makes it tricky to handle when 
doing before/after comparisons like we want to do.  (cc: [~romani]) 
* findbugs convertXmlToText is only failing for one of the modules and we do a 
pretty bad job of giving any hints as to why.   Filed HADOOP-12166 to see if we 
can do better.
* I'm assuming the mvn install convergence errors are just typical maven 
stupidity w/multiple writers. [HADOOP-12146 should alleviate a lot of that.]
* Hadoop-specific: we might need to bump up the unit test heap for JDK8. :(


was (Author: aw):
A super-complex test case:  
https://issues.apache.org/jira/browse/HADOOP-11984?focusedCommentId=14566293page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14566293

I don't know if I can list all the things that the patch actually tests because 
it hits so many subsystems.  Highlights are that it used multijdk, used 
parallel test execution,  and some extra native compile time flags that are 
specific to Hadoop.

Two interesting bugs and some other bits from the output highlighted:
* checkstyle output can be multi-line, which makes it tricky to handle when 
doing before/after comparisons like we want to do.  (cc: [~romani]) 
* findbugs convertXmlToText is only failing for one of the modules and we do a 
pretty bad job of giving any hints as to why.   Filed HADOOP-12166 to see if we 
can do better.
* I'm assuming the mvn install convergence errors are just typical maven 
stupidity w/multiple writers. [HADOOP-12146 should alleviate a lot of that.]
* Hadoop-specific: we might need to bump up the unit test heap for JDK8. :(

 [Umbrella] Split test-patch off into its own TLP
 

 Key: HADOOP-12111
 URL: https://issues.apache.org/jira/browse/HADOOP-12111
 Project: Hadoop Common
  Issue Type: New Feature
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer

 Given test-patch's tendency to get forked into a variety of different 
 projects, it makes a lot of sense to make an Apache TLP so that everyone can 
 benefit from a common code base.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11229) JobStoryProducer is not closed upon return from Gridmix#setupDistCacheEmulation()

2015-07-01 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HADOOP-11229:

Status: Open  (was: Patch Available)

 JobStoryProducer is not closed upon return from 
 Gridmix#setupDistCacheEmulation()
 -

 Key: HADOOP-11229
 URL: https://issues.apache.org/jira/browse/HADOOP-11229
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: skrho
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-11229_001.patch, HADOOP-11229_002.patch


 Here is related code:
 {code}
   JobStoryProducer jsp = createJobStoryProducer(traceIn, conf);
   exitCode = distCacheEmulator.setupGenerateDistCacheData(jsp);
 {code}
 jsp should be closed upon return from setupDistCacheEmulation().
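
A hedged sketch of the requested cleanup, assuming {{JobStoryProducer}} exposes a {{close()}} method as the Rumen producers do (the accepted patch may be structured differently):

{code}
JobStoryProducer jsp = createJobStoryProducer(traceIn, conf);
try {
  exitCode = distCacheEmulator.setupGenerateDistCacheData(jsp);
} finally {
  jsp.close();   // release the trace input even if setup fails
}
{code}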



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12159) Move DistCpUtils#compareFs() to org.apache.hadoop.fs.FileUtil and fix for HA namespaces

2015-07-01 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610672#comment-14610672
 ] 

Ray Chiang commented on HADOOP-12159:
-

Thanks for the speedy review and commit!

 Move DistCpUtils#compareFs() to org.apache.hadoop.fs.FileUtil and fix for HA 
 namespaces
 ---

 Key: HADOOP-12159
 URL: https://issues.apache.org/jira/browse/HADOOP-12159
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Ray Chiang
Assignee: Ray Chiang
 Fix For: 2.8.0

 Attachments: HADOOP-12159.001.patch


 DistCpUtils#compareFs() duplicates functionality with 
 JobResourceUploader#compareFs().  These should be moved to a common area with 
 unit testing.
 Initial suggested place to move it to would be org.apache.hadoop.fs.FileUtil.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12145) Organize and update CodeReviewChecklist wiki

2015-07-01 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610678#comment-14610678
 ] 

Ray Chiang commented on HADOOP-12145:
-

Add link to Coding Style JIRA

 Organize and update CodeReviewChecklist wiki
 

 Key: HADOOP-12145
 URL: https://issues.apache.org/jira/browse/HADOOP-12145
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Minor
 Attachments: 2015_CodeReviewChecklistWiki.001.pdf


 I haven't done too many reviews yet, but I've definitely had a lot of good 
 review from others in the community.
 I've put together a preliminary update with the following things in mind:
 - In the spirit of trying to lower the barrier for new developers, 
 reorganized the document to be a bit more like a checklist
 - Added checklist items that other reviewers have caught in my earlier patch 
 submissions
 - Added more checklist items based on what I've read in past JIRAs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12171) Shorten overly-long htrace span names for server

2015-07-01 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-12171:
-

 Summary: Shorten overly-long htrace span names for server
 Key: HADOOP-12171
 URL: https://issues.apache.org/jira/browse/HADOOP-12171
 Project: Hadoop Common
  Issue Type: Bug
  Components: tracing
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


Shorten overly-long htrace span names for the server.  For example, 
{{org.apache.hadoop.hdfs.protocol.ClientProtocol.create}} should be 
{{ClientProtocol#create}} instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12171) Shorten overly-long htrace span names for server

2015-07-01 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-12171:
--
Status: Patch Available  (was: Open)

 Shorten overly-long htrace span names for server
 

 Key: HADOOP-12171
 URL: https://issues.apache.org/jira/browse/HADOOP-12171
 Project: Hadoop Common
  Issue Type: Bug
  Components: tracing
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-12171.001.patch


 Shorten overly-long htrace span names for the server.  For example, 
 {{org.apache.hadoop.hdfs.protocol.ClientProtocol.create}} should be 
 {{ClientProtocol#create}} instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12158) Improve error message in TestCryptoStreamsWithOpensslAesCtrCryptoCodec when OpenSSL is not installed

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610575#comment-14610575
 ] 

Hudson commented on HADOOP-12158:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #243 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/243/])
HADOOP-12158. Improve error message in 
TestCryptoStreamsWithOpensslAesCtrCryptoCodec when OpenSSL is not installed. 
(wang: rev 9ee7b6e6c4ab6bee6304fa7904993c7cbd9a6cd2)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsWithOpensslAesCtrCryptoCodec.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Improve error message in TestCryptoStreamsWithOpensslAesCtrCryptoCodec when 
 OpenSSL is not installed
 

 Key: HADOOP-12158
 URL: https://issues.apache.org/jira/browse/HADOOP-12158
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Affects Versions: 2.6.0
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Trivial
 Fix For: 2.8.0

 Attachments: hadoop-12158.001.patch


 Trivial, rather than throwing an NPE, let's print a nicer error message via 
 an assertNotNull.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12159) Move DistCpUtils#compareFs() to org.apache.hadoop.fs.FileUtil and fix for HA namespaces

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610574#comment-14610574
 ] 

Hudson commented on HADOOP-12159:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #243 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/243/])
HADOOP-12159. Move DistCpUtils#compareFs() to org.apache.hadoop.fs.FileUtil and 
fix for HA namespaces (rchiang via rkanter) (rkanter: rev 
aaafa0b2ee64f6cfb7fdc717500e1c483b9df8cc)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
* hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java


 Move DistCpUtils#compareFs() to org.apache.hadoop.fs.FileUtil and fix for HA 
 namespaces
 ---

 Key: HADOOP-12159
 URL: https://issues.apache.org/jira/browse/HADOOP-12159
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Ray Chiang
Assignee: Ray Chiang
 Fix For: 2.8.0

 Attachments: HADOOP-12159.001.patch


 DistCpUtils#compareFs() duplicates functionality with 
 JobResourceUploader#compareFs().  These should be moved to a common area with 
 unit testing.
 Initial suggested place to move it to would be org.apache.hadoop.fs.FileUtil.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12145) Organize and update CodeReviewChecklist wiki

2015-07-01 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14610679#comment-14610679
 ] 

Ray Chiang commented on HADOOP-12145:
-

Done.  Thanks.

 Organize and update CodeReviewChecklist wiki
 

 Key: HADOOP-12145
 URL: https://issues.apache.org/jira/browse/HADOOP-12145
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Minor
 Attachments: 2015_CodeReviewChecklistWiki.001.pdf


 I haven't done too many reviews yet, but I've definitely had a lot of good 
 review from others in the community.
 I've put together a preliminary update with the following things in mind:
 - In the spirit of trying to lower the barrier for new developers, 
 reorganized the document to be a bit more like a checklist
 - Added checklist items that other reviewers have caught in my earlier patch 
 submissions
 - Added more checklist items based on what I've read in past JIRAs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12171) Shorten overly-long htrace span names for server

2015-07-01 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-12171:
--
Attachment: HADOOP-12171.001.patch

 Shorten overly-long htrace span names for server
 

 Key: HADOOP-12171
 URL: https://issues.apache.org/jira/browse/HADOOP-12171
 Project: Hadoop Common
  Issue Type: Bug
  Components: tracing
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-12171.001.patch


 Shorten overly-long htrace span names for the server.  For example, 
 {{org.apache.hadoop.hdfs.protocol.ClientProtocol.create}} should be 
 {{ClientProtocol#create}} instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12155) NetUtils.wrapExeption to handle SSL exceptions

2015-07-01 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated HADOOP-12155:
--
Status: Patch Available  (was: Open)

 NetUtils.wrapExeption to handle SSL exceptions
 --

 Key: HADOOP-12155
 URL: https://issues.apache.org/jira/browse/HADOOP-12155
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: net
Affects Versions: 2.7.1
Reporter: Steve Loughran
Assignee: Bibin A Chundatt
 Attachments: 0001-HADOOP-12155.patch

   Original Estimate: 0.5h
  Remaining Estimate: 0.5h

 {{NetUtils.wrapException}} downgrades SSL exceptions & subclasses to IOEs, 
 which surfaces when using it in REST APIs.
 We can look for them specifically and retain the type when wrapping.
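
A minimal sketch of that special-casing, placed inside the existing wrap method before the generic IOException wrapping (assumed shape and parameter names, not the actual patch); {{SSLException}} extends {{IOException}}, so the method's return type is unaffected:

{code}
// Recognise SSL failures before the generic wrapping so callers (e.g. REST clients)
// still see an SSLException rather than a plain IOException.
if (exception instanceof SSLException) {
  return new SSLException("Failed to connect to " + destHost + ":" + destPort
      + ": " + exception.getMessage(), exception);
}
{code}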



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12155) NetUtils.wrapExeption to handle SSL exceptions

2015-07-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14609682#comment-14609682
 ] 

Hadoop QA commented on HADOOP-12155:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  16m 23s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 39s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 29s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  3s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 37s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 31s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 50s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  21m 59s | Tests passed in 
hadoop-common. |
| | |  60m 56s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12742826/0001-HADOOP-12155.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 7405c59 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7123/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7123/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7123/console |


This message was automatically generated.

 NetUtils.wrapExeption to handle SSL exceptions
 --

 Key: HADOOP-12155
 URL: https://issues.apache.org/jira/browse/HADOOP-12155
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: net
Affects Versions: 2.7.1
Reporter: Steve Loughran
Assignee: Bibin A Chundatt
 Attachments: 0001-HADOOP-12155.patch

   Original Estimate: 0.5h
  Remaining Estimate: 0.5h

 {{NetUtils.wrapException}} downgrades SSL exceptions & subclasses to IOEs, 
 which surfaces when using it in REST APIs.
 We can look for them specifically and retain the type when wrapping.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12172) FsShell mkdir -p makes an unnecessary check for the existence of the parent.

2015-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611406#comment-14611406
 ] 

Hudson commented on HADOOP-12172:
-

FAILURE: Integrated in Hadoop-trunk-Commit #8110 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8110/])
HADOOP-12172. FsShell mkdir -p makes an unnecessary check for the existence of 
the parent. Contributed by Chris Nauroth. (cnauroth: rev 
f3796224bfdfd88e2428cc8a9915bdfdc62b48f3)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Mkdir.java


 FsShell mkdir -p makes an unnecessary check for the existence of the parent.
 

 Key: HADOOP-12172
 URL: https://issues.apache.org/jira/browse/HADOOP-12172
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-12172.001.patch


 The {{mkdir}} command in {{FsShell}} checks for the existence of the parent 
 of the directory and returns an error if it doesn't exist.  The {{-p}} option 
 suppresses the error and allows the directory creation to continue, 
 implicitly creating all missing intermediate directories.  However, the 
 existence check still runs even with {{-p}} specified, and its result is 
 ignored.  Depending on the file system, this is a wasteful RPC call (HDFS) or 
 HTTP request (WebHDFS/S3/Azure) imposing extra latency for the client and 
 extra load for the server.
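
A hedged sketch of the optimisation in the {{Mkdir}} command (the field and helper names follow the existing shell command classes but may not match the committed patch exactly):

{code}
// Only probe for the parent when -p was NOT given; with -p, mkdirs() creates any
// missing intermediate directories anyway, so the extra existence check is pure overhead.
if (!createParents) {
  if (!item.fs.exists(item.path.getParent())) {
    throw new PathNotFoundException(item.toString());
  }
}
if (!item.fs.mkdirs(item.path)) {
  throw new PathIOException(item.toString());
}
{code}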



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-692) Rack-aware Replica Placement

2015-07-01 Thread shifeng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611415#comment-14611415
 ] 

shifeng commented on HADOOP-692:


I have a question:
 The cluster topology map has to match the physical network topology, doesn't it?
 I want to build a four-level physical network topology (/CoreDC/DC/rack/host), or even deeper. Does rack awareness support this?

Is this question resolved?

 Rack-aware Replica Placement
 

 Key: HADOOP-692
 URL: https://issues.apache.org/jira/browse/HADOOP-692
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 0.10.1
Reporter: Hairong Kuang
Assignee: Hairong Kuang
 Fix For: 0.11.0

 Attachments: Rack_aware_HDFS_proposal.pdf, rack.patch


 This issue assumes that HDFS runs on a cluster of computers that spread 
 across many racks. Communication between two nodes on different racks needs 
 to go through switches. Bandwidth in/out of a rack may be less than the total 
 bandwidth of machines in the rack. The purpose of rack-aware replica 
 placement is to improve data reliability, availability, and network bandwidth 
 utilization. The basic idea is that each data node determines to which rack 
 it belongs at the startup time and notifies the name node of the rack id upon 
 registration. The name node maintains a rackid-to-datanode map and tries to 
 place replicas across racks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-692) Rack-aware Replica Placement

2015-07-01 Thread shifeng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611413#comment-14611413
 ] 

shifeng commented on HADOOP-692:


Is this question resolved?
I searched but found no answer.

 Rack-aware Replica Placement
 

 Key: HADOOP-692
 URL: https://issues.apache.org/jira/browse/HADOOP-692
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 0.10.1
Reporter: Hairong Kuang
Assignee: Hairong Kuang
 Fix For: 0.11.0

 Attachments: Rack_aware_HDFS_proposal.pdf, rack.patch


 This issue assumes that HDFS runs on a cluster of computers that spread 
 across many racks. Communication between two nodes on different racks needs 
 to go through switches. Bandwidth in/out of a rack may be less than the total 
 bandwidth of machines in the rack. The purpose of rack-aware replica 
 placement is to improve data reliability, availability, and network bandwidth 
 utilization. The basic idea is that each data node determines to which rack 
 it belongs at the startup time and notifies the name node of the rack id upon 
 registration. The name node maintains a rackid-to-datanode map and tries to 
 place replicas across racks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12174) compute a code quality index

2015-07-01 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-12174:
-

 Summary: compute a code quality index
 Key: HADOOP-12174
 URL: https://issues.apache.org/jira/browse/HADOOP-12174
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Priority: Minor


Just an idea:

Compute a code quality index based upon checkstyle, findbugs, and shellcheck 
but with all exclusions and disables turned off.  Generate a number pre-patch 
and post-patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12172) FsShell mkdir -p makes an unnecessary check for the existence of the parent.

2015-07-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611318#comment-14611318
 ] 

Hadoop QA commented on HADOOP-12172:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  16m 30s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 34s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 38s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  2s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 50s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m  9s | Tests passed in 
hadoop-common. |
| | |  61m 18s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12743169/HADOOP-12172.001.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 0e4b066 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7131/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7131/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7131/console |


This message was automatically generated.

 FsShell mkdir -p makes an unnecessary check for the existence of the parent.
 

 Key: HADOOP-12172
 URL: https://issues.apache.org/jira/browse/HADOOP-12172
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor
 Attachments: HADOOP-12172.001.patch


 The {{mkdir}} command in {{FsShell}} checks for the existence of the parent 
 of the directory and returns an error if it doesn't exist.  The {{-p}} option 
 suppresses the error and allows the directory creation to continue, 
 implicitly creating all missing intermediate directories.  However, the 
 existence check still runs even with {{-p}} specified, and its result is 
 ignored.  Depending on the file system, this is a wasteful RPC call (HDFS) or 
 HTTP request (WebHDFS/S3/Azure) imposing extra latency for the client and 
 extra load for the server.
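 A minimal sketch of the change described above, written against the public 
 FileSystem API rather than the real FsShell command classes (the MkdirSketch 
 names are hypothetical and this is not the attached patch):
 {code}
 import java.io.FileNotFoundException;
 import java.io.IOException;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 
 // With -p the parent check is unnecessary: mkdirs() already creates missing
 // intermediate directories, so the exists() round trip can be skipped.
 public class MkdirSketch {
   static void mkdir(FileSystem fs, Path dir, boolean createParents) throws IOException {
     if (!createParents) {
       Path parent = dir.getParent();
       if (parent != null && !fs.exists(parent)) {   // only pay this RPC without -p
         throw new FileNotFoundException(parent + ": No such file or directory");
       }
     }
     if (!fs.mkdirs(dir)) {
       throw new IOException("mkdir failed for " + dir);
     }
   }
 
   public static void main(String[] args) throws IOException {
     FileSystem fs = FileSystem.get(new Configuration());
     mkdir(fs, new Path("/tmp/a/b/c"), true);   // -p: no exists() call at all
   }
 }
 {code}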



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12173) NetworkTopology#add calls NetworkTopology#toString always

2015-07-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611348#comment-14611348
 ] 

Hadoop QA commented on HADOOP-12173:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  16m 41s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 40s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 35s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  5s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 52s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 42s | Tests passed in 
hadoop-common. |
| | |  62m  7s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12743181/HADOOP-12173-v1.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / a78d507 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7132/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7132/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7132/console |


This message was automatically generated.

 NetworkTopology#add calls NetworkTopology#toString always
 -

 Key: HADOOP-12173
 URL: https://issues.apache.org/jira/browse/HADOOP-12173
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Inigo Goiri
Assignee: Inigo Goiri
 Fix For: 2.7.1

 Attachments: HADOOP-12173-v1.patch


 NetworkTopology#add always builds a toString of the whole topology, but that 
 string is only needed when there is an error. This adds a very large overhead 
 on big clusters, since it walks the whole tree every time a node is added to 
 the cluster. HADOOP-10953 fixed part of this, but the problem is still there.
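 The pattern described above can be sketched as a before/after, using a 
 simplified stand-in for NetworkTopology#add (the real code differs; the point 
 is only that the expensive toString() moves onto the error path):
 {code}
 // Hypothetical sketch, not the actual NetworkTopology code.
 public class LazyToStringSketch {
 
   // Before: the whole-topology string is built on every add(), even on success.
   void addEager(Node node) {
     String topology = this.toString();              // walks the entire tree
     if (!isValidPlacement(node)) {
       throw new IllegalArgumentException("Cannot add " + node + " to " + topology);
     }
     doAdd(node);
   }
 
   // After: the string is only built when the error actually happens.
   void addLazy(Node node) {
     if (!isValidPlacement(node)) {
       throw new IllegalArgumentException("Cannot add " + node + " to " + this.toString());
     }
     doAdd(node);
   }
 
   // --- placeholders so the sketch is self-contained ---
   interface Node {}
   boolean isValidPlacement(Node n) { return true; }
   void doAdd(Node n) {}
   @Override public String toString() { return "<whole topology string>"; }
 }
 {code}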



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12172) FsShell mkdir -p makes an unnecessary check for the existence of the parent.

2015-07-01 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14611352#comment-14611352
 ] 

Brahma Reddy Battula commented on HADOOP-12172:
---

Nice catch, [~cnauroth]. +1 (non-binding).

 FsShell mkdir -p makes an unnecessary check for the existence of the parent.
 

 Key: HADOOP-12172
 URL: https://issues.apache.org/jira/browse/HADOOP-12172
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor
 Attachments: HADOOP-12172.001.patch


 The {{mkdir}} command in {{FsShell}} checks for the existence of the parent 
 of the directory and returns an error if it doesn't exist.  The {{-p}} option 
 suppresses the error and allows the directory creation to continue, 
 implicitly creating all missing intermediate directories.  However, the 
 existence check still runs even with {{-p}} specified, and its result is 
 ignored.  Depending on the file system, this is a wasteful RPC call (HDFS) or 
 HTTP request (WebHDFS/S3/Azure) imposing extra latency for the client and 
 extra load for the server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

