[jira] [Commented] (HADOOP-12837) FileStatus.getModificationTime not working on S3

2016-02-25 Thread Jagdish Kewat (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168618#comment-15168618
 ] 

Jagdish Kewat commented on HADOOP-12837:


Hi [~cnauroth],

I have a path filter utility that takes a Path as input and returns true if the 
modification time of the given path is earlier than a specified time. Here's the 
method for reference.
{code}
  @Override
  public boolean accept(Path path) {
    try {
      FileStatus fs = filesystem.getFileStatus(path);
      if (fs.getModificationTime() < this.date.getMillis()) {
        return true;
      }
    } catch (IOException e) {
      LOG.error(e.getMessage());
    }
    return false;
  }
{code}

The actual job takes all the paths for which this filter returns true. Since the 
modification time for S3-based paths is returned as 0, this method returns true 
for every path specified, which results in processing unwanted data. The job 
doesn't fail; it just produces undesired output.

Besides, I have a use case where we create backups of directories by 
renaming them with the timestamp of their modification time.
Here too the *filesystem* could be S3 or HDFS, so I need a generic 
solution.

A possible workaround I can think of is writing a dummy file like _SUCCESS 
in each of these directories and then looking at the modification time of that 
file; however, that would be added effort.

Thanks,
Jagdish
 

> FileStatus.getModificationTime not working on S3
> 
>
> Key: HADOOP-12837
> URL: https://issues.apache.org/jira/browse/HADOOP-12837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Jagdish Kewat
>
> Hi Team,
> We have observed an issue with the FileStatus.getModificationTime() API on the 
> S3 filesystem. The method always returns 0.
> I googled for this but couldn't find a solution that would fit my scheme of 
> things. S3FileStatus seems to be an option, but I will be using this API on 
> both HDFS and S3, so I can't go with it.
> I tried to run the job on:
> * Release label:emr-4.2.0
> * Hadoop distribution:Amazon 2.6.0
> * Hadoop Common jar: hadoop-common-2.6.0.jar
> Please advise if any patch or fix available for this.
> Thanks,
> Jagdish



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12552) Fix undeclared/unused dependency to httpclient

2016-02-25 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-12552:
--
Attachment: HADOOP-12552.002.patch

I updated the patch. Since HADOOP-11613 went in, we no longer need to fix 
hadoop-azure/pom.xml.

HADOOP-11614 seems to need more time. I would like to fix the dependency of 
hadoop-common first. We can update hadoop-openstack/pom.xml later in 
HADOOP-11614.


> Fix undeclared/unused dependency to httpclient
> --
>
> Key: HADOOP-12552
> URL: https://issues.apache.org/jira/browse/HADOOP-12552
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
>  Labels: incompatible
> Attachments: HADOOP-12552.001.patch, HADOOP-12552.002.patch
>
>
> hadoop-common has undeclared dependency on 
> {{org.apache.httpcomponents:httpclient}} and unused dependency on 
> {{commons-httpclient:commons-httpclient}}. Vice versa in hadoop-azure and 
> hadoop-openstack.





[jira] [Updated] (HADOOP-12711) Remove dependency on commons-httpclient for ServletUtil

2016-02-25 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-12711:
--
      Resolution: Fixed
   Fix Version/s: 2.8.0
Target Version/s: 2.8.0
          Status: Resolved  (was: Patch Available)

Committed to branch-2 and branch-2.8. Thanks, [~jojochuang]! 

> Remove dependency on commons-httpclient for ServletUtil
> ---
>
> Key: HADOOP-12711
> URL: https://issues.apache.org/jira/browse/HADOOP-12711
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Fix For: 2.8.0
>
> Attachments: HADOOP-12711-branch-2.002.patch, 
> HADOOP-12711-branch-2.003.patch, HADOOP-12711.001.patch, 
> HADOOP-12711.002.patch, HADOOP-12711.branch-2.004.patch
>
>
> This is a branch-2 only change, as ServletUtil for trunk removes the code 
> that depends on commons-httpclient.
> We need to retire the use of commons-httpclient in Hadoop to address the 
> security concern in CVE-2012-5783 
> http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-5783.
> {noformat}
> import org.apache.commons.httpclient.URIException;
> import org.apache.commons.httpclient.util.URIUtil;
> {noformat}





[jira] [Updated] (HADOOP-12843) Fix findbugs warnings in hadoop-common (branch-2)

2016-02-25 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-12843:
---
Attachment: findbugsHtml.html

> Fix findbugs warnings in hadoop-common (branch-2)
> -
>
> Key: HADOOP-12843
> URL: https://issues.apache.org/jira/browse/HADOOP-12843
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira AJISAKA
>  Labels: newbie
> Attachments: findbugsHtml.html
>
>
> There are 5 findbugs warnings in branch-2.





[jira] [Created] (HADOOP-12843) Fix findbugs warnings in hadoop-common (branch-2)

2016-02-25 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-12843:
--

 Summary: Fix findbugs warnings in hadoop-common (branch-2)
 Key: HADOOP-12843
 URL: https://issues.apache.org/jira/browse/HADOOP-12843
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Akira AJISAKA


There are 5 findbugs warnings in branch-2.





[jira] [Commented] (HADOOP-12793) Write a new group mapping service guide

2016-02-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168514#comment-15168514
 ] 

Hadoop QA commented on HADOOP-12793:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 2s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
48s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 9s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 45s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 58s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 57s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 49s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 44s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 41s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 56s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 52s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 41s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 57s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 15s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 51m 33s 
{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 170m 13s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | 
hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 

[jira] [Commented] (HADOOP-12711) Remove dependency on commons-httpclient for ServletUtil

2016-02-25 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168444#comment-15168444
 ] 

Masatake Iwasaki commented on HADOOP-12711:
---

+1. I will commit this to branch-2 and branch-2.8.

> Remove dependency on commons-httpclient for ServletUtil
> ---
>
> Key: HADOOP-12711
> URL: https://issues.apache.org/jira/browse/HADOOP-12711
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12711-branch-2.002.patch, 
> HADOOP-12711-branch-2.003.patch, HADOOP-12711.001.patch, 
> HADOOP-12711.002.patch, HADOOP-12711.branch-2.004.patch
>
>
> This is a branch-2 only change, as ServletUtil for trunk removes the code 
> that depends on commons-httpclient.
> We need to retire the use of commons-httpclient in Hadoop to address the 
> security concern in CVE-2012-5783 
> http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-5783.
> {noformat}
> import org.apache.commons.httpclient.URIException;
> import org.apache.commons.httpclient.util.URIUtil;
> {noformat}





[jira] [Commented] (HADOOP-12842) LocalFileSystem checksum file creation fails when source filename contains a colon

2016-02-25 Thread Plamen Jeliazkov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168437#comment-15168437
 ] 

Plamen Jeliazkov commented on HADOOP-12842:
---

[~iwasakims], thanks for the link there.

If that is the intended specification then it is not being enforced; but that 
is a separate issue. Shall I close this JIRA then or re-purpose it?

> LocalFileSystem checksum file creation fails when source filename contains a 
> colon
> --
>
> Key: HADOOP-12842
> URL: https://issues.apache.org/jira/browse/HADOOP-12842
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.4
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Minor
> Attachments: HADOOP-12842_trunk.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> In most FileSystems you can create a file with a colon character in its 
> name, including HDFS. If you try to use the LocalFileSystem implementation 
> (which extends ChecksumFileSystem) to create a file with a colon in its 
> name, you get a URISyntaxException during creation of the checksum file 
> because of the use of {code}new Path(path, checksumFile){code}, where 
> checksumFile is treated as a relative path during URI parsing because it 
> starts with a "." and contains a ":".
> Running the following test inside TestLocalFileSystem causes the failure:
> {code}
> @Test
> public void testColonFilePath() throws Exception {
>   FileSystem fs = fileSys;
>   Path file = new Path(TEST_ROOT_DIR + Path.SEPARATOR + "fileWith:InIt");
>   fs.delete(file, true);
>   FSDataOutputStream out = fs.create(file);
>   try {
>     out.write("text1".getBytes());
>   } finally {
>     out.close();
>   }
> }
> {code}
> With the following stack trace:
> {code}
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
> path in absolute URI: .fileWith:InIt.crc
>   at java.net.URI.checkPath(URI.java:1804)
>   at java.net.URI.<init>(URI.java:752)
>   at org.apache.hadoop.fs.Path.initialize(Path.java:201)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:170)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:92)
>   at org.apache.hadoop.fs.ChecksumFileSystem.getChecksumFile(ChecksumFileSystem.java:88)
>   at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:397)
>   at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:456)
>   at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:435)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:921)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:902)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:798)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:787)
>   at org.apache.hadoop.fs.TestLocalFileSystem.testColonFilePath(TestLocalFileSystem.java:625)
> {code}
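The failure quoted above can be reproduced with the JDK alone (the class and method names below are hypothetical): Hadoop's Path splits the checksum file name at the first ':', treating ".fileWith" as a URI scheme, and java.net.URI then rejects a scheme paired with a relative path.

```java
import java.net.URI;
import java.net.URISyntaxException;

public class ColonPathDemo {
    // Returns true when URI construction fails for the given scheme and
    // relative path, mirroring what Path.initialize runs into.
    public static boolean failsToParse(String scheme, String relativePath) {
        try {
            new URI(scheme, null, relativePath, null, null);
            return false;
        } catch (URISyntaxException e) {
            // e.g. "Relative path in absolute URI", as in the stack trace
            return true;
        }
    }

    public static void main(String[] args) {
        // Path splits ".fileWith:InIt.crc" at the first ':' into a "scheme"
        // of ".fileWith" and the relative path "InIt.crc".
        System.out.println(failsToParse(".fileWith", "InIt.crc"));  // true
    }
}
```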





[jira] [Commented] (HADOOP-12711) Remove dependency on commons-httpclient for ServletUtil

2016-02-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168421#comment-15168421
 ] 

Hadoop QA commented on HADOOP-12711:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
47s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 50s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 55s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} branch-2 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 53s 
{color} | {color:red} hadoop-common-project/hadoop-common in branch-2 has 5 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 37s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 50s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} hadoop-common-project/hadoop-common: patch 
generated 0 new + 10 unchanged - 4 fixed = 10 total (was 14) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 61 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 1s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 46s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 44s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:babe025 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12790047/HADOOP-12711.branch-2.004.patch
 |
| JIRA Issue | HADOOP-12711 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | 

[jira] [Commented] (HADOOP-11996) Native erasure coder facilities based on ISA-L

2016-02-25 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168321#comment-15168321
 ] 

Kai Zheng commented on HADOOP-11996:


Thanks a lot, [~cmccabe] and [~zhz], for the review and great comments! I 
appreciate you taking the time to sort this out. Reorganizing the two patches 
and the code structure as suggested above makes sense to me, and I will do 
that.

> Native erasure coder facilities based on ISA-L
> --
>
> Key: HADOOP-11996
> URL: https://issues.apache.org/jira/browse/HADOOP-11996
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11996-initial.patch, HADOOP-11996-v2.patch, 
> HADOOP-11996-v3.patch, HADOOP-11996-v4.patch, HADOOP-11996-v5.patch, 
> HADOOP-11996-v6.patch, HADOOP-11996-v7.patch
>
>
> While working on HADOOP-11540 and related issues, it was found useful to build 
> the basic facilities based on the Intel ISA-L library separately from the JNI 
> code. This is also easier to debug and troubleshoot, as no JNI or Java code is 
> involved.





[jira] [Commented] (HADOOP-12842) LocalFileSystem checksum file creation fails when source filename contains a colon

2016-02-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168318#comment-15168318
 ] 

Hadoop QA commented on HADOOP-12842:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 45s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 40s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 48s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 52s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 1s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 6s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12790034/HADOOP-12842_trunk.patch
 |
| JIRA Issue | HADOOP-12842 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 15e6a878ec69 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d7fdec1 |
| Default Java | 1.7.0_95 |
| Multi-JDK versions |  

[jira] [Commented] (HADOOP-12842) LocalFileSystem checksum file creation fails when source filename contains a colon

2016-02-25 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168305#comment-15168305
 ] 

Masatake Iwasaki commented on HADOOP-12842:
---

bq. In most FileSystems you can create a file with a colon character in it

[Specification|https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-common/filesystem/model.html]
 says that ':' is not allowed in a path element.

{noformat}
Path elements MUST NOT contain the characters {'/', ':'}.
{noformat}


> LocalFileSystem checksum file creation fails when source filename contains a 
> colon
> --
>
> Key: HADOOP-12842
> URL: https://issues.apache.org/jira/browse/HADOOP-12842
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.4
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Minor
> Attachments: HADOOP-12842_trunk.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> In most FileSystems you can create a file with a colon character in its 
> name, including HDFS. If you try to use the LocalFileSystem implementation 
> (which extends ChecksumFileSystem) to create a file with a colon in its 
> name, you get a URISyntaxException during creation of the checksum file 
> because of the use of {code}new Path(path, checksumFile){code}, where 
> checksumFile is treated as a relative path during URI parsing because it 
> starts with a "." and contains a ":".
> Running the following test inside TestLocalFileSystem causes the failure:
> {code}
> @Test
> public void testColonFilePath() throws Exception {
>   FileSystem fs = fileSys;
>   Path file = new Path(TEST_ROOT_DIR + Path.SEPARATOR + "fileWith:InIt");
>   fs.delete(file, true);
>   FSDataOutputStream out = fs.create(file);
>   try {
>     out.write("text1".getBytes());
>   } finally {
>     out.close();
>   }
> }
> {code}
> With the following stack trace:
> {code}
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
> path in absolute URI: .fileWith:InIt.crc
>   at java.net.URI.checkPath(URI.java:1804)
>   at java.net.URI.<init>(URI.java:752)
>   at org.apache.hadoop.fs.Path.initialize(Path.java:201)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:170)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:92)
>   at org.apache.hadoop.fs.ChecksumFileSystem.getChecksumFile(ChecksumFileSystem.java:88)
>   at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:397)
>   at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:456)
>   at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:435)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:921)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:902)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:798)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:787)
>   at org.apache.hadoop.fs.TestLocalFileSystem.testColonFilePath(TestLocalFileSystem.java:625)
> {code}





[jira] [Commented] (HADOOP-12767) update apache httpclient version to the latest 4.5 for security

2016-02-25 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15168262#comment-15168262
 ] 

Wei-Chiu Chuang commented on HADOOP-12767:
--

Hi [~artem.aliev] thanks for the work!
Please feel free to assign this JIRA to yourself. I'll review for you.

> update apache httpclient version to the latest 4.5 for security
> ---
>
> Key: HADOOP-12767
> URL: https://issues.apache.org/jira/browse/HADOOP-12767
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Artem Aliev
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12767.001.patch, HADOOP-12767.002.patch, 
> HADOOP-12767.003.patch
>
>
> Various SSL security fixes are needed.  See:  CVE-2012-6153, CVE-2011-4461, 
> CVE-2014-3577, CVE-2015-5262.





[jira] [Updated] (HADOOP-12711) Remove dependency on commons-httpclient for ServletUtil

2016-02-25 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12711:
-
Attachment: HADOOP-12711.branch-2.004.patch

Thanks [~iwasakims] for the review!
There were many warnings in the last patch, but only this one is related. 
Submitting rev4 for checking. This patch uses {{URLEncoder#encode(s, enc)}}.
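For reference, a minimal sketch of the JDK call in question (the class and method names below are hypothetical). Note that URLEncoder applies application/x-www-form-urlencoded rules, so a space becomes '+' and '/' becomes "%2F", which differs from commons-httpclient's URIUtil path encoding and is worth double-checking in review:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class EncodeDemo {
    // Encodes a sample string with the same call shape the patch uses.
    public static String encodeSample(String s) {
        try {
            return URLEncoder.encode(s, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new AssertionError("UTF-8 is always supported", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(encodeSample("a b/c"));  // a+b%2Fc
    }
}
```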

> Remove dependency on commons-httpclient for ServletUtil
> ---
>
> Key: HADOOP-12711
> URL: https://issues.apache.org/jira/browse/HADOOP-12711
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12711-branch-2.002.patch, 
> HADOOP-12711-branch-2.003.patch, HADOOP-12711.001.patch, 
> HADOOP-12711.002.patch, HADOOP-12711.branch-2.004.patch
>
>
> This is a branch-2 only change, as ServletUtil for trunk removes the code 
> that depends on commons-httpclient.
> We need to retire the use of commons-httpclient in Hadoop to address the 
> security concern in CVE-2012-5783 
> http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-5783.
> {noformat}
> import org.apache.commons.httpclient.URIException;
> import org.apache.commons.httpclient.util.URIUtil;
> {noformat}





[jira] [Updated] (HADOOP-12793) Write a new group mapping service guide

2016-02-25 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12793:
-
Attachment: HADOOP-12793.003.patch

Rev03: documented SSL support and POSIX group semantics in LDAP group name resolution. 

> Write a new group mapping service guide
> ---
>
> Key: HADOOP-12793
> URL: https://issues.apache.org/jira/browse/HADOOP-12793
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: ldap, supportability
> Attachments: HADOOP-12791.001.patch, HADOOP-12793.002.patch, 
> HADOOP-12793.003.patch
>
>
> LdapGroupsMapping has lots of configurable properties and is thus fairly 
> complex in nature. _HDFS Permissions Guide_ gives a minimal introduction to 
> LdapGroupsMapping, with the reference "More information on configuring the 
> group mapping service is available in the Javadocs."
> However, its Javadoc provides no information about how to configure it. 
> Core-default.xml has a description for each property, but still lacks a 
> comprehensive tutorial. Without a tutorial/guide, these configurable 
> properties would be buried in the sea of properties.
> Both Cloudera and Hortonworks have some information regarding LDAP group 
> mapping:
> http://www.cloudera.com/documentation/enterprise/latest/topics/cm_sg_ldap_grp_mappings.html
> http://hortonworks.com/blog/hadoop-groupmapping-ldap-integration/
> But neither covers all configurable features, such as using SSL with LDAP and 
> POSIX group semantics.
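
As a sketch of what such a guide might cover, here is a minimal LDAP group mapping configuration. The property names come from LdapGroupsMapping; the LDAP host and bind DN values below are placeholders, not taken from any real deployment:

```xml
<!-- Switch group resolution from the default shell-based mapping to LDAP. -->
<property>
  <name>hadoop.security.group.mapping</name>
  <value>org.apache.hadoop.security.LdapGroupsMapping</value>
</property>
<!-- ldaps:// URL plus the ssl flag enables SSL, one of the features rev03 documents. -->
<property>
  <name>hadoop.security.group.mapping.ldap.url</name>
  <value>ldaps://ldap.example.com:636</value>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.ssl</name>
  <value>true</value>
</property>
<!-- Account used to bind to the directory (placeholder DN). -->
<property>
  <name>hadoop.security.group.mapping.ldap.bind.user</name>
  <value>cn=hadoop,ou=services,dc=example,dc=com</value>
</property>
```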





[jira] [Commented] (HADOOP-12824) Collect network and disk usage on the node running Windows

2016-02-25 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15168243#comment-15168243
 ] 

Inigo Goiri commented on HADOOP-12824:
--

Thank you very much [~xyao] for the review and the commit!

> Collect network and disk usage on the node running Windows
> --
>
> Key: HADOOP-12824
> URL: https://issues.apache.org/jira/browse/HADOOP-12824
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Fix For: 2.8.0
>
> Attachments: HADOOP-12824-v000.patch, HADOOP-12824-v001.patch, 
> HADOOP-12824-v002.patch, HADOOP-12824-v003.patch, HADOOP-12824-v004.patch
>
>
> HADOOP-12210 collects the node network usage for Linux; this JIRA does it for 
> Windows.





[jira] [Work started] (HADOOP-12842) LocalFileSystem checksum file creation fails when source filename contains a colon

2016-02-25 Thread Plamen Jeliazkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-12842 started by Plamen Jeliazkov.
-
> LocalFileSystem checksum file creation fails when source filename contains a 
> colon
> --
>
> Key: HADOOP-12842
> URL: https://issues.apache.org/jira/browse/HADOOP-12842
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.4
>Reporter: Plamen Jeliazkov
>Assignee: Plamen Jeliazkov
>Priority: Minor
> Attachments: HADOOP-12842_trunk.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> In most FileSystems, including HDFS, you can create a file with a colon 
> character in its name. If you try to use the LocalFileSystem implementation 
> (which extends ChecksumFileSystem) to create such a file, you get a 
> URISyntaxException while the checksum file is created, because 
> {code}new Path(path, checksumFile){code} treats checksumFile as a relative 
> path during URI parsing, since it starts with a "." and contains a ":".  
> Running the following test inside TestLocalFileSystem causes the failure:
> {code}
> @Test
>   public void testColonFilePath() throws Exception {
> FileSystem fs = fileSys;
> Path file = new Path(TEST_ROOT_DIR + Path.SEPARATOR + "fileWith:InIt");
> fs.delete(file, true);
> FSDataOutputStream out = fs.create(file);
> try {
>   out.write("text1".getBytes());
> } finally {
>   out.close();
> }
> }
> {code}
> With the following stack trace:
> {code}
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
> path in absolute URI: .fileWith:InIt.crc
>   at java.net.URI.checkPath(URI.java:1804)
>   at java.net.URI.(URI.java:752)
>   at org.apache.hadoop.fs.Path.initialize(Path.java:201)
>   at org.apache.hadoop.fs.Path.(Path.java:170)
>   at org.apache.hadoop.fs.Path.(Path.java:92)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.getChecksumFile(ChecksumFileSystem.java:88)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.(ChecksumFileSystem.java:397)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:456)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:435)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:921)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:902)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:798)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:787)
>   at 
> org.apache.hadoop.fs.TestLocalFileSystem.testColonFilePath(TestLocalFileSystem.java:625)
> {code}
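
The failure can be sketched with the JDK alone: Path treats everything before the first ':' in the name as a URI scheme, so the checksum filename ".fileWith:InIt.crc" is effectively split into scheme ".fileWith" and relative path "InIt.crc", which URI rejects. This is an illustrative reconstruction, not the actual Path code:

```java
import java.net.URI;
import java.net.URISyntaxException;

public class ColonPathDemo {
  public static void main(String[] args) {
    try {
      // Mimics the split Path performs on ".fileWith:InIt.crc": a non-null
      // scheme combined with a path not starting with '/' is rejected by
      // URI.checkPath() before any further parsing happens.
      new URI(".fileWith", null, "InIt.crc", null, null);
      System.out.println("no exception");
    } catch (URISyntaxException e) {
      System.out.println(e.getMessage());
      // Relative path in absolute URI: .fileWith:InIt.crc
    }
  }
}
```

This reproduces the exact message seen in the stack trace above, without involving any Hadoop classes.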





[jira] [Updated] (HADOOP-12842) LocalFileSystem checksum file creation fails when source filename contains a colon

2016-02-25 Thread Plamen Jeliazkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Plamen Jeliazkov updated HADOOP-12842:
--
Attachment: HADOOP-12842_trunk.patch



[jira] [Updated] (HADOOP-12842) LocalFileSystem checksum file creation fails when source filename contains a colon

2016-02-25 Thread Plamen Jeliazkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Plamen Jeliazkov updated HADOOP-12842:
--
Status: Patch Available  (was: In Progress)



[jira] [Commented] (HADOOP-12841) Update s3-related properties in core-default.xml

2016-02-25 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15168189#comment-15168189
 ] 

Wei-Chiu Chuang commented on HADOOP-12841:
--

The test failures are unrelated.

> Update s3-related properties in core-default.xml
> 
>
> Key: HADOOP-12841
> URL: https://issues.apache.org/jira/browse/HADOOP-12841
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HADOOP-12841.001.patch
>
>
> HADOOP-11670 deprecated 
> {{fs.s3a.awsAccessKeyId}}/{{fs.s3a.awsSecretAccessKey}} in favor of 
> {{fs.s3a.access.key}}/{{fs.s3a.secret.key}} in the code, but did not update 
> core-default.xml. Also, a few S3 related properties are missing.
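
A sketch of what the missing core-default.xml entries could look like for the new key names; the description text below is illustrative, not taken from the actual patch:

```xml
<property>
  <name>fs.s3a.access.key</name>
  <description>AWS access key ID used by the s3a connector.
    Omit for role-based or provider-based authentication.</description>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <description>AWS secret key used by the s3a connector.
    Omit for role-based or provider-based authentication.</description>
</property>
```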





[jira] [Updated] (HADOOP-12841) Update s3-related properties in core-default.xml

2016-02-25 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12841:
-
Description: 
HADOOP-11670 deprecated {{fs.s3a.awsAccessKeyId}}/{{fs.s3a.awsSecretAccessKey}} 
in favor of {{fs.s3a.access.key}}/{{fs.s3a.secret.key}} in the code, but did 
not update core-default.xml. Also, a few S3 related properties are missing.



  was:
HADOOP-11670 deprecated {fs.s3a.awsAccessKeyId}/{fs.s3a.awsSecretAccessKey} in 
favor of {fs.s3a.access.key}/{fs.s3a.secret.key} in the code, but did not 
update core-default.xml. Also, a few S3 related properties are missing.






[jira] [Commented] (HADOOP-11996) Native erasure coder facilities based on ISA-L

2016-02-25 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15168148#comment-15168148
 ] 

Colin Patrick McCabe commented on HADOOP-11996:
---

Hi [~drankye],

Sorry for the hiatus.  I was on vacation, and after that I had to do some other 
things.

I took a look at this again today.  I think there have been a lot of positive 
changes... avoiding duplication of the unit test is a good thing, for example.

However, I think the code needs some more structure.  Right now, it is hard to 
see the connection between source file names and their contents.  For example, 
why is {{erasure_coder.c}} not in the coder directory?  Why is 
{{erasure_code_native.c}} in the {{coder}} directory when it has nothing to do 
with coders?  Why is the source for the functions described in 
{{gf_util.h}} in a source file named {{erasure_code.c}}?  I also see that there 
are some other native code changes in HADOOP-11540, like the addition of a 
{{coder_util.c}} file.

To move forward with this, I suggest:
* Let's put all the native changes needed to get ISAL to work into this patch.  
The natural patch split is Java code in HADOOP-11540, and C code in this JIRA, 
so let's do that.  Let's see all the C code in this patch.
* Get rid of the {{hadoop/io/erasurecode/coder}} directory.  We should be able 
to put all the code in {{hadoop/io/erasurecode}}; everything in this directory 
should serve the cause of erasure encoding and decoding, so there is no need 
for a separate directory.
* Put the functions described in {{gf_util.h}} in a source code file named 
{{gf_util.c}}, not in {{erasure_code.c}}
* Functions like dump, dumpMatrix, dumpCodingMatrix, etc. belong in a utility 
file named something like {{dump.c}}
* Rename {{CoderState}} to {{IsalEncoder}}.  Rename {{DecoderState}} to 
{{IsalDecoder}}.

Thanks, [~drankye] and [~zhz]

> Native erasure coder facilities based on ISA-L
> --
>
> Key: HADOOP-11996
> URL: https://issues.apache.org/jira/browse/HADOOP-11996
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11996-initial.patch, HADOOP-11996-v2.patch, 
> HADOOP-11996-v3.patch, HADOOP-11996-v4.patch, HADOOP-11996-v5.patch, 
> HADOOP-11996-v6.patch, HADOOP-11996-v7.patch
>
>
> While working on HADOOP-11540 etc., it was found useful to write the 
> basic facilities based on the Intel ISA-L library separately from the JNI code. 
> It is also easier to debug and troubleshoot, as no JNI or Java code is involved.





[jira] [Commented] (HADOOP-12824) Collect network and disk usage on the node running Windows

2016-02-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15168146#comment-15168146
 ] 

Hudson commented on HADOOP-12824:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9371 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9371/])
HADOOP-12824. Collect network and disk usage on the node running (xyao: rev 
b2951f9fbccee8aeab04c1f5ee3fc6db1ef6b2da)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestSysInfoWindows.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/SysInfoWindows.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/winutils/systeminfo.c




[jira] [Updated] (HADOOP-12824) Collect network and disk usage on the node running Windows

2016-02-25 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-12824:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Thanks [~elgoiri] for the contribution. I've committed the patch to trunk, 
branch-2 and branch-2.8.




[jira] [Updated] (HADOOP-12824) Collect network and disk usage on the node running Windows

2016-02-25 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-12824:

Summary: Collect network and disk usage on the node running Windows  (was: 
Collect network and disk usage on the node in Windows)

> Collect network and disk usage on the node running Windows
> --
>
> Key: HADOOP-12824
> URL: https://issues.apache.org/jira/browse/HADOOP-12824
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-12824-v000.patch, HADOOP-12824-v001.patch, 
> HADOOP-12824-v002.patch, HADOOP-12824-v003.patch, HADOOP-12824-v004.patch
>
>
> HADOOP-12210 collects the node network usage for Linux; this JIRA does it for 
> Windows.





[jira] [Commented] (HADOOP-12824) Collect network and disk usage on the node in Windows

2016-02-25 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15168111#comment-15168111
 ] 

Xiaoyu Yao commented on HADOOP-12824:
-

+1 for patch v004, I will commit it shortly.

> Collect network and disk usage on the node in Windows
> -
>
> Key: HADOOP-12824
> URL: https://issues.apache.org/jira/browse/HADOOP-12824
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-12824-v000.patch, HADOOP-12824-v001.patch, 
> HADOOP-12824-v002.patch, HADOOP-12824-v003.patch, HADOOP-12824-v004.patch
>
>
> HADOOP-12210 collects the node network usage for Linux; this JIRA does it for 
> Windows.





[jira] [Updated] (HADOOP-8448) Java options being duplicated several times

2016-02-25 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HADOOP-8448:
-
Assignee: (was: Namit Jain)

> Java options being duplicated several times
> ---
>
> Key: HADOOP-8448
> URL: https://issues.apache.org/jira/browse/HADOOP-8448
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf, scripts
>Affects Versions: 1.0.2
> Environment: VirtualBox 4.1.14 r77440
> Linux slack 2.6.37.6 #3 SMP Sat Apr 9 22:49:32 CDT 2011 x86_64 Intel(R) 
> Core(TM)2 Quad CPUQ8300  @ 2.50GHz GenuineIntel GNU/Linux 
> java version "1.7.0_04"
> Java(TM) SE Runtime Environment (build 1.7.0_04-b20)
> Java HotSpot(TM) 64-Bit Server VM (build 23.0-b21, mixed mode)
> Hadoop 1.0.2
> Subversion 
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0.2 -r 
> 1304954 Compiled by hortonfo on Sat Mar 24 23:58:21 UTC 2012
> From source with checksum c198b04303cfa626a38e13154d2765a9
> Hadoop is running under Pseudo-Distributed mode according to the 
> http://hadoop.apache.org/common/docs/r1.0.3/single_node_setup.html#PseudoDistributed
>Reporter: Evgeny Rusak
>
> After adding an additional Java option to HADOOP_JOBTRACKER_OPTS, like 
> the following,
>  export HADOOP_JOBTRACKER_OPTS="$HADOOP_JOBTRACKER_OPTS -Dxxx=yyy"
> and starting the hadoop instance with start-all.sh, the added option is 
> attached several times, as shown by the command
>  ps ax | grep jobtracker 
> which prints 
> .
> 29824 ?Sl22:29 home/hduser/apps/jdk/jdk1.7.0_04/bin/java  
>-Dproc_jobtracker -XX:MaxPermSize=256m 
> -Xmx600m -Dxxx=yyy -Dxxx=yyy
> -Dxxx=yyy -Dxxx=yyy -Dxxx=yyy 
> -Dhadoop.log.dir=/home/hduser/apps/hadoop/hadoop-1.0.2/libexec/../logs
> ..
>  This unexpected behaviour causes a severe issue when the "-agentpath:" 
> option is specified, because the duplicated agents are considered to be 
> different agents and are instantiated several times at once.





[jira] [Updated] (HADOOP-8448) Java options being duplicated several times

2016-02-25 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HADOOP-8448:
-
Assignee: Namit Jain

> Java options being duplicated several times
> ---
>
> Key: HADOOP-8448
> URL: https://issues.apache.org/jira/browse/HADOOP-8448
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf, scripts
>Affects Versions: 1.0.2
> Environment: VirtualBox 4.1.14 r77440
> Linux slack 2.6.37.6 #3 SMP Sat Apr 9 22:49:32 CDT 2011 x86_64 Intel(R) 
> Core(TM)2 Quad CPUQ8300  @ 2.50GHz GenuineIntel GNU/Linux 
> java version "1.7.0_04"
> Java(TM) SE Runtime Environment (build 1.7.0_04-b20)
> Java HotSpot(TM) 64-Bit Server VM (build 23.0-b21, mixed mode)
> Hadoop 1.0.2
> Subversion 
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0.2 -r 
> 1304954 Compiled by hortonfo on Sat Mar 24 23:58:21 UTC 2012
> From source with checksum c198b04303cfa626a38e13154d2765a9
> Hadoop is running under Pseudo-Distributed mode according to the 
> http://hadoop.apache.org/common/docs/r1.0.3/single_node_setup.html#PseudoDistributed
>Reporter: Evgeny Rusak
>Assignee: Namit Jain
>
> After adding an additional Java option to HADOOP_JOBTRACKER_OPTS, like 
> the following,
>  export HADOOP_JOBTRACKER_OPTS="$HADOOP_JOBTRACKER_OPTS -Dxxx=yyy"
> and starting the hadoop instance with start-all.sh, the added option is 
> attached several times, as shown by the command
>  ps ax | grep jobtracker 
> which prints 
> .
> 29824 ?Sl22:29 home/hduser/apps/jdk/jdk1.7.0_04/bin/java  
>-Dproc_jobtracker -XX:MaxPermSize=256m 
> -Xmx600m -Dxxx=yyy -Dxxx=yyy
> -Dxxx=yyy -Dxxx=yyy -Dxxx=yyy 
> -Dhadoop.log.dir=/home/hduser/apps/hadoop/hadoop-1.0.2/libexec/../logs
> ..
>  This unexpected behaviour causes a severe issue when the "-agentpath:" 
> option is specified, because the duplicated agents are considered to be 
> different agents and are instantiated several times at once.





[jira] [Commented] (HADOOP-12711) Remove dependency on commons-httpclient for ServletUtil

2016-02-25 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15168019#comment-15168019
 ] 

Masatake Iwasaki commented on HADOOP-12711:
---

Thanks. The patch looks mostly good.

Since {{URLEncoder#encode(String s)}} is deprecated, can you use 
{{URLEncoder#encode(String s, String enc)}} instead?
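
For reference, the two overloads differ only in who chooses the encoding; a minimal sketch with an illustrative input string:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class EncodeSketch {
  public static void main(String[] args) throws UnsupportedEncodingException {
    // Deprecated form: URLEncoder.encode("a b/c") uses the platform's
    // default encoding, so its output can differ between JVMs.
    // Preferred form: name the encoding explicitly.
    String encoded = URLEncoder.encode("a b/c", "UTF-8");
    System.out.println(encoded); // a+b%2Fc
  }
}
```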


> Remove dependency on commons-httpclient for ServletUtil
> ---
>
> Key: HADOOP-12711
> URL: https://issues.apache.org/jira/browse/HADOOP-12711
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12711-branch-2.002.patch, 
> HADOOP-12711-branch-2.003.patch, HADOOP-12711.001.patch, 
> HADOOP-12711.002.patch
>
>
> This is a branch-2 only change, as ServletUtil for trunk removes the code 
> that depends on commons-httpclient.
> We need to retire the use of commons-httpclient in Hadoop to address the 
> security concern in CVE-2012-5783 
> http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-5783.
> {noformat}
> import org.apache.commons.httpclient.URIException;
> import org.apache.commons.httpclient.util.URIUtil;
> {noformat}





[jira] [Commented] (HADOOP-12824) Collect network and disk usage on the node in Windows

2016-02-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15167991#comment-15167991
 ] 

Hadoop QA commented on HADOOP-12824:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 12s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 7s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 1s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 45s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 43s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 45s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12790003/HADOOP-12824-v004.patch
 |
| JIRA Issue | HADOOP-12824 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | 

[jira] [Commented] (HADOOP-12556) KafkaSink jar files are created but not copied to target dist

2016-02-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15167901#comment-15167901
 ] 

Hadoop QA commented on HADOOP-12556:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 8s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 10s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 7s 
{color} | {color:green} hadoop-tools-dist in the patch passed with JDK 
v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 8s 
{color} | {color:green} hadoop-tools-dist in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 14s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12771062/HADOOP-12556.patch |
| JIRA Issue | HADOOP-12556 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 41ab026fc99a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c4d4df8 |
| Default Java | 1.7.0_95 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_72 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95 |
| JDK v1.7.0_95  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8722/testReport/ |
| modules | C: hadoop-tools/hadoop-tools-dist U: hadoop-tools/hadoop-tools-dist 
|
| Console output | 

[jira] [Updated] (HADOOP-12824) Collect network and disk usage on the node in Windows

2016-02-25 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HADOOP-12824:
-
Attachment: HADOOP-12824-v004.patch

Fixing nit.

> Collect network and disk usage on the node in Windows
> -
>
> Key: HADOOP-12824
> URL: https://issues.apache.org/jira/browse/HADOOP-12824
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-12824-v000.patch, HADOOP-12824-v001.patch, 
> HADOOP-12824-v002.patch, HADOOP-12824-v003.patch, HADOOP-12824-v004.patch
>
>
> HADOOP-12210 collects the node network usage for Linux; this JIRA does it for 
> Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12824) Collect network and disk usage on the node in Windows

2016-02-25 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HADOOP-12824:
-
Attachment: (was: HADOOP-12824-v004.patch)

> Collect network and disk usage on the node in Windows
> -
>
> Key: HADOOP-12824
> URL: https://issues.apache.org/jira/browse/HADOOP-12824
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-12824-v000.patch, HADOOP-12824-v001.patch, 
> HADOOP-12824-v002.patch, HADOOP-12824-v003.patch
>
>
> HADOOP-12210 collects the node network usage for Linux; this JIRA does it for 
> Windows.





[jira] [Updated] (HADOOP-12824) Collect network and disk usage on the node in Windows

2016-02-25 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HADOOP-12824:
-
Attachment: HADOOP-12824-v004.patch

Fixing nit from [~xyao].

> Collect network and disk usage on the node in Windows
> -
>
> Key: HADOOP-12824
> URL: https://issues.apache.org/jira/browse/HADOOP-12824
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-12824-v000.patch, HADOOP-12824-v001.patch, 
> HADOOP-12824-v002.patch, HADOOP-12824-v003.patch, HADOOP-12824-v004.patch
>
>
> HADOOP-12210 collects the node network usage for Linux; this JIRA does it for 
> Windows.





[jira] [Commented] (HADOOP-12556) KafkaSink jar files are created but not copied to target dist

2016-02-25 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15167856#comment-15167856
 ] 

Ravi Prakash commented on HADOOP-12556:
---

Hi Steve! Thanks for your review. Is there a reason you'd be reluctant to add 
kafka as a dependency to hadoop-tools? These are the dependencies that would 
be added:
{code}
< [INFO] +- org.apache.hadoop:hadoop-kafka:jar:3.0.0-SNAPSHOT:compile
< [INFO] |  \- org.apache.kafka:kafka_2.10:jar:0.8.2.1:compile
< [INFO] | +- com.yammer.metrics:metrics-core:jar:2.2.0:compile
< [INFO] | +- org.scala-lang:scala-library:jar:2.10.4:compile
< [INFO] | +- org.apache.kafka:kafka-clients:jar:0.8.2.1:compile
< [INFO] | |  \- net.jpountz.lz4:lz4:jar:1.2.0:compile
< [INFO] | +- net.sf.jopt-simple:jopt-simple:jar:3.2:compile
< [INFO] | \- com.101tec:zkclient:jar:0.3:compile
{code}
Are there guidelines for when a dependency is scoped as {{compile}} vs 
{{provided}}? I thought the reason you [wanted a new 
submodule|https://issues.apache.org/jira/browse/HADOOP-10949?focusedCommentId=14725921=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14725921]
 (hadoop-kafka) was so that we didn't have to add kafka to hadoop-common.

I agree that we should try to reduce our dependencies as much as possible, but 
I don't really see an alternative here. 

> KafkaSink jar files are created but not copied to target dist
> -
>
> Key: HADOOP-12556
> URL: https://issues.apache.org/jira/browse/HADOOP-12556
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Babak Behzad
>Assignee: Babak Behzad
> Attachments: HADOOP-12556.patch
>
>
> There is a hadoop-kafka artifact missing from hadoop-tools-dist's pom.xml 
> which was causing the compiled Kafka jar files not to be copied to the target 
> dist directory. The new patch adds this in order to complete this fix.





[jira] [Commented] (HADOOP-12841) Update s3-related properties in core-default.xml

2016-02-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15167819#comment-15167819
 ] 

Hadoop QA commented on HADOOP-12841:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 44s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 43s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 33s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 30s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 7s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 45s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 35s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_95 Failed junit tests | hadoop.fs.TestSymlinkLocalFSFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12789981/HADOOP-12841.001.patch
 |
| JIRA Issue | HADOOP-12841 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux bdbad3c7f662 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8808779 |
| Default Java | 1.7.0_95 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_72 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95 |
| unit | 

[jira] [Created] (HADOOP-12842) LocalFileSystem checksum file creation fails when source filename contains a colon

2016-02-25 Thread Plamen Jeliazkov (JIRA)
Plamen Jeliazkov created HADOOP-12842:
-

 Summary: LocalFileSystem checksum file creation fails when source 
filename contains a colon
 Key: HADOOP-12842
 URL: https://issues.apache.org/jira/browse/HADOOP-12842
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.4
Reporter: Plamen Jeliazkov
Assignee: Plamen Jeliazkov
Priority: Minor


In most FileSystems, including HDFS, you can create a file with a colon 
character in its name. If you try to use the LocalFileSystem implementation 
(which extends ChecksumFileSystem) to create such a file, you get a 
URISyntaxException during creation of the checksum file: 
{code}new Path(path, checksumFile){code} treats checksumFile as a relative 
path during URI parsing, because it starts with a "." and contains a ":".

Running the following test inside TestLocalFileSystem causes the failure:
{code}
@Test
public void testColonFilePath() throws Exception {
  FileSystem fs = fileSys;
  Path file = new Path(TEST_ROOT_DIR + Path.SEPARATOR + "fileWith:InIt");
  fs.delete(file, true);
  FSDataOutputStream out = fs.create(file);
  try {
    out.write("text1".getBytes());
  } finally {
    out.close();
  }
}
{code}
With the following stack trace:
{code}
java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: .fileWith:InIt.crc
at java.net.URI.checkPath(URI.java:1804)
at java.net.URI.<init>(URI.java:752)
at org.apache.hadoop.fs.Path.initialize(Path.java:201)
at org.apache.hadoop.fs.Path.<init>(Path.java:170)
at org.apache.hadoop.fs.Path.<init>(Path.java:92)
at org.apache.hadoop.fs.ChecksumFileSystem.getChecksumFile(ChecksumFileSystem.java:88)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:397)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:456)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:435)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:921)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:902)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:798)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:787)
at org.apache.hadoop.fs.TestLocalFileSystem.testColonFilePath(TestLocalFileSystem.java:625)
{code}
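For context, the misparse above can be sketched without Hadoop. This is a hedged, standalone analogue (class and method names are mine, not Hadoop's) of the heuristic in Path's string parser: everything before the first ":" that precedes any "/" is taken as a URI scheme, so the checksum name ".fileWith:InIt.crc" is misread as scheme ".fileWith".

```java
// Standalone sketch of Path's scheme-detection heuristic (simplified; the real
// logic lives in org.apache.hadoop.fs.Path(String)). A ':' before any '/'
// makes the leading segment look like a URI scheme.
public class PathSchemeDemo {
    public static String guessScheme(String pathString) {
        int colon = pathString.indexOf(':');
        int slash = pathString.indexOf('/');
        if (colon != -1 && (slash == -1 || colon < slash)) {
            return pathString.substring(0, colon);  // misdetected "scheme"
        }
        return null;  // no colon before the first slash: plain relative/absolute path
    }

    public static void main(String[] args) {
        System.out.println(guessScheme(".fileWith:InIt.crc"));  // → .fileWith
        System.out.println(guessScheme("/tmp/fileWith:InIt"));  // → null
        System.out.println(guessScheme("hdfs://nn/path"));      // → hdfs
    }
}
```

Once ".fileWith" is treated as a scheme, the URI constructor rejects the remainder as a relative path inside an absolute URI, which matches the checkPath frame in the stack trace.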





[jira] [Updated] (HADOOP-12841) Update s3-related properties in core-default.xml

2016-02-25 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12841:
-
Status: Patch Available  (was: Open)

> Update s3-related properties in core-default.xml
> 
>
> Key: HADOOP-12841
> URL: https://issues.apache.org/jira/browse/HADOOP-12841
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HADOOP-12841.001.patch
>
>
> HADOOP-11670 deprecated {fs.s3a.awsAccessKeyId}/{fs.s3a.awsSecretAccessKey} 
> in favor of {fs.s3a.access.key}/{fs.s3a.secret.key} in the code, but did not 
> update core-default.xml. Also, a few S3 related properties are missing.





[jira] [Commented] (HADOOP-12824) Collect network and disk usage on the node in Windows

2016-02-25 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15167701#comment-15167701
 ] 

Xiaoyu Yao commented on HADOOP-12824:
-

Thanks [~elgoiri] for updating the patch and adding the unit tests.
One nit: you can eliminate the local variable *totalFound* by breaking out of 
the loop if "_Total" is ever found among the returned items.

{code}
for (i = 0; i < dwItemCount; i++) {
  if (wcscmp(L"_Total", pItems[i].szName) == 0) {
    *ret = pItems[i].RawValue.FirstValue;
    break;
  } else {
    *ret += pItems[i].RawValue.FirstValue;
  }
}
{code}
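The suggestion can be sketched in a hedged Java analogue (the patch itself is native C against the Windows PDH counters; the class, arrays, and values here are illustrative only): if a pre-aggregated "_Total" instance is present, return it directly instead of tracking a found-flag while summing.

```java
// Java analogue of the nit: prefer the authoritative "_Total" aggregate and
// stop early; otherwise fall back to summing the per-instance values.
public class TotalCounterDemo {
    public static long total(String[] names, long[] values) {
        long sum = 0;
        for (int i = 0; i < names.length; i++) {
            if ("_Total".equals(names[i])) {
                return values[i];  // pre-aggregated total: no need to keep summing
            }
            sum += values[i];      // accumulate per-instance counters
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(total(new String[]{"eth0", "_Total", "eth1"},
                                 new long[]{10, 99, 20}));  // → 99
        System.out.println(total(new String[]{"eth0", "eth1"},
                                 new long[]{10, 20}));      // → 30
    }
}
```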

> Collect network and disk usage on the node in Windows
> -
>
> Key: HADOOP-12824
> URL: https://issues.apache.org/jira/browse/HADOOP-12824
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HADOOP-12824-v000.patch, HADOOP-12824-v001.patch, 
> HADOOP-12824-v002.patch, HADOOP-12824-v003.patch
>
>
> HADOOP-12210 collects the node network usage for Linux; this JIRA does it for 
> Windows.





[jira] [Updated] (HADOOP-12841) Update s3-related properties in core-default.xml

2016-02-25 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12841:
-
Attachment: HADOOP-12841.001.patch

Rev01: based on 
https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html,
I added a few missing properties for S3/S3A/S3N access key and secret key.
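For reference, a sketch of what the missing core-default.xml entries might look like. The key names (fs.s3a.access.key, fs.s3a.secret.key) are the ones HADOOP-11670 introduced; the description wording here is mine, not the patch's:

```xml
<property>
  <name>fs.s3a.access.key</name>
  <description>AWS access key ID used by S3A (replaces the deprecated
    fs.s3a.awsAccessKeyId). Omit for role-based authentication.</description>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <description>AWS secret key used by S3A (replaces the deprecated
    fs.s3a.awsSecretAccessKey). Omit for role-based authentication.</description>
</property>
```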


> Update s3-related properties in core-default.xml
> 
>
> Key: HADOOP-12841
> URL: https://issues.apache.org/jira/browse/HADOOP-12841
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
> Attachments: HADOOP-12841.001.patch
>
>
> HADOOP-11670 deprecated {fs.s3a.awsAccessKeyId}/{fs.s3a.awsSecretAccessKey} 
> in favor of {fs.s3a.access.key}/{fs.s3a.secret.key} in the code, but did not 
> update core-default.xml. Also, a few S3 related properties are missing.





[jira] [Created] (HADOOP-12841) Update s3-related properties in core-default.xml

2016-02-25 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-12841:


 Summary: Update s3-related properties in core-default.xml
 Key: HADOOP-12841
 URL: https://issues.apache.org/jira/browse/HADOOP-12841
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 2.7.0
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang
Priority: Minor


HADOOP-11670 deprecated {fs.s3a.awsAccessKeyId}/{fs.s3a.awsSecretAccessKey} in 
favor of {fs.s3a.access.key}/{fs.s3a.secret.key} in the code, but did not 
update core-default.xml. Also, a few S3 related properties are missing.







[jira] [Commented] (HADOOP-12711) Remove dependency on commons-httpclient for ServletUtil

2016-02-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15167523#comment-15167523
 ] 

Hadoop QA commented on HADOOP-12711:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
18s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 19s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 18s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s 
{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} branch-2 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 42s 
{color} | {color:red} hadoop-common-project/hadoop-common in branch-2 has 5 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 19s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 10m 58s 
{color} | {color:red} root-jdk1.8.0_72 with JDK v1.8.0_72 generated 1 new + 789 
unchanged - 0 fixed = 790 total (was 789) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 17m 16s 
{color} | {color:red} root-jdk1.7.0_95 with JDK v1.7.0_95 generated 1 new + 872 
unchanged - 0 fixed = 873 total (was 872) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} hadoop-common-project/hadoop-common: patch 
generated 0 new + 10 unchanged - 4 fixed = 10 total (was 14) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 61 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 34s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 17s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | 

[jira] [Commented] (HADOOP-10315) Log the original exception when getGroups() fail in UGI.

2016-02-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15167507#comment-15167507
 ] 

Ted Yu commented on HADOOP-10315:
-

By default, the negative cache is enabled.
From an API point of view, it is unclear whether the exception (if any) comes 
from the negative cache containing the user or from retrieval from the cache.

> Log the original exception when getGroups() fail in UGI.
> 
>
> Key: HADOOP-10315
> URL: https://issues.apache.org/jira/browse/HADOOP-10315
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.23.10, 2.2.0
>Reporter: Kihwal Lee
>Assignee: Ted Yu
> Attachments: HADOOP-10315.v1.patch
>
>
> In UserGroupInformation, getGroupNames() swallows the original exception. 
> There have been many occasions that more information on the original 
> exception could have helped.
> {code}
>   public synchronized String[] getGroupNames() {
> ensureInitialized();
> try {
>   List<String> result = groups.getGroups(getShortUserName());
>   return result.toArray(new String[result.size()]);
> } catch (IOException ie) {
>   LOG.warn("No groups available for user " + getShortUserName());
>   return new String[0];
> }
>   }
> {code}
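The shape of the fix can be sketched in a hedged, standalone demo (the class and helper names are mine, not UGI's): keep the fallback return, but carry the original exception into the warning instead of swallowing it. In the real code this would just mean passing the throwable to the logger, e.g. LOG.warn(message, ie).

```java
// Demo of preserving the original cause in the warning message instead of
// dropping it. Real logging frameworks take the Throwable as a second argument;
// here we fold it into the string so the effect is visible.
import java.io.IOException;

public class GroupsWarningDemo {
    public static String warning(String user, IOException cause) {
        return "No groups available for user " + user
             + (cause != null ? ": " + cause : "");
    }

    public static void main(String[] args) {
        IOException ie = new IOException("LDAP timeout");
        System.out.println(warning("alice", ie));
        System.out.println(warning("bob", null));
    }
}
```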





[jira] [Commented] (HADOOP-12767) update apache httpclient version to the latest 4.5 for security

2016-02-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15167436#comment-15167436
 ] 

Hadoop QA commented on HADOOP-12767:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} HADOOP-12767 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12789936/HADOOP-12767.003.patch
 |
| JIRA Issue | HADOOP-12767 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8720/console |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> update apache httpclient version to the latest 4.5 for security
> ---
>
> Key: HADOOP-12767
> URL: https://issues.apache.org/jira/browse/HADOOP-12767
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Artem Aliev
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12767.001.patch, HADOOP-12767.002.patch, 
> HADOOP-12767.003.patch
>
>
> Various SSL security fixes are needed.  See:  CVE-2012-6153, CVE-2011-4461, 
> CVE-2014-3577, CVE-2015-5262.





[jira] [Commented] (HADOOP-12835) RollingFileSystemSink can throw an NPE on non-secure clusters

2016-02-25 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15167414#comment-15167414
 ] 

Daniel Templeton commented on HADOOP-12835:
---

Oops, I did it again.  Moving this JIRA over to HDFS because I need to touch 
the HDFS test classes.  *sigh*

> RollingFileSystemSink can throw an NPE on non-secure clusters
> -
>
> Key: HADOOP-12835
> URL: https://issues.apache.org/jira/browse/HADOOP-12835
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HADOOP-12835.001.patch
>
>
> If the sink init fails (such as because the HDFS cluster isn't running) on a 
> non-secure cluster, the init will throw an NPE because of missing properties.





[jira] [Updated] (HADOOP-12711) Remove dependency on commons-httpclient for ServletUtil

2016-02-25 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12711:
-
Attachment: HADOOP-12711-branch-2.003.patch

Thanks, [~iwasakims], for the valuable comments.
I followed your suggestion and the code is cleaner now. Sorry for not turning 
this around sooner.

Attaching rev03 for precommit testing.

> Remove dependency on commons-httpclient for ServletUtil
> ---
>
> Key: HADOOP-12711
> URL: https://issues.apache.org/jira/browse/HADOOP-12711
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12711-branch-2.002.patch, 
> HADOOP-12711-branch-2.003.patch, HADOOP-12711.001.patch, 
> HADOOP-12711.002.patch
>
>
> This is a branch-2 only change, as ServletUtil for trunk removes the code 
> that depends on commons-httpclient.
> We need to retire the use of commons-httpclient in Hadoop to address the 
> security concern in CVE-2012-5783 
> http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-5783.
> {noformat}
> import org.apache.commons.httpclient.URIException;
> import org.apache.commons.httpclient.util.URIUtil;
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12837) FileStatus.getModificationTime not working on S3

2016-02-25 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15167317#comment-15167317
 ] 

Chris Nauroth commented on HADOOP-12837:


[~jagdishk], can you provide more details about what exactly is not working for 
you due to the lack of directory mtime?  The description mentions something 
about a job.  Is that a MapReduce job?  If so, how does it fail?  Is there an 
error message or a stack trace?  If I see those details, I might be able to 
suggest a workaround.

> FileStatus.getModificationTime not working on S3
> 
>
> Key: HADOOP-12837
> URL: https://issues.apache.org/jira/browse/HADOOP-12837
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Jagdish Kewat
>
> Hi Team,
> We have observed an issue with the FileStatus.getModificationTime() API on S3 
> filesystem. The method always returns 0.
> I googled for this however couldn't find any solution as such which would fit 
> in my scheme of things. S3FileStatus seems to be an option however I would be 
> using this API on HDFS as well as S3 both so can't go for it.
> I tried to run the job on:
> * Release label:emr-4.2.0
> * Hadoop distribution:Amazon 2.6.0
> * Hadoop Common jar: hadoop-common-2.6.0.jar
> Please advise if any patch or fix available for this.
> Thanks,
> Jagdish



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-02-25 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15167293#comment-15167293
 ] 

Vishwajeet Dusane commented on HADOOP-12666:


 [~fabbri] Thanks a lot for your comments.

h6. For the FileStatus cache - I agree that the race conditions you describe 
can occur. The question is whether they could actually break any 
functionality; based on the variety of Hadoop applications we have executed 
with this code, I believe they would not break any common functionality.

So let me break the discussion down by scenario.
** *What is the FileStatus cache?* 
*** The FileStatus cache is a simple process-level cache that mirrors 
the backend storage's FileStatus objects.
*** Each cached FileStatus object has a limited time to live: 5 seconds 
by default, configurable through core-site.xml.
*** FileStatus objects are stored in a synchronized LinkedHashMap, where 
the key is the fully qualified file path and the value is the FileStatus Java 
object along with its time-to-live information.
*** The cache is populated from successful GetFileStatus and ListStatus 
responses for existing files/folders; non-existent files/folders are not kept 
in the cache.
*** The motivation for the cache is to avoid repeated GetFileStatus 
calls to the ADL backend and thereby improve performance at job startup and 
during execution.
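For concreteness, here is a minimal sketch of the cache described above. The class and method names, the String stand-ins for Path/FileStatus, and the expiry handling are my assumptions for illustration, not the actual ADL connector code:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the described cache: a synchronized LinkedHashMap
// keyed by the fully qualified path, with each value carrying an expiry
// timestamp (5-second default TTL). Strings stand in for Path/FileStatus.
public class FileStatusCacheManager {
    static final long DEFAULT_TTL_MS = 5_000; // per the description, configurable via core-site.xml

    static final class CachedStatus {
        final Object fileStatus;  // stands in for org.apache.hadoop.fs.FileStatus
        final long expiresAt;
        CachedStatus(Object fileStatus, long ttlMs) {
            this.fileStatus = fileStatus;
            this.expiresAt = System.currentTimeMillis() + ttlMs;
        }
    }

    private final Map<String, CachedStatus> cache =
            Collections.synchronizedMap(new LinkedHashMap<>());

    public void put(String qualifiedPath, Object fileStatus, long ttlMs) {
        cache.put(qualifiedPath, new CachedStatus(fileStatus, ttlMs));
    }

    /** Returns the cached status, or null on a miss or an expired entry. */
    public Object get(String qualifiedPath) {
        CachedStatus entry = cache.get(qualifiedPath);
        if (entry == null || System.currentTimeMillis() > entry.expiresAt) {
            // Expired or missing entries fall through to a backend GetFileStatus call.
            cache.remove(qualifiedPath);
            return null;
        }
        return entry.fileStatus;
    }

    public void remove(String qualifiedPath) {
        cache.remove(qualifiedPath);
    }
}
```

A miss (null return) is the fallback path: the caller would then invoke GetFileStatus against the backend and re-populate the cache.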
Here are the scenarios that may occur.
** *Scenario 1: Concurrent get requests for the same FileStatus object*
*** Multiple threads access the same FileStatus object. Example: 
GetFileStatus calls for path /a.txt from multiple threads within the process 
while the FileStatus instance is present in the cache.
*** This should not be a problem: a valid FileStatus object is returned 
to the caller on every thread.
** *Scenario 2: Concurrent put requests for the same FileStatus object*
*** Multiple threads update the same FileStatus object.
{code:java}
public String thread1()
{
    // FileStatus fileStatus - for storage file path /a.txt
    ...
    fileStatusCacheManager.put(fileStatus, 5); // Race condition
    ...
}
...
public String thread2()
{
    // FileStatus fileStatus - for storage file /a.txt
    ...
    fileStatusCacheManager.put(fileStatus, 5); // Race condition
    ...
}
{code}
*** Whichever thread wins the race, the metadata in the FileStatus 
instance is the same for file /a.txt.
*** Hence the value stored for /a.txt is valid either way.
** *Scenario 3 : Concurrent get/put request for the same FileStatus object*
{code:java}
public String thread1()
{
    // FileStatus fileStatus - for storage file path /a.txt
    ...
    fileStatusCacheManager.put(fileStatus, 5); // Race condition
    ...
}
...
public String thread2()
{
    Path f = new Path("/a.txt");
    ...
    FileStatus fileStatus = fileStatusCacheManager.get(makeQualified(f)); // Race condition
    ...
}
{code}
*** Depending on execution order, thread2 may or may not see the latest 
value written by thread1; even synchronized blocks would not guarantee that.
*** In the worst case thread2 gets null, i.e. no FileStatus object for 
/a.txt exists in the cache, so thread2 falls back to a GetFileStatus call to 
the ADL backend.
*** No functionality is broken in this case either.
** *Scenario 4: Concurrent get/remove request for the same FileStatus object*
{code:java}
public String thread1()
{
    Path f = new Path("/a.txt");
    ...
    // Cache cleanup caused by a delete/rename/create operation on /a.txt. Race condition
    fileStatusCacheManager.remove(makeQualified(f));
    ...
}
...
public String thread2()
{
    Path f = new Path("/a.txt");
    ...
    FileStatus fileStatus = fileStatusCacheManager.get(makeQualified(f)); // Race condition
    ...
}
{code}
*** Depending on execution order, thread2 may get stale information 
from the cache. As in the scenario above, synchronized blocks would not solve 
this either.
*** This situation is unavoidable with or without the FileStatus cache, 
and with or without the ADL storage backend.
** *Scenario 5: Concurrent put/remove requests for different FileStatus 
objects*
{code:java}
public String thread1()
{
Path f = new Path("/a.txt");
...

[jira] [Updated] (HADOOP-12767) update apache httpclient version to the latest 4.5 for security

2016-02-25 Thread Artem Aliev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Artem Aliev updated HADOOP-12767:
-
Attachment: HADOOP-12767.003.patch

> update apache httpclient version to the latest 4.5 for security
> ---
>
> Key: HADOOP-12767
> URL: https://issues.apache.org/jira/browse/HADOOP-12767
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Artem Aliev
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12767.001.patch, HADOOP-12767.002.patch, 
> HADOOP-12767.003.patch
>
>
> Various SSL security fixes are needed.  See:  CVE-2012-6153, CVE-2011-4461, 
> CVE-2014-3577, CVE-2015-5262.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12767) update apache httpclient version to the latest 4.5 for security

2016-02-25 Thread Artem Aliev (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15167213#comment-15167213
 ] 

Artem Aliev commented on HADOOP-12767:
--

[~jojochuang], I have found the problem. The httpcore version also needs to be 
updated, along with a small NPE fix. See the attached patch.

> update apache httpclient version to the latest 4.5 for security
> ---
>
> Key: HADOOP-12767
> URL: https://issues.apache.org/jira/browse/HADOOP-12767
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Artem Aliev
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12767.001.patch, HADOOP-12767.002.patch
>
>
> Various SSL security fixes are needed.  See:  CVE-2012-6153, CVE-2011-4461, 
> CVE-2014-3577, CVE-2015-5262.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12767) update apache httpclient version to the latest 4.5 for security

2016-02-25 Thread Artem Aliev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Artem Aliev updated HADOOP-12767:
-
Attachment: HADOOP-12767.002.patch

> update apache httpclient version to the latest 4.5 for security
> ---
>
> Key: HADOOP-12767
> URL: https://issues.apache.org/jira/browse/HADOOP-12767
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Artem Aliev
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12767.001.patch, HADOOP-12767.002.patch
>
>
> Various SSL security fixes are needed.  See:  CVE-2012-6153, CVE-2011-4461, 
> CVE-2014-3577, CVE-2015-5262.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12767) update apache httpclient version to the latest 4.5 for security

2016-02-25 Thread Artem Aliev (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15167160#comment-15167160
 ] 

Artem Aliev commented on HADOOP-12767:
--

The build is OK; the problem is in the test compilation. I have checked 
against trunk with:
{code}
hadoop$> mvn test
{code}


> update apache httpclient version to the latest 4.5 for security
> ---
>
> Key: HADOOP-12767
> URL: https://issues.apache.org/jira/browse/HADOOP-12767
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Artem Aliev
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12767.001.patch
>
>
> Various SSL security fixes are needed.  See:  CVE-2012-6153, CVE-2011-4461, 
> CVE-2014-3577, CVE-2015-5262.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12840) UGI to log@ debug stack traces when failing to find groups for a user

2016-02-25 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-12840:
---

 Summary: UGI to log@ debug stack traces when failing to find 
groups for a user
 Key: HADOOP-12840
 URL: https://issues.apache.org/jira/browse/HADOOP-12840
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 2.8.0
Reporter: Steve Loughran
Priority: Minor


If {{UGI.getGroupNames()}} catches an IOE raised by 
{{groups.getGroups(getShortUserName())}}, it simply logs "No groups available 
for user" at debug level. The text of the caught exception and its stack trace 
are not printed.

One IOException raised is the explicit "user not in groups" exception, but 
there could be other causes too; if one of those happens, the entire problem 
will be missed.
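A minimal sketch of what the fix could look like: pass the caught exception to the logger so its message and stack trace are emitted at debug level. The class, interface, and logger setup below are illustrative stand-ins; the real code lives in UserGroupInformation and uses commons-logging rather than java.util.logging.

```java
import java.io.IOException;
import java.util.Collections;
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative sketch: log the exception itself at debug level so the
// message and stack trace survive, instead of a fixed string only.
public class GroupLookupSketch {
    private static final Logger LOG = Logger.getLogger(GroupLookupSketch.class.getName());

    // Stand-in for the Groups service used by UserGroupInformation.
    interface GroupSource {
        List<String> getGroups(String user) throws IOException;
    }

    static List<String> getGroupNames(GroupSource groups, String user) {
        try {
            return groups.getGroups(user);
        } catch (IOException e) {
            // Passing e as the Throwable argument prints its text and
            // stack trace when debug (FINE) logging is enabled.
            LOG.log(Level.FINE, "No groups available for user " + user, e);
            return Collections.emptyList();
        }
    }
}
```

With this pattern, a "user not in groups" IOE and an unexpected cause (say, a misconfigured group mapping) become distinguishable in the debug log.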



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12622) RetryPolicies (other than FailoverOnNetworkExceptionRetry) should put on retry failed reason or the log from RMProxy's retry could be very misleading.

2016-02-25 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15166958#comment-15166958
 ] 

Jian He commented on HADOOP-12622:
--

How about changing this to say ". Retrying " + formatSleepMessage(delay), 
since it actually does sleep and retry:
{code}
  msg += ". Not trying to fail over as failOverAction is null.";
}
{code}
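The suggested wording change could look like the following sketch. Everything except the msg concatenation mirroring the snippet above is a hypothetical stand-in, not the actual RetryInvocationHandler code:

```java
// Illustrative sketch of the suggested log-message change: when there is no
// failover action, say "Retrying ..." because the handler sleeps and retries.
public class RetryMessageSketch {
    // Hypothetical helper mirroring formatSleepMessage(delay) from the snippet above.
    static String formatSleepMessage(long delayMs) {
        return "after sleeping for " + delayMs + "ms.";
    }

    static String buildMessage(boolean hasFailOverAction, long delayMs) {
        String msg = "Exception while invoking remote method";
        if (hasFailOverAction) {
            msg += ". Trying to fail over " + formatSleepMessage(delayMs);
        } else {
            // Suggested wording: the handler does not fail over here,
            // but it does sleep and retry.
            msg += ". Retrying " + formatSleepMessage(delayMs);
        }
        return msg;
    }
}
```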



> RetryPolicies (other than FailoverOnNetworkExceptionRetry) should put on 
> retry failed reason or the log from RMProxy's retry could be very misleading.
> --
>
> Key: HADOOP-12622
> URL: https://issues.apache.org/jira/browse/HADOOP-12622
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auto-failover
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Critical
> Attachments: HADOOP-12622-v2.patch, HADOOP-12622-v3.1.patch, 
> HADOOP-12622-v3.patch, HADOOP-12622-v4.patch, HADOOP-12622-v5.patch, 
> HADOOP-12622.patch
>
>
> In debugging a NM retry connection to RM (non-HA), the NM log during RM down 
> time is very misleading:
> {noformat}
> 2015-12-07 11:37:14,098 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 0 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:15,099 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 1 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:16,101 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 2 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:17,103 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 3 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:18,105 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 4 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:19,107 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 5 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:20,109 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 6 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:21,112 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 7 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:22,113 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 8 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:23,115 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 9 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:54,120 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 0 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:55,121 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 1 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:56,123 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 2 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:57,125 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 3 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:58,126 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 4 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
>