[jira] [Commented] (HADOOP-16088) Build failure for -Dhbase.profile=2.0

2019-01-29 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755699#comment-16755699
 ] 

Akira Ajisaka commented on HADOOP-16088:


+1, thank you for your quick fix.

bq. But not sure if there are any test failures later in the module
Okay. If some test failures occur due to the upgrade, I can help fix them.

> Build failure for -Dhbase.profile=2.0
> -
>
> Key: HADOOP-16088
> URL: https://issues.apache.org/jira/browse/HADOOP-16088
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Priority: Blocker
> Attachments: HADOOP-16088.01.patch
>
>
> Post HADOOP-14178, the Hadoop build fails due to an incorrect pom.xml.
> {noformat}
> HW12723:hadoop rsharmaks$ mvn clean install -DskipTests -DskipShade 
> -Dhbase.profile=2.0
> [INFO] Scanning for projects...
> [ERROR] [ERROR] Some problems were encountered while processing the POMs:
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar is 
> missing. @ line 485, column 21
>  @
> [ERROR] The build could not read 1 project -> [Help 1]
> [ERROR]
> [ERROR]   The project 
> org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-tests:3.3.0-SNAPSHOT
>  
> (/Users/rsharmaks/Repos/Apache/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/pom.xml)
>  has 1 error
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar 
> is missing. @ line 485, column 21
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
> {noformat}
> cc: [~ajisakaa]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16088) Build failure for -Dhbase.profile=2.0

2019-01-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755691#comment-16755691
 ] 

Hadoop QA commented on HADOOP-16088:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
34m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
45s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in 
the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16088 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956837/HADOOP-16088.01.patch 
|
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux c81e4412d35a 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1129288 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15872/testReport/ |
| Max. process+thread count | 721 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15872/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Build failure for -Dhbase.profile=2.0
> -
>
> Key: HADOOP-16088
> URL: https://issues.apache.org/jira/browse/HADOOP-16088
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Rohith 

[jira] [Comment Edited] (HADOOP-13402) S3A should allow renaming to a pre-existing destination directory to move the source path under that directory, similar to HDFS.

2019-01-29 Thread vaibhav beriwala (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755668#comment-16755668
 ] 

vaibhav beriwala edited comment on HADOOP-13402 at 1/30/19 4:35 AM:


The following two scenarios in rename used to work for NativeS3FileSystem but 
do not work for S3AFileSystem:
 1) initial directory layout: a/b/c/file
 rename parameters: *src*: a/b/c/file, *dst*: a/
 expected final directory structure: a/file

2) initial directory layout: a/b/c/file
 rename parameters: *src*: a/b/c, *dst*: a/
 expected final directory structure: a/c/file

The above used to work in NativeS3FileSystem because of the following logic in 
its code, which appends src.getName() to dst:
{code:java}
// Move to within the existent directory
dstKey = pathToKey(makeAbsolute(new Path(dst, src.getName())));
{code}

Could we have similar logic in S3AFileSystem to make the above two scenarios 
work? Something like:
{code:java}
S3AFileStatus dstStatus = null;

try {
  dstStatus = innerGetFileStatus(dst, true);
  // If dst is an existing directory, move src under it instead of onto it.
  if (dstStatus.isDirectory() && !src.getName().equals(dst.getName())) {
    dstKey = pathToKey(new Path(dst, src.getName()));
    dst = new Path(dst, src.getName());
    dstStatus = innerGetFileStatus(dst, true);
  }
  // ...
{code}
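
For reference, a minimal sketch of the HDFS-style behaviour the two scenarios 
above expect (paths are illustrative; {{fs}} is assumed to be any FileSystem 
whose rename follows HDFS semantics):
{code:java}
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RenameIntoExistingDir {
  // Scenario 1 from above: renaming a file onto a pre-existing directory
  // moves the file *under* that directory rather than failing.
  static void demo(FileSystem fs) throws Exception {
    Path src = new Path("a/b/c/file");
    Path dst = new Path("a");

    fs.mkdirs(src.getParent());   // creates a/b/c
    fs.create(src).close();       // creates a/b/c/file

    boolean renamed = fs.rename(src, dst);
    // Expected: renamed == true and a/file now exists.
    System.out.println(renamed + " " + fs.exists(new Path("a/file")));
  }
}
{code}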


was (Author: vaibhavb):
The following two scenarios in rename used to work for NativeS3FileSystem but 
do not work for S3AFileSystem:
1) initial directory layout: a/b/c/file
rename parameters: *src*: a/b/c/file, *dst*: a/
expected final directory structure: a/file

2) initial directory layout: a/b/c/file
rename parameters: *src*: a/b/c, *dst*: a/
expected final directory structure: a/c/file

The above used to work in NativeS3FileSystem because of the following logic in 
its code, which appends src.getName() to dst:
{code:java}
// Move to within the existent directory
dstKey = pathToKey(makeAbsolute(new Path(dst, src.getName())));
{code}

Could we have similar logic in S3AFileSystem to make the above two scenarios 
work? Something like:
{code:java}
S3AFileStatus dstStatus = null;

try {
  dstStatus = innerGetFileStatus(dst, true);
  // If dst is an existing directory, move src under it instead of onto it.
  if (dstStatus.isDirectory() && !src.getName().equals(dst.getName())) {
    dstKey = pathToKey(new Path(dst, src.getName()));
    dst = new Path(dst, src.getName());
    dstStatus = innerGetFileStatus(dst, true);
  }
  // ...
{code}

> S3A should allow renaming to a pre-existing destination directory to move the 
> source path under that directory, similar to HDFS.
> 
>
> Key: HADOOP-13402
> URL: https://issues.apache.org/jira/browse/HADOOP-13402
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Priority: Minor
>
> In HDFS, a rename to a destination path that is a pre-existing directory is 
> interpreted as moving the source path relative to that pre-existing 
> directory.  In S3A, this operation currently fails (does nothing and returns 
> {{false}}), unless that destination directory is empty.  This issue proposes 
> to change S3A to allow this behavior, so that it more closely matches the 
> semantics of HDFS and other file systems.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13402) S3A should allow renaming to a pre-existing destination directory to move the source path under that directory, similar to HDFS.

2019-01-29 Thread vaibhav beriwala (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755668#comment-16755668
 ] 

vaibhav beriwala commented on HADOOP-13402:
---

The following two scenarios in rename used to work for NativeS3FileSystem but 
do not work for S3AFileSystem:
1) initial directory layout: a/b/c/file
rename parameters: *src*: a/b/c/file, *dst*: a/
expected final directory structure: a/file

2) initial directory layout: a/b/c/file
rename parameters: *src*: a/b/c, *dst*: a/
expected final directory structure: a/c/file

The above used to work in NativeS3FileSystem because of the following logic in 
its code, which appends src.getName() to dst:
{code:java}
// Move to within the existent directory
dstKey = pathToKey(makeAbsolute(new Path(dst, src.getName())));
{code}

Could we have similar logic in S3AFileSystem to make the above two scenarios 
work? Something like:
{code:java}
S3AFileStatus dstStatus = null;

try {
  dstStatus = innerGetFileStatus(dst, true);
  // If dst is an existing directory, move src under it instead of onto it.
  if (dstStatus.isDirectory() && !src.getName().equals(dst.getName())) {
    dstKey = pathToKey(new Path(dst, src.getName()));
    dst = new Path(dst, src.getName());
    dstStatus = innerGetFileStatus(dst, true);
  }
  // ...
{code}

> S3A should allow renaming to a pre-existing destination directory to move the 
> source path under that directory, similar to HDFS.
> 
>
> Key: HADOOP-13402
> URL: https://issues.apache.org/jira/browse/HADOOP-13402
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Priority: Minor
>
> In HDFS, a rename to a destination path that is a pre-existing directory is 
> interpreted as moving the source path relative to that pre-existing 
> directory.  In S3A, this operation currently fails (does nothing and returns 
> {{false}}), unless that destination directory is empty.  This issue proposes 
> to change S3A to allow this behavior, so that it more closely matches the 
> semantics of HDFS and other file systems.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16088) Build failure for -Dhbase.profile=2.0

2019-01-29 Thread Rohith Sharma K S (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755657#comment-16755657
 ] 

Rohith Sharma K S commented on HADOOP-16088:


Attached a quick patch to unblock the build failure. But not sure if there are 
any test failures later in the module
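
For context, Maven reports "'dependencies.dependency.version' ... is missing" 
when a dependency is declared without a version and no {{dependencyManagement}} 
entry supplies one (here presumably because HADOOP-14178 changed which Mockito 
artifacts are managed). A hypothetical sketch of the shape of such a fix, not 
the actual contents of HADOOP-16088.01.patch:
{code:xml}
<!-- Hypothetical illustration only: either pin the version inline
     (${mockito.version} is a placeholder property, assumed here)... -->
<dependency>
  <groupId>org.mockito</groupId>
  <artifactId>mockito-all</artifactId>
  <version>${mockito.version}</version>
  <scope>test</scope>
</dependency>

<!-- ...or supply it once from a parent POM's dependencyManagement: -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.mockito</groupId>
      <artifactId>mockito-all</artifactId>
      <version>${mockito.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>
{code}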

> Build failure for -Dhbase.profile=2.0
> -
>
> Key: HADOOP-16088
> URL: https://issues.apache.org/jira/browse/HADOOP-16088
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Priority: Blocker
> Attachments: HADOOP-16088.01.patch
>
>
> Post HADOOP-14178, the Hadoop build fails due to an incorrect pom.xml.
> {noformat}
> HW12723:hadoop rsharmaks$ mvn clean install -DskipTests -DskipShade 
> -Dhbase.profile=2.0
> [INFO] Scanning for projects...
> [ERROR] [ERROR] Some problems were encountered while processing the POMs:
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar is 
> missing. @ line 485, column 21
>  @
> [ERROR] The build could not read 1 project -> [Help 1]
> [ERROR]
> [ERROR]   The project 
> org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-tests:3.3.0-SNAPSHOT
>  
> (/Users/rsharmaks/Repos/Apache/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/pom.xml)
>  has 1 error
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar 
> is missing. @ line 485, column 21
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
> {noformat}
> cc: [~ajisakaa]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16088) Build failure for -Dhbase.profile=2.0

2019-01-29 Thread Rohith Sharma K S (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated HADOOP-16088:
---
Attachment: HADOOP-16088.01.patch

> Build failure for -Dhbase.profile=2.0
> -
>
> Key: HADOOP-16088
> URL: https://issues.apache.org/jira/browse/HADOOP-16088
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Priority: Blocker
> Attachments: HADOOP-16088.01.patch
>
>
> Post HADOOP-14178, the Hadoop build fails due to an incorrect pom.xml.
> {noformat}
> HW12723:hadoop rsharmaks$ mvn clean install -DskipTests -DskipShade 
> -Dhbase.profile=2.0
> [INFO] Scanning for projects...
> [ERROR] [ERROR] Some problems were encountered while processing the POMs:
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar is 
> missing. @ line 485, column 21
>  @
> [ERROR] The build could not read 1 project -> [Help 1]
> [ERROR]
> [ERROR]   The project 
> org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-tests:3.3.0-SNAPSHOT
>  
> (/Users/rsharmaks/Repos/Apache/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/pom.xml)
>  has 1 error
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar 
> is missing. @ line 485, column 21
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
> {noformat}
> cc: [~ajisakaa]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16088) Build failure for -Dhbase.profile=2.0

2019-01-29 Thread Rohith Sharma K S (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated HADOOP-16088:
---
Status: Patch Available  (was: Open)

> Build failure for -Dhbase.profile=2.0
> -
>
> Key: HADOOP-16088
> URL: https://issues.apache.org/jira/browse/HADOOP-16088
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Priority: Blocker
> Attachments: HADOOP-16088.01.patch
>
>
> Post HADOOP-14178, the Hadoop build fails due to an incorrect pom.xml.
> {noformat}
> HW12723:hadoop rsharmaks$ mvn clean install -DskipTests -DskipShade 
> -Dhbase.profile=2.0
> [INFO] Scanning for projects...
> [ERROR] [ERROR] Some problems were encountered while processing the POMs:
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar is 
> missing. @ line 485, column 21
>  @
> [ERROR] The build could not read 1 project -> [Help 1]
> [ERROR]
> [ERROR]   The project 
> org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-tests:3.3.0-SNAPSHOT
>  
> (/Users/rsharmaks/Repos/Apache/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/pom.xml)
>  has 1 error
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar 
> is missing. @ line 485, column 21
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
> {noformat}
> cc: [~ajisakaa]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16088) Build failure for -Dhbase.profile=2.0

2019-01-29 Thread Rohith Sharma K S (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated HADOOP-16088:
---
Target Version/s: 3.3.0

> Build failure for -Dhbase.profile=2.0
> -
>
> Key: HADOOP-16088
> URL: https://issues.apache.org/jira/browse/HADOOP-16088
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Priority: Blocker
>
> Post HADOOP-14178, the Hadoop build fails due to an incorrect pom.xml.
> {noformat}
> HW12723:hadoop rsharmaks$ mvn clean install -DskipTests -DskipShade 
> -Dhbase.profile=2.0
> [INFO] Scanning for projects...
> [ERROR] [ERROR] Some problems were encountered while processing the POMs:
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar is 
> missing. @ line 485, column 21
>  @
> [ERROR] The build could not read 1 project -> [Help 1]
> [ERROR]
> [ERROR]   The project 
> org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-tests:3.3.0-SNAPSHOT
>  
> (/Users/rsharmaks/Repos/Apache/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/pom.xml)
>  has 1 error
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar 
> is missing. @ line 485, column 21
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
> {noformat}
> cc: [~ajisakaa]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16088) Build failure for -Dhbase.profile=2.0

2019-01-29 Thread Rohith Sharma K S (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated HADOOP-16088:
---
Description: 
Post HADOOP-14178, the Hadoop build fails due to an incorrect pom.xml.

{noformat}
HW12723:hadoop rsharmaks$ mvn clean install -DskipTests -DskipShade 
-Dhbase.profile=2.0
[INFO] Scanning for projects...
[ERROR] [ERROR] Some problems were encountered while processing the POMs:
[ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar is 
missing. @ line 485, column 21
 @
[ERROR] The build could not read 1 project -> [Help 1]
[ERROR]
[ERROR]   The project 
org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-tests:3.3.0-SNAPSHOT 
(/Users/rsharmaks/Repos/Apache/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/pom.xml)
 has 1 error
[ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar 
is missing. @ line 485, column 21
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
{noformat}

cc: [~ajisakaa]

  was:
Post HADOOP-14178, the Hadoop build fails due to an incorrect pom.xml.

{noformat}
HW12723:hadoop rsharmaks$ mci
[INFO] Scanning for projects...
[ERROR] [ERROR] Some problems were encountered while processing the POMs:
[ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar is 
missing. @ line 485, column 21
 @
[ERROR] The build could not read 1 project -> [Help 1]
[ERROR]
[ERROR]   The project 
org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-tests:3.3.0-SNAPSHOT 
(/Users/rsharmaks/Repos/Apache/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/pom.xml)
 has 1 error
[ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar 
is missing. @ line 485, column 21
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
{noformat}

cc: [~ajisakaa]


> Build failure for -Dhbase.profile=2.0
> -
>
> Key: HADOOP-16088
> URL: https://issues.apache.org/jira/browse/HADOOP-16088
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Priority: Blocker
>
> Post HADOOP-14178, the Hadoop build fails due to an incorrect pom.xml.
> {noformat}
> HW12723:hadoop rsharmaks$ mvn clean install -DskipTests -DskipShade 
> -Dhbase.profile=2.0
> [INFO] Scanning for projects...
> [ERROR] [ERROR] Some problems were encountered while processing the POMs:
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar is 
> missing. @ line 485, column 21
>  @
> [ERROR] The build could not read 1 project -> [Help 1]
> [ERROR]
> [ERROR]   The project 
> org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-tests:3.3.0-SNAPSHOT
>  
> (/Users/rsharmaks/Repos/Apache/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/pom.xml)
>  has 1 error
> [ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar 
> is missing. @ line 485, column 21
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
> {noformat}
> cc: [~ajisakaa]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16088) Build failure for -Dhbase.profile=2.0

2019-01-29 Thread Rohith Sharma K S (JIRA)
Rohith Sharma K S created HADOOP-16088:
--

 Summary: Build failure for -Dhbase.profile=2.0
 Key: HADOOP-16088
 URL: https://issues.apache.org/jira/browse/HADOOP-16088
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Rohith Sharma K S


Post HADOOP-14178, the Hadoop build fails due to an incorrect pom.xml.

{noformat}
HW12723:hadoop rsharmaks$ mci
[INFO] Scanning for projects...
[ERROR] [ERROR] Some problems were encountered while processing the POMs:
[ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar is 
missing. @ line 485, column 21
 @
[ERROR] The build could not read 1 project -> [Help 1]
[ERROR]
[ERROR]   The project 
org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-tests:3.3.0-SNAPSHOT 
(/Users/rsharmaks/Repos/Apache/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests/pom.xml)
 has 1 error
[ERROR] 'dependencies.dependency.version' for org.mockito:mockito-all:jar 
is missing. @ line 485, column 21
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
{noformat}

cc: [~ajisakaa]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16041) UserAgent string for ABFS

2019-01-29 Thread Da Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Da Zhou updated HADOOP-16041:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for making the change! Marking as resolved.

> UserAgent string for ABFS
> -
>
> Key: HADOOP-16041
> URL: https://issues.apache.org/jira/browse/HADOOP-16041
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16041.001.patch, HADOOP-16041.002.patch, 
> HADOOP-16041.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.23.4

2019-01-29 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755606#comment-16755606
 ] 

Hudson commented on HADOOP-14178:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15850 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15850/])
HADOOP-14178. Move Mockito up to version 2.23.4. Contributed by Akira 
(aajisaka: rev 1129288cf5045e17b0e761a0d75f40bf2fe6de03)
* (edit) hadoop-hdds/framework/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/pom.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockCountersInPendingIBR.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestQueueStateManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/TestNodesListManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDeleteRace.java
* (edit) hadoop-common-project/hadoop-common/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/test/java/org/apache/hadoop/yarn/server/webproxy/amfilter/TestAmFilter.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/component/instance/TestComponentInstance.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestTimelineClient.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestAppManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/collector/TestPerNodeTimelineCollectorsAuxService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/timeline/webapp/TestTimelineWebServices.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TestTrafficControlBandwidthHandlerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/resourceplugin/deviceframework/TestDevicePluginAdapter.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestContainerLocalizer.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/reader/TestTimelineReaderWhitelistAuthorizationFilter.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/TestContainerSchedulerRecovery.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestSchedulerUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/pom.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSPermissionChecker.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/MockRunningServiceContext.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestLdapGroupsMapping.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestApplicationACLs.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsJobBlock.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestStorageReport.java
* (edit) 

[jira] [Updated] (HADOOP-14178) Move Mockito up to version 2.23.4

2019-01-29 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14178:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk. Thanks [~iwasakims] for reviewing the big patches, and 
thanks to all who contributed to this issue!

> Move Mockito up to version 2.23.4
> -
>
> Key: HADOOP-14178
> URL: https://issues.apache.org/jira/browse/HADOOP-14178
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-14178.001.patch, HADOOP-14178.002.patch, 
> HADOOP-14178.003.patch, HADOOP-14178.004.patch, HADOOP-14178.005-wip.patch, 
> HADOOP-14178.005-wip2.patch, HADOOP-14178.005-wip3.patch, 
> HADOOP-14178.005-wip4.patch, HADOOP-14178.005-wip5.patch, 
> HADOOP-14178.005-wip6.patch, HADOOP-14178.005.patch, HADOOP-14178.006.patch, 
> HADOOP-14178.007.patch, HADOOP-14178.008.patch, HADOOP-14178.009.patch, 
> HADOOP-14178.010.patch, HADOOP-14178.011.patch, HADOOP-14178.012.patch, 
> HADOOP-14178.013.patch, HADOOP-14178.014.patch, HADOOP-14178.015.patch, 
> HADOOP-14178.016.patch, HADOOP-14178.017.patch, HADOOP-14178.018.patch, 
> HADOOP-14178.019.patch, HADOOP-14178.020.patch, HADOOP-14178.021.patch, 
> HADOOP-14178.022.patch, HADOOP-14178.023.patch, HADOOP-14178.024.patch, 
> HADOOP-14178.025.patch, HADOOP-14178.026.patch, HADOOP-14178.027.patch, 
> HADOOP-14178.028.patch, HADOOP-14178.029.patch, HADOOP-14178.030.patch, 
> HADOOP-14178.031.patch, HADOOP-14178.032.patch
>
>
> I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 
> since the switch to Maven in 2011. 
> Mockito is now at version 2.1, [with lots of Java 8 
> support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. 
> That's not just defining actions as closures, but also support for Optional 
> types, mocking methods in interfaces, etc. 
> It's only used for testing, and, *provided there aren't regressions*, the 
> cost of upgrading is low. The good news: test tools usually come with good 
> test coverage. The bad: Mockito does go deep into Java bytecodes.
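
As a hedged illustration of that Java 8 support (the Function/Optional usage 
below is made up for the example, not taken from the patch), a Mockito 2 stub 
can supply its Answer as a lambda:
{code:java}
import static org.mockito.ArgumentMatchers.anyString;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.Optional;
import java.util.function.Function;

public class Mockito2LambdaExample {
  @SuppressWarnings("unchecked")
  public static void main(String[] args) {
    // Answer is a single-method interface, so the stubbed action can be a
    // closure, and Optional return values work naturally.
    Function<String, Optional<Integer>> f = mock(Function.class);
    when(f.apply(anyString()))
        .thenAnswer(inv -> Optional.of(((String) inv.getArgument(0)).length()));

    System.out.println(f.apply("hadoop")); // Optional[6]
  }
}
{code}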



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14178) Move Mockito up to version 2.23.4

2019-01-29 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14178:
---
Summary: Move Mockito up to version 2.23.4  (was: Move Mockito up to 
version 2.x)

> Move Mockito up to version 2.23.4
> -
>
> Key: HADOOP-14178
> URL: https://issues.apache.org/jira/browse/HADOOP-14178
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-14178.001.patch, HADOOP-14178.002.patch, 
> HADOOP-14178.003.patch, HADOOP-14178.004.patch, HADOOP-14178.005-wip.patch, 
> HADOOP-14178.005-wip2.patch, HADOOP-14178.005-wip3.patch, 
> HADOOP-14178.005-wip4.patch, HADOOP-14178.005-wip5.patch, 
> HADOOP-14178.005-wip6.patch, HADOOP-14178.005.patch, HADOOP-14178.006.patch, 
> HADOOP-14178.007.patch, HADOOP-14178.008.patch, HADOOP-14178.009.patch, 
> HADOOP-14178.010.patch, HADOOP-14178.011.patch, HADOOP-14178.012.patch, 
> HADOOP-14178.013.patch, HADOOP-14178.014.patch, HADOOP-14178.015.patch, 
> HADOOP-14178.016.patch, HADOOP-14178.017.patch, HADOOP-14178.018.patch, 
> HADOOP-14178.019.patch, HADOOP-14178.020.patch, HADOOP-14178.021.patch, 
> HADOOP-14178.022.patch, HADOOP-14178.023.patch, HADOOP-14178.024.patch, 
> HADOOP-14178.025.patch, HADOOP-14178.026.patch, HADOOP-14178.027.patch, 
> HADOOP-14178.028.patch, HADOOP-14178.029.patch, HADOOP-14178.030.patch, 
> HADOOP-14178.031.patch, HADOOP-14178.032.patch
>
>
> I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 
> since the switch to Maven in 2011. 
> Mockito is now at version 2.1, [with lots of Java 8 
> support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. 
> That's not just defining actions as closures, but also support for Optional 
> types, mocking methods in interfaces, etc. 
> It's only used for testing, and, *provided there aren't regressions*, the 
> cost of upgrading is low. The good news: test tools usually come with good 
> test coverage. The bad: Mockito does go deep into Java bytecodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15549) Upgrade to commons-configuration 2.1 regresses task CPU consumption

2019-01-29 Thread Yuming Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1675#comment-1675
 ] 

Yuming Wang commented on HADOOP-15549:
--

Thanks [~ste...@apache.org]. Two new JIRAs have been created:

https://issues.apache.org/jira/browse/HADOOP-16086
 https://issues.apache.org/jira/browse/HADOOP-16087

> Upgrade to commons-configuration 2.1 regresses task CPU consumption
> ---
>
> Key: HADOOP-15549
> URL: https://issues.apache.org/jira/browse/HADOOP-15549
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.2
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: hadoop-15549.txt
>
>
> HADOOP-13660 upgraded from commons-configuration 1.x to 2.x. 
> commons-configuration is used when parsing the metrics configuration 
> properties file. The new builder API used in the new version apparently makes 
> use of a bunch of very bloated reflection and classloading nonsense to 
> achieve the same goal, and this results in a regression of >100ms of CPU time 
> as measured by a program which simply initializes DefaultMetricsSystem.
> This isn't a big deal for long-running daemons, but for MR tasks which might 
> only run a few seconds on poorly-tuned jobs, this can be noticeable.
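
A minimal sketch of that kind of measurement, assuming only the public 
DefaultMetricsSystem API (this is not the actual program referred to above):
{code:java}
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;

public class MetricsInitTimer {
  public static void main(String[] args) {
    // Time a bare initialization, which parses hadoop-metrics2.properties
    // via commons-configuration and so exercises the regressed code path.
    long start = System.nanoTime();
    DefaultMetricsSystem.initialize("Test");
    long elapsedMs = (System.nanoTime() - start) / 1_000_000;
    System.out.println("DefaultMetricsSystem.initialize: " + elapsedMs + " ms");
  }
}
{code}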



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16041) UserAgent string for ABFS

2019-01-29 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755544#comment-16755544
 ] 

Hudson commented on HADOOP-16041:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15849 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15849/])
HADOOP-16041. Include Hadoop version in User-Agent string for ABFS. 
(mackrorysd: rev 02eb91856e7e8477c62e0f8bf1bac6de3e00a8a4)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/AbfsHttpConstants.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/TestAbfsClient.java


> UserAgent string for ABFS
> -
>
> Key: HADOOP-16041
> URL: https://issues.apache.org/jira/browse/HADOOP-16041
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16041.001.patch, HADOOP-16041.002.patch, 
> HADOOP-16041.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16077) Add an option in ls command to include storage policy

2019-01-29 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755531#comment-16755531
 ] 

Ayush Saxena commented on HADOOP-16077:
---

Thanks [~ste...@apache.org] for the review.

Support for using -e and -sp together was already there; I have extended a 
test for this as well.
I have addressed all the comments in v4.

Please review :)

> Add an option in ls command to include storage policy
> -
>
> Key: HADOOP-16077
> URL: https://issues.apache.org/jira/browse/HADOOP-16077
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 3.3.0
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HADOOP-16077-01.patch, HADOOP-16077-02.patch, 
> HADOOP-16077-03.patch, HADOOP-16077-04.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16041) UserAgent string for ABFS

2019-01-29 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755525#comment-16755525
 ] 

Sean Mackrory commented on HADOOP-16041:


Committed. Confirmed, for everyone's reference, that the User-Agent now looks 
like this for me:
{code}Azure Blob FS/3.3.0-SNAPSHOT (JavaJRE 1.8.0_191; Linux 
4.15.0-43-generic){code}

All tests pass, except for this one that's always being weird (although I 
thought it was just WASB compat tests that were being weird and this is 
different):
{code}ITestGetNameSpaceEnabled.testNonXNSAccount:57->Assert.assertFalse:64->Assert.assertTrue:41->Assert.fail:88
 Expecting getIsNamespaceEnabled() return false{code}

> UserAgent string for ABFS
> -
>
> Key: HADOOP-16041
> URL: https://issues.apache.org/jira/browse/HADOOP-16041
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16041.001.patch, HADOOP-16041.002.patch, 
> HADOOP-16041.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16041) UserAgent string for ABFS

2019-01-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755510#comment-16755510
 ] 

Hadoop QA commented on HADOOP-16041:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  7m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
17s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16041 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956803/HADOOP-16041.003.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 87877dc67e0a 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 04105bb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15871/testReport/ |
| Max. process+thread count | 306 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15871/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> UserAgent string for ABFS
> -
>
> Key: HADOOP-16041
> URL: https://issues.apache.org/jira/browse/HADOOP-16041
>  

[jira] [Commented] (HADOOP-16082) FsShell ls: Add option -i to print inode id

2019-01-29 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755486#comment-16755486
 ] 

Siyao Meng commented on HADOOP-16082:
-

[~ste...@apache.org] Thanks for the comment.
Yeah, I also feel that a new property in FileStatus could break (a bunch of) 
things.
I could override toString() in HdfsLocatedFileStatus and insert HDFS-specific 
fields like this:
{code:java|title=HdfsLocatedFileStatus#toString()}
  @Override
  public String toString() {
String res = super.toString();
StringBuilder sb = new StringBuilder();
sb.append(res.substring(0, res.length() - 1));
sb.append("; fileId=" + fileId);
sb.append(res.substring(res.length() - 1));
return sb.toString();
  }
{code}
It works, but this seems more like a hack.

[~ajisakaa] Thanks for the suggestion, but the point is it still won't let me 
easily access the *fileId* property as long as FileStatus doesn't expose it, 
since *fileId* is internal to HdfsLocatedFileStatus:
{code:java|title=./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Stat.java#processPath}
  @Override
  protected void processPath(PathData item) throws IOException {
FileStatus stat = item.stat;
...
{code}

> FsShell ls: Add option -i to print inode id
> ---
>
> Key: HADOOP-16082
> URL: https://issues.apache.org/jira/browse/HADOOP-16082
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-16082.001.patch
>
>
> When debugging the FSImage corruption issue, I often need to know a file's or 
> directory's inode id. At this moment, the only way to do that is to use OIV 
> tool to dump the FSImage and look up the filename, which is very inefficient.
> Here I propose adding option "-i" in FsShell that prints files' or 
> directories' inode id.
> h2. Implementation
> h3. For hdfs:// (HDFS)
> fileId exists in HdfsLocatedFileStatus, which is already returned to 
> hdfs-client. We just need to print it in Ls#processPath().
> h3. For file:// (Local FS)
> h4. Linux
> Use java.nio.
> h4. Windows
> Windows has the concept of "File ID" which is similar to inode id. It is 
> unique in NTFS and ReFS.
> h3. For other FS
> The fileId entry will be "0" in FileStatus if it is not set. We could either 
> ignore or throw an exception.
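
A minimal sketch of the java.nio approach mentioned above for the local FS on 
Linux (illustrative only; the "unix:ino" attribute comes from the Unix 
file-attribute view and is not available on all platforms):
{code:java}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class LocalInode {
  public static void main(String[] args) throws Exception {
    Path p = Paths.get(args[0]);
    // On Linux, the "unix" view exposes the inode number directly.
    Long inode = (Long) Files.getAttribute(p, "unix:ino");
    System.out.println(inode + " " + p);
  }
}
{code}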



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16068) ABFS Auth and DT plugins to be bound to specific URI of the FS

2019-01-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755475#comment-16755475
 ] 

Hadoop QA commented on HADOOP-16068:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 18s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 2 
new + 4 unchanged - 0 fixed = 6 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 23 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-tools_hadoop-azure generated 5 new + 0 
unchanged - 0 fixed = 5 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 12s{color} 
| {color:red} hadoop-azure in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.azurebfs.extensions.TestDTManagerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16068 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956801/HADOOP-16068-002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c827a2bac7a3 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 04105bb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15870/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15870/artifact/out/whitespace-eol.txt
 |
| javadoc | 

[jira] [Updated] (HADOOP-16041) UserAgent string for ABFS

2019-01-29 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HADOOP-16041:

Attachment: HADOOP-16041.003.patch

> UserAgent string for ABFS
> -
>
> Key: HADOOP-16041
> URL: https://issues.apache.org/jira/browse/HADOOP-16041
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16041.001.patch, HADOOP-16041.002.patch, 
> HADOOP-16041.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16041) UserAgent string for ABFS

2019-01-29 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755449#comment-16755449
 ] 

Shweta commented on HADOOP-16041:
-

Thanks [~mackrorysd] for the find. Uploaded new patch to reflect change.

> UserAgent string for ABFS
> -
>
> Key: HADOOP-16041
> URL: https://issues.apache.org/jira/browse/HADOOP-16041
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16041.001.patch, HADOOP-16041.002.patch, 
> HADOOP-16041.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16041) UserAgent string for ABFS

2019-01-29 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755439#comment-16755439
 ] 

Sean Mackrory edited comment on HADOOP-16041 at 1/29/19 10:39 PM:
--

+1. Will commit. (edit: as briefly discussed offline - there is now a space 
after the slash that appears to be unintentional)


was (Author: mackrorysd):
+1. Will commit.

> UserAgent string for ABFS
> -
>
> Key: HADOOP-16041
> URL: https://issues.apache.org/jira/browse/HADOOP-16041
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16041.001.patch, HADOOP-16041.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16068) ABFS Auth and DT plugins to be bound to specific URI of the FS

2019-01-29 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16068:

Status: Patch Available  (was: Open)

Patch 002
* Addresses checkstyle issues.
* when DTs are on, returns the FS URI in ABFS.getCanonicalServiceName()
* if a token returned already has a Kind, don't overwrite it with a local 
instance (allows for independent unmarshalling, renewal, cancel)
* test of the DT binding process using a stub token manager

FWIW, without getCanonicalServiceName() returning the FS URI, I'm not convinced 
that MR or Spark actually collects the DT.
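As a rough illustration of those two behaviours - a minimal sketch, not the 
actual patch; the class and method names here are placeholders:

{code:java|title=Hedged sketch of the DT binding behaviour}
import java.net.URI;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.Token;

/**
 * Illustrative only: mirrors the two behaviours above outside a real
 * FileSystem. When DTs are enabled the canonical service name is the FS
 * URI, so MR/Spark token collectors request a token for this specific
 * filesystem; and a token that already carries a Kind is left alone so
 * it can be unmarshalled, renewed or cancelled independently.
 */
public class DtBindingSketch {
  private final URI fsUri;
  private final boolean dtEnabled;

  public DtBindingSketch(URI fsUri, boolean dtEnabled) {
    this.fsUri = fsUri;
    this.dtEnabled = dtEnabled;
  }

  /** FS URI when delegation tokens are on; null otherwise. */
  public String getCanonicalServiceName() {
    return dtEnabled ? fsUri.toString() : null;
  }

  /** Only stamp the local kind onto a token that has none yet. */
  public static void setKindIfAbsent(Token<?> token, Text localKind) {
    if (token.getKind() == null || token.getKind().getLength() == 0) {
      token.setKind(localKind);
    }
  }
}
{code}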

> ABFS Auth and DT plugins to be bound to specific URI of the FS
> --
>
> Key: HADOOP-16068
> URL: https://issues.apache.org/jira/browse/HADOOP-16068
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16068-001.patch, HADOOP-16068-002.patch
>
>
> followup from HADOOP-15692: pass in the URI & conf of the owner FS to bind 
> the plugins to the specific FS instance. Without that you can't have per FS 
> auth
> +add a stub DT plugin for testing, verify that DTs are collected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16068) ABFS Auth and DT plugins to be bound to specific URI of the FS

2019-01-29 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16068:

Status: Open  (was: Patch Available)

> ABFS Auth and DT plugins to be bound to specific URI of the FS
> --
>
> Key: HADOOP-16068
> URL: https://issues.apache.org/jira/browse/HADOOP-16068
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16068-001.patch, HADOOP-16068-002.patch
>
>
> followup from HADOOP-15692: pass in the URI & conf of the owner FS to bind 
> the plugins to the specific FS instance. Without that you can't have per FS 
> auth
> +add a stub DT plugin for testing, verify that DTs are collected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16068) ABFS Auth and DT plugins to be bound to specific URI of the FS

2019-01-29 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16068:

Status: Patch Available  (was: Open)

> ABFS Auth and DT plugins to be bound to specific URI of the FS
> --
>
> Key: HADOOP-16068
> URL: https://issues.apache.org/jira/browse/HADOOP-16068
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16068-001.patch, HADOOP-16068-002.patch
>
>
> followup from HADOOP-15692: pass in the URI & conf of the owner FS to bind 
> the plugins to the specific FS instance. Without that you can't have per FS 
> auth
> +add a stub DT plugin for testing, verify that DTs are collected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16041) UserAgent string for ABFS

2019-01-29 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755439#comment-16755439
 ] 

Sean Mackrory commented on HADOOP-16041:


+1. Will commit.

> UserAgent string for ABFS
> -
>
> Key: HADOOP-16041
> URL: https://issues.apache.org/jira/browse/HADOOP-16041
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16041.001.patch, HADOOP-16041.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16068) ABFS Auth and DT plugins to be bound to specific URI of the FS

2019-01-29 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16068:

Attachment: HADOOP-16068-002.patch

> ABFS Auth and DT plugins to be bound to specific URI of the FS
> --
>
> Key: HADOOP-16068
> URL: https://issues.apache.org/jira/browse/HADOOP-16068
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16068-001.patch, HADOOP-16068-002.patch
>
>
> followup from HADOOP-15692: pass in the URI & conf of the owner FS to bind 
> the plugins to the specific FS instance. Without that you can't have per FS 
> auth
> +add a stub DT plugin for testing, verify that DTs are collected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16083) DistCp shouldn't always overwrite the target file when checksums match

2019-01-29 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755424#comment-16755424
 ] 

Siyao Meng edited comment on HADOOP-16083 at 1/29/19 10:15 PM:
---

[~ste...@apache.org] Thanks for the comment.
The case involved here is *single file* copy with CRC check enabled. If the 
target path is a directory, *for HDFS* it will skip files that have the same 
checksum. For example, the command below, whose target path is a file, will 
always copy and overwrite the file at the target.
{code:bash|title=Target path is a file}
$ hadoop distcp -update hdfs:///src/2.txt hdfs:///dst/2.txt
...
Bytes Copied=6
Bytes Expected=6
Files Copied=1
{code}
But this command does skip the copy when the target file exists and is the 
same; note the target path is now a directory instead of a file.
{code:bash|title=Target path is a directory}
$ hadoop distcp -update hdfs:///src/2.txt hdfs:///dst/
...
Bytes Skipped=6
Files Skipped=1
{code}

I'm aware that the change might lead to subtle failures of other applications. 
For HDFS, so far the only thing I have found that changes with the patch is 
the modification time of the file on the target FS. This is tested in 
TestCopyMapper#testSingleFileCopy() and 
TestCopyMapperCompositeCrc#testSingleFileCopy(). Both test case failures will 
be addressed in rev 002.

I agree that if the target FS (1) doesn't support checksums, (2) has checksums 
disabled, or (3) doesn't support timestamp modification (touch), the file 
should always be copied.

What new tests should be added in AbstractContractDistCpTest?


was (Author: smeng):
[~ste...@apache.org] Thanks for the comment.
The case involved here is *single file* copy with CRC check enabled. If the 
target path is a directory, *for HDFS* it will skip files that has the same 
checksum. For example, the command below will always copy and overwrite the 
file at the target.
{code:bash|title=Target path is a file}
$ hadoop distcp -update hdfs:///src/2.txt hdfs:///dst/2.txt
...
Bytes Copied=6
Bytes Expected=6
Files Copied=1
{code}
But this command wouldn't skip the copy is the target file exists and is the 
same, note the target path is a directory now instead of a file.
{code:bash|title=Target path is a directory}
$ hadoop distcp -update hdfs:///src/2.txt hdfs:///dst/
...
Bytes Skipped=6
Files Skipped=1
{code}

I'm aware that the change might lead to subtle failures of other applications. 
For HDFS, so far I figured the only thing that changed with the patch would be 
the modification time of the file on target FS. This is tested in 
TestCopyMapper#testSingleFileCopy() and 
TestCopyMapperCompositeCrc#testSingleFileCopy(). Both test case failures would 
be addressed in rev 002.

I agree that if the target FS 1. doesn't support checksum; or 2. disabled 
checksum; or 3. doesn't support timestamp modification (touch), the file should 
always be copied.

What new tests should be added in AbstractContractDistCpTest?

> DistCp shouldn't always overwrite the target file when checksums match
> --
>
> Key: HADOOP-16083
> URL: https://issues.apache.org/jira/browse/HADOOP-16083
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 3.2.0, 3.1.1, 3.3.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-16083.001.patch
>
>
> {code:java|title=CopyMapper#setup}
> ...
> try {
>   overWrite = overWrite || 
> targetFS.getFileStatus(targetFinalPath).isFile();
> } catch (FileNotFoundException ignored) {
> }
> ...
> {code}
> The above code overrides config key "overWrite" to "true" when the target 
> path is a file. Therefore, unnecessary transfer happens when the source and 
> target file have the same checksums.
> My suggestion is: remove the code above. If the user insists to overwrite, 
> just add -overwrite in the options:
> {code:bash|title=DistCp command with -overwrite option}
> hadoop distcp -overwrite hdfs://localhost:64464/source/5/6.txt 
> hdfs://localhost:64464/target/5/6.txt
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16083) DistCp shouldn't always overwrite the target file when checksums match

2019-01-29 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755424#comment-16755424
 ] 

Siyao Meng commented on HADOOP-16083:
-

[~ste...@apache.org] Thanks for the comment.
The case involved here is *single file* copy with CRC check enabled. If the 
target path is a directory, *for HDFS* it will skip files that have the same 
checksum. For example, the command below, whose target path is a file, will 
always copy and overwrite the file at the target.
{code:bash|title=Target path is a file}
$ hadoop distcp -update hdfs:///src/2.txt hdfs:///dst/2.txt
...
Bytes Copied=6
Bytes Expected=6
Files Copied=1
{code}
But this command does skip the copy when the target file exists and is the 
same; note the target path is now a directory instead of a file.
{code:bash|title=Target path is a directory}
$ hadoop distcp -update hdfs:///src/2.txt hdfs:///dst/
...
Bytes Skipped=6
Files Skipped=1
{code}

I'm aware that the change might lead to subtle failures of other applications. 
For HDFS, so far the only thing I have found that changes with the patch is 
the modification time of the file on the target FS. This is tested in 
TestCopyMapper#testSingleFileCopy() and 
TestCopyMapperCompositeCrc#testSingleFileCopy(). Both test case failures will 
be addressed in rev 002.

I agree that if the target FS (1) doesn't support checksums, (2) has checksums 
disabled, or (3) doesn't support timestamp modification (touch), the file 
should always be copied.

What new tests should be added in AbstractContractDistCpTest?
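For reference, a hedged sketch of the skip decision being discussed - not the 
actual patch. FileSystem.getFileChecksum() returns null when a filesystem 
cannot supply a checksum, which is exactly the "always copy" case above:

{code:java|title=Hedged sketch of a checksum-based skip}
import java.io.IOException;

import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/**
 * Illustrative only: skip a single-file copy when both sides expose
 * equal checksums; copy whenever either side cannot produce one.
 */
public final class ChecksumSkipSketch {
  private ChecksumSkipSketch() {
  }

  public static boolean canSkipCopy(FileSystem srcFS, Path src,
      FileSystem dstFS, Path dst) throws IOException {
    if (!dstFS.exists(dst)) {
      return false;                      // nothing at the target yet
    }
    FileChecksum srcSum = srcFS.getFileChecksum(src);
    FileChecksum dstSum = dstFS.getFileChecksum(dst);
    // null means no checksum support (or disabled): always copy.
    return srcSum != null && srcSum.equals(dstSum);
  }
}
{code}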

> DistCp shouldn't always overwrite the target file when checksums match
> --
>
> Key: HADOOP-16083
> URL: https://issues.apache.org/jira/browse/HADOOP-16083
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 3.2.0, 3.1.1, 3.3.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-16083.001.patch
>
>
> {code:java|title=CopyMapper#setup}
> ...
> try {
>   overWrite = overWrite || 
> targetFS.getFileStatus(targetFinalPath).isFile();
> } catch (FileNotFoundException ignored) {
> }
> ...
> {code}
> The above code overrides config key "overWrite" to "true" when the target 
> path is a file. Therefore, unnecessary transfer happens when the source and 
> target file have the same checksums.
> My suggestion is: remove the code above. If the user insists to overwrite, 
> just add -overwrite in the options:
> {code:bash|title=DistCp command with -overwrite option}
> hadoop distcp -overwrite hdfs://localhost:64464/source/5/6.txt 
> hdfs://localhost:64464/target/5/6.txt
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] milleruntime commented on issue #473: HADOOP-11223. Create UnmodifiableConfiguration

2019-01-29 Thread GitBox
milleruntime commented on issue #473: HADOOP-11223. Create 
UnmodifiableConfiguration
URL: https://github.com/apache/hadoop/pull/473#issuecomment-458714089
 
 
Renaming this class to UnmodifiableConfiguration since it is not immutable. It 
more closely resembles the Java unmodifiable collections: 
https://docs.oracle.com/javase/7/docs/api/java/util/Collections.html


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11223) Offer a read-only conf alternative to new Configuration()

2019-01-29 Thread Michael Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-11223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755371#comment-16755371
 ] 

Michael Miller commented on HADOOP-11223:
-

The name is wrong since it is not actually immutable. The class I created is 
closer to the Java unmodifiable collections, so I will rename it. An 
unmodifiable configuration would be good to have at least; a truly immutable 
configuration, I think, would be more complicated and would involve changes to 
Configuration itself.
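A minimal sketch of that distinction, in the spirit of 
Collections.unmodifiableMap() - illustrative, not the class from the pull 
request. It only blocks the set() path, which is why it is unmodifiable 
rather than truly immutable: default resources loaded later can still change 
what get() returns.

{code:java|title=Hedged sketch of an unmodifiable Configuration view}
import org.apache.hadoop.conf.Configuration;

/** Illustrative only: a Configuration copy that rejects writes. */
public class UnmodifiableConfigurationSketch extends Configuration {

  public UnmodifiableConfigurationSketch(Configuration other) {
    super(other); // copy constructor: snapshot the current properties
  }

  // Configuration.set(String, String) delegates here, so overriding the
  // three-argument form also blocks setBoolean(), setInt(), and friends.
  @Override
  public void set(String name, String value, String source) {
    throw new UnsupportedOperationException(
        "attempt to modify a read-only Configuration: " + name);
  }
}
{code}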

> Offer a read-only conf alternative to new Configuration()
> -
>
> Key: HADOOP-11223
> URL: https://issues.apache.org/jira/browse/HADOOP-11223
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Gopal V
>Assignee: Varun Saxena
>Priority: Major
>  Labels: Performance
> Attachments: HADOOP-11223.001.patch
>
>
> new Configuration() is called from several static blocks across Hadoop.
> This is incredibly inefficient, since each one of those involves primarily 
> XML parsing at a point where the JIT won't be triggered & interpreter mode is 
> essentially forced on the JVM.
> The alternate solution would be to offer a {{Configuration::getDefault()}} 
> alternative which disallows any modifications.
> At the very least, such a method would need to be called from 
> # org.apache.hadoop.io.nativeio.NativeIO::<clinit>()
> # org.apache.hadoop.security.SecurityUtil::<clinit>()
> # org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider::<clinit>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16077) Add an option in ls command to include storage policy

2019-01-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755316#comment-16755316
 ] 

Hadoop QA commented on HADOOP-16077:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
31s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}104m  1s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}221m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.TestLeaseRecovery |
|   | hadoop.hdfs.TestClientMetrics |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
|   | hadoop.hdfs.TestReconstructStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16077 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956729/HADOOP-16077-04.patch 
|
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 50da7d1bdd7b 

[jira] [Commented] (HADOOP-16086) Backport HADOOP-15549 to branch-3.1

2019-01-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755314#comment-16755314
 ] 

Hadoop QA commented on HADOOP-16086:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-3.1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
37s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
50s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} branch-3.1 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
23s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}130m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:080e9d0 |
| JIRA Issue | HADOOP-16086 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956755/HADOOP-16086-branch-3.1-002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4b8967dd212a 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.1 / 4257043 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15868/testReport/ |
| Max. process+thread count | 1717 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15868/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was sent by Atlassian JIRA
(v7.6.3#76005)

[jira] [Commented] (HADOOP-15387) Produce a shaded hadoop-cloud-storage JAR for applications to use

2019-01-29 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755284#comment-16755284
 ] 

Sean Busbey commented on HADOOP-15387:
--

[~ste...@apache.org] can you help me understand the scope here a bit?

I think what the description says is we end up with a single jar where all 
classes are in {{org.apache.hadoop}} or {{software.amazon.awssdk}} and we rely 
on shading to relocate any others (modulo the normal caveats on logging / 
tracing libraries that came up during the hadoop-client modules).

Does it need to be all of the Amazon AWS SDK? Is there some interface jar that 
we could use while allowing BYO-SDK? Or for that matter could we just update 
the various cloud storage modules to individually relocate things that aren't 
either hadoop-client-facing or their respective service's SDK?

> Produce a shaded hadoop-cloud-storage JAR for applications to use
> -
>
> Key: HADOOP-15387
> URL: https://issues.apache.org/jira/browse/HADOOP-15387
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/adl, fs/azure, fs/oss, fs/s3, fs/swift
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Priority: Major
>
> Produce a maven-shaded hadoop-cloudstorage JAR for downstream use so that
>  * Hadoop dependency choices don't control their decisions
>  * Little/No risk of their JAR changes breaking Hadoop bits they depend on
> This JAR would pull in the shaded hadoop-client JAR, and the aws-sdk-bundle 
> JAR, neither of which would be unshaded (so yes, upgrading aws-sdks would be 
> a bit risky, but double shading a pre-shaded 30MB JAR is excessive on 
> multiple levels).
> Metrics of success: Spark, Tez, Flink etc can pick up and use, and all are 
> happy



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16087) Backport HADOOP-15549 to branch-3.0

2019-01-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755298#comment-16755298
 ] 

Hadoop QA commented on HADOOP-16087:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-3.0 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
31s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
48s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} branch-3.0 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} branch-3.0 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
10s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:e402791 |
| JIRA Issue | HADOOP-16087 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956756/HADOOP-16087-branch-3.0-002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 536b13ee726e 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-3.0 / d182c81 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15869/testReport/ |
| Max. process+thread count | 1393 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15869/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was sent by Atlassian JIRA
(v7.6.3#76005)

[jira] [Comment Edited] (HADOOP-16082) FsShell ls: Add option -i to print inode id

2019-01-29 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755302#comment-16755302
 ] 

Akira Ajisaka edited comment on HADOOP-16082 at 1/29/19 7:07 PM:
-

How about adding "%i" to "hadoop fs -stat" instead of ls?


was (Author: ajisakaa):
How about adding "-i" option to "hadoop fs -stat" instead of ls?

> FsShell ls: Add option -i to print inode id
> ---
>
> Key: HADOOP-16082
> URL: https://issues.apache.org/jira/browse/HADOOP-16082
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-16082.001.patch
>
>
> When debugging the FSImage corruption issue, I often need to know a file's or 
> directory's inode id. At this moment, the only way to do that is to use the 
> OIV tool to dump the FSImage and look up the filename, which is very 
> inefficient.
> Here I propose adding option "-i" in FsShell that prints files' or 
> directories' inode id.
> h2. Implementation
> h3. For hdfs:// (HDFS)
> fileId exists in HdfsLocatedFileStatus, which is already returned to 
> hdfs-client. We just need to print it in Ls#processPath().
> h3. For file:// (Local FS)
> h4. Linux
> Use java.nio.
> h4. Windows
> Windows has the concept of "File ID" which is similar to inode id. It is 
> unique in NTFS and ReFS.
> h3. For other FS
> The fileId entry will be "0" in FileStatus if it is not set. We could either 
> ignore or throw an exception.
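A hedged sketch of the java.nio approach mentioned above for the local FS; 
"unix:ino" resolves only on Unix-like platforms, so a real implementation 
would need the Windows File ID fallback:

{code:java|title=Hedged sketch: inode of a local file via java.nio}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

/** Illustrative only: reads the inode number of a local file. */
public final class LocalInodeSketch {
  private LocalInodeSketch() {
  }

  /** Throws on platforms without the "unix" attribute view. */
  public static long inodeOf(Path path) throws IOException {
    return (Long) Files.getAttribute(path, "unix:ino");
  }

  public static void main(String[] args) throws IOException {
    System.out.println(inodeOf(Paths.get(args[0])));
  }
}
{code}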



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16082) FsShell ls: Add option -i to print inode id

2019-01-29 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755302#comment-16755302
 ] 

Akira Ajisaka commented on HADOOP-16082:


How about adding "-i" option to "hadoop fs -stat" instead of ls?

> FsShell ls: Add option -i to print inode id
> ---
>
> Key: HADOOP-16082
> URL: https://issues.apache.org/jira/browse/HADOOP-16082
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-16082.001.patch
>
>
> When debugging the FSImage corruption issue, I often need to know a file's or 
> directory's inode id. At this moment, the only way to do that is to use the 
> OIV tool to dump the FSImage and look up the filename, which is very 
> inefficient.
> Here I propose adding option "-i" in FsShell that prints files' or 
> directories' inode id.
> h2. Implementation
> h3. For hdfs:// (HDFS)
> fileId exists in HdfsLocatedFileStatus, which is already returned to 
> hdfs-client. We just need to print it in Ls#processPath().
> h3. For file:// (Local FS)
> h4. Linux
> Use java.nio.
> h4. Windows
> Windows has the concept of "File ID" which is similar to inode id. It is 
> unique in NTFS and ReFS.
> h3. For other FS
> The fileId entry will be "0" in FileStatus if it is not set. We could either 
> ignore or throw an exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16085) S3Guard: use object version to protect against inconsistent read after replace/overwrite

2019-01-29 Thread Ben Roling (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755292#comment-16755292
 ] 

Ben Roling commented on HADOOP-16085:
-

[~gabor.bota] thanks for the response.  You are hitting on a different issue 
than I was trying to get at.  The problem I am referencing (assuming I 
understand the system well enough myself) is present even if all parties are 
always using S3Guard with the same config.

Consider a process that writes s3a://my-bucket/foo.txt with content "abc".  Another 
process comes along later and does an overwrite of s3a://my-bucket/foo.txt with 
"def".  Finally, a reader process reads s3a://my-bucket/foo.txt.  S3Guard 
ensures the reader knows that s3a://my-bucket/foo.txt exists and that the 
reader sees something, but does nothing to ensure the reader sees "def" instead 
of "abc" as the content.

That was a contrived example.  One place I see this as likely to occur is 
during failure and retry scenarios of multi-stage ETL pipelines.  Stage 1 runs, 
writing to s3://my-bucket/output, but fails after writing only some of the 
output files that were supposed to go into that directory.  The stage is re-run 
with the same output directory in an overwrite mode such that the original 
output is deleted and the job is rerun with the same s3://my-bucket/output 
target directory.  This time the run is successful, so the ETL continues to 
Stage 2, passing s3://my-bucket/output as the input.  When Stage 2 runs, 
S3Guard ensures it sees the correct listing of files within 
s3://my-bucket/output, but does nothing to ensure it reads the correct version 
of each of these files if the output happened to vary in any way between the 
first and second execution of Stage 1.

One way to avoid this is to suggest that a re-run of a pipeline stage should 
always use a new output directory, but that is not always practical.
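To make the version-pinning idea concrete, a hedged sketch with the AWS SDK 
for Java v1. Reading by versionId is a real SDK capability, but 
storedVersionId here is hypothetical - it stands in for whatever the S3Guard 
metadata would record at write time:

{code:java|title=Hedged sketch of a version-pinned read}
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;

/**
 * Illustrative only: read the object version recorded at write time, so
 * an overwritten key can never silently serve the stale body.
 */
public class VersionPinnedReadSketch {

  public static S3Object readPinned(AmazonS3 s3, String bucket, String key,
      String storedVersionId) {
    // storedVersionId: the version S3Guard would have persisted when the
    // file was last written (hypothetical bookkeeping).
    return s3.getObject(new GetObjectRequest(bucket, key, storedVersionId));
  }
}
{code}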

> S3Guard: use object version to protect against inconsistent read after 
> replace/overwrite
> 
>
> Key: HADOOP-16085
> URL: https://issues.apache.org/jira/browse/HADOOP-16085
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Ben Roling
>Priority: Major
>
> Currently S3Guard doesn't track S3 object versions.  If a file is written in 
> S3A with S3Guard and then subsequently overwritten, there is no protection 
> against the next reader seeing the old version of the file instead of the new 
> one.
> It seems like the S3Guard metadata could track the S3 object version.  When a 
> file is created or updated, the object version could be written to the 
> S3Guard metadata.  When a file is read, the read out of S3 could be performed 
> by object version, ensuring the correct version is retrieved.
> I don't have a lot of direct experience with this yet, but this is my 
> impression from looking through the code.  My organization is looking to 
> shift some datasets stored in HDFS over to S3 and is concerned about this 
> potential issue as there are some cases in our codebase that would do an 
> overwrite.
> I imagine this idea may have been considered before but I couldn't quite 
> track down any JIRAs discussing it.  If there is one, feel free to close this 
> with a reference to it.
> Am I understanding things correctly?  Is this idea feasible?  Any feedback 
> that could be provided would be appreciated.  We may consider crafting a 
> patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14178) Move Mockito up to version 2.x

2019-01-29 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755283#comment-16755283
 ] 

Masatake Iwasaki edited comment on HADOOP-14178 at 1/29/19 6:50 PM:


{quote}
The change is derived from a bug fix of Mockito rather than compatibility.
{quote}
Thanks for the explanation. Updated code looks correct based on the logic of 
KillAMPreemptionPolicy. +1.


was (Author: iwasakims):
> The change is derived from a bug fix of Mockito rather than compatibility.

Thanks for the explanation. Updated code looks correct based on the logic of 
KillAMPreemptionPolicy. +1.

> Move Mockito up to version 2.x
> --
>
> Key: HADOOP-14178
> URL: https://issues.apache.org/jira/browse/HADOOP-14178
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-14178.001.patch, HADOOP-14178.002.patch, 
> HADOOP-14178.003.patch, HADOOP-14178.004.patch, HADOOP-14178.005-wip.patch, 
> HADOOP-14178.005-wip2.patch, HADOOP-14178.005-wip3.patch, 
> HADOOP-14178.005-wip4.patch, HADOOP-14178.005-wip5.patch, 
> HADOOP-14178.005-wip6.patch, HADOOP-14178.005.patch, HADOOP-14178.006.patch, 
> HADOOP-14178.007.patch, HADOOP-14178.008.patch, HADOOP-14178.009.patch, 
> HADOOP-14178.010.patch, HADOOP-14178.011.patch, HADOOP-14178.012.patch, 
> HADOOP-14178.013.patch, HADOOP-14178.014.patch, HADOOP-14178.015.patch, 
> HADOOP-14178.016.patch, HADOOP-14178.017.patch, HADOOP-14178.018.patch, 
> HADOOP-14178.019.patch, HADOOP-14178.020.patch, HADOOP-14178.021.patch, 
> HADOOP-14178.022.patch, HADOOP-14178.023.patch, HADOOP-14178.024.patch, 
> HADOOP-14178.025.patch, HADOOP-14178.026.patch, HADOOP-14178.027.patch, 
> HADOOP-14178.028.patch, HADOOP-14178.029.patch, HADOOP-14178.030.patch, 
> HADOOP-14178.031.patch, HADOOP-14178.032.patch
>
>
> I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 
> since the switch to maven in 2011. 
> Mockito is now at version 2.1, [with lots of Java 8 
> support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. 
> That's not just defining actions as closures, but in supporting Optional 
> types, mocking methods in interfaces, etc. 
> It's only used for testing, and, *provided there aren't regressions*, cost of 
> upgrade is low. The good news: test tools usually come with good test 
> coverage. The bad: mockito does go deep into java bytecodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15387) Produce a shaded hadoop-cloud-storage JAR for applications to use

2019-01-29 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755284#comment-16755284
 ] 

Sean Busbey edited comment on HADOOP-15387 at 1/29/19 6:55 PM:
---

[~ste...@apache.org] can you help me understand the scope here a bit?

I think what the description says is we end up with a single jar where all 
classes are in {{org.apache.hadoop}} or {{software.amazon.awssdk}} (or whatever 
their package space is) and we rely on shading to relocate any others (modulo 
the normal caveats on logging / tracing libraries that came up during the 
hadoop-client modules).

Does it need to be all of the Amazon AWS SDK? Is there some interface jar that 
we could use while allowing BYO-SDK? Or for that matter could we just update 
the various cloud storage modules to individually relocate things that aren't 
either hadoop-client-facing or their respective service's SDK?


was (Author: busbey):
[~ste...@apache.org] can you help me on understanding scope here a bit?

I think what the description says is we end up with a single jar where all 
classes are in {{org.apache.hadoop}} or {{software.amazon.awssdk}} and we rely 
on shading to relocate any others (modulo the normal caveats on logging / 
tracing libraries that came up during the hadoop-client modules).

Does it need to be all of the Amazon AWS SDK? Is there some interface jar that 
we could use while allowing BYO-SDK? Or for that matter could we just update 
the various cloud storage modules to individually relocate things that aren't 
either hadoop-client-facing or their respective service's SDK?

> Produce a shaded hadoop-cloud-storage JAR for applications to use
> -
>
> Key: HADOOP-15387
> URL: https://issues.apache.org/jira/browse/HADOOP-15387
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/adl, fs/azure, fs/oss, fs/s3, fs/swift
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Priority: Major
>
> Produce a maven-shaded hadoop-cloudstorage JAR for downstream use so that
>  * Hadoop dependency choices don't control their decisions
>  * Little/No risk of their JAR changes breaking Hadoop bits they depend on
> This JAR would pull in the shaded hadoop-client JAR, and the aws-sdk-bundle 
> JAR, neither of which would be unshaded (so yes, upgrading aws-sdks would be 
> a bit risky, but double shading a pre-shaded 30MB JAR is excessive on 
> multiple levels).
> Metrics of success: Spark, Tez, Flink etc can pick up and use, and all are 
> happy



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.x

2019-01-29 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755283#comment-16755283
 ] 

Masatake Iwasaki commented on HADOOP-14178:
---

> The change is derived from a bug fix of Mockito rather than compatibility.

Thanks for the explanation. Updated code looks correct based on the logic of 
KillAMPreemptionPolicy. +1.

> Move Mockito up to version 2.x
> --
>
> Key: HADOOP-14178
> URL: https://issues.apache.org/jira/browse/HADOOP-14178
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-14178.001.patch, HADOOP-14178.002.patch, 
> HADOOP-14178.003.patch, HADOOP-14178.004.patch, HADOOP-14178.005-wip.patch, 
> HADOOP-14178.005-wip2.patch, HADOOP-14178.005-wip3.patch, 
> HADOOP-14178.005-wip4.patch, HADOOP-14178.005-wip5.patch, 
> HADOOP-14178.005-wip6.patch, HADOOP-14178.005.patch, HADOOP-14178.006.patch, 
> HADOOP-14178.007.patch, HADOOP-14178.008.patch, HADOOP-14178.009.patch, 
> HADOOP-14178.010.patch, HADOOP-14178.011.patch, HADOOP-14178.012.patch, 
> HADOOP-14178.013.patch, HADOOP-14178.014.patch, HADOOP-14178.015.patch, 
> HADOOP-14178.016.patch, HADOOP-14178.017.patch, HADOOP-14178.018.patch, 
> HADOOP-14178.019.patch, HADOOP-14178.020.patch, HADOOP-14178.021.patch, 
> HADOOP-14178.022.patch, HADOOP-14178.023.patch, HADOOP-14178.024.patch, 
> HADOOP-14178.025.patch, HADOOP-14178.026.patch, HADOOP-14178.027.patch, 
> HADOOP-14178.028.patch, HADOOP-14178.029.patch, HADOOP-14178.030.patch, 
> HADOOP-14178.031.patch, HADOOP-14178.032.patch
>
>
> I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 
> since the switch to maven in 2011. 
> Mockito is now at version 2.1, [with lots of Java 8 
> support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. 
> That's not just defining actions as closures, but in supporting Optional 
> types, mocking methods in interfaces, etc. 
> It's only used for testing, and, *provided there aren't regressions*, cost of 
> upgrade is low. The good news: test tools usually come with good test 
> coverage. The bad: mockito does go deep into java bytecodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16080) hadoop-aws does not work with hadoop-client-api

2019-01-29 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755264#comment-16755264
 ] 

Steve Loughran commented on HADOOP-16080:
-

bq. No, I am quite busy 

That is the problem we all have, I'm afraid.

bq.  The envisioned hadoop-cloudstorage artifact seems misaligned with the 
communities and dependencies. 

Why so? 

Spark has a declared dependency on the unshaded hadoop-cloud-storage JAR: 
https://github.com/apache/spark/blob/master/hadoop-cloud/pom.xml#L208; so does 
Tez, and some other projects. Having a shaded offering would only need a change 
in those declarations and cover all the stores.

bq. Seems a better structure would be that hadoop-aws is an independent 
artifact that only uses public/stable hadoop APIs. I took a look at 
SemaphoredDelegatingExecutor and noticed that is marked 
InterfaceAudience.Private, so it seems like hadoop-aws should just not use it

SemaphoredDelegatingExecutor actually arrived in hadoop-aws first, 
HADOOP-13560; pulled up into hadoop-common by HADOOP-15309 so that it could be 
shared by the other object stores. It's private *within Hadoop itself*. By 
tagging as such, we retain the option of making incompatible changes. 
Similarly, we keep a lot of implementation stuff in hadoop-common, and share 
test suites of FS behaviours in hadoop-common-tests. That keeps maintenance 
costs down (do I really have to have a copy and paste of 
SemaphoredDelegatingExecutor? What about EtagChecksum? Or all the new fs.impl 
stuff I'm adding in HADOOP-15229 for async IO?)

bq.  If I magically had the time I would explore making hadoop-aws more 
independent instead of more dependent.

The other aspect of a shaded cloud module is that it would also be able to 
hide transitive dependencies. 
You've avoided seeing that problem because you already had SLF4J, commons-*, 
etc. on the CP, at compatible versions, and because we've switched to the 
shaded AWS SDK, so you don't have to worry about the jackson and httpclient 
problems, which are complex enough that we are going to have to stop making 
Hadoop 2.7.x releases. But hadoop-azure does pass on its unshaded 
dependencies, as do some others - and I do get to deal with those problems. If 
we can produce a single JAR - "depend on this and you won't have classpath 
problems" - people will be happy. It is the classpath that tends to be the 
most traumatic.

> hadoop-aws does not work with hadoop-client-api
> ---
>
> Key: HADOOP-16080
> URL: https://issues.apache.org/jira/browse/HADOOP-16080
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Keith Turner
>Priority: Major
>
> I attempted to use Accumulo and S3a with the following jars on the classpath.
>  * hadoop-client-api-3.1.1.jar
>  * hadoop-client-runtime-3.1.1.jar
>  * hadoop-aws-3.1.1.jar
> This failed with the following exception.
> {noformat}
> Exception in thread "init" java.lang.NoSuchMethodError: 
> org.apache.hadoop.util.SemaphoredDelegatingExecutor.<init>(Lcom/google/common/util/concurrent/ListeningExecutorService;IZ)V
> at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:769)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1149)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1108)
> at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1413)
> at 
> org.apache.accumulo.server.fs.VolumeManagerImpl.createNewFile(VolumeManagerImpl.java:184)
> at 
> org.apache.accumulo.server.init.Initialize.initDirs(Initialize.java:479)
> at 
> org.apache.accumulo.server.init.Initialize.initFileSystem(Initialize.java:487)
> at 
> org.apache.accumulo.server.init.Initialize.initialize(Initialize.java:370)
> at org.apache.accumulo.server.init.Initialize.doInit(Initialize.java:348)
> at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:967)
> at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:129)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> The problem is that {{S3AFileSystem.create()}} looks for 
> {{SemaphoredDelegatingExecutor(com.google.common.util.concurrent.ListeningExecutorService)}}
>  which does not exist in hadoop-client-api-3.1.1.jar.  What does exist is 
> {{SemaphoredDelegatingExecutor(org.apache.hadoop.shaded.com.google.common.util.concurrent.ListeningExecutorService)}}.
> To work around this issue I created a version of hadoop-aws-3.1.1.jar that 
> relocated references to Guava.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Commented] (HADOOP-11223) Offer a read-only conf alternative to new Configuration()

2019-01-29 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-11223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755268#comment-16755268
 ] 

Steve Loughran commented on HADOOP-11223:
-

One aspect of config mutability is that if you load a resource through 
{{addDefaultResource}} it can force in new values underneath the iterators.

For real immutability, that is going to have to be locked out. Which isn't a 
bad thing; it just needs to be addressed, to stop someone making changes in 
something which is now viewed as immutable.
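
A minimal sketch of what a locked-down view could look like, assuming a 
copy-on-construct wrapper (the class name is illustrative, and a real patch 
would have to block every mutation path, including the static 
{{addDefaultResource}} hook):

{noformat}
import org.apache.hadoop.conf.Configuration;

// Sketch only: a Configuration whose mutators are locked out. Most typed
// setters in Configuration funnel through set(String, String), but
// addResource()/unset() need blocking too, and addDefaultResource() is
// static, so truly freezing the defaults needs more than this.
public class ImmutableConfiguration extends Configuration {

  public ImmutableConfiguration(Configuration base) {
    super(base);  // copy once; refuse all later changes
  }

  @Override
  public void set(String name, String value) {
    throw new UnsupportedOperationException("read-only configuration");
  }

  @Override
  public void unset(String name) {
    throw new UnsupportedOperationException("read-only configuration");
  }

  @Override
  public void addResource(String name) {
    throw new UnsupportedOperationException("read-only configuration");
  }
}
{noformat}

A {{Configuration.getDefault()}} could then hand out one shared instance of 
such a class instead of re-parsing the XML in every static block.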

> Offer a read-only conf alternative to new Configuration()
> -
>
> Key: HADOOP-11223
> URL: https://issues.apache.org/jira/browse/HADOOP-11223
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Gopal V
>Assignee: Varun Saxena
>Priority: Major
>  Labels: Performance
> Attachments: HADOOP-11223.001.patch
>
>
> new Configuration() is called from several static blocks across Hadoop.
> This is incredibly inefficient, since each one of those involves primarily 
> XML parsing at a point where the JIT won't be triggered & interpreter mode is 
> essentially forced on the JVM.
> The alternate solution would be to offer a {{Configuration::getDefault()}} 
> alternative which disallows any modifications.
> At the very least, such a method would need to be called from 
> # org.apache.hadoop.io.nativeio.NativeIO::<clinit>()
> # org.apache.hadoop.security.SecurityUtil::<clinit>()
> # org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider::<clinit>()



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16085) S3Guard: use object version to protect against inconsistent read after replace/overwrite

2019-01-29 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755248#comment-16755248
 ] 

Gabor Bota edited comment on HADOOP-16085 at 1/29/19 6:06 PM:
--

Hi [~ben.roling], 

I think I know about the problem you are talking about. We know about this 
issue, but we are trying to solve it from the other end. 

In a nutshell: when using S3 with S3Guard, ALL of the clients using 
the S3Guarded bucket should use the same Dynamo table. If you don't use 
S3Guard, or you use the S3 bucket with another dynamo table, and you do 
modifications and not just reads, that's an {{out of band operation}}. We don't 
support this now.

We created the following Jira for this: HADOOP-15999, and I'm currently working 
on it - to give {{authoritative mode = true}} the additional meaning that you 
MUST use the same dynamo table, while with {{authoritative mode = false}} S3 
will be queried all the time for changes.
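
(For reference, assuming the switch in question is the existing 
{{fs.s3a.metadatastore.authoritative}} property, the setting would look like 
this in core-site.xml:)

{noformat}
<!-- Sketch: the S3Guard authoritative flag, default false. Under the
     HADOOP-15999 semantics, true would additionally require that all
     clients share the same Dynamo table. -->
<property>
  <name>fs.s3a.metadatastore.authoritative</name>
  <value>false</value>
</property>
{noformat}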


was (Author: gabor.bota):
Hi [~ben.roling], 

I think I know about the problem you are talking about. We know about this 
issue, but we are trying to solve it from the other end. 

In a nutshell: when using S3 with S3Guard, ALL of the clients using 
the S3Guarded bucket should use the same Dynamo table. If you don't use 
S3Guard, or you use the S3 bucket with another dynamo table, and you do 
modifications and not just reads, that's an "out of band operation". We don't 
support this now.

We created the following Jira for this: HADOOP-15999, and I'm currently working 
on it to fix this.

> S3Guard: use object version to protect against inconsistent read after 
> replace/overwrite
> 
>
> Key: HADOOP-16085
> URL: https://issues.apache.org/jira/browse/HADOOP-16085
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Ben Roling
>Priority: Major
>
> Currently S3Guard doesn't track S3 object versions.  If a file is written in 
> S3A with S3Guard and then subsequently overwritten, there is no protection 
> against the next reader seeing the old version of the file instead of the new 
> one.
> It seems like the S3Guard metadata could track the S3 object version.  When a 
> file is created or updated, the object version could be written to the 
> S3Guard metadata.  When a file is read, the read out of S3 could be performed 
> by object version, ensuring the correct version is retrieved.
> I don't have a lot of direct experience with this yet, but this is my 
> impression from looking through the code.  My organization is looking to 
> shift some datasets stored in HDFS over to S3 and is concerned about this 
> potential issue as there are some cases in our codebase that would do an 
> overwrite.
> I imagine this idea may have been considered before but I couldn't quite 
> track down any JIRAs discussing it.  If there is one, feel free to close this 
> with a reference to it.
> Am I understanding things correctly?  Is this idea feasible?  Any feedback 
> that could be provided would be appreciated.  We may consider crafting a 
> patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16085) S3Guard: use object version to protect against inconsistent read after replace/overwrite

2019-01-29 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755248#comment-16755248
 ] 

Gabor Bota edited comment on HADOOP-16085 at 1/29/19 6:02 PM:
--

Hi [~ben.roling], 

I think I know about the problem you are talking about. We know about this 
issue, but we are trying to solve it from the other end. 

In a nutshell: when using S3 with S3Guard, ALL of the clients using 
the S3Guarded bucket should use the same Dynamo table. If you don't use 
S3Guard, or you use the S3 bucket with another dynamo table, and you do 
modifications and not just reads, that's an "out of band operation". We don't 
support this now.

We created the following Jira for this: HADOOP-15999, and I'm currently working 
on it to fix this.


was (Author: gabor.bota):
Hi [~ben.roling], 

I think I know about the problem you are talking about. We know about this 
issue, but we are trying to solve it from the other end. 

In a nutshell: when using S3 with S3Guard, ALL of the clients using 
the S3Guarded bucket should use the same Dynamo table. If you don't use 
S3Guard, or you use the S3 bucket with another dynamo table, and you do 
modifications and not just reads, that's an "out of band operation". We don't 
support this now.

We created the following Jira for this: HADOOP-15999, and I'm currently working 
on it to fix this.

> S3Guard: use object version to protect against inconsistent read after 
> replace/overwrite
> 
>
> Key: HADOOP-16085
> URL: https://issues.apache.org/jira/browse/HADOOP-16085
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Ben Roling
>Priority: Major
>
> Currently S3Guard doesn't track S3 object versions.  If a file is written in 
> S3A with S3Guard and then subsequently overwritten, there is no protection 
> against the next reader seeing the old version of the file instead of the new 
> one.
> It seems like the S3Guard metadata could track the S3 object version.  When a 
> file is created or updated, the object version could be written to the 
> S3Guard metadata.  When a file is read, the read out of S3 could be performed 
> by object version, ensuring the correct version is retrieved.
> I don't have a lot of direct experience with this yet, but this is my 
> impression from looking through the code.  My organization is looking to 
> shift some datasets stored in HDFS over to S3 and is concerned about this 
> potential issue as there are some cases in our codebase that would do an 
> overwrite.
> I imagine this idea may have been considered before but I couldn't quite 
> track down any JIRAs discussing it.  If there is one, feel free to close this 
> with a reference to it.
> Am I understanding things correctly?  Is this idea feasible?  Any feedback 
> that could be provided would be appreciated.  We may consider crafting a 
> patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16085) S3Guard: use object version to protect against inconsistent read after replace/overwrite

2019-01-29 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755248#comment-16755248
 ] 

Gabor Bota commented on HADOOP-16085:
-

Hi [~ben.roling], 

I think I know about the problem you are talking about. We know about this 
issue, but we are trying to solve it from the other end. 

In a nutshell: when using S3 with S3Guard, ALL of the clients using 
the S3Guarded bucket should use the same Dynamo table. If you don't use 
S3Guard, or you use the S3 bucket with another dynamo table, and you do 
modifications and not just reads, that's an "out of band operation". We don't 
support this now.

We created the following Jira for this: HADOOP-15999, and I'm currently working 
on it to fix this.

> S3Guard: use object version to protect against inconsistent read after 
> replace/overwrite
> 
>
> Key: HADOOP-16085
> URL: https://issues.apache.org/jira/browse/HADOOP-16085
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Ben Roling
>Priority: Major
>
> Currently S3Guard doesn't track S3 object versions.  If a file is written in 
> S3A with S3Guard and then subsequently overwritten, there is no protection 
> against the next reader seeing the old version of the file instead of the new 
> one.
> It seems like the S3Guard metadata could track the S3 object version.  When a 
> file is created or updated, the object version could be written to the 
> S3Guard metadata.  When a file is read, the read out of S3 could be performed 
> by object version, ensuring the correct version is retrieved.
> I don't have a lot of direct experience with this yet, but this is my 
> impression from looking through the code.  My organization is looking to 
> shift some datasets stored in HDFS over to S3 and is concerned about this 
> potential issue as there are some cases in our codebase that would do an 
> overwrite.
> I imagine this idea may have been considered before but I couldn't quite 
> track down any JIRAs discussing it.  If there is one, feel free to close this 
> with a reference to it.
> Am I understanding things correctly?  Is this idea feasible?  Any feedback 
> that could be provided would be appreciated.  We may consider crafting a 
> patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16087) Backport HADOOP-15549 to branch-3.0

2019-01-29 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16087:
-
Attachment: HADOOP-16087-branch-3.0-002.patch

> Backport HADOOP-15549 to branch-3.0
> ---
>
> Key: HADOOP-16087
> URL: https://issues.apache.org/jira/browse/HADOOP-16087
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.2
>Reporter: Yuming Wang
>Priority: Major
> Attachments: HADOOP-16087-branch-3.0-001.patch, 
> HADOOP-16087-branch-3.0-002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16086) Backport HADOOP-15549 to branch-3.1

2019-01-29 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16086:
-
Attachment: HADOOP-16086-branch-3.1-002.patch

> Backport HADOOP-15549 to branch-3.1
> ---
>
> Key: HADOOP-16086
> URL: https://issues.apache.org/jira/browse/HADOOP-16086
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.2
>Reporter: Yuming Wang
>Priority: Major
> Attachments: HADOOP-16086-branch-3.1-001.patch, 
> HADOOP-16086-branch-3.1-002.patch
>
>
> Backport HADOOP-15549 to branch-3.1 to fix IllegalArgumentException:
> {noformat}
> 02:44:34.707 ERROR org.apache.hadoop.hive.ql.exec.Task: Job Submission failed 
> with exception 'java.io.IOException(Cannot initialize Cluster. Please check 
> your configuration for mapreduce.framework.name and the correspond server 
> addresses.)'
> java.io.IOException: Cannot initialize Cluster. Please check your 
> configuration for mapreduce.framework.name and the correspond server 
> addresses.
>   at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:116)
>   at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:109)
>   at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:102)
>   at org.apache.hadoop.mapred.JobClient.init(JobClient.java:475)
>   at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:454)
>   at 
> org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:369)
>   at 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:151)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2183)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1839)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1526)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$runHive$1(HiveClientImpl.scala:730)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:283)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:221)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:220)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:266)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.runHive(HiveClientImpl.scala:719)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.runSqlHive(HiveClientImpl.scala:709)
>   at 
> org.apache.spark.sql.hive.StatisticsSuite.createNonPartitionedTable(StatisticsSuite.scala:719)
>   at 
> org.apache.spark.sql.hive.StatisticsSuite.$anonfun$testAlterTableProperties$2(StatisticsSuite.scala:822)
>   at 
> org.apache.spark.sql.test.SQLTestUtilsBase.withTable(SQLTestUtils.scala:284)
>   at 
> org.apache.spark.sql.test.SQLTestUtilsBase.withTable$(SQLTestUtils.scala:283)
>   at 
> org.apache.spark.sql.StatisticsCollectionTestBase.withTable(StatisticsCollectionTestBase.scala:40)
>   at 
> org.apache.spark.sql.hive.StatisticsSuite.$anonfun$testAlterTableProperties$1(StatisticsSuite.scala:821)
>   at 
> org.apache.spark.sql.hive.StatisticsSuite.$anonfun$testAlterTableProperties$1$adapted(StatisticsSuite.scala:820)
>   at scala.collection.immutable.List.foreach(List.scala:392)
>   at 
> org.apache.spark.sql.hive.StatisticsSuite.testAlterTableProperties(StatisticsSuite.scala:820)
>   at 
> org.apache.spark.sql.hive.StatisticsSuite.$anonfun$new$70(StatisticsSuite.scala:851)
>   at 
> scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
>   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
>   at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
>   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
>   at org.scalatest.Transformer.apply(Transformer.scala:22)
>   at org.scalatest.Transformer.apply(Transformer.scala:20)
>   at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
>   at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:104)
>   at 
> org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
>   at org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
>   at org.scalatest.SuperEngine.runTestImpl(Engine.scala:289)
>   at org.scalatest.FunSuiteLike.runTest(FunSuiteLike.scala:196)
>   at org.scalatest.FunSuiteLike.runTest$(FunSuiteLike.scala:178)
>   at 

[jira] [Updated] (HADOOP-16087) Backport HADOOP-15549 to branch-3.0

2019-01-29 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16087:
-
Attachment: HADOOP-16086-branch-3.1-002.patch

> Backport HADOOP-15549 to branch-3.0
> ---
>
> Key: HADOOP-16087
> URL: https://issues.apache.org/jira/browse/HADOOP-16087
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.2
>Reporter: Yuming Wang
>Priority: Major
> Attachments: HADOOP-16087-branch-3.0-001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16087) Backport HADOOP-15549 to branch-3.0

2019-01-29 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16087:
-
Attachment: (was: HADOOP-16086-branch-3.1-002.patch)

> Backport HADOOP-15549 to branch-3.0
> ---
>
> Key: HADOOP-16087
> URL: https://issues.apache.org/jira/browse/HADOOP-16087
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.2
>Reporter: Yuming Wang
>Priority: Major
> Attachments: HADOOP-16087-branch-3.0-001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16087) Backport HADOOP-15549 to branch-3.0

2019-01-29 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16087:
-
Status: Open  (was: Patch Available)

> Backport HADOOP-15549 to branch-3.0
> ---
>
> Key: HADOOP-16087
> URL: https://issues.apache.org/jira/browse/HADOOP-16087
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.2
>Reporter: Yuming Wang
>Priority: Major
> Attachments: HADOOP-16087-branch-3.0-001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16087) Backport HADOOP-15549 to branch-3.0

2019-01-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755210#comment-16755210
 ] 

Hadoop QA commented on HADOOP-16087:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HADOOP-16087 does not apply to branch-3.0. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-16087 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956751/HADOOP-16087-branch-3.0-001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15867/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Backport HADOOP-15549 to branch-3.0
> ---
>
> Key: HADOOP-16087
> URL: https://issues.apache.org/jira/browse/HADOOP-16087
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.2
>Reporter: Yuming Wang
>Priority: Major
> Attachments: HADOOP-16087-branch-3.0-001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16086) Backport HADOOP-15549 to branch-3.1

2019-01-29 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16086:
-
Description: 
Backport HADOOP-15549 to branch-3.1 to fix IllegalArgumentException:
{noformat}
02:44:34.707 ERROR org.apache.hadoop.hive.ql.exec.Task: Job Submission failed 
with exception 'java.io.IOException(Cannot initialize Cluster. Please check 
your configuration for mapreduce.framework.name and the correspond server 
addresses.)'
java.io.IOException: Cannot initialize Cluster. Please check your configuration 
for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:116)
	at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:109)
	at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:102)
	at org.apache.hadoop.mapred.JobClient.init(JobClient.java:475)
	at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:454)
at 
org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:369)
at 
org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:151)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199)
at 
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2183)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1839)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1526)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$runHive$1(HiveClientImpl.scala:730)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:283)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:221)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:220)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:266)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.runHive(HiveClientImpl.scala:719)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.runSqlHive(HiveClientImpl.scala:709)
at 
org.apache.spark.sql.hive.StatisticsSuite.createNonPartitionedTable(StatisticsSuite.scala:719)
at 
org.apache.spark.sql.hive.StatisticsSuite.$anonfun$testAlterTableProperties$2(StatisticsSuite.scala:822)
at 
org.apache.spark.sql.test.SQLTestUtilsBase.withTable(SQLTestUtils.scala:284)
at 
org.apache.spark.sql.test.SQLTestUtilsBase.withTable$(SQLTestUtils.scala:283)
at 
org.apache.spark.sql.StatisticsCollectionTestBase.withTable(StatisticsCollectionTestBase.scala:40)
at 
org.apache.spark.sql.hive.StatisticsSuite.$anonfun$testAlterTableProperties$1(StatisticsSuite.scala:821)
at 
org.apache.spark.sql.hive.StatisticsSuite.$anonfun$testAlterTableProperties$1$adapted(StatisticsSuite.scala:820)
at scala.collection.immutable.List.foreach(List.scala:392)
at 
org.apache.spark.sql.hive.StatisticsSuite.testAlterTableProperties(StatisticsSuite.scala:820)
at 
org.apache.spark.sql.hive.StatisticsSuite.$anonfun$new$70(StatisticsSuite.scala:851)
at 
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
at org.scalatest.Transformer.apply(Transformer.scala:22)
at org.scalatest.Transformer.apply(Transformer.scala:20)
at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:104)
at 
org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
at org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
at org.scalatest.SuperEngine.runTestImpl(Engine.scala:289)
at org.scalatest.FunSuiteLike.runTest(FunSuiteLike.scala:196)
at org.scalatest.FunSuiteLike.runTest$(FunSuiteLike.scala:178)
at org.scalatest.FunSuite.runTest(FunSuite.scala:1560)
at 
org.scalatest.FunSuiteLike.$anonfun$runTests$1(FunSuiteLike.scala:229)
at 
org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:396)
at scala.collection.immutable.List.foreach(List.scala:392)
at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:384)
at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:379)
at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:461)
at org.scalatest.FunSuiteLike.runTests(FunSuiteLike.scala:229)
at 

[jira] [Created] (HADOOP-16087) Backport HADOOP-15549 to branch-3.0

2019-01-29 Thread Yuming Wang (JIRA)
Yuming Wang created HADOOP-16087:


 Summary: Backport HADOOP-15549 to branch-3.0
 Key: HADOOP-16087
 URL: https://issues.apache.org/jira/browse/HADOOP-16087
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 3.0.2
Reporter: Yuming Wang
 Attachments: HADOOP-16087-branch-3.1-001.patch





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16086) Backport HADOOP-15549 to branch-3.1

2019-01-29 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16086:
-
Attachment: HADOOP-16086-branch-3.1-001.patch
Status: Patch Available  (was: Open)

> Backport HADOOP-15549 to branch-3.1
> ---
>
> Key: HADOOP-16086
> URL: https://issues.apache.org/jira/browse/HADOOP-16086
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.2
>Reporter: Yuming Wang
>Priority: Major
> Attachments: HADOOP-16086-branch-3.1-001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16087) Backport HADOOP-15549 to branch-3.0

2019-01-29 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16087:
-
Attachment: HADOOP-16087-branch-3.0-001.patch
Status: Patch Available  (was: Open)

> Backport HADOOP-15549 to branch-3.0
> ---
>
> Key: HADOOP-16087
> URL: https://issues.apache.org/jira/browse/HADOOP-16087
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.2
>Reporter: Yuming Wang
>Priority: Major
> Attachments: HADOOP-16087-branch-3.0-001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16087) Backport HADOOP-15549 to branch-3.0

2019-01-29 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16087:
-
Attachment: (was: HADOOP-16087-branch-3.1-001.patch)

> Backport HADOOP-15549 to branch-3.0
> ---
>
> Key: HADOOP-16087
> URL: https://issues.apache.org/jira/browse/HADOOP-16087
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.2
>Reporter: Yuming Wang
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16087) Backport HADOOP-15549 to branch-3.0

2019-01-29 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16087:
-
Attachment: HADOOP-16087-branch-3.1-001.patch
Status: Patch Available  (was: Open)

> Backport HADOOP-15549 to branch-3.0
> ---
>
> Key: HADOOP-16087
> URL: https://issues.apache.org/jira/browse/HADOOP-16087
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.2
>Reporter: Yuming Wang
>Priority: Major
> Attachments: HADOOP-16087-branch-3.1-001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16086) Backport HADOOP-15549 to branch-3.1

2019-01-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755199#comment-16755199
 ] 

Hadoop QA commented on HADOOP-16086:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HADOOP-16086 does not apply to branch-3.1. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-16086 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956749/HADOOP-16086-branch-3.1-001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15866/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Backport HADOOP-15549 to branch-3.1
> ---
>
> Key: HADOOP-16086
> URL: https://issues.apache.org/jira/browse/HADOOP-16086
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.2
>Reporter: Yuming Wang
>Priority: Major
> Attachments: HADOOP-16086-branch-3.1-001.patch
>
>
> Backport HADOOP-15549 to branch-3.1 to fix IllegalArgumentException:
> {noformat}
> 02:44:34.707 ERROR org.apache.hadoop.hive.ql.exec.Task: Job Submission failed 
> with exception 'java.io.IOException(Cannot initialize Cluster. Please check 
> your configuration for mapreduce.framework.name and the correspond server 
> addresses.)'
> java.io.IOException: Cannot initialize Cluster. Please check your 
> configuration for mapreduce.framework.name and the correspond server 
> addresses.
>   at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:116)
>   at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:109)
>   at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:102)
>   at org.apache.hadoop.mapred.JobClient.init(JobClient.java:475)
>   at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:454)
>   at 
> org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:369)
>   at 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:151)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2183)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1839)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1526)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$runHive$1(HiveClientImpl.scala:730)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:283)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:221)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:220)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:266)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.runHive(HiveClientImpl.scala:719)
>   at 
> org.apache.spark.sql.hive.client.HiveClientImpl.runSqlHive(HiveClientImpl.scala:709)
>   at 
> org.apache.spark.sql.hive.StatisticsSuite.createNonPartitionedTable(StatisticsSuite.scala:719)
>   at 
> org.apache.spark.sql.hive.StatisticsSuite.$anonfun$testAlterTableProperties$2(StatisticsSuite.scala:822)
>   at 
> org.apache.spark.sql.test.SQLTestUtilsBase.withTable(SQLTestUtils.scala:284)
>   at 
> org.apache.spark.sql.test.SQLTestUtilsBase.withTable$(SQLTestUtils.scala:283)
>   at 
> org.apache.spark.sql.StatisticsCollectionTestBase.withTable(StatisticsCollectionTestBase.scala:40)
>   at 
> org.apache.spark.sql.hive.StatisticsSuite.$anonfun$testAlterTableProperties$1(StatisticsSuite.scala:821)
>   at 
> org.apache.spark.sql.hive.StatisticsSuite.$anonfun$testAlterTableProperties$1$adapted(StatisticsSuite.scala:820)
>   at scala.collection.immutable.List.foreach(List.scala:392)
>   at 
> org.apache.spark.sql.hive.StatisticsSuite.testAlterTableProperties(StatisticsSuite.scala:820)
>   at 
> org.apache.spark.sql.hive.StatisticsSuite.$anonfun$new$70(StatisticsSuite.scala:851)
>   at 
> scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
>   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
>   at 

[jira] [Updated] (HADOOP-16086) Backport HADOOP-15549 to branch-3.1

2019-01-29 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16086:
-
Description: Backport branch-3.1

> Backport HADOOP-15549 to branch-3.1
> ---
>
> Key: HADOOP-16086
> URL: https://issues.apache.org/jira/browse/HADOOP-16086
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.2
>Reporter: Yuming Wang
>Priority: Major
> Attachments: HADOOP-16086-branch-3.1-001.patch
>
>
> Backport branch-3.1



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16086) Backport HADOOP-15549 to branch-3.1

2019-01-29 Thread Yuming Wang (JIRA)
Yuming Wang created HADOOP-16086:


 Summary: Backport HADOOP-15549 to branch-3.1
 Key: HADOOP-16086
 URL: https://issues.apache.org/jira/browse/HADOOP-16086
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 3.0.2
Reporter: Yuming Wang






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16084) Fix the comment for getClass in Configuration

2019-01-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755173#comment-16755173
 ] 

Hadoop QA commented on HADOOP-16084:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
58s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}100m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16084 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956661/HDFS-14239.000.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 361b8eb9c157 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5d578d0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15864/testReport/ |
| Max. process+thread count | 1347 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15864/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Fix the comment for getClass 

[jira] [Comment Edited] (HADOOP-16080) hadoop-aws does not work with hadoop-client-api

2019-01-29 Thread Keith Turner (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755146#comment-16755146
 ] 

Keith Turner edited comment on HADOOP-16080 at 1/29/19 4:07 PM:


> Do you want to take on the challenge of a hadoop-cloud-storage-shaded 
> artifact?

No, I am quite busy and HADOOP-15387 feels like the wrong direction.  The 
envisioned hadoop-cloudstorage artifact seems misaligned with the communities 
and dependencies.  It seems a better structure would be for hadoop-aws to be an 
independent artifact that only uses public/stable hadoop APIs. I took a look at 
SemaphoredDelegatingExecutor and noticed that it is marked 
InterfaceAudience.Private, so it seems like hadoop-aws should just not use it.  
However, maybe it's not feasible for hadoop-aws to only use public/stable APIs. 
If I magically had the time I would explore making hadoop-aws more independent 
instead of more dependent. 




was (Author: kturner):
> Do you want to take on the challenge of a hadoop-cloud-storage-shaded 
> artifact?

No, I am quite busy and HADOOP-15387 feels like the wrong direction.  The 
envisioned hadoop-cloudstorage artifact seems misaligned with the communities 
and dependencies.  It seems a better structure would be for hadoop-aws to be an 
independent artifact that only uses public/stable hadoop APIs. I took a look at 
SemaphoredDelegatingExecutor and noticed that it is marked 
InterfaceAudience.Private, so it seems like hadoop-aws should just not use it.  
However, maybe it's not feasible for hadoop-aws to only use public/stable APIs. 
If I magically had the time I would explore making hadoop-aws more 
independent instead of more dependent. 



> hadoop-aws does not work with hadoop-client-api
> ---
>
> Key: HADOOP-16080
> URL: https://issues.apache.org/jira/browse/HADOOP-16080
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Keith Turner
>Priority: Major
>
> I attempted to use Accumulo and S3a with the following jars on the classpath.
>  * hadoop-client-api-3.1.1.jar
>  * hadoop-client-runtime-3.1.1.jar
>  * hadoop-aws-3.1.1.jar
> This failed with the following exception.
> {noformat}
> Exception in thread "init" java.lang.NoSuchMethodError: 
> org.apache.hadoop.util.SemaphoredDelegatingExecutor.<init>(Lcom/google/common/util/concurrent/ListeningExecutorService;IZ)V
> at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:769)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1149)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1108)
> at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1413)
> at 
> org.apache.accumulo.server.fs.VolumeManagerImpl.createNewFile(VolumeManagerImpl.java:184)
> at 
> org.apache.accumulo.server.init.Initialize.initDirs(Initialize.java:479)
> at 
> org.apache.accumulo.server.init.Initialize.initFileSystem(Initialize.java:487)
> at 
> org.apache.accumulo.server.init.Initialize.initialize(Initialize.java:370)
> at org.apache.accumulo.server.init.Initialize.doInit(Initialize.java:348)
> at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:967)
> at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:129)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> The problem is that {{S3AFileSystem.create()}} looks for 
> {{SemaphoredDelegatingExecutor(com.google.common.util.concurrent.ListeningExecutorService)}}
>  which does not exist in hadoop-client-api-3.1.1.jar.  What does exist is 
> {{SemaphoredDelegatingExecutor(org.apache.hadoop.shaded.com.google.common.util.concurrent.ListeningExecutorService)}}.
> To work around this issue I created a version of hadoop-aws-3.1.1.jar that 
> relocated references to Guava.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16080) hadoop-aws does not work with hadoop-client-api

2019-01-29 Thread Keith Turner (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755146#comment-16755146
 ] 

Keith Turner commented on HADOOP-16080:
---

> Do you want to take on the challenge of a hadoop-cloud-storage-shaded 
> artifact?

No, I am quite busy and HADOOP-15387 feels like the wrong direction.  The 
envisioned hadoop-cloudstorage artifact seems misaligned with the communities 
and dependencies.  It seems a better structure would be for hadoop-aws to be an 
independent artifact that only uses public/stable hadoop APIs. I took a look at 
SemaphoredDelegatingExecutor and noticed that it is marked 
InterfaceAudience.Private, so it seems like hadoop-aws should just not use it.  
However, maybe it's not feasible for hadoop-aws to only use public/stable APIs. 
If I magically had the time I would explore making hadoop-aws more 
independent instead of more dependent. 



> hadoop-aws does not work with hadoop-client-api
> ---
>
> Key: HADOOP-16080
> URL: https://issues.apache.org/jira/browse/HADOOP-16080
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Keith Turner
>Priority: Major
>
> I attempted to use Accumulo and S3a with the following jars on the classpath.
>  * hadoop-client-api-3.1.1.jar
>  * hadoop-client-runtime-3.1.1.jar
>  * hadoop-aws-3.1.1.jar
> This failed with the following exception.
> {noformat}
> Exception in thread "init" java.lang.NoSuchMethodError: 
> org.apache.hadoop.util.SemaphoredDelegatingExecutor.<init>(Lcom/google/common/util/concurrent/ListeningExecutorService;IZ)V
> at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:769)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1149)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1108)
> at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1413)
> at 
> org.apache.accumulo.server.fs.VolumeManagerImpl.createNewFile(VolumeManagerImpl.java:184)
> at 
> org.apache.accumulo.server.init.Initialize.initDirs(Initialize.java:479)
> at 
> org.apache.accumulo.server.init.Initialize.initFileSystem(Initialize.java:487)
> at 
> org.apache.accumulo.server.init.Initialize.initialize(Initialize.java:370)
> at org.apache.accumulo.server.init.Initialize.doInit(Initialize.java:348)
> at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:967)
> at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:129)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> The problem is that {{S3AFileSystem.create()}} looks for 
> {{SemaphoredDelegatingExecutor(com.google.common.util.concurrent.ListeningExecutorService)}}
>  which does not exist in hadoop-client-api-3.1.1.jar.  What does exist is 
> {{SemaphoredDelegatingExecutor(org.apache.hadoop.shaded.com.google.common.util.concurrent.ListeningExecutorService)}}.
> To work around this issue I created a version of hadoop-aws-3.1.1.jar that 
> relocated references to Guava.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16077) Add an option in ls command to include storage policy

2019-01-29 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HADOOP-16077:
--
Attachment: HADOOP-16077-04.patch

> Add an option in ls command to include storage policy
> -
>
> Key: HADOOP-16077
> URL: https://issues.apache.org/jira/browse/HADOOP-16077
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 3.3.0
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HADOOP-16077-01.patch, HADOOP-16077-02.patch, 
> HADOOP-16077-03.patch, HADOOP-16077-04.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16082) FsShell ls: Add option -i to print inode id

2019-01-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755124#comment-16755124
 ] 

Hadoop QA commented on HADOOP-16082:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 22m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 51s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}121m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}255m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.cli.TestCLI |
|   | hadoop.hdfs.TestClientMetrics |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16082 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956693/HADOOP-16082.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d9da35e51ae6 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5d578d0 |
| maven | version: Apache Maven 3.3.9 |
| Default 

[jira] [Created] (HADOOP-16085) S3Guard: use object version to protect against inconsistent read after replace/overwrite

2019-01-29 Thread Ben Roling (JIRA)
Ben Roling created HADOOP-16085:
---

 Summary: S3Guard: use object version to protect against 
inconsistent read after replace/overwrite
 Key: HADOOP-16085
 URL: https://issues.apache.org/jira/browse/HADOOP-16085
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ben Roling


Currently S3Guard doesn't track S3 object versions.  If a file is written in 
S3A with S3Guard and then subsequently overwritten, there is no protection 
against the next reader seeing the old version of the file instead of the new 
one.

It seems like the S3Guard metadata could track the S3 object version.  When a 
file is created or updated, the object version could be written to the S3Guard 
metadata.  When a file is read, the read out of S3 could be performed by object 
version, ensuring the correct version is retrieved.

I don't have a lot of direct experience with this yet, but this is my 
impression from looking through the code.  My organization is looking to shift 
some datasets stored in HDFS over to S3 and is concerned about this potential 
issue as there are some cases in our codebase that would do an overwrite.

I imagine this idea may have been considered before but I couldn't quite track 
down any JIRAs discussing it.  If there is one, feel free to close this with a 
reference to it.

Am I understanding things correctly?  Is this idea feasible?  Any feedback that 
could be provided would be appreciated.  We may consider crafting a patch.
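
To sketch the SDK-level shape of the idea (the AWS SDK v1 calls below are real; 
the wrapper class and where S3Guard would persist the version id are 
hypothetical glue, which is exactly the open design question):

{noformat}
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.PutObjectResult;
import com.amazonaws.services.s3.model.S3Object;

import java.io.File;

// Sketch only: on write, capture the version id of the object just written
// (to be stored in the S3Guard metadata); on read, pin the GET to that
// version so an overwritten object can never be served stale.
public class VersionPinnedStore {
  private final AmazonS3 s3;

  public VersionPinnedStore(AmazonS3 s3) {
    this.s3 = s3;
  }

  /** Write, returning the version id to record in the metadata store. */
  public String write(String bucket, String key, File data) {
    PutObjectResult result = s3.putObject(bucket, key, data);
    return result.getVersionId();  // non-null only on versioned buckets
  }

  /** Read exactly the version recorded in the metadata store. */
  public S3Object read(String bucket, String key, String versionId) {
    return s3.getObject(new GetObjectRequest(bucket, key, versionId));
  }
}
{noformat}

One caveat: this only helps on buckets with versioning enabled; on an 
unversioned bucket getVersionId() returns null and there is nothing to pin.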



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16085) S3Guard: use object version to protect against inconsistent read after replace/overwrite

2019-01-29 Thread Ben Roling (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Roling updated HADOOP-16085:

Component/s: fs/s3

> S3Guard: use object version to protect against inconsistent read after 
> replace/overwrite
> 
>
> Key: HADOOP-16085
> URL: https://issues.apache.org/jira/browse/HADOOP-16085
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Ben Roling
>Priority: Major
>
> Currently S3Guard doesn't track S3 object versions.  If a file is written in 
> S3A with S3Guard and then subsequently overwritten, there is no protection 
> against the next reader seeing the old version of the file instead of the new 
> one.
> It seems like the S3Guard metadata could track the S3 object version.  When a 
> file is created or updated, the object version could be written to the 
> S3Guard metadata.  When a file is read, the read out of S3 could be performed 
> by object version, ensuring the correct version is retrieved.
> I don't have a lot of direct experience with this yet, but this is my 
> impression from looking through the code.  My organization is looking to 
> shift some datasets stored in HDFS over to S3 and is concerned about this 
> potential issue as there are some cases in our codebase that would do an 
> overwrite.
> I imagine this idea may have been considered before but I couldn't quite 
> track down any JIRAs discussing it.  If there is one, feel free to close this 
> with a reference to it.
> Am I understanding things correctly?  Is this idea feasible?  Any feedback 
> that could be provided would be appreciated.  We may consider crafting a 
> patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16084) Fix the comment for getClass in Configuration

2019-01-29 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HADOOP-16084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned HADOOP-16084:


  Assignee: (was: Fengnan Li)
Issue Type: Bug  (was: Improvement)
   Key: HADOOP-16084  (was: HDFS-14239)
   Project: Hadoop Common  (was: Hadoop HDFS)

> Fix the comment for getClass in Configuration
> -
>
> Key: HADOOP-16084
> URL: https://issues.apache.org/jira/browse/HADOOP-16084
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Fengnan Li
>Priority: Trivial
> Attachments: HDFS-14239.000.patch
>
>
> The comment for the getClass method in org.apache.hadoop.conf.Configuration is 
> wrong; it uses the property name instead of the actual class name.
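
For reference, a minimal usage sketch of the method in question (the property key below is made up):

{code:java|title=Configuration.getClass usage sketch}
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;

public class GetClassSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // The first argument names a *property*; the property's value (not the
    // property name itself) is what gets resolved to a Class.
    conf.setClass("my.example.impl", ArrayList.class, List.class);
    Class<?> impl = conf.getClass("my.example.impl", LinkedList.class);
    System.out.println(impl);  // class java.util.ArrayList; LinkedList is
                               // only the default when the key is unset.
  }
}
{code}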



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15788) Improve Distcp for long-haul/cloud deployments

2019-01-29 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-15788:
---

Assignee: (was: Steve Loughran)

> Improve Distcp for long-haul/cloud deployments
> --
>
> Key: HADOOP-15788
> URL: https://issues.apache.org/jira/browse/HADOOP-15788
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Priority: Major
>
> There are a number of outstanding distcp options related to: extensibility, 
> failure reporting/cleanup, long-haul options, cloud performance.
> Hadoop 3.1 added some speedups; follow this up with others. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15549) Upgrade to commons-configuration 2.1 regresses task CPU consumption

2019-01-29 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755032#comment-16755032
 ] 

Steve Loughran commented on HADOOP-15549:
-

No reason why not, it just needs to go through the patch submission process. 
Create a new JIRA, "Backport HADOOP-15549 to branch-3.1", submit this PR with a 
-branch-3.1-001 suffix and see how it goes.

Sounds like it makes sense for branch-3.0 too.

> Upgrade to commons-configuration 2.1 regresses task CPU consumption
> ---
>
> Key: HADOOP-15549
> URL: https://issues.apache.org/jira/browse/HADOOP-15549
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 3.0.2
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: hadoop-15549.txt
>
>
> HADOOP-13660 upgraded from commons-configuration 1.x to 2.x. 
> commons-configuration is used when parsing the metrics configuration 
> properties file. The new builder API used in the new version apparently makes 
> use of a bunch of very bloated reflection and classloading nonsense to 
> achieve the same goal, and this results in a regression of >100ms of CPU time 
> as measured by a program which simply initializes DefaultMetricsSystem.
> This isn't a big deal for long-running daemons, but for MR tasks which might 
> only run a few seconds on poorly-tuned jobs, this can be noticeable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16079) Token.toString faulting if any token listed can't load.

2019-01-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16755021#comment-16755021
 ] 

Hadoop QA commented on HADOOP-16079:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
16s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 92m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16079 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956699/HADOOP-16079-001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 95093fbd210f 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5d578d0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15863/testReport/ |
| Max. process+thread count | 1415 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15863/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Token.toString faulting if 

[jira] [Updated] (HADOOP-16077) Add an option in ls command to include storage policy

2019-01-29 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HADOOP-16077:
--
Affects Version/s: 3.3.0
  Component/s: tools

> Add an option in ls command to include storage policy
> -
>
> Key: HADOOP-16077
> URL: https://issues.apache.org/jira/browse/HADOOP-16077
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 3.3.0
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HADOOP-16077-01.patch, HADOOP-16077-02.patch, 
> HADOOP-16077-03.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16077) Add an option in ls command to include storage policy

2019-01-29 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16754967#comment-16754967
 ] 

Steve Loughran commented on HADOOP-16077:
-

What happens if I ask to list both storage and erasure coding policies? 

Ls.java
* L240: Add blank line

FileSystemShell.md
* L248 remove blank line

Other than the need to support asking for both the -ec and -sp options at the 
same time (with tests!), LGTM.
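
For what it's worth, a rough sketch of parsing both flags together in the shell; this is not the actual Ls.java, and "ec"/"sp" as option names are assumptions taken from this discussion:

{code:java|title=Combined flag parsing sketch}
import java.util.Arrays;
import java.util.LinkedList;
import org.apache.hadoop.fs.shell.CommandFormat;

public class LsFlagsSketch {
  public static void main(String[] args) {
    // Accept both flags so -ec and -sp can be requested in one listing.
    CommandFormat cf = new CommandFormat(0, Integer.MAX_VALUE, "ec", "sp");
    LinkedList<String> argv =
        new LinkedList<>(Arrays.asList("-ec", "-sp", "/data"));
    cf.parse(argv);  // strips the recognized flags, leaves the paths behind
    boolean showEc = cf.getOpt("ec");
    boolean showSp = cf.getOpt("sp");
    // Both columns must then be emitted on the same listing line, with a
    // test asserting the combined output format stays stable.
    System.out.println("ec=" + showEc + " sp=" + showSp + " paths=" + argv);
  }
}
{code}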

> Add an option in ls command to include storage policy
> -
>
> Key: HADOOP-16077
> URL: https://issues.apache.org/jira/browse/HADOOP-16077
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HADOOP-16077-01.patch, HADOOP-16077-02.patch, 
> HADOOP-16077-03.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16082) FsShell ls: Add option -i to print inode id

2019-01-29 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16754979#comment-16754979
 ] 

Steve Loughran commented on HADOOP-16082:
-

going to conflict with HADOOP-16077; whichever goes in first will force the 
second to rebuild.

* needs changes in documentation
* does seem to address the issue in HADOOP-16077: handling of -ec -i

Tests need to include something for -ec and -i, including verifying that the 
current output of ls -ec *does not change at all*.

We have to consider the output of the current ls commands to be a public API 
parsed by other commands.

Now, a more fundamental issue: FileStatus is stable, marshallable as protobuf, 
has subclasses, and is regularly exchanged over the wire, such as when a client 
calls listFiles() or listStatus() on a remote server.

We cannot add a new field to it unless all existing clients from previous 
versions can handle it being added. HDFS-6984 covers the work the last time 
things changed. 

I would be really nervous about going anywhere near that class if it is just 
for a bit of convenience in listing things.

What about offering some other command that just prints out the toString() 
values of getFileStatus, and making sure that HdfsFileStatus always prints its 
fileID, which it already sets and marshals?
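
Something like this, perhaps (a sketch only, assuming the 3.x layout where HdfsFileStatus subclasses FileStatus and already carries the fileId):

{code:java|title=Sketch: surface fileId without touching FileStatus}
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;

public class FileIdSketch {
  // Print the HDFS-specific inode id only when the status object actually
  // carries one; every other filesystem keeps its current output.
  static String describe(FileStatus st) {
    if (st instanceof HdfsFileStatus) {
      return ((HdfsFileStatus) st).getFileId() + " " + st;
    }
    return st.toString();
  }
}
{code}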




> FsShell ls: Add option -i to print inode id
> ---
>
> Key: HADOOP-16082
> URL: https://issues.apache.org/jira/browse/HADOOP-16082
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-16082.001.patch
>
>
> When debugging the FSImage corruption issue, I often need to know a file's or 
> directory's inode id. At this moment, the only way to do that is to use OIV 
> tool to dump the FSImage and look up the filename, which is very inefficient.
> Here I propose adding option "-i" in FsShell that prints files' or 
> directories' inode id.
> h2. Implementation
> h3. For hdfs:// (HDFS)
> fileId exists in HdfsLocatedFileStatus, which is already returned to 
> hdfs-client. We just need to print it in Ls#processPath().
> h3. For file:// (Local FS)
> h4. Linux
> Use java.nio.
> h4. Windows
> Windows has the concept of "File ID" which is similar to inode id. It is 
> unique in NTFS and ReFS.
> h3. For other FS
> The fileId entry will be "0" in FileStatus if it is not set. We could either 
> ignore or throw an exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16077) Add an option in ls command to include storage policy

2019-01-29 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16754969#comment-16754969
 ] 

Steve Loughran commented on HADOOP-16077:
-

Please set versions, component, etc.

What's going to happen if you do this against a store with no storage policy? 
The local FS is the easiest one to add a test for.

> Add an option in ls command to include storage policy
> -
>
> Key: HADOOP-16077
> URL: https://issues.apache.org/jira/browse/HADOOP-16077
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HADOOP-16077-01.patch, HADOOP-16077-02.patch, 
> HADOOP-16077-03.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16083) DistCp shouldn't always overwrite the target file when checksums match

2019-01-29 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16754962#comment-16754962
 ] 

Steve Loughran commented on HADOOP-16083:
-

So what you are saying is: if CRC checking is enabled (i.e. you don't do an 
update with -skipCrcCheck), it overwrites all files?

Because with the CRC check disabled, I thought it was simpler than that:
* files where lengths are different: update
* files where the source is missing: delete (if some other option is enabled)
* files where source length == dest length: skip the overwrite

Now, if the dest is a filesystem without checksums, the update downgrades to 
assuming you'd requested that CRC checks be skipped (this caused problems when 
adding CRC checks to S3a in HADOOP-13232: all existing workflows and tests 
broke).

Which is why we have to be very careful about any changes here. All workflows, 
including those invoked internally by Hive, called from Oozie, etc., must keep 
working, with sources and destinations other than just HDFS -> HDFS. 

h3. If I explicitly copy a file from HDFS to S3a, even without -skipCRCCheck, I 
expect the file to be copied. As happens today.

# you'll have to talk to people who use distcp here. At the very least, this 
must only happen when source and dest are using checksums and the checksums are 
equal.
# The new tests will need to go into AbstractContractDistCpTest
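
As a sketch of the guard in point 1 (this is the idea only, not the actual DistCp code path):

{code:java|title=Sketch: skip only on provable checksum equality}
import java.io.IOException;
import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;

public class SkipGuardSketch {
  // Only bypass the copy when both sides expose comparable checksums and
  // they match; anything weaker keeps today's behaviour.
  static boolean canSkipCopy(FileSystem srcFS, FileStatus src,
      FileSystem dstFS, FileStatus dst) throws IOException {
    if (src.getLen() != dst.getLen()) {
      return false;  // lengths differ: must copy
    }
    FileChecksum s = srcFS.getFileChecksum(src.getPath());
    FileChecksum d = dstFS.getFileChecksum(dst.getPath());
    // getFileChecksum may return null (stores without checksums); equality
    // is then unprovable, so do not skip.
    return s != null && s.equals(d);
  }
}
{code}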



> DistCp shouldn't always overwrite the target file when checksums match
> --
>
> Key: HADOOP-16083
> URL: https://issues.apache.org/jira/browse/HADOOP-16083
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 3.2.0, 3.1.1, 3.3.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-16083.001.patch
>
>
> {code:java|title=CopyMapper#setup}
> ...
> try {
>   overWrite = overWrite || 
> targetFS.getFileStatus(targetFinalPath).isFile();
> } catch (FileNotFoundException ignored) {
> }
> ...
> {code}
> The above code overrides the config key "overWrite" to "true" when the target 
> path is a file. Therefore, an unnecessary transfer happens when the source and 
> target files have the same checksums.
> My suggestion is to remove the code above. If the user insists on overwriting, 
> just add -overwrite in the options:
> {code:bash|title=DistCp command with -overwrite option}
> hadoop distcp -overwrite hdfs://localhost:64464/source/5/6.txt 
> hdfs://localhost:64464/target/5/6.txt
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.x

2019-01-29 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16754948#comment-16754948
 ] 

Akira Ajisaka commented on HADOOP-14178:


Thanks [~iwasakims] for reviewing this. I really appreciate that.
bq. Is changing the argument of times relevant to Mockito compatibility?
The change is derived from a bug fix of Mockito rather than compatibility.
{code}
@@ -85,9 +85,9 @@ public void testKillAMPreemptPolicy() {
 policy.init(mActxt);
 pM = getPreemptionMessage(true, false, container);
 policy.preempt(mPctxt, pM);
-verify(mActxt.getEventHandler(), times(2)).handle(
+verify(mActxt.getEventHandler(), times(1)).handle(
 any(TaskAttemptEvent.class));
-verify(mActxt.getEventHandler(), times(2)).handle(
+verify(mActxt.getEventHandler(), times(1)).handle(
 any(JobCounterUpdateEvent.class));
{code}
Without upgrading Mockito, the above change causes a test failure, and the 
error message is as follows:
{noformat}
[ERROR] Failures: 
[ERROR]   TestKillAMPreemptionPolicy.testKillAMPreemptPolicy:88 
eventHandler.handle();
Wanted 1 time:
-> at 
org.apache.hadoop.mapreduce.v2.app.TestKillAMPreemptionPolicy.testKillAMPreemptPolicy(TestKillAMPreemptionPolicy.java:88)
But was 2 times. Undesired invocation:
-> at 
org.apache.hadoop.mapreduce.v2.app.rm.preemption.KillAMPreemptionPolicy.killContainer(KillAMPreemptionPolicy.java:84)
{noformat}
In TestKillAMPreemptionPolicy L88, the test case wants to count the invocations 
of {{handle(TaskAttemptEvent.class)}}; however, KillAMPreemptionPolicy.java L84 
calls {{handle(JobCounterUpdateEvent.class)}}, and that call is wrongly counted 
by the older version of Mockito.
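
In other words, Mockito 2 type-checks {{any(Class)}}. A toy illustration of the difference (stand-in types, not the Hadoop test):

{code:java|title=any(Class) matching under Mockito 2.x}
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;

import java.util.function.Consumer;

public class AnyMatcherDemo {
  @SuppressWarnings("unchecked")
  public static void main(String[] args) {
    Consumer<Object> handler = mock(Consumer.class);
    handler.accept("a TaskAttemptEvent stand-in");   // a String
    handler.accept(42);                              // an Integer
    // Mockito 2.x: any(String.class) matches Strings only, so each verify
    // counts exactly one call. Mockito 1.x ignored the type and would have
    // counted both calls for either matcher.
    verify(handler, times(1)).accept(any(String.class));
    verify(handler, times(1)).accept(any(Integer.class));
  }
}
{code}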


> Move Mockito up to version 2.x
> --
>
> Key: HADOOP-14178
> URL: https://issues.apache.org/jira/browse/HADOOP-14178
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-14178.001.patch, HADOOP-14178.002.patch, 
> HADOOP-14178.003.patch, HADOOP-14178.004.patch, HADOOP-14178.005-wip.patch, 
> HADOOP-14178.005-wip2.patch, HADOOP-14178.005-wip3.patch, 
> HADOOP-14178.005-wip4.patch, HADOOP-14178.005-wip5.patch, 
> HADOOP-14178.005-wip6.patch, HADOOP-14178.005.patch, HADOOP-14178.006.patch, 
> HADOOP-14178.007.patch, HADOOP-14178.008.patch, HADOOP-14178.009.patch, 
> HADOOP-14178.010.patch, HADOOP-14178.011.patch, HADOOP-14178.012.patch, 
> HADOOP-14178.013.patch, HADOOP-14178.014.patch, HADOOP-14178.015.patch, 
> HADOOP-14178.016.patch, HADOOP-14178.017.patch, HADOOP-14178.018.patch, 
> HADOOP-14178.019.patch, HADOOP-14178.020.patch, HADOOP-14178.021.patch, 
> HADOOP-14178.022.patch, HADOOP-14178.023.patch, HADOOP-14178.024.patch, 
> HADOOP-14178.025.patch, HADOOP-14178.026.patch, HADOOP-14178.027.patch, 
> HADOOP-14178.028.patch, HADOOP-14178.029.patch, HADOOP-14178.030.patch, 
> HADOOP-14178.031.patch, HADOOP-14178.032.patch
>
>
> I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 
> since the switch to maven in 2011. 
> Mockito is now at version 2.1, [with lots of Java 8 
> support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. 
> That' s not just defining actions as closures, but in supporting Optional 
> types, mocking methods in interfaces, etc. 
> It's only used for testing, and, *provided there aren't regressions*, cost of 
> upgrade is low. The good news: test tools usually come with good test 
> coverage. The bad: mockito does go deep into java bytecodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16080) hadoop-aws does not work with hadoop-client-api

2019-01-29 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16754943#comment-16754943
 ] 

Steve Loughran commented on HADOOP-16080:
-

Keith, I appreciate your concerns. What we really want is an object store 
dependency which contains all the store connectors with shaded dependencies, 
HADOOP-15387.

Nobody has volunteered to do this yet.

You've actually started this, which makes this a more viable proposition than 
HADOOP-15387. 

Do you want to take on the challenge of a hadoop-cloud-storage-shaded artifact? 
I think we'll all have to help with the testing (summary: it'll be hard) but it 
will be appreciated by all those downstream projects.

> hadoop-aws does not work with hadoop-client-api
> ---
>
> Key: HADOOP-16080
> URL: https://issues.apache.org/jira/browse/HADOOP-16080
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Keith Turner
>Priority: Major
>
> I attempted to use Accumulo and S3a with the following jars on the classpath.
>  * hadoop-client-api-3.1.1.jar
>  * hadoop-client-runtime-3.1.1.jar
>  * hadoop-aws-3.1.1.jar
> This failed with the following exception.
> {noformat}
> Exception in thread "init" java.lang.NoSuchMethodError: 
> org.apache.hadoop.util.SemaphoredDelegatingExecutor.<init>(Lcom/google/common/util/concurrent/ListeningExecutorService;IZ)V
> at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:769)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1149)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1108)
> at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1413)
> at 
> org.apache.accumulo.server.fs.VolumeManagerImpl.createNewFile(VolumeManagerImpl.java:184)
> at 
> org.apache.accumulo.server.init.Initialize.initDirs(Initialize.java:479)
> at 
> org.apache.accumulo.server.init.Initialize.initFileSystem(Initialize.java:487)
> at 
> org.apache.accumulo.server.init.Initialize.initialize(Initialize.java:370)
> at org.apache.accumulo.server.init.Initialize.doInit(Initialize.java:348)
> at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:967)
> at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:129)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> The problem is that {{S3AFileSystem.create()}} looks for 
> {{SemaphoredDelegatingExecutor(com.google.common.util.concurrent.ListeningExecutorService)}}
>  which does not exist in hadoop-client-api-3.1.1.jar.  What does exist is 
> {{SemaphoredDelegatingExecutor(org.apache.hadoop.shaded.com.google.common.util.concurrent.ListeningExecutorService)}}.
> To work around this issue I created a version of hadoop-aws-3.1.1.jar that 
> relocated references to Guava.
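
To make the failure mode concrete, a minimal sketch of the broken call site, assuming code compiled against plain Guava plus hadoop-common but run against hadoop-client-api:

{code:java|title=Sketch: unshaded call site vs. shaded client}
import com.google.common.util.concurrent.ListeningExecutorService;
import com.google.common.util.concurrent.MoreExecutors;
import org.apache.hadoop.util.SemaphoredDelegatingExecutor;

public class ShadedGuavaDemo {
  public static void main(String[] args) {
    ListeningExecutorService pool = MoreExecutors.newDirectExecutorService();
    // Compiled against plain Guava, this call site records the descriptor
    //   (Lcom/google/common/util/concurrent/ListeningExecutorService;IZ)V,
    // but hadoop-client-api only ships the relocated
    //   (Lorg/apache/hadoop/shaded/com/google/...)V variant,
    // hence the NoSuchMethodError at runtime.
    new SemaphoredDelegatingExecutor(pool, 10, true);
  }
}
{code}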



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16079) Token.toString faulting if any token listed can't load.

2019-01-29 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16754927#comment-16754927
 ] 

Steve Loughran commented on HADOOP-16079:
-

Patch 001 adds the LinkageError handling. It does not add any more tests, I'm 
afraid; I will do more manual verification.
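
The defensive pattern, roughly (a sketch of the idea, not the patch itself):

{code:java|title=Sketch: tolerate unloadable token identifiers}
import java.io.IOException;
import org.apache.hadoop.security.token.Token;

final class TokenTextSketch {
  // Keep toString()-style output usable even when a token's identifier
  // class cannot be loaded on this classpath.
  static void appendIdentifier(StringBuilder sb, Token<?> token) {
    try {
      sb.append(token.decodeIdentifier());
    } catch (IOException | LinkageError e) {
      // Identifier class missing or incompatible: emit a marker rather
      // than letting the whole toString() call fail.
      sb.append("<identifier unavailable: ")
        .append(e.getClass().getSimpleName()).append('>');
    }
  }
}
{code}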

> Token.toString faulting if any token listed can't load.
> ---
>
> Key: HADOOP-16079
> URL: https://issues.apache.org/jira/browse/HADOOP-16079
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.1.2, 3.2.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16079-001.patch
>
>
> The patch in HADOOP-15808 turns out not to be enough; Token.toString() fails 
> if any token in the service lists isn't known.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16079) Token.toString faulting if any token listed can't load.

2019-01-29 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16079:

Status: Patch Available  (was: Open)

> Token.toString faulting if any token listed can't load.
> ---
>
> Key: HADOOP-16079
> URL: https://issues.apache.org/jira/browse/HADOOP-16079
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.1.2, 3.2.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16079-001.patch
>
>
> The patch in HADOOP-15808 turns out not to be enough; Token.toString() fails 
> if any token in the service lists isn't known.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16079) Token.toString faulting if any token listed can't load.

2019-01-29 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16079:

Attachment: HADOOP-16079-001.patch

> Token.toString faulting if any token listed can't load.
> ---
>
> Key: HADOOP-16079
> URL: https://issues.apache.org/jira/browse/HADOOP-16079
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.1.2, 3.2.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16079-001.patch
>
>
> The patch in HADOOP-15808 turns out not to be enough; Token.toString() fails 
> if any token in the service lists isn't known.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16081) DistCp: Update "Update and Overwrite" doc

2019-01-29 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-16081:

Description: 
https://hadoop.apache.org/docs/r3.1.1/hadoop-distcp/DistCp.html#Update_and_Overwrite

-In the current doc, it says that -update or -overwrite won't copy the 
directory hierarchies. i.e. the file structure will be "flattened out" on the 
destination. But this has been improved already. (Need to find the jira id that 
made this change.) The dir structure WILL be copied over when -update or 
-overwrite option is in use.- I misunderstood the doc. I will try to improve 
the expression then.

The main caveat for the -update and -overwrite options is that, when specifying 
multiple sources, there shouldn't be files or directories with the same 
relative path.

  was:
https://hadoop.apache.org/docs/r3.1.1/hadoop-distcp/DistCp.html#Update_and_Overwrite

In the current doc, it says that -update or -overwrite won't copy the directory 
hierarchies. i.e. the file structure will be "flattened out" on the 
destination. But this has been improved already. (Need to find the jira id that 
made this change.) The dir structure WILL be copied over when -update or 
-overwrite option is in use.

Now the only caveat for -update or -overwrite option is when we are specifying 
multiple sources, there shouldn't be files or directories with same relative 
path.


> DistCp: Update "Update and Overwrite" doc
> -
>
> Key: HADOOP-16081
> URL: https://issues.apache.org/jira/browse/HADOOP-16081
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation, tools/distcp
>Affects Versions: 3.1.1
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> https://hadoop.apache.org/docs/r3.1.1/hadoop-distcp/DistCp.html#Update_and_Overwrite
> -In the current doc, it says that -update or -overwrite won't copy the 
> directory hierarchies. i.e. the file structure will be "flattened out" on the 
> destination. But this has been improved already. (Need to find the jira id 
> that made this change.) The dir structure WILL be copied over when -update or 
> -overwrite option is in use.- I misunderstood the doc. I will try to improve 
> the expression then.
> The main caveat for the -update and -overwrite options is that, when 
> specifying multiple sources, there shouldn't be files or directories with the 
> same relative path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16083) DistCp shouldn't always overwrite the target file when checksums match

2019-01-29 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16754881#comment-16754881
 ] 

Hadoop QA commented on HADOOP-16083:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 12m 47s{color} 
| {color:red} hadoop-distcp in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.tools.mapred.TestCopyMapper |
|   | hadoop.tools.mapred.TestCopyMapperCompositeCrc |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16083 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956683/HADOOP-16083.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 9f23c8242fb5 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5d578d0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15861/artifact/out/patch-unit-hadoop-tools_hadoop-distcp.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15861/testReport/ |
| Max. process+thread count | 340 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-distcp U: 

[jira] [Comment Edited] (HADOOP-16082) FsShell ls: Add option -i to print inode id

2019-01-29 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16754878#comment-16754878
 ] 

Siyao Meng edited comment on HADOOP-16082 at 1/29/19 11:03 AM:
---

Uploaded patch rev 001 for trunk. Implemented for HDFS. Added unit test 
TestDFSShell#testLsInodeId() for HDFS. To test it, 
{code:bash|title=hdfs:// supported}
$ hdfs dfs -ls -i /
Found 2 items
16386 drwxr-xr-x   - user1 supergroup  0 2019-01-29 02:56 /d1
16388 drwxr-xr-x   - user1 supergroup  0 2019-01-29 02:56 /d4

$ hdfs dfs -ls -i -R /
16386 drwxr-xr-x   - user1 supergroup  0 2019-01-29 02:56 /d1
16387 drwxr-xr-x   - user1 supergroup  0 2019-01-29 02:56 /d1/d2
16390 -rw-r--r--   2 user1 supergroup117 2019-01-29 02:56 /d1/d2/f3
16388 drwxr-xr-x   - user1 supergroup  0 2019-01-29 02:56 /d4
16389 drwxr-xr-x   - user1 supergroup  0 2019-01-29 02:56 /d4/d5
{code}

Doesn't support file:// at the moment, will always print inode id 0:
{code:bash|title=file:// not supported yet}
$ hdfs dfs -ls -i file:///usr/
Found 9 items
0 drwxr-xr-x   - root wheel288 2016-09-26 22:11 file:///usr/X11
0 drwxr-xr-x   - root wheel288 2016-09-26 22:11 file:///usr/X11R6
0 drwxr-xr-x   - root wheel  31104 2019-01-22 12:42 file:///usr/bin
0 drwxr-xr-x   - root wheel   9728 2019-01-22 12:42 file:///usr/lib
0 drwxr-xr-x   - root wheel   7968 2019-01-22 12:42 file:///usr/libexec
0 drwxr-xr-x   - root wheel512 2018-11-06 16:01 file:///usr/local
0 drwxr-xr-x   - root wheel   7648 2019-01-22 12:42 file:///usr/sbin
0 drwxr-xr-x   - root wheel   1472 2018-10-19 22:12 file:///usr/share
0 drwxr-xr-x   - root wheel160 2018-09-20 21:06 file:///usr/standalone
{code}


was (Author: smeng):
Uploaded patch rev 001 for trunk. Implemented for HDFS. Added unit test 
TestDFSShell#testLsInodeId() for HDFS. To test it, 
{code:bash|title=hdfs:// supported}
$ hdfs dfs -ls -i /
Found 2 items
16386 drwxr-xr-x   - user1 supergroup  0 2019-01-29 02:56 /d1
16388 drwxr-xr-x   - user1 supergroup  0 2019-01-29 02:56 /d4

$ hdfs dfs -ls -i -R /
16386 drwxr-xr-x   - user1 supergroup  0 2019-01-29 02:56 /d1
16387 drwxr-xr-x   - user1 supergroup  0 2019-01-29 02:56 /d1/d2
16390 -rw-r--r--   2 user1 supergroup117 2019-01-29 02:56 /d1/d2/f3
16388 drwxr-xr-x   - user1 supergroup  0 2019-01-29 02:56 /d4
16389 drwxr-xr-x   - user1 supergroup  0 2019-01-29 02:56 /d4/d5
{code}

Doesn't support file:// at the moment, will always print 0 for fileId / inode 
id at the moment:
{code:bash|title=file:// not supported yet}
$ hdfs dfs -ls -i file:///usr/
Found 9 items
0 drwxr-xr-x   - root wheel288 2016-09-26 22:11 file:///usr/X11
0 drwxr-xr-x   - root wheel288 2016-09-26 22:11 file:///usr/X11R6
0 drwxr-xr-x   - root wheel  31104 2019-01-22 12:42 file:///usr/bin
0 drwxr-xr-x   - root wheel   9728 2019-01-22 12:42 file:///usr/lib
0 drwxr-xr-x   - root wheel   7968 2019-01-22 12:42 file:///usr/libexec
0 drwxr-xr-x   - root wheel512 2018-11-06 16:01 file:///usr/local
0 drwxr-xr-x   - root wheel   7648 2019-01-22 12:42 file:///usr/sbin
0 drwxr-xr-x   - root wheel   1472 2018-10-19 22:12 file:///usr/share
0 drwxr-xr-x   - root wheel160 2018-09-20 21:06 file:///usr/standalone
{code}

> FsShell ls: Add option -i to print inode id
> ---
>
> Key: HADOOP-16082
> URL: https://issues.apache.org/jira/browse/HADOOP-16082
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-16082.001.patch
>
>
> When debugging the FSImage corruption issue, I often need to know a file's or 
> directory's inode id. At this moment, the only way to do that is to use OIV 
> tool to dump the FSImage and look up the filename, which is very inefficient.
> Here I propose adding option "-i" in FsShell that prints files' or 
> directories' inode id.
> h2. Implementation
> h3. For hdfs:// (HDFS)
> fileId exists in HdfsLocatedFileStatus, which is already returned to 
> hdfs-client. We just need to print it in Ls#processPath().
> h3. For file:// (Local FS)
> h4. Linux
> Use java.nio.
> h4. Windows
> Windows has the concept of "File ID" which is similar to inode id. It is 
> unique in NTFS and ReFS.
> h3. For other FS
> The fileId entry will be "0" in FileStatus if it is not set. We could either 
> ignore or throw an exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, 

[jira] [Updated] (HADOOP-16082) FsShell ls: Add option -i to print inode id

2019-01-29 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-16082:

Affects Version/s: 3.2.0
   Attachment: HADOOP-16082.001.patch
 Target Version/s: 3.3.0, 3.2.1, 3.1.3
   Status: Patch Available  (was: Open)

Uploaded patch rev 001 for trunk. Implemented for HDFS. Added unit test 
TestDFSShell#testLsInodeId() for HDFS. To test it, 
{code:bash|title=hdfs:// supported}
$ hdfs dfs -ls -i /
Found 2 items
16386 drwxr-xr-x   - user1 supergroup  0 2019-01-29 02:56 /d1
16388 drwxr-xr-x   - user1 supergroup  0 2019-01-29 02:56 /d4

$ hdfs dfs -ls -i -R /
16386 drwxr-xr-x   - user1 supergroup  0 2019-01-29 02:56 /d1
16387 drwxr-xr-x   - user1 supergroup  0 2019-01-29 02:56 /d1/d2
16390 -rw-r--r--   2 user1 supergroup117 2019-01-29 02:56 /d1/d2/f3
16388 drwxr-xr-x   - user1 supergroup  0 2019-01-29 02:56 /d4
16389 drwxr-xr-x   - user1 supergroup  0 2019-01-29 02:56 /d4/d5
{code}

Doesn't support file:// at the moment, will always print 0 for fileId / inode 
id at the moment:
{code:bash|title=file:// not supported yet}
$ hdfs dfs -ls -i file:///usr/
Found 9 items
0 drwxr-xr-x   - root wheel288 2016-09-26 22:11 file:///usr/X11
0 drwxr-xr-x   - root wheel288 2016-09-26 22:11 file:///usr/X11R6
0 drwxr-xr-x   - root wheel  31104 2019-01-22 12:42 file:///usr/bin
0 drwxr-xr-x   - root wheel   9728 2019-01-22 12:42 file:///usr/lib
0 drwxr-xr-x   - root wheel   7968 2019-01-22 12:42 file:///usr/libexec
0 drwxr-xr-x   - root wheel512 2018-11-06 16:01 file:///usr/local
0 drwxr-xr-x   - root wheel   7648 2019-01-22 12:42 file:///usr/sbin
0 drwxr-xr-x   - root wheel   1472 2018-10-19 22:12 file:///usr/share
0 drwxr-xr-x   - root wheel160 2018-09-20 21:06 file:///usr/standalone
{code}

> FsShell ls: Add option -i to print inode id
> ---
>
> Key: HADOOP-16082
> URL: https://issues.apache.org/jira/browse/HADOOP-16082
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.1.1, 3.2.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-16082.001.patch
>
>
> When debugging the FSImage corruption issue, I often need to know a file's or 
> directory's inode id. At this moment, the only way to do that is to use OIV 
> tool to dump the FSImage and look up the filename, which is very inefficient.
> Here I propose adding option "-i" in FsShell that prints files' or 
> directories' inode id.
> h2. Implementation
> h3. For hdfs:// (HDFS)
> fileId exists in HdfsLocatedFileStatus, which is already returned to 
> hdfs-client. We just need to print it in Ls#processPath().
> h3. For file:// (Local FS)
> h4. Linux
> Use java.nio.
> h4. Windows
> Windows has the concept of "File ID" which is similar to inode id. It is 
> unique in NTFS and ReFS.
> h3. For other FS
> The fileId entry will be "0" in FileStatus if it is not set. We could either 
> ignore or throw an exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15549) Upgrade to commons-configuration 2.1 regresses task CPU consumption

2019-01-29 Thread Yuming Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16754869#comment-16754869
 ] 

Yuming Wang commented on HADOOP-15549:
--

Could we backport this patch to {{branch-3.1}}? I hit this 
{{IllegalArgumentException}}:
{noformat}
02:44:34.707 ERROR org.apache.hadoop.hive.ql.exec.Task: Job Submission failed 
with exception 'java.io.IOException(Cannot initialize Cluster. Please check 
your configuration for mapreduce.framework.name and the correspond server 
addresses.)'
java.io.IOException: Cannot initialize Cluster. Please check your configuration 
for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:116)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:109)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:102)
at org.apache.hadoop.mapred.JobClient.init(JobClient.java:475)
at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:454)
at 
org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:369)
at 
org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:151)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199)
at 
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2183)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1839)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1526)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$runHive$1(HiveClientImpl.scala:730)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:283)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:221)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:220)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:266)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.runHive(HiveClientImpl.scala:719)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.runSqlHive(HiveClientImpl.scala:709)
at 
org.apache.spark.sql.hive.StatisticsSuite.createNonPartitionedTable(StatisticsSuite.scala:719)
at 
org.apache.spark.sql.hive.StatisticsSuite.$anonfun$testAlterTableProperties$2(StatisticsSuite.scala:822)
at 
org.apache.spark.sql.test.SQLTestUtilsBase.withTable(SQLTestUtils.scala:284)
at 
org.apache.spark.sql.test.SQLTestUtilsBase.withTable$(SQLTestUtils.scala:283)
at 
org.apache.spark.sql.StatisticsCollectionTestBase.withTable(StatisticsCollectionTestBase.scala:40)
at 
org.apache.spark.sql.hive.StatisticsSuite.$anonfun$testAlterTableProperties$1(StatisticsSuite.scala:821)
at 
org.apache.spark.sql.hive.StatisticsSuite.$anonfun$testAlterTableProperties$1$adapted(StatisticsSuite.scala:820)
at scala.collection.immutable.List.foreach(List.scala:392)
at 
org.apache.spark.sql.hive.StatisticsSuite.testAlterTableProperties(StatisticsSuite.scala:820)
at 
org.apache.spark.sql.hive.StatisticsSuite.$anonfun$new$70(StatisticsSuite.scala:851)
at 
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
at org.scalatest.Transformer.apply(Transformer.scala:22)
at org.scalatest.Transformer.apply(Transformer.scala:20)
at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:104)
at 
org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
at org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
at org.scalatest.SuperEngine.runTestImpl(Engine.scala:289)
at org.scalatest.FunSuiteLike.runTest(FunSuiteLike.scala:196)
at org.scalatest.FunSuiteLike.runTest$(FunSuiteLike.scala:178)
at org.scalatest.FunSuite.runTest(FunSuite.scala:1560)
at 
org.scalatest.FunSuiteLike.$anonfun$runTests$1(FunSuiteLike.scala:229)
at 
org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:396)
at scala.collection.immutable.List.foreach(List.scala:392)
at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:384)
at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:379)
at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:461)
at 

[jira] [Updated] (HADOOP-16083) DistCp shouldn't always overwrite the target file when checksums match

2019-01-29 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-16083:

Attachment: HADOOP-16083.001.patch
Status: Patch Available  (was: Open)

> DistCp shouldn't always overwrite the target file when checksums match
> --
>
> Key: HADOOP-16083
> URL: https://issues.apache.org/jira/browse/HADOOP-16083
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 3.1.1, 3.2.0, 3.3.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-16083.001.patch
>
>
> {code:java|title=CopyMapper#setup}
> ...
> try {
>   overWrite = overWrite || 
> targetFS.getFileStatus(targetFinalPath).isFile();
> } catch (FileNotFoundException ignored) {
> }
> ...
> {code}
> The above code overrides the config key "overWrite" to "true" when the target 
> path is a file. Therefore, an unnecessary transfer happens when the source and 
> target files have the same checksums.
> My suggestion is to remove the code above. If the user insists on overwriting, 
> just add -overwrite in the options:
> {code:bash|title=DistCp command with -overwrite option}
> hadoop distcp -overwrite hdfs://localhost:64464/source/5/6.txt 
> hdfs://localhost:64464/target/5/6.txt
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16083) DistCp shouldn't always overwrite the target file when checksums match

2019-01-29 Thread Siyao Meng (JIRA)
Siyao Meng created HADOOP-16083:
---

 Summary: DistCp shouldn't always overwrite the target file when 
checksums match
 Key: HADOOP-16083
 URL: https://issues.apache.org/jira/browse/HADOOP-16083
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools/distcp
Affects Versions: 3.1.1, 3.2.0, 3.3.0
Reporter: Siyao Meng
Assignee: Siyao Meng


{code:java|title=CopyMapper#setup}
...
try {
  overWrite = overWrite || targetFS.getFileStatus(targetFinalPath).isFile();
} catch (FileNotFoundException ignored) {
}
...
{code}

The above code overrides the config key "overWrite" to "true" when the target 
path is a file. Therefore, an unnecessary transfer happens when the source and 
target files have the same checksums.

My suggestion is to remove the code above. If the user insists on overwriting, 
just add -overwrite in the options:
{code:bash|title=DistCp command with -overwrite option}
hadoop distcp -overwrite hdfs://localhost:64464/source/5/6.txt 
hdfs://localhost:64464/target/5/6.txt
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org