[jira] [Commented] (HADOOP-13483) file-create should throw error rather than overwrite directories

2016-08-11 Thread shimingfei (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15418363#comment-15418363
 ] 

shimingfei commented on HADOOP-13483:
-

[~uncleGen] there are several warnings from findbugs, which should also be 
addressed.
And the test case warning can also be fixed the way HADOOP-13188 did, by 
removing testOverwriteEmptyDirectory in TestOSSContractCreate.

> file-create should throw error rather than overwrite directories
> 
>
> Key: HADOOP-13483
> URL: https://issues.apache.org/jira/browse/HADOOP-13483
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: uncleGen
>Assignee: uncleGen
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13483-HADOOP-12756.002.patch, 
> HADOOP-13483.001.patch
>
>
> similar to [HADOOP-13188|https://issues.apache.org/jira/browse/HADOOP-13188]






[jira] [Updated] (HADOOP-13490) create-release should not fail rat checks unless --asfrelease is used

2016-08-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13490:
--
Summary: create-release should not fail rat checks unless --asfrelease is 
used  (was: create-release should not fail unless --asfrelease is used)

> create-release should not fail rat checks unless --asfrelease is used
> -
>
> Key: HADOOP-13490
> URL: https://issues.apache.org/jira/browse/HADOOP-13490
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>
> If someone is using create-release to build where the ASF isn't the target 
> destination, there isn't much reason to fail out if RAT reports errors.






[jira] [Created] (HADOOP-13490) create-release should not fail unless --asfrelease is used

2016-08-11 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-13490:
-

 Summary: create-release should not fail unless --asfrelease is used
 Key: HADOOP-13490
 URL: https://issues.apache.org/jira/browse/HADOOP-13490
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Allen Wittenauer


If someone is using create-release to build where the ASF isn't the target 
destination, there isn't much reason to fail out if RAT reports errors.






[jira] [Commented] (HADOOP-13410) RunJar adds the content of the jar twice to the classpath

2016-08-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15418300#comment-15418300
 ] 

Hudson commented on HADOOP-13410:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #10266 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10266/])
HADOOP-13410. RunJar adds the content of the jar twice to the classpath (sjlee: 
rev 4d3ea92f4fe4be2a0ee9849c65cb1c91b0c5711b)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java


> RunJar adds the content of the jar twice to the classpath
> -
>
> Key: HADOOP-13410
> URL: https://issues.apache.org/jira/browse/HADOOP-13410
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Yuanbo Liu
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13410.001.patch
>
>
> Today when you run a "hadoop jar" command, the jar is unzipped to a temporary 
> location and gets added to the classloader.
> However, the original jar itself is still added to the classpath.
> {code}
>   List<URL> classPath = new ArrayList<>();
>   classPath.add(new File(workDir + "/").toURI().toURL());
>   classPath.add(file.toURI().toURL());
>   classPath.add(new File(workDir, "classes/").toURI().toURL());
>   File[] libs = new File(workDir, "lib").listFiles();
>   if (libs != null) {
> for (File lib : libs) {
>   classPath.add(lib.toURI().toURL());
> }
>   }
> {code}
> As a result, the contents of the jar are present in the classpath *twice* and 
> are completely redundant. Although this does not necessarily cause 
> correctness issues, some stricter code written to require a single presence 
> of files may fail.
> I cannot think of a good reason why the jar should be added to the classpath 
> if the unjarred content was added to it. I think we should remove the jar 
> from the classpath.
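
A minimal sketch of the proposed fix, assuming the surrounding RunJar.run() 
context ({{file}} is the original jar and {{workDir}} is the unpack directory): 
the classpath keeps only the unpacked contents and drops the jar itself.

{code}
// Build the classpath from the unpacked work directory only; the
// original jar (file.toURI().toURL()) is no longer added.
List<URL> classPath = new ArrayList<>();
classPath.add(new File(workDir + "/").toURI().toURL());
classPath.add(new File(workDir, "classes/").toURI().toURL());
File[] libs = new File(workDir, "lib").listFiles();
if (libs != null) {
  for (File lib : libs) {
    classPath.add(lib.toURI().toURL());
  }
}
{code}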






[jira] [Commented] (HADOOP-13410) RunJar adds the content of the jar twice to the classpath

2016-08-11 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15418284#comment-15418284
 ] 

Yuanbo Liu commented on HADOOP-13410:
-

[~sjlee0] Thanks for committing, and [~raviprak] thanks for your review.

> RunJar adds the content of the jar twice to the classpath
> -
>
> Key: HADOOP-13410
> URL: https://issues.apache.org/jira/browse/HADOOP-13410
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Yuanbo Liu
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13410.001.patch
>
>
> Today when you run a "hadoop jar" command, the jar is unzipped to a temporary 
> location and gets added to the classloader.
> However, the original jar itself is still added to the classpath.
> {code}
>   List<URL> classPath = new ArrayList<>();
>   classPath.add(new File(workDir + "/").toURI().toURL());
>   classPath.add(file.toURI().toURL());
>   classPath.add(new File(workDir, "classes/").toURI().toURL());
>   File[] libs = new File(workDir, "lib").listFiles();
>   if (libs != null) {
> for (File lib : libs) {
>   classPath.add(lib.toURI().toURL());
> }
>   }
> {code}
> As a result, the contents of the jar are present in the classpath *twice* and 
> are completely redundant. Although this does not necessarily cause 
> correctness issues, some stricter code written to require a single presence 
> of files may fail.
> I cannot think of a good reason why the jar should be added to the classpath 
> if the unjarred content was added to it. I think we should remove the jar 
> from the classpath.






[jira] [Updated] (HADOOP-13410) RunJar adds the content of the jar twice to the classpath

2016-08-11 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-13410:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   Status: Resolved  (was: Patch Available)

Committed it to trunk. Thanks [~yuanbo] for your contribution!

> RunJar adds the content of the jar twice to the classpath
> -
>
> Key: HADOOP-13410
> URL: https://issues.apache.org/jira/browse/HADOOP-13410
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Yuanbo Liu
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13410.001.patch
>
>
> Today when you run a "hadoop jar" command, the jar is unzipped to a temporary 
> location and gets added to the classloader.
> However, the original jar itself is still added to the classpath.
> {code}
>   List<URL> classPath = new ArrayList<>();
>   classPath.add(new File(workDir + "/").toURI().toURL());
>   classPath.add(file.toURI().toURL());
>   classPath.add(new File(workDir, "classes/").toURI().toURL());
>   File[] libs = new File(workDir, "lib").listFiles();
>   if (libs != null) {
> for (File lib : libs) {
>   classPath.add(lib.toURI().toURL());
> }
>   }
> {code}
> As a result, the contents of the jar are present in the classpath *twice* and 
> are completely redundant. Although this does not necessarily cause 
> correctness issues, some stricter code written to require a single presence 
> of files may fail.
> I cannot think of a good reason why the jar should be added to the classpath 
> if the unjarred content was added to it. I think we should remove the jar 
> from the classpath.






[jira] [Commented] (HADOOP-13441) Document LdapGroupsMapping keystore password properties

2016-08-11 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15418265#comment-15418265
 ] 

Yuanbo Liu commented on HADOOP-13441:
-

Thanks very much [~jojochuang]

> Document LdapGroupsMapping keystore password properties
> ---
>
> Key: HADOOP-13441
> URL: https://issues.apache.org/jira/browse/HADOOP-13441
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Wei-Chiu Chuang
>Assignee: Yuanbo Liu
>Priority: Minor
>  Labels: documentation
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13441.001.patch, HADOOP-13441.002.patch, 
> HADOOP-13441.003.patch, HADOOP-13441.004.patch, HADOOP-13441.005.patch
>
>
> A few properties are not documented.
> {{hadoop.security.group.mapping.ldap.ssl.keystore.password}}
> This property is used as an alias to get the password from credential providers, 
> or, failing that, the value itself is used as the password in clear text. There is 
> also a caveat that the credential provider cannot be an HDFS-based file system, as 
> mentioned in HADOOP-11934, to prevent a cyclic dependency issue.
> This should be documented in core-default.xml and GroupsMapping.md
> {{hadoop.security.credential.clear-text-fallback}}
> This property controls whether or not to fall back to storing credential 
> password as cleartext.
> This should be documented in core-default.xml.
> {{hadoop.security.credential.provider.path}}
> This is mentioned in _CredentialProvider API Guide_, but not in 
> core-default.xml
> The "Supported Features" in _CredentialProvider API Guide_ should link back 
> to GroupsMapping.md#LDAP Groups Mapping 
> {{hadoop.security.credstore.java-keystore-provider.password-file}}
> This is the password file to protect credential files.
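
As an illustration, a core-default.xml entry for one of these properties might 
look like the following; the description text here is a sketch, not the 
committed wording.

{code}
<property>
  <name>hadoop.security.credential.clear-text-fallback</name>
  <value>true</value>
  <description>
    Whether to fall back to storing the credential password as
    clear text when a credential provider cannot resolve the alias.
  </description>
</property>
{code}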






[jira] [Assigned] (HADOOP-13481) User end documents for Aliyun OSS FileSystem

2016-08-11 Thread uncleGen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

uncleGen reassigned HADOOP-13481:
-

Assignee: uncleGen

> User end documents for Aliyun OSS FileSystem
> 
>
> Key: HADOOP-13481
> URL: https://issues.apache.org/jira/browse/HADOOP-13481
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: uncleGen
>Assignee: uncleGen
>Priority: Minor
> Fix For: HADOOP-12756
>
>







[jira] [Assigned] (HADOOP-13482) Provide hadoop-aliyun oss configuration documents

2016-08-11 Thread uncleGen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

uncleGen reassigned HADOOP-13482:
-

Assignee: uncleGen

> Provide hadoop-aliyun oss configuration documents
> -
>
> Key: HADOOP-13482
> URL: https://issues.apache.org/jira/browse/HADOOP-13482
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: uncleGen
>Assignee: uncleGen
>Priority: Minor
> Fix For: HADOOP-12756
>
>







[jira] [Issue Comment Deleted] (HADOOP-12763) S3AFileSystem And Hadoop FsShell Operations

2016-08-11 Thread Shen Yinjie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shen Yinjie updated HADOOP-12763:
-
Comment: was deleted

(was: Hi, Stephen Montgomery. I have one question: do FsShell "-get" and "-rm" 
work in your env?)

> S3AFileSystem And Hadoop FsShell Operations
> ---
>
> Key: HADOOP-12763
> URL: https://issues.apache.org/jira/browse/HADOOP-12763
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Stephen Montgomery
>
> Hi,
> I'm looking at the Hadoop S3A Filesystem and FS Shell commands (specifically 
> -ls and -copyFromLocal/Put).
> 1. Create S3 bucket eg test-s3a-bucket.
> 2. List bucket contents using S3A and get an error: 
> $ hadoop fs -Dfs.s3n.awsAccessKeyId=... -Dfs.s3n.awsSecretAccessKey=... 
> -Dfs.s3a.access.key=... -Dfs.s3a.secret.key=... -ls s3a://test-s3a-bucket/
> 16/02/03 16:31:13 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> ls: `s3a://test-s3a-bucket/': No such file or directory
> 3. List bucket contents using S3N and get no results (fair enough):
> $ hadoop fs -Dfs.s3n.awsAccessKeyId=... -Dfs.s3n.awsSecretAccessKey=... 
> -Dfs.s3a.access.key=... -Dfs.s3a.secret.key=... -ls s3n://test-s3a-bucket/
> 16/02/03 16:32:41 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 4. Attempt to copy a file from local fs to S3A and get an error (with or 
> without the trailing slash):
> $ hadoop fs -Dfs.s3n.awsAccessKeyId=... -Dfs.s3n.awsSecretAccessKey=... 
> -Dfs.s3a.access.key=... -Dfs.s3a.secret.key=... -copyFromLocal /tmp/zz 
> s3a://test-s3a-bucket/
> 16/02/03 16:35:02 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> copyFromLocal: `s3a://test-s3a-bucket/': No such file or directory
> 5. Attempt to copy a file from local fs to S3N and works:
> $ hadoop fs -Dfs.s3n.awsAccessKeyId=... -Dfs.s3n.awsSecretAccessKey=... 
> -Dfs.s3a.access.key=... -Dfs.s3a.secret.key=... -copyFromLocal /tmp/zz 
> s3n://test-s3a-bucket/
> 16/02/03 16:36:17 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 16/02/03 16:36:18 INFO s3native.NativeS3FileSystem: OutputStream for key 
> 'zz._COPYING_' writing to tempfile 
> '/tmp/hadoop-monty/s3/output-9212095517127973121.tmp'
> 16/02/03 16:36:18 INFO s3native.NativeS3FileSystem: OutputStream for key 
> 'zz._COPYING_' closed. Now beginning upload
> 16/02/03 16:36:18 INFO s3native.NativeS3FileSystem: OutputStream for key 
> 'zz._COPYING_' upload complete
> $ hadoop fs -Dfs.s3n.awsAccessKeyId=... -Dfs.s3n.awsSecretAccessKey=... 
> -Dfs.s3a.access.key=... -Dfs.s3a.secret.key=... -ls s3a://test-s3a-bucket/
> 16/02/03 16:36:44 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Found 1 items
> -rw-rw-rw-   1200 2016-02-03 16:36 s3a://test-s3a-bucket/zz
> It seems that basic filesystem operations can't be performed with an 
> empty/new bucket. I have been able to populate buckets with distcp but I 
> wonder if this is because I was copying directories instead of individual 
> files.
> I know that S3A uses the AmazonS3 client and S3N uses JetS3t, so different 
> underlying implementations/potentially different behaviours, but I mainly used 
> s3n for illustration purposes (and it looks like it's working as expected).
> Can someone confirm this behaviour? Is it expected?
> Thanks,
> Stephen






[jira] [Commented] (HADOOP-13489) DistCp may incorrectly return success status when the underlying Job failed

2016-08-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15418205#comment-15418205
 ] 

Hadoop QA commented on HADOOP-13489:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
9s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12823382/HADOOP-13489.v2.patch 
|
| JIRA Issue | HADOOP-13489 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2c3db3692bbc 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 874577a |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10236/testReport/ |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10236/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DistCp may incorrectly return success status when the underlying Job failed
> ---
>
> Key: HADOOP-13489
> URL: https://issues.apache.org/jira/browse/HADOOP-13489
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
> Attachments: HADOOP-13489.v1.patch, HADOOP-13489.v2.patch, 
> TestIncrementalBackup-output.txt
>
>
> I was troubleshooting HBASE-14450 

[jira] [Commented] (HADOOP-13446) S3Guard: Support running isolated unit tests separate from AWS integration tests.

2016-08-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15418196#comment-15418196
 ] 

Hadoop QA commented on HADOOP-13446:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 
46s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 88 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  4m 
53s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 7s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
50s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
35s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
56s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
29s{color} | {color:green} root: The patch generated 0 new + 18 unchanged - 51 
fixed = 18 total (was 69) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
15s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12823377/HADOOP-13446-HADOOP-13345.003.patch
 |
| JIRA Issue | HADOOP-13446 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 48ae21d42390 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HADOOP-13489) DistCp may incorrectly return success status when the underlying Job failed

2016-08-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HADOOP-13489:

Attachment: HADOOP-13489.v2.patch

> DistCp may incorrectly return success status when the underlying Job failed
> ---
>
> Key: HADOOP-13489
> URL: https://issues.apache.org/jira/browse/HADOOP-13489
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
> Attachments: HADOOP-13489.v1.patch, HADOOP-13489.v2.patch, 
> TestIncrementalBackup-output.txt
>
>
> I was troubleshooting HBASE-14450 where at the end of BackupdistCp#execute(), 
> distcp job was marked unsuccessful (BackupdistCp is a wrapper of DistCp).
> Yet in IncrementalTableBackupProcedure#incrementalCopy(), the return value 
> from copyService.copy() was 0.
> Here is related code from DistCp:
> {code}
> try {
>   execute();
> } catch (InvalidInputException e) {
>   LOG.error("Invalid input: ", e);
>   return DistCpConstants.INVALID_ARGUMENT;
> } catch (DuplicateFileException e) {
>   LOG.error("Duplicate files in input path: ", e);
>   return DistCpConstants.DUPLICATE_INPUT;
> } catch (AclsNotSupportedException e) {
>   LOG.error("ACLs not supported on at least one file system: ", e);
>   return DistCpConstants.ACLS_NOT_SUPPORTED;
> } catch (XAttrsNotSupportedException e) {
>   LOG.error("XAttrs not supported on at least one file system: ", e);
>   return DistCpConstants.XATTRS_NOT_SUPPORTED;
> } catch (Exception e) {
>   LOG.error("Exception encountered ", e);
>   return DistCpConstants.UNKNOWN_ERROR;
> }
> return DistCpConstants.SUCCESS;
> {code}
> We don't check whether the Job returned by execute() was successful.
> Even if the Job fails, DistCpConstants.SUCCESS is returned.
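
Not the attached patch itself, just a minimal sketch of the missing check, 
assuming execute() returns the underlying MapReduce {{Job}} (the other catch 
clauses are elided for brevity):

{code}
try {
  Job job = execute();
  // Check the terminal Job state instead of assuming success.
  if (!job.isSuccessful()) {
    LOG.error("DistCp job " + job.getJobID() + " failed");
    return DistCpConstants.UNKNOWN_ERROR;
  }
} catch (Exception e) {
  LOG.error("Exception encountered ", e);
  return DistCpConstants.UNKNOWN_ERROR;
}
return DistCpConstants.SUCCESS;
{code}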






[jira] [Commented] (HADOOP-13419) Fix javadoc warnings by JDK8 in hadoop-common package

2016-08-11 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15418180#comment-15418180
 ] 

Kai Sasaki commented on HADOOP-13419:
-

[~iwasakims] Thanks for pointing that out. I'll fix the remaining javadoc 
warnings and handle the compiler warnings separately.

> Fix javadoc warnings by JDK8 in hadoop-common package
> -
>
> Key: HADOOP-13419
> URL: https://issues.apache.org/jira/browse/HADOOP-13419
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: HADOOP-13419.01.patch
>
>
> Fix compile warning generated after migrate JDK8.
> This is a subtask of HADOOP-13369.






[jira] [Commented] (HADOOP-13061) Refactor erasure coders

2016-08-11 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15418177#comment-15418177
 ] 

Kai Sasaki commented on HADOOP-13061:
-

[~drankye] Thanks for checking, and sorry for the misunderstanding. 
I'll make sure that using {{ErasureCodecOptions}} does not affect the 
HDFS side. 

{quote}
And also, would you help clean the following block in CommonConfigurationKeys?
{quote}
Sure, I'll do that as well.

> Refactor erasure coders
> ---
>
> Key: HADOOP-13061
> URL: https://issues.apache.org/jira/browse/HADOOP-13061
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Kai Sasaki
> Attachments: HADOOP-13061.01.patch, HADOOP-13061.02.patch, 
> HADOOP-13061.03.patch, HADOOP-13061.04.patch, HADOOP-13061.05.patch
>
>







[jira] [Commented] (HADOOP-13410) RunJar adds the content of the jar twice to the classpath

2016-08-11 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15418147#comment-15418147
 ] 

Ravi Prakash commented on HADOOP-13410:
---

Sure! Here's my +1 too for trunk

> RunJar adds the content of the jar twice to the classpath
> -
>
> Key: HADOOP-13410
> URL: https://issues.apache.org/jira/browse/HADOOP-13410
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Yuanbo Liu
> Attachments: HADOOP-13410.001.patch
>
>
> Today when you run a "hadoop jar" command, the jar is unzipped to a temporary 
> location and gets added to the classloader.
> However, the original jar itself is still added to the classpath.
> {code}
>   List<URL> classPath = new ArrayList<>();
>   classPath.add(new File(workDir + "/").toURI().toURL());
>   classPath.add(file.toURI().toURL());
>   classPath.add(new File(workDir, "classes/").toURI().toURL());
>   File[] libs = new File(workDir, "lib").listFiles();
>   if (libs != null) {
> for (File lib : libs) {
>   classPath.add(lib.toURI().toURL());
> }
>   }
> {code}
> As a result, the contents of the jar are present in the classpath *twice* and 
> are completely redundant. Although this does not necessarily cause 
> correctness issues, some stricter code written to require a single presence 
> of files may fail.
> I cannot think of a good reason why the jar should be added to the classpath 
> if the unjarred content was added to it. I think we should remove the jar 
> from the classpath.






[jira] [Commented] (HADOOP-13489) DistCp may incorrectly return success status when the underlying Job failed

2016-08-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15418146#comment-15418146
 ] 

Hadoop QA commented on HADOOP-13489:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
10s{color} | {color:red} hadoop-distcp in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
10s{color} | {color:red} hadoop-distcp in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 10s{color} 
| {color:red} hadoop-distcp in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
11s{color} | {color:red} hadoop-distcp in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
10s{color} | {color:red} hadoop-distcp in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 11s{color} 
| {color:red} hadoop-distcp in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12823378/HADOOP-13489.v1.patch 
|
| JIRA Issue | HADOOP-13489 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 949ecdd433b0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 874577a |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10233/artifact/patchprocess/patch-mvninstall-hadoop-tools_hadoop-distcp.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10233/artifact/patchprocess/patch-compile-hadoop-tools_hadoop-distcp.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10233/artifact/patchprocess/patch-compile-hadoop-tools_hadoop-distcp.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10233/artifact/patchprocess/patch-mvnsite-hadoop-tools_hadoop-distcp.txt
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10233/artifact/patchprocess/patch-findbugs-hadoop-tools_hadoop-distcp.txt
 |
| unit | 

[jira] [Commented] (HADOOP-10738) Dynamically adjust distcp configuration by adding distcp-site.xml into code base

2016-08-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15418133#comment-15418133
 ] 

Hadoop QA commented on HADOOP-10738:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
4s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12823370/HADOOP-10738.v2.patch 
|
| JIRA Issue | HADOOP-10738 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 97eea5269351 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 874577a |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10232/testReport/ |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10232/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Dynamically adjust distcp configuration by adding distcp-site.xml into code 
> base
> 
>
> Key: HADOOP-10738
> URL: https://issues.apache.org/jira/browse/HADOOP-10738
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Siqi Li
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10738.v1.patch, 

[jira] [Commented] (HADOOP-13447) S3Guard: Refactor S3AFileSystem to support introduction of separate metadata repository and tests.

2016-08-11 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15418134#comment-15418134
 ] 

Chris Nauroth commented on HADOOP-13447:


Great, sounds like we're coming to consensus on something more like patch 002.  
Let's go ahead with that.

> S3Guard: Refactor S3AFileSystem to support introduction of separate metadata 
> repository and tests.
> --
>
> Key: HADOOP-13447
> URL: https://issues.apache.org/jira/browse/HADOOP-13447
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13447-HADOOP-13446.001.patch, 
> HADOOP-13447-HADOOP-13446.002.patch
>
>
> The scope of this issue is to refactor the existing {{S3AFileSystem}} into 
> multiple coordinating classes.  The goal of this refactoring is to separate 
> the {{FileSystem}} API binding from the AWS SDK integration, make code 
> maintenance easier while we're making changes for S3Guard, and make it easier 
> to mock some implementation details so that tests can simulate eventual 
> consistency behavior in a deterministic way.






[jira] [Commented] (HADOOP-13410) RunJar adds the content of the jar twice to the classpath

2016-08-11 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15418128#comment-15418128
 ] 

Sangjin Lee commented on HADOOP-13410:
--

I'm +1 on the change. Do let me know if there is any feedback or objection to 
it. I am thinking of committing this only to 3.0.0 (unless anyone needs 
it on 2.x).

> RunJar adds the content of the jar twice to the classpath
> -
>
> Key: HADOOP-13410
> URL: https://issues.apache.org/jira/browse/HADOOP-13410
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Yuanbo Liu
> Attachments: HADOOP-13410.001.patch
>
>
> Today when you run a "hadoop jar" command, the jar is unzipped to a temporary 
> location and gets added to the classloader.
> However, the original jar itself is still added to the classpath.
> {code}
>   List<URL> classPath = new ArrayList<>();
>   classPath.add(new File(workDir + "/").toURI().toURL());
>   classPath.add(file.toURI().toURL());
>   classPath.add(new File(workDir, "classes/").toURI().toURL());
>   File[] libs = new File(workDir, "lib").listFiles();
>   if (libs != null) {
> for (File lib : libs) {
>   classPath.add(lib.toURI().toURL());
> }
>   }
> {code}
> As a result, the contents of the jar are present in the classpath *twice* and 
> are completely redundant. Although this does not necessarily cause 
> correctness issues, some stricter code written to require a single presence 
> of files may fail.
> I cannot think of a good reason why the jar should be added to the classpath 
> if the unjarred content was added to it. I think we should remove the jar 
> from the classpath.






[jira] [Updated] (HADOOP-13489) DistCp may incorrectly return success status when the underlying Job failed

2016-08-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HADOOP-13489:

Attachment: HADOOP-13489.v1.patch

Tentative patch which checks Job status.

> DistCp may incorrectly return success status when the underlying Job failed
> ---
>
> Key: HADOOP-13489
> URL: https://issues.apache.org/jira/browse/HADOOP-13489
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
> Attachments: HADOOP-13489.v1.patch
>
>
> I was troubleshooting HBASE-14450 where at the end of BackupdistCp#execute(), 
> distcp job was marked unsuccessful (BackupdistCp is a wrapper of DistCp).
> Yet in IncrementalTableBackupProcedure#incrementalCopy(), the return value 
> from copyService.copy() was 0.
> Here is related code from DistCp:
> {code}
> try {
>   execute();
> } catch (InvalidInputException e) {
>   LOG.error("Invalid input: ", e);
>   return DistCpConstants.INVALID_ARGUMENT;
> } catch (DuplicateFileException e) {
>   LOG.error("Duplicate files in input path: ", e);
>   return DistCpConstants.DUPLICATE_INPUT;
> } catch (AclsNotSupportedException e) {
>   LOG.error("ACLs not supported on at least one file system: ", e);
>   return DistCpConstants.ACLS_NOT_SUPPORTED;
> } catch (XAttrsNotSupportedException e) {
>   LOG.error("XAttrs not supported on at least one file system: ", e);
>   return DistCpConstants.XATTRS_NOT_SUPPORTED;
> } catch (Exception e) {
>   LOG.error("Exception encountered ", e);
>   return DistCpConstants.UNKNOWN_ERROR;
> }
> return DistCpConstants.SUCCESS;
> {code}
> We don't check whether the Job returned by execute() was successful.
> Even if the Job fails, DistCpConstants.SUCCESS is returned.






[jira] [Updated] (HADOOP-13489) DistCp may incorrectly return success status when the underlying Job failed

2016-08-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HADOOP-13489:

Status: Patch Available  (was: Open)

> DistCp may incorrectly return success status when the underlying Job failed
> ---
>
> Key: HADOOP-13489
> URL: https://issues.apache.org/jira/browse/HADOOP-13489
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
> Attachments: HADOOP-13489.v1.patch
>
>
> I was troubleshooting HBASE-14450 where at the end of BackupdistCp#execute(), 
> distcp job was marked unsuccessful (BackupdistCp is a wrapper of DistCp).
> Yet in IncrementalTableBackupProcedure#incrementalCopy(), the return value 
> from copyService.copy() was 0.
> Here is related code from DistCp:
> {code}
> try {
>   execute();
> } catch (InvalidInputException e) {
>   LOG.error("Invalid input: ", e);
>   return DistCpConstants.INVALID_ARGUMENT;
> } catch (DuplicateFileException e) {
>   LOG.error("Duplicate files in input path: ", e);
>   return DistCpConstants.DUPLICATE_INPUT;
> } catch (AclsNotSupportedException e) {
>   LOG.error("ACLs not supported on at least one file system: ", e);
>   return DistCpConstants.ACLS_NOT_SUPPORTED;
> } catch (XAttrsNotSupportedException e) {
>   LOG.error("XAttrs not supported on at least one file system: ", e);
>   return DistCpConstants.XATTRS_NOT_SUPPORTED;
> } catch (Exception e) {
>   LOG.error("Exception encountered ", e);
>   return DistCpConstants.UNKNOWN_ERROR;
> }
> return DistCpConstants.SUCCESS;
> {code}
> We don't check whether the Job returned by execute() was successful.
> Even if the Job fails, DistCpConstants.SUCCESS is returned.






[jira] [Updated] (HADOOP-13446) S3Guard: Support running isolated unit tests separate from AWS integration tests.

2016-08-11 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13446:
---
Attachment: HADOOP-13446-HADOOP-13345.003.patch

Aaron, thank you for the code review.  I am uploading one more patch revision 
(003), taking the opportunity to clean up a bunch of Checkstyle nitpicks since 
I'm touching the files anyway.

> S3Guard: Support running isolated unit tests separate from AWS integration 
> tests.
> -
>
> Key: HADOOP-13446
> URL: https://issues.apache.org/jira/browse/HADOOP-13446
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13446-HADOOP-13345.001.patch, 
> HADOOP-13446-HADOOP-13345.002.patch, HADOOP-13446-HADOOP-13345.003.patch
>
>
> Currently, the hadoop-aws module only runs Surefire if AWS credentials have 
> been configured.  This implies that all tests must run integrated with the 
> AWS back-end.  It also means that no tests run as part of ASF pre-commit.  
> This issue proposes for the hadoop-aws module to support running isolated 
> unit tests without integrating with AWS.  This will benefit S3Guard, because 
> we expect the need for isolated mock-based testing to simulate eventual 
> consistency behavior.  It also benefits hadoop-aws in general by allowing 
> pre-commit to do something more valuable.






[jira] [Commented] (HADOOP-13447) S3Guard: Refactor S3AFileSystem to support introduction of separate metadata repository and tests.

2016-08-11 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15418098#comment-15418098
 ] 

Aaron Fabbri commented on HADOOP-13447:
---

Great discussion, I'm enjoying it (although an hour with a whiteboard would be 
even better).

{quote}
My eventual (no pun intended!) plan was going to be to evolve the interfaces 
and separation of responsibilities for AbstractS3AccessPolicy and S3Store such 
that S3Store never makes its own internal metadata calls (like the internal 
getFileStatus calls you mentioned). 
{quote}

Yeah, I was trying to come up with a way to do this.  My goals are clean / 
minimal code, and near-optimal performance (want upstream s3a to be very fast).

The crux of the problem seems to be that the top-level logic of things like 
mkdirs()/rename() etc. needs to call getFileStatus(), and those internal 
getFileStatus() calls are subject to the same source-of-truth and retry 
policies as the public API.  In this case, the only way I can think of to really 
separate the top-level logic for FileSystem ops (e.g. mkdir) from the policy on 
MetadataStore is to build some sort of execution plan in the top-level logic and 
then pass it to a policy layer to execute.  You'd have three layers, roughly: 
FileSystem top-level, raw S3 I/O, and policy / execution. This seems like it would 
be slower and way over-complicated (some steps in execution are conditional, so 
you end up with a pseudo-language).  There is also the AOP approach that s3mper 
took, but I think we can do better since we can modify the upstream code, and 
our goals are a bit more ambitious (more than just listStatus(), plus the ability 
to use the MetadataStore as the source of truth).

{quote}
 I thought it would introduce complicated control flow in a lot of the 
S3AFileSystem methods. However, refactorings like this are always subjective, 
and it's entirely possible that I was wrong. 
{quote}

This is true.  Seems like the classic performance vs. complexity tradeoff.  I 
don't think you are wrong at all; it's just a question of priorities.  I'm willing 
to sacrifice a little code separation for a significant performance benefit, and 
possibly a more complete solution (e.g. if an internal getFileStatus() call misses 
a file above mkdirs(path) because that file's creation was eventually consistent).

{quote}
If you prefer, we can go that way
{quote}

That is my preference at this stage.  It was not as bad as I thought it would 
be when we prototyped it.  Also I like doing refactoring / generalization after 
seeing some concrete cases in code (i.e. start by complicating top level 
S3AFileSystem logic).

My preference would probably change if I could think of a clean way to handle 
the internal getFileStatus() calls.

{quote} and later revisit the larger split I proposed here in patch 001 if it's 
deemed necessary.{quote}

I think we do need to break up S3AFileSystem.  I'll be very supportive of 
future refactoring here, in general.

{quote}
Really the only portion of patch 001 that I consider an absolute must is the 
S3ClientFactory work. I think it's vital to the project that we have the 
ability to mock S3 interactions to simulate eventual consistency.
{quote}

Yes, I like this part of it.  Being able to mock the S3 service out would be 
awesome, and I didn't notice any real downside.

> S3Guard: Refactor S3AFileSystem to support introduction of separate metadata 
> repository and tests.
> --
>
> Key: HADOOP-13447
> URL: https://issues.apache.org/jira/browse/HADOOP-13447
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13447-HADOOP-13446.001.patch, 
> HADOOP-13447-HADOOP-13446.002.patch
>
>
> The scope of this issue is to refactor the existing {{S3AFileSystem}} into 
> multiple coordinating classes.  The goal of this refactoring is to separate 
> the {{FileSystem}} API binding from the AWS SDK integration, make code 
> maintenance easier while we're making changes for S3Guard, and make it easier 
> to mock some implementation details so that tests can simulate eventual 
> consistency behavior in a deterministic way.






[jira] [Updated] (HADOOP-13447) S3Guard: Refactor S3AFileSystem to support introduction of separate metadata repository and tests.

2016-08-11 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13447:
---
Attachment: HADOOP-13447-HADOOP-13446.002.patch

Here is patch v002, showing the smaller possible change that I just 
discussed, for comparison.

# Just introduce {{S3ClientFactory}}.
# Add {{TestS3AGetFileStatus}}, which proves that we now can write tests that 
mock the underlying S3 calls.
# Also opportunistically clean up some Checkstyle warnings.
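
As a rough illustration of the kind of test this enables (assuming Mockito; 
the bucket, key, and class names below are illustrative, not from the patch):

{code}
// Illustrative only: with a client factory, a test can hand S3AFileSystem a
// mocked AmazonS3 and script eventually-consistent answers deterministically.
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ObjectMetadata;

public class MockS3Sketch {
  public static AmazonS3 newMockS3Client() {
    AmazonS3 s3 = mock(AmazonS3.class);
    ObjectMetadata meta = new ObjectMetadata();
    meta.setContentLength(42L);
    // First HEAD misses (simulating read-after-write inconsistency),
    // the second one sees the object.
    when(s3.getObjectMetadata("test-bucket", "dir/file"))
        .thenThrow(new AmazonServiceException("Not Found"))
        .thenReturn(meta);
    return s3;
  }
}
{code}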

> S3Guard: Refactor S3AFileSystem to support introduction of separate metadata 
> repository and tests.
> --
>
> Key: HADOOP-13447
> URL: https://issues.apache.org/jira/browse/HADOOP-13447
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13447-HADOOP-13446.001.patch, 
> HADOOP-13447-HADOOP-13446.002.patch
>
>
> The scope of this issue is to refactor the existing {{S3AFileSystem}} into 
> multiple coordinating classes.  The goal of this refactoring is to separate 
> the {{FileSystem}} API binding from the AWS SDK integration, make code 
> maintenance easier while we're making changes for S3Guard, and make it easier 
> to mock some implementation details so that tests can simulate eventual 
> consistency behavior in a deterministic way.






[jira] [Updated] (HADOOP-10738) Dynamically adjust distcp configuration by adding distcp-site.xml into code base

2016-08-11 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HADOOP-10738:
--
Target Version/s: 2.9.0  (was: 2.8.0)

> Dynamically adjust distcp configuration by adding distcp-site.xml into code 
> base
> 
>
> Key: HADOOP-10738
> URL: https://issues.apache.org/jira/browse/HADOOP-10738
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Siqi Li
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10738.v1.patch, HADOOP-10738.v2.patch
>
>
> For now, the configuration of distcp resides in hadoop-distcp.jar. This makes 
> it difficult to adjust the configuration dynamically.






[jira] [Updated] (HADOOP-10738) Dynamically adjust distcp configuration by adding distcp-site.xml into code base

2016-08-11 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HADOOP-10738:
--
Attachment: HADOOP-10738.v2.patch

Upmerging to trunk

> Dynamically adjust distcp configuration by adding distcp-site.xml into code 
> base
> 
>
> Key: HADOOP-10738
> URL: https://issues.apache.org/jira/browse/HADOOP-10738
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Siqi Li
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10738.v1.patch, HADOOP-10738.v2.patch
>
>
> For now, the configuration of distcp resides in hadoop-distcp.jar. This makes 
> it difficult to adjust the configuration dynamically.






[jira] [Updated] (HADOOP-10738) Dynamically adjust distcp configuration by adding distcp-site.xml into code base

2016-08-11 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HADOOP-10738:
--
Status: Patch Available  (was: Open)

> Dynamically adjust distcp configuration by adding distcp-site.xml into code 
> base
> 
>
> Key: HADOOP-10738
> URL: https://issues.apache.org/jira/browse/HADOOP-10738
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Siqi Li
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10738.v1.patch, HADOOP-10738.v2.patch
>
>
> For now, the configuration of distcp resides in hadoop-distcp.jar. This makes 
> it difficult to adjust the configuration dynamically.






[jira] [Updated] (HADOOP-10738) Dynamically adjust distcp configuration by adding distcp-site.xml into code base

2016-08-11 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HADOOP-10738:
--
Target Version/s: 2.8.0
  Status: Open  (was: Patch Available)

Let's see if we can get this in now.

> Dynamically adjust distcp configuration by adding distcp-site.xml into code 
> base
> 
>
> Key: HADOOP-10738
> URL: https://issues.apache.org/jira/browse/HADOOP-10738
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Siqi Li
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10738.v1.patch
>
>
> For now, the configuration of distcp resides in hadoop-distcp.jar. This makes 
> it difficult to adjust the configuration dynamically.






[jira] [Comment Edited] (HADOOP-13436) RPC connections are leaking due to missing equals override in RetryUtils#getDefaultRetryPolicy

2016-08-11 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15418013#comment-15418013
 ] 

Xiaobing Zhou edited comment on HADOOP-13436 at 8/11/16 10:18 PM:
--

Thank you [~daryn] and [~liuml07] for your comments. How about defining a new 
interface ConnectionRetryPolicy that extends RetryPolicy and adds 
ConnectionRetryPolicy#reuseConnection? ConnectionId#equals should instead 
compare ConnectionRetryPolicy#reuseConnection to decide whether a new 
connection needs to be created. Without the specific interface, there are 
several issues:

1. Exception checks at the RetryInvocationHandler level could be tightly 
coupled with those at the connection level. It's error-prone to mix them 
together, since the same RetryPolicy can be used at both levels. A good case 
in point is RetryUtils#getDefaultRetryPolicy: it composes checks of 
ServiceException and IOException, whereas ServiceException is never thrown at 
the connection level.

2. For the same reason, it's hard to correctly implement retry logic in the 
exception-dependent retry policies, e.g. ExceptionDependentRetry, 
OtherThanRemoteExceptionDependentRetry and RemoteExceptionDependentRetry. 
Their logic should be separated into two distinct categories.

We need to avoid the jumbo retry policy anyway.

I will start with a couple of sub-tasks addressing the aforementioned issues.

Thank you [~jingzhao] very much for your suggestions.


was (Author: xiaobingo):
Thank you [~daryn] and [~liuml07] for your comments. How about defining a new 
interface ConnectionRetryPolicy that extends RetryPolicy and adds 
ConnectionRetryPolicy#reuseConnection? ConnectionId#equals should instead 
compare ConnectionRetryPolicy#reuseConnection to decide whether a new 
connection needs to be created. Without the specific interface, there are 
several issues:

1. Exception checks at the RetryInvocationHandler level could be tightly 
coupled with those at the connection level. It's error-prone to mix them 
together, since the same RetryPolicy can be used at both levels. A good case 
in point is RetryUtils#getDefaultRetryPolicy: it composes checks of 
ServiceException and IOException, whereas ServiceException is never thrown at 
the connection level.

2. For the same reason, it's hard to correctly implement retry logic in the 
exception-dependent retry policies, e.g. ExceptionDependentRetry, 
OtherThanRemoteExceptionDependentRetry and RemoteExceptionDependentRetry.

We need to avoid the jumbo retry policy anyway.

I will start with a couple of sub-tasks addressing the aforementioned issues.

Thank you [~jingzhao] very much for your suggestions.

> RPC connections are leaking due to missing equals override in 
> RetryUtils#getDefaultRetryPolicy
> --
>
> Key: HADOOP-13436
> URL: https://issues.apache.org/jira/browse/HADOOP-13436
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: repro.sh
>
>
> We've noticed RPC connections are increasing dramatically in a Kerberized 
> HDFS cluster with {noformat}dfs.client.retry.policy.enabled{noformat} 
> enabled. Internally, Client#getConnection does a lookup relying on 
> ConnectionId#equals, which includes checking Subclass-of-RetryPolicy#equals. 
> If subclasses of RetryPolicy neglect to override RetryPolicy#equals, every 
> instance of RetryPolicy with equivalent field values (e.g. 
> MultipleLinearRandomRetry[6x1ms, 10x6ms]) will lead to a brand-new 
> connection, because the check falls back to Object#equals.
> This is the stack trace where RetryUtils#getDefaultRetryPolicy is called:
> {noformat}
> at 
> org.apache.hadoop.io.retry.RetryUtils.getDefaultRetryPolicy(RetryUtils.java:82)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createNNProxyWithClientProtocol(NameNodeProxies.java:409)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:315)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:678)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:619)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:609)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.newDfsClient(WebHdfsHandler.java:272)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.onOpen(WebHdfsHandler.java:215)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.handle(WebHdfsHandler.java:135)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler$1.run(WebHdfsHandler.java:117)
>

[jira] [Updated] (HADOOP-13488) Have TryOnceThenFail implement ConnectionRetryPolicy

2016-08-11 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-13488:
---
Description: As the most commonly used default or fallback policy, 
TryOnceThenFail is often used at both the RetryInvocationHandler and 
connection levels. As proposed in HADOOP-13436, it should implement 
ConnectionRetryPolicy.  (was: 
As the most commonly used default or fallback policy, )

> Have TryOnceThenFail implement ConnectionRetryPolicy
> 
>
> Key: HADOOP-13488
> URL: https://issues.apache.org/jira/browse/HADOOP-13488
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> As the most commonly used default or fallback policy, TryOnceThenFail is 
> often used at both the RetryInvocationHandler and connection levels. As 
> proposed in HADOOP-13436, it should implement ConnectionRetryPolicy.






[jira] [Updated] (HADOOP-13488) Have TryOnceThenFail implement ConnectionRetryPolicy

2016-08-11 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HADOOP-13488:
---
Description: As the most commonly used default or fallback policy, 

> Have TryOnceThenFail implement ConnectionRetryPolicy
> 
>
> Key: HADOOP-13488
> URL: https://issues.apache.org/jira/browse/HADOOP-13488
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
>
> As the most commonly used default or fallback policy, 






[jira] [Created] (HADOOP-13488) Have TryOnceThenFail implement ConnectionRetryPolicy

2016-08-11 Thread Xiaobing Zhou (JIRA)
Xiaobing Zhou created HADOOP-13488:
--

 Summary: Have TryOnceThenFail implement ConnectionRetryPolicy
 Key: HADOOP-13488
 URL: https://issues.apache.org/jira/browse/HADOOP-13488
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Xiaobing Zhou
Assignee: Xiaobing Zhou









[jira] [Commented] (HADOOP-13436) RPC connections are leaking due to missing equals override in RetryUtils#getDefaultRetryPolicy

2016-08-11 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15418013#comment-15418013
 ] 

Xiaobing Zhou commented on HADOOP-13436:


Thank you [~daryn] and [~liuml07] for your comments. How about defining a new 
interface ConnectionRetryPolicy that extends RetryPolicy and adds 
ConnectionRetryPolicy#reuseConnection? ConnectionId#equals should instead 
compare ConnectionRetryPolicy#reuseConnection to decide whether a new 
connection needs to be created. Without the specific interface, there are 
several issues:
1. Exception checks at the RetryInvocationHandler level could be tightly 
coupled with those at the connection level. It's error-prone to mix them 
together, since the same RetryPolicy can be used at both levels. A good case 
in point is RetryUtils#getDefaultRetryPolicy: it composes checks of 
ServiceException and IOException, whereas ServiceException is never thrown at 
the connection level.
2. For the same reason, it's hard to correctly implement retry logic in the 
exception-dependent retry policies, e.g. ExceptionDependentRetry, 
OtherThanRemoteExceptionDependentRetry and RemoteExceptionDependentRetry.

We need to avoid the jumbo retry policy anyway.

I will start with a couple of sub-tasks addressing the aforementioned issues.

Thank you [~jingzhao] very much for your suggestions.
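
A minimal sketch of the proposed shape; the surrounding signatures are 
simplified stand-ins, not Hadoop's actual RetryPolicy API:

{code}
// Sketch only: a connection-level policy exposes enough information for
// ConnectionId#equals to decide whether an existing connection can be
// reused, instead of falling back to Object#equals and leaking connections.
interface RetryPolicy {
  boolean shouldRetry(Exception e, int retries) throws Exception;  // simplified
}

interface ConnectionRetryPolicy extends RetryPolicy {
  // True if two policies are equivalent for connection-sharing purposes.
  boolean reuseConnection(ConnectionRetryPolicy other);
}
{code}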

> RPC connections are leaking due to missing equals override in 
> RetryUtils#getDefaultRetryPolicy
> --
>
> Key: HADOOP-13436
> URL: https://issues.apache.org/jira/browse/HADOOP-13436
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: repro.sh
>
>
> We've noticed RPC connections are increasing dramatically in a Kerberized 
> HDFS cluster with {noformat}dfs.client.retry.policy.enabled{noformat} 
> enabled. Internally, Client#getConnection does a lookup relying on 
> ConnectionId#equals, which includes checking Subclass-of-RetryPolicy#equals. 
> If subclasses of RetryPolicy neglect to override RetryPolicy#equals, every 
> instance of RetryPolicy with equivalent field values (e.g. 
> MultipleLinearRandomRetry[6x1ms, 10x6ms]) will lead to a brand-new 
> connection, because the check falls back to Object#equals.
> This is the stack trace where RetryUtils#getDefaultRetryPolicy is called:
> {noformat}
> at 
> org.apache.hadoop.io.retry.RetryUtils.getDefaultRetryPolicy(RetryUtils.java:82)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createNNProxyWithClientProtocol(NameNodeProxies.java:409)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:315)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:678)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:619)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:609)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.newDfsClient(WebHdfsHandler.java:272)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.onOpen(WebHdfsHandler.java:215)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.handle(WebHdfsHandler.java:135)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler$1.run(WebHdfsHandler.java:117)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler$1.run(WebHdfsHandler.java:114)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.channelRead0(WebHdfsHandler.java:114)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.URLDispatcher.channelRead0(URLDispatcher.java:52)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.URLDispatcher.channelRead0(URLDispatcher.java:32)
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
> at 
> io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
> at 
> 

[jira] [Comment Edited] (HADOOP-13436) RPC connections are leaking due to missing equals override in RetryUtils#getDefaultRetryPolicy

2016-08-11 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15418013#comment-15418013
 ] 

Xiaobing Zhou edited comment on HADOOP-13436 at 8/11/16 10:02 PM:
--

Thank you [~daryn] and [~liuml07] for your comments. How about defining a new 
interface ConnectionRetryPolicy that extends RetryPolicy and adds 
ConnectionRetryPolicy#reuseConnection? ConnectionId#equals should instead 
compare ConnectionRetryPolicy#reuseConnection to decide whether a new 
connection needs to be created. Without the specific interface, there are 
several issues:

1. Exception checks at the RetryInvocationHandler level could be tightly 
coupled with those at the connection level. It's error-prone to mix them 
together, since the same RetryPolicy can be used at both levels. A good case 
in point is RetryUtils#getDefaultRetryPolicy: it composes checks of 
ServiceException and IOException, whereas ServiceException is never thrown at 
the connection level.

2. For the same reason, it's hard to correctly implement retry logic in the 
exception-dependent retry policies, e.g. ExceptionDependentRetry, 
OtherThanRemoteExceptionDependentRetry and RemoteExceptionDependentRetry.

We need to avoid the jumbo retry policy anyway.

I will start with a couple of sub-tasks addressing the aforementioned issues.

Thank you [~jingzhao] very much for your suggestions.


was (Author: xiaobingo):
Thank you [~daryn] and [~liuml07] for your comments. How about defining a new 
interface ConnectionRetryPolicy that extends RetryPolicy and adds 
ConnectionRetryPolicy#reuseConnection? ConnectionId#equals should instead 
compare ConnectionRetryPolicy#reuseConnection to decide whether a new 
connection needs to be created. Without the specific interface, there are 
several issues:
1. Exception checks at the RetryInvocationHandler level could be tightly 
coupled with those at the connection level. It's error-prone to mix them 
together, since the same RetryPolicy can be used at both levels. A good case 
in point is RetryUtils#getDefaultRetryPolicy: it composes checks of 
ServiceException and IOException, whereas ServiceException is never thrown at 
the connection level.
2. For the same reason, it's hard to correctly implement retry logic in the 
exception-dependent retry policies, e.g. ExceptionDependentRetry, 
OtherThanRemoteExceptionDependentRetry and RemoteExceptionDependentRetry.

We need to avoid the jumbo retry policy anyway.

I will start with a couple of sub-tasks addressing the aforementioned issues.

Thank you [~jingzhao] very much for your suggestions.

> RPC connections are leaking due to missing equals override in 
> RetryUtils#getDefaultRetryPolicy
> --
>
> Key: HADOOP-13436
> URL: https://issues.apache.org/jira/browse/HADOOP-13436
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: repro.sh
>
>
> We've noticed RPC connections are increasing dramatically in a Kerberized 
> HDFS cluster with {noformat}dfs.client.retry.policy.enabled{noformat} 
> enabled. Internally, Client#getConnection does a lookup relying on 
> ConnectionId#equals, which includes checking Subclass-of-RetryPolicy#equals. 
> If subclasses of RetryPolicy neglect to override RetryPolicy#equals, every 
> instance of RetryPolicy with equivalent field values (e.g. 
> MultipleLinearRandomRetry[6x1ms, 10x6ms]) will lead to a brand-new 
> connection, because the check falls back to Object#equals.
> This is the stack trace where RetryUtils#getDefaultRetryPolicy is called:
> {noformat}
> at 
> org.apache.hadoop.io.retry.RetryUtils.getDefaultRetryPolicy(RetryUtils.java:82)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createNNProxyWithClientProtocol(NameNodeProxies.java:409)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:315)
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:678)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:619)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:609)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.newDfsClient(WebHdfsHandler.java:272)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.onOpen(WebHdfsHandler.java:215)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.handle(WebHdfsHandler.java:135)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler$1.run(WebHdfsHandler.java:117)
> at 
> 

[jira] [Updated] (HADOOP-13190) Mention LoadBalancingKMSClientProvider in KMS HA documentation

2016-08-11 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13190:
-
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to trunk, branch-2 and branch-2.8. Thanks [~xiaochen] for review!

> Mention LoadBalancingKMSClientProvider in KMS HA documentation
> --
>
> Key: HADOOP-13190
> URL: https://issues.apache.org/jira/browse/HADOOP-13190
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, kms
>Affects Versions: 2.7.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: supportability
> Attachments: HADOOP-13190.001.patch, HADOOP-13190.002.patch, 
> HADOOP-13190.003.patch, HADOOP-13190.004.patch
>
>
> Currently, there are two ways to achieve KMS HA.
> The first one, and the only documented one, is running multiple KMS instances 
> behind a load balancer. 
> https://hadoop.apache.org/docs/stable/hadoop-kms/index.html
> The other way is to make use of LoadBalancingKMSClientProvider, which was 
> added in HADOOP-11620. However, its usage is undocumented.
> I think we should update the KMS document to introduce 
> LoadBalancingKMSClientProvider, provide examples, and also update 
> kms-site.xml to explain it.
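
For readers arriving before the docs update, a rough client-side 
illustration; the property name and hosts are assumptions that vary by 
release (some 2.x lines use dfs.encryption.key.provider.uri), so check your 
version's docs:

{code}
// Example only: the multi-host URI form (hosts separated by ';') is what
// causes the client to wrap the instances in LoadBalancingKMSClientProvider.
// The property name below is an assumption; verify it for your release.
import org.apache.hadoop.conf.Configuration;

public class KmsProviderExample {
  public static Configuration withLoadBalancedKms() {
    Configuration conf = new Configuration();
    conf.set("hadoop.security.key.provider.path",
        "kms://http@kms01.example.com;kms02.example.com:16000/kms");
    return conf;
  }
}
{code}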






[jira] [Commented] (HADOOP-13396) Add json format audit logging to KMS

2016-08-11 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15417955#comment-15417955
 ] 

Wei-Chiu Chuang commented on HADOOP-13396:
--

Hi [~xiaochen], thanks for the updated patch and for the clarification. I 
made another round of review; here are my comments:

* There is a test failure due to the change in the audit message.
* user and impersonator could be null strings. Would 
JSONGenerator#writeStartObject get an NPE if the string is null? If not, and 
if it prints "null" for a null user/null impersonator, it could be hard to 
tell whether it is a user/impersonator named 'null' or a null string.
* SimpleKMSAuditLogger#logAuditEvent:
If the status is UNAUTHORIZED, I think you can use logAuditSimpleFormat() to 
print the audit log as well.
In fact, it looks like you only need special logic when the status is OK.

* JsonKMSAuditLogger#logAuditEvent:
I think the same applies to this audit logger class. You should be able to 
call logAuditSimpleFormat(), which will help simplify the logic.
If it can't be used, see if you can extract the lines in each switch-case, 
because they are relatively long.

* The parameter opStatus in {{private void op(OpStatus opStatus, final 
KMS.KMSOp op, final UserGroupInformation ugi, final String key, final String 
remoteHost, final String extraMsg)}} should be declared final.

* With regard to the default logger, can we stick to the typical Hadoop 
convention where we define 
public static final String KMS_AUDIT_LOGGER_KEY_DEFAULT = "simple";
This simplifies the code in KMSAudit#initializeAuditLoggers().
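
Something along these lines (the key and class names are a sketch, not the 
final patch):

{code}
// Sketch of the suggested convention; key and default names are illustrative.
public class KMSAuditDefaults {
  public static final String KMS_AUDIT_LOGGER_KEY = "hadoop.kms.audit.logger";
  public static final String KMS_AUDIT_LOGGER_KEY_DEFAULT = "simple";

  static String configuredLoggers(org.apache.hadoop.conf.Configuration conf) {
    // A non-null default removes the special-casing in
    // KMSAudit#initializeAuditLoggers().
    return conf.get(KMS_AUDIT_LOGGER_KEY, KMS_AUDIT_LOGGER_KEY_DEFAULT);
  }
}
{code}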


> Add json format audit logging to KMS
> 
>
> Key: HADOOP-13396
> URL: https://issues.apache.org/jira/browse/HADOOP-13396
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13396.01.patch, HADOOP-13396.02.patch, 
> HADOOP-13396.03.patch, HADOOP-13396.04.patch, HADOOP-13396.05.patch
>
>
> Currently, KMS audit log is using log4j, to write a text format log.
> We should refactor this, so that people can easily add new format audit logs. 
> The current text format log should be the default, and all of its behavior 
> should remain compatible.
> A json format log extension is added using the refactored API, and being 
> turned off by default.






[jira] [Commented] (HADOOP-13190) Mention LoadBalancingKMSClientProvider in KMS HA documentation

2016-08-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15417814#comment-15417814
 ] 

Hudson commented on HADOOP-13190:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #10264 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10264/])
HADOOP-13190. Mention LoadBalancingKMSClientProvider in KMS HA (weichiu: rev 
db719ef125b11b01eab3353e2dc4b48992bf88d5)
* hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm


> Mention LoadBalancingKMSClientProvider in KMS HA documentation
> --
>
> Key: HADOOP-13190
> URL: https://issues.apache.org/jira/browse/HADOOP-13190
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, kms
>Affects Versions: 2.7.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: supportability
> Attachments: HADOOP-13190.001.patch, HADOOP-13190.002.patch, 
> HADOOP-13190.003.patch, HADOOP-13190.004.patch
>
>
> Currently, there are two ways to achieve KMS HA.
> The first one, and the only documented one, is running multiple KMS instances 
> behind a load balancer. 
> https://hadoop.apache.org/docs/stable/hadoop-kms/index.html
> The other way is to make use of LoadBalancingKMSClientProvider, which was 
> added in HADOOP-11620. However, its usage is undocumented.
> I think we should update the KMS document to introduce 
> LoadBalancingKMSClientProvider, provide examples, and also update 
> kms-site.xml to explain it.






[jira] [Updated] (HADOOP-13190) Mention LoadBalancingKMSClientProvider in KMS HA documentation

2016-08-11 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13190:
-
Summary: Mention LoadBalancingKMSClientProvider in KMS HA documentation  
(was: LoadBalancingKMSClientProvider should be documented)

> Mention LoadBalancingKMSClientProvider in KMS HA documentation
> --
>
> Key: HADOOP-13190
> URL: https://issues.apache.org/jira/browse/HADOOP-13190
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, kms
>Affects Versions: 2.7.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: supportability
> Attachments: HADOOP-13190.001.patch, HADOOP-13190.002.patch, 
> HADOOP-13190.003.patch, HADOOP-13190.004.patch
>
>
> Currently, there are two ways to achieve KMS HA.
> The first one, and the only documented one, is running multiple KMS instances 
> behind a load balancer. 
> https://hadoop.apache.org/docs/stable/hadoop-kms/index.html
> The other way is to make use of LoadBalancingKMSClientProvider, which was 
> added in HADOOP-11620. However, its usage is undocumented.
> I think we should update the KMS document to introduce 
> LoadBalancingKMSClientProvider, provide examples, and also update 
> kms-site.xml to explain it.






[jira] [Commented] (HADOOP-13190) LoadBalancingKMSClientProvider should be documented

2016-08-11 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15417791#comment-15417791
 ] 

Wei-Chiu Chuang commented on HADOOP-13190:
--

Committing this.

> LoadBalancingKMSClientProvider should be documented
> ---
>
> Key: HADOOP-13190
> URL: https://issues.apache.org/jira/browse/HADOOP-13190
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, kms
>Affects Versions: 2.7.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: supportability
> Attachments: HADOOP-13190.001.patch, HADOOP-13190.002.patch, 
> HADOOP-13190.003.patch, HADOOP-13190.004.patch
>
>
> Currently, there are two ways to achieve KMS HA.
> The first one, and the only documented one, is running multiple KMS instances 
> behind a load balancer. 
> https://hadoop.apache.org/docs/stable/hadoop-kms/index.html
> The other way is to make use of LoadBalancingKMSClientProvider, which was 
> added in HADOOP-11620. However, its usage is undocumented.
> I think we should update the KMS document to introduce 
> LoadBalancingKMSClientProvider, provide examples, and also update 
> kms-site.xml to explain it.






[jira] [Commented] (HADOOP-13441) Document LdapGroupsMapping keystore password properties

2016-08-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15417786#comment-15417786
 ] 

Hudson commented on HADOOP-13441:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #10263 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10263/])
HADOOP-13441. Document LdapGroupsMapping keystore password properties. 
(weichiu: rev d892ae9576d07d01927443b6dc6c934a6c2f317f)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/CredentialProviderFactory.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/CredentialProvider.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* hadoop-common-project/hadoop-common/src/site/markdown/CredentialProviderAPI.md
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* hadoop-common-project/hadoop-common/src/site/markdown/GroupsMapping.md
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/AbstractJavaKeyStoreProvider.java


> Document LdapGroupsMapping keystore password properties
> ---
>
> Key: HADOOP-13441
> URL: https://issues.apache.org/jira/browse/HADOOP-13441
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Wei-Chiu Chuang
>Assignee: Yuanbo Liu
>Priority: Minor
>  Labels: documentation
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13441.001.patch, HADOOP-13441.002.patch, 
> HADOOP-13441.003.patch, HADOOP-13441.004.patch, HADOOP-13441.005.patch
>
>
> A few properties are not documented.
> {{hadoop.security.group.mapping.ldap.ssl.keystore.password}}
> This property is used as an alias to get the password from credential 
> providers, or to fall back to using the value as the password in clear 
> text. There is also a caveat that credential providers cannot be on an 
> HDFS-based file system, as mentioned in HADOOP-11934, to prevent a cyclic 
> dependency issue.
> This should be documented in core-default.xml and GroupsMapping.md
> {{hadoop.security.credential.clear-text-fallback}}
> This property controls whether or not to fall back to storing credential 
> password as cleartext.
> This should be documented in core-default.xml.
> {{hadoop.security.credential.provider.path}}
> This is mentioned in _CredentialProvider API Guide_, but not in 
> core-default.xml
> The "Supported Features" in _CredentialProvider API Guide_ should link back 
> to GroupsMapping.md#LDAP Groups Mapping 
> {{hadoop.security.credstore.java-keystore-provider.password-file}}
> This is the password file to protect credential files.
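
As a concrete illustration of how these properties fit together, a minimal 
sketch; the jceks path is an example, and per HADOOP-11934 the provider must 
not live on an HDFS-based file system:

{code}
// Example only: resolve the LDAP keystore password from a local JCEKS
// credential store rather than clear text. The path is illustrative.
import org.apache.hadoop.conf.Configuration;

public class LdapKeystorePasswordExample {
  public static Configuration withCredentialProvider() {
    Configuration conf = new Configuration();
    conf.set("hadoop.security.credential.provider.path",
        "jceks://file/etc/hadoop/conf/ldap-ssl.jceks");
    // Optionally disable the clear-text fallback described above.
    conf.setBoolean("hadoop.security.credential.clear-text-fallback", false);
    return conf;
  }
}
{code}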






[jira] [Commented] (HADOOP-13447) S3Guard: Refactor S3AFileSystem to support introduction of separate metadata repository and tests.

2016-08-11 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15417767#comment-15417767
 ] 

Chris Nauroth commented on HADOOP-13447:


[~fabbri], thank you for looking.  Great questions again.

bq. The downsides to this, as we mentioned in the design doc, is that the 
s3a-internal calls like getFileStatus() cannot utilize the MetadataStore.

My eventual (no pun intended!) plan was to evolve the interfaces and 
separation of responsibilities for {{AbstractS3AccessPolicy}} and 
{{S3Store}} such that {{S3Store}} never makes its own internal metadata calls 
(like the internal {{getFileStatus}} calls you mentioned).  I didn't take that 
leap in this patch, because it was already quite large.

Taking the example of {{create}}, this might look like the policy layer 
fetching the {{FileStatus}} and then passing it down to {{S3Store#create}}, so 
that {{S3Store#create}} doesn't have to do an internal S3 call to fetch it.  
For the "direct" policy, that would just be a sequence of 
{{S3Store#getFileStatus}} + {{S3Store#create}}.  For a caching policy, that 
could be a metadata store fetch instead.
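
A rough sketch of that create flow; only {{S3Store}} and the method names 
quoted above come from the discussion, the rest is illustrative glue:

{code}
// Illustrative glue only. The policy fetches metadata up front and passes it
// down, so S3Store#create never makes its own internal getFileStatus call.
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;

interface S3Store {
  FileStatus getFileStatus(Path p) throws IOException;   // raw S3 HEAD
  void create(Path p, FileStatus existing) throws IOException;
}

class DirectPolicy {                   // "direct": always ask S3
  private final S3Store store;
  DirectPolicy(S3Store store) { this.store = store; }

  void create(Path p) throws IOException {
    FileStatus existing = store.getFileStatus(p);  // a caching policy would
    store.create(p, existing);                     // consult the MetadataStore
  }                                                // here instead
}
{code}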

For some operations, there are greater challenges related to lazy fetch vs. 
eager fetch for these internal metadata operations.  Considering {{rename}}, 
there are multiple (I think 3 now?) {{getFileStatus}} calls possible, but 
fetching them all eagerly in the policy would harm performance if it turns out 
{{S3Store#rename}} doesn't really need to use them all.  Working out a lazy 
fetch strategy will bring some additional complexity into the code, so that's a 
risk.

bq. I feel like a cleaner mapping to the problem is to have the client 
(S3AFileSystem) contain a MetadataStore and/or some sort of policy object which 
specifies behavior.

I considered this, but I discarded it, because I thought it would introduce 
complicated control flow in a lot of the {{S3AFileSystem}} methods.  However, 
refactorings like this are always subjective, and it's entirely possible that I 
was wrong.  If you prefer, we can go that way and later revisit the larger 
split I proposed here in patch 001 if it's deemed necessary.  I'm happy to get 
us rolling either way.  Let me know your thoughts.

Really the only portion of patch 001 that I consider an absolute must is the 
{{S3ClientFactory}} work.  I think it's vital to the project that we have the 
ability to mock S3 interactions to simulate eventual consistency.

> S3Guard: Refactor S3AFileSystem to support introduction of separate metadata 
> repository and tests.
> --
>
> Key: HADOOP-13447
> URL: https://issues.apache.org/jira/browse/HADOOP-13447
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-13447-HADOOP-13446.001.patch
>
>
> The scope of this issue is to refactor the existing {{S3AFileSystem}} into 
> multiple coordinating classes.  The goal of this refactoring is to separate 
> the {{FileSystem}} API binding from the AWS SDK integration, make code 
> maintenance easier while we're making changes for S3Guard, and make it easier 
> to mock some implementation details so that tests can simulate eventual 
> consistency behavior in a deterministic way.






[jira] [Updated] (HADOOP-13441) Document LdapGroupsMapping keystore password properties

2016-08-11 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13441:
-
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha2
   2.9.0
   Status: Resolved  (was: Patch Available)

Committed it to trunk and branch-2. Thanks [~yuanbo] for contributing the patch!

> Document LdapGroupsMapping keystore password properties
> ---
>
> Key: HADOOP-13441
> URL: https://issues.apache.org/jira/browse/HADOOP-13441
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Wei-Chiu Chuang
>Assignee: Yuanbo Liu
>Priority: Minor
>  Labels: documentation
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13441.001.patch, HADOOP-13441.002.patch, 
> HADOOP-13441.003.patch, HADOOP-13441.004.patch, HADOOP-13441.005.patch
>
>
> A few properties are not documented.
> {{hadoop.security.group.mapping.ldap.ssl.keystore.password}}
> This property is used as an alias to get the password from credential 
> providers, or to fall back to using the value as the password in clear 
> text. There is also a caveat that credential providers cannot be on an 
> HDFS-based file system, as mentioned in HADOOP-11934, to prevent a cyclic 
> dependency issue.
> This should be documented in core-default.xml and GroupsMapping.md
> {{hadoop.security.credential.clear-text-fallback}}
> This property controls whether or not to fall back to storing credential 
> password as cleartext.
> This should be documented in core-default.xml.
> {{hadoop.security.credential.provider.path}}
> This is mentioned in _CredentialProvider API Guide_, but not in 
> core-default.xml
> The "Supported Features" in _CredentialProvider API Guide_ should link back 
> to GroupsMapping.md#LDAP Groups Mapping 
> {{hadoop.security.credstore.java-keystore-provider.password-file}}
> This is the password file to protect credential files.






[jira] [Created] (HADOOP-13487) Hadoop KMS doesn't clean up old delegation tokens stored in Zookeeper

2016-08-11 Thread Alex Ivanov (JIRA)
Alex Ivanov created HADOOP-13487:


 Summary: Hadoop KMS doesn't clean up old delegation tokens stored 
in Zookeeper
 Key: HADOOP-13487
 URL: https://issues.apache.org/jira/browse/HADOOP-13487
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms
Affects Versions: 2.6.0
Reporter: Alex Ivanov


Configuration:
CDH 5.5.1 (Hadoop 2.6+)
KMS configured to store delegation tokens in Zookeeper
DEBUG logging enabled in /etc/hadoop-kms/conf/kms-log4j.properties

Findings:
It seems to me delegation tokens never get cleaned up from Zookeeper past their 
renewal date. I can see in the logs that the removal thread is started with the 
expected interval:
{code}
2016-08-11 08:15:24,511 INFO  AbstractDelegationTokenSecretManager - Starting 
expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
{code}
However, I don't see any delegation token removals, which would be indicated 
by the following log message, from 
org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager#removeStoredToken(TokenIdent 
ident), line 769 [CDH]:
{code}
if (LOG.isDebugEnabled()) {
  LOG.debug("Removing ZKDTSMDelegationToken_"
  + ident.getSequenceNumber());
}
{code}
Meanwhile, I see a lot of expired delegation tokens in Zookeeper that don't get 
cleaned up.






[jira] [Created] (HADOOP-13486) Method invocation in log can be replaced by variable because the variable's toString method contains more info

2016-08-11 Thread Nemo Chen (JIRA)
Nemo Chen created HADOOP-13486:
--

 Summary: Method invocation in log can be replaced by variable 
because the variable's toString method contains more info 
 Key: HADOOP-13486
 URL: https://issues.apache.org/jira/browse/HADOOP-13486
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.2
Reporter: Nemo Chen


Similar to the fix in HADOOP-6419, in file:

hadoop-rel-release-2.7.2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java

{code}
Connection c = (Connection)key.attachment();
...
LOG.info(Thread.currentThread().getName() + ": readAndProcess from client " + 
c.getHostAddress() + " threw exception [" + e + "]", (e instanceof 
WrappedRpcServerException) ? null : e);
...
{code}

In class Connection, the toString method contains both getHostAddress() and 
remotePort:
{code}
public String toString() {
  return getHostAddress() + ":" + remotePort; 
}
{code}

Therefore c.getHostAddress() should be replaced by c, which is both simpler 
and more informative.
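
Applied to the snippet above, the log call would become:

{code}
// Connection#toString() already yields host:port, so this both shortens the
// call and adds the remote port to the message.
LOG.info(Thread.currentThread().getName() + ": readAndProcess from client "
    + c + " threw exception [" + e + "]",
    (e instanceof WrappedRpcServerException) ? null : e);
{code}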






[jira] [Commented] (HADOOP-13441) Document LdapGroupsMapping keystore password properties

2016-08-11 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15417607#comment-15417607
 ] 

Wei-Chiu Chuang commented on HADOOP-13441:
--

+1. The test failure is unrelated. I'll file a new jira to fix the failed test 
if it's not filed already.

> Document LdapGroupsMapping keystore password properties
> ---
>
> Key: HADOOP-13441
> URL: https://issues.apache.org/jira/browse/HADOOP-13441
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Wei-Chiu Chuang
>Assignee: Yuanbo Liu
>Priority: Minor
>  Labels: documentation
> Attachments: HADOOP-13441.001.patch, HADOOP-13441.002.patch, 
> HADOOP-13441.003.patch, HADOOP-13441.004.patch, HADOOP-13441.005.patch
>
>
> A few properties are not documented.
> {{hadoop.security.group.mapping.ldap.ssl.keystore.password}}
> This property is used as an alias to get password from credential providers, 
> or, fall back to using the value as password in clear text. There is also a 
> caveat that credential providers can not be a HDFS-based file system, as 
> mentioned in HADOOP-11934, to prevent cyclic dependency issue.
> This should be documented in core-default.xml and GroupsMapping.md
> {{hadoop.security.credential.clear-text-fallback}}
> This property controls whether or not to fall back to storing credential 
> password as cleartext.
> This should be documented in core-default.xml.
> {{hadoop.security.credential.provider.path}}
> This is mentioned in _CredentialProvider API Guide_, but not in 
> core-default.xml
> The "Supported Features" in _CredentialProvider API Guide_ should link back 
> to GroupsMapping.md#LDAP Groups Mapping 
> {{hadoop.security.credstore.java-keystore-provider.password-file}}
> This is the password file to protect credential files.






[jira] [Resolved] (HADOOP-13264) Hadoop HDFS - DFSOutputStream close method fails to clean up resources in case no hdfs datanodes are accessible

2016-08-11 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen resolved HADOOP-13264.

Resolution: Duplicate

I'm closing this as a dup of HDFS-10549, since [~linyiqun] is working on it 
there and the change is in HDFS.

Thanks [~sebyonthenet] and all for the work here, let's follow up on HDFS-10549.

> Hadoop HDFS - DFSOutputStream close method fails to clean up resources in 
> case no hdfs datanodes are accessible 
> 
>
> Key: HADOOP-13264
> URL: https://issues.apache.org/jira/browse/HADOOP-13264
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Seb Mo
>
> Using:
> hadoop-hdfs\2.7.2\hadoop-hdfs-2.7.2-sources.jar!\org\apache\hadoop\hdfs\DFSOutputStream.java
> The close method fails when the client can't connect to any datanodes. When 
> re-using the same DistributedFileSystem in the same JVM, if none of the 
> datanodes can be accessed, this causes a memory leak, as the 
> DFSClient#filesBeingWritten map is never cleared after that.
> See test program provided by [~sebyonthenet] in comments below.






[jira] [Commented] (HADOOP-13484) Log refactoring: method invocation should be replaced by variable

2016-08-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15417519#comment-15417519
 ] 

Hadoop QA commented on HADOOP-13484:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
17s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12823297/HADOOP-13484.001.patch
 |
| JIRA Issue | HADOOP-13484 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b520930b6d93 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8fbb57f |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10230/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10230/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Log refactoring: method invocation should be replaced by variable
> -
>
> Key: HADOOP-13484
> URL: https://issues.apache.org/jira/browse/HADOOP-13484
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.7.2
>Reporter: Nemo Chen
>Assignee: Vrushali C
> Attachments: HADOOP-13484.001.patch
>
>

[jira] [Updated] (HADOOP-13449) S3Guard: Implement DynamoDBMetadataStore.

2016-08-11 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13449:
---
Assignee: Mingliang Liu

Hello [~liuml07].  Thank you for volunteering to help.  I am tentatively 
assigning this one to you.  I think HADOOP-13448 will be a pre-requisite 
(defining the interface).

> S3Guard: Implement DynamoDBMetadataStore.
> -
>
> Key: HADOOP-13449
> URL: https://issues.apache.org/jira/browse/HADOOP-13449
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Mingliang Liu
>
> Provide an implementation of the metadata store backed by DynamoDB.






[jira] [Updated] (HADOOP-13485) Log refactoring: method invocation should be replaced by variable in hadoop tools

2016-08-11 Thread Nemo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nemo Chen updated HADOOP-13485:
---
Labels: easy  (was: )

> Log refactoring: method invocation should be replaced by variable in hadoop 
> tools
> -
>
> Key: HADOOP-13485
> URL: https://issues.apache.org/jira/browse/HADOOP-13485
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.7.2
>Reporter: Nemo Chen
>  Labels: easy
> Attachments: HADOOP-13485.001.patch
>
>
> Similar to the fix for HDFS-409. In file:
> hadoop-rel-release-2.7.2/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
> {code:borderStyle=solid}
> this.blockSize = conf.getLong(AZURE_BLOCK_SIZE_PROPERTY_NAME,
> MAX_AZURE_BLOCK_SIZE);
> if (LOG.isDebugEnabled()) {
> LOG.debug("NativeAzureFileSystem. Initializing.");
> LOG.debug("  blockSize  = "
>   + conf.getLong(AZURE_BLOCK_SIZE_PROPERTY_NAME, 
> MAX_AZURE_BLOCK_SIZE));
> }
> {code}
> For simplicity and readability, the 
> {{conf.getLong(AZURE_BLOCK_SIZE_PROPERTY_NAME, MAX_AZURE_BLOCK_SIZE)}} call 
> should be changed to {{this.blockSize}}.
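
Applied to the snippet above, the debug block would then read:

{code}
this.blockSize = conf.getLong(AZURE_BLOCK_SIZE_PROPERTY_NAME,
    MAX_AZURE_BLOCK_SIZE);
if (LOG.isDebugEnabled()) {
  LOG.debug("NativeAzureFileSystem. Initializing.");
  // Reuse the field instead of re-reading the configuration value.
  LOG.debug("  blockSize  = " + this.blockSize);
}
{code}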






[jira] [Updated] (HADOOP-13485) Log refactoring: method invocation should be replaced by variable in hadoop tools

2016-08-11 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated HADOOP-13485:

Attachment: HADOOP-13485.001.patch

Uploading patch v1.

> Log refactoring: method invocation should be replaced by variable in hadoop 
> tools
> -
>
> Key: HADOOP-13485
> URL: https://issues.apache.org/jira/browse/HADOOP-13485
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.7.2
>Reporter: Nemo Chen
> Attachments: HADOOP-13485.001.patch
>
>
> Similar to the fix for HDFS-409. In file:
> hadoop-rel-release-2.7.2/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
> {code:borderStyle=solid}
> this.blockSize = conf.getLong(AZURE_BLOCK_SIZE_PROPERTY_NAME,
> MAX_AZURE_BLOCK_SIZE);
> if (LOG.isDebugEnabled()) {
> LOG.debug("NativeAzureFileSystem. Initializing.");
> LOG.debug("  blockSize  = "
>   + conf.getLong(AZURE_BLOCK_SIZE_PROPERTY_NAME, 
> MAX_AZURE_BLOCK_SIZE));
> }
> {code}
> For simplicity and readability, the 
> {{conf.getLong(AZURE_BLOCK_SIZE_PROPERTY_NAME, MAX_AZURE_BLOCK_SIZE)}} call 
> should be changed to {{this.blockSize}}.






[jira] [Updated] (HADOOP-13485) Log refactoring: method invocation should be replaced by variable in hadoop tools

2016-08-11 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated HADOOP-13485:

Status: Patch Available  (was: Open)

> Log refactoring: method invocation should be replaced by variable in hadoop 
> tools
> -
>
> Key: HADOOP-13485
> URL: https://issues.apache.org/jira/browse/HADOOP-13485
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.7.2
>Reporter: Nemo Chen
> Attachments: HADOOP-13485.001.patch
>
>
> Similar to the fix for HDFS-409. In file:
> hadoop-rel-release-2.7.2/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
> {code:borderStyle=solid}
> this.blockSize = conf.getLong(AZURE_BLOCK_SIZE_PROPERTY_NAME,
> MAX_AZURE_BLOCK_SIZE);
> if (LOG.isDebugEnabled()) {
> LOG.debug("NativeAzureFileSystem. Initializing.");
> LOG.debug("  blockSize  = "
>   + conf.getLong(AZURE_BLOCK_SIZE_PROPERTY_NAME, 
> MAX_AZURE_BLOCK_SIZE));
> }
> {code}
> For simplicity and readability, the 
> {{conf.getLong(AZURE_BLOCK_SIZE_PROPERTY_NAME, MAX_AZURE_BLOCK_SIZE)}} 
> should be changed to {{this.blockSize}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13484) Log refactoring: method invocation should be replaced by variable

2016-08-11 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated HADOOP-13484:

Status: Patch Available  (was: Open)

> Log refactoring: method invocation should be replaced by variable
> -
>
> Key: HADOOP-13484
> URL: https://issues.apache.org/jira/browse/HADOOP-13484
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.7.2
>Reporter: Nemo Chen
> Attachments: HADOOP-13484.001.patch
>
>
> Similar to the fix for HDFS-409. In file:
> hadoop-rel-release-2.7.2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableRates.java
> In code block:
> {code:borderStyle=solid}
> String name = method.getName();
> LOG.debug(name);
> try { registry.newRate(name, name, false, true); }
> catch (Exception e) {
> LOG.error("Error creating rate metrics for "+ method.getName(), e);
> }
> {code}
> {{method.getName()}} should be replaced by the variable {{name}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-13484) Log refactoring: method invocation should be replaced by variable

2016-08-11 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C reassigned HADOOP-13484:
---

Assignee: Vrushali C

> Log refactoring: method invocation should be replaced by variable
> -
>
> Key: HADOOP-13484
> URL: https://issues.apache.org/jira/browse/HADOOP-13484
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.7.2
>Reporter: Nemo Chen
>Assignee: Vrushali C
> Attachments: HADOOP-13484.001.patch
>
>
> Similar to the fix for HDFS-409. In file:
> hadoop-rel-release-2.7.2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableRates.java
> In code block:
> {code:borderStyle=solid}
> String name = method.getName();
> LOG.debug(name);
> try { registry.newRate(name, name, false, true); }
> catch (Exception e) {
> LOG.error("Error creating rate metrics for "+ method.getName(), e);
> }
> {code}
> {{method.getName()}} should be replaced by the variable {{name}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13484) Log refactoring: method invocation should be replaced by variable

2016-08-11 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated HADOOP-13484:

Attachment: HADOOP-13484.001.patch

Uploading patch v1.

> Log refactoring: method invocation should be replaced by variable
> -
>
> Key: HADOOP-13484
> URL: https://issues.apache.org/jira/browse/HADOOP-13484
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.7.2
>Reporter: Nemo Chen
> Attachments: HADOOP-13484.001.patch
>
>
> Similar to the fix for HDFS-409. In file:
> hadoop-rel-release-2.7.2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableRates.java
> In code block:
> {code:borderStyle=solid}
> String name = method.getName();
> LOG.debug(name);
> try { registry.newRate(name, name, false, true); }
> catch (Exception e) {
> LOG.error("Error creating rate metrics for "+ method.getName(), e);
> }
> {code}
> {{method.getName()}} should be replaced by the variable {{name}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13485) Log refactoring: method invocation should be replaced by variable in hadoop tools

2016-08-11 Thread Nemo Chen (JIRA)
Nemo Chen created HADOOP-13485:
--

 Summary: Log refactoring: method invocation should be replaced by 
variable in hadoop tools
 Key: HADOOP-13485
 URL: https://issues.apache.org/jira/browse/HADOOP-13485
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 2.7.2
Reporter: Nemo Chen


Similar to the fix for HDFS-409. In file:

hadoop-rel-release-2.7.2/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
{code:borderStyle=solid}
this.blockSize = conf.getLong(AZURE_BLOCK_SIZE_PROPERTY_NAME,
MAX_AZURE_BLOCK_SIZE);
if (LOG.isDebugEnabled()) {
LOG.debug("NativeAzureFileSystem. Initializing.");
LOG.debug("  blockSize  = "
  + conf.getLong(AZURE_BLOCK_SIZE_PROPERTY_NAME, MAX_AZURE_BLOCK_SIZE));
}
{code}
For simplicity and readability, the 
{{conf.getLong(AZURE_BLOCK_SIZE_PROPERTY_NAME, MAX_AZURE_BLOCK_SIZE)}} should 
be changed to {{this.blockSize}}.
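
For reference, a minimal sketch of the refactored block (assuming the same
surrounding initialization context; not a committed patch):

{code:borderStyle=solid}
// Read the configuration value once, then reuse the stored field in the log.
this.blockSize = conf.getLong(AZURE_BLOCK_SIZE_PROPERTY_NAME,
    MAX_AZURE_BLOCK_SIZE);
if (LOG.isDebugEnabled()) {
  LOG.debug("NativeAzureFileSystem. Initializing.");
  LOG.debug("  blockSize  = " + this.blockSize);
}
{code}

Besides readability, this avoids a second configuration lookup in the debug
path.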



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13484) Log refactoring: method invocation should be replaced by variable

2016-08-11 Thread Nemo Chen (JIRA)
Nemo Chen created HADOOP-13484:
--

 Summary: Log refactoring: method invocation should be replaced by 
variable
 Key: HADOOP-13484
 URL: https://issues.apache.org/jira/browse/HADOOP-13484
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 2.7.2
Reporter: Nemo Chen


Similar to the fix for HDFS-409. In file:

hadoop-rel-release-2.7.2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableRates.java

In code block:
{code:borderStyle=solid}
String name = method.getName();
LOG.debug(name);
try { registry.newRate(name, name, false, true); }
catch (Exception e) {
LOG.error("Error creating rate metrics for "+ method.getName(), e);
}
{code}
{{method.getName()}} should be replaced by the variable {{name}}.
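
A minimal sketch of the refactored block (illustrative, not a committed patch):

{code:borderStyle=solid}
String name = method.getName();
LOG.debug(name);
try { registry.newRate(name, name, false, true); }
catch (Exception e) {
  // Reuse the local variable rather than invoking method.getName() again.
  LOG.error("Error creating rate metrics for " + name, e);
}
{code}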




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13336) support cross-region operations in S3a

2016-08-11 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15417101#comment-15417101
 ] 

Steve Loughran commented on HADOOP-13336:
-

There's another config strategy, where we configure endpoints and then assign 
buckets to them by way of domains:
{code}
fs.s3a.endpoint.eu.address=frankfurt.s3.aws.com
fs.s3a.endpoint.eu.awsid=AWSID
fs.s3a.endpoint.eu.secret=AWSSECRET
{code}

Then you'd refer to a bucket by bucket.endpoint: {{s3a://stevel.frankfurt}}.

This is how OpenStack is configured. Where it might cause problems is for 
S3-compatible installations in use today, where an FQDN is used.
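
A rough sketch of how such per-endpoint keys could be resolved from a URI
authority (the {{fs.s3a.endpoint.<name>.*}} layout above and the helper below
are illustrative assumptions, not existing S3A code):

{code:borderStyle=solid}
import org.apache.hadoop.conf.Configuration;

// Hypothetical helper: split a "bucket.endpoint" authority and look up the
// endpoint's settings. Note the parse is ambiguous when the bucket name is
// itself an FQDN, which is exactly the concern above.
public class EndpointResolver {
  public static String[] resolve(Configuration conf, String authority) {
    int dot = authority.lastIndexOf('.');
    String bucket = authority.substring(0, dot);      // e.g. "stevel"
    String endpoint = authority.substring(dot + 1);   // e.g. "eu"
    String prefix = "fs.s3a.endpoint." + endpoint + ".";
    return new String[] {
        bucket,
        conf.get(prefix + "address"),  // e.g. frankfurt.s3.aws.com
        conf.get(prefix + "awsid"),
        conf.get(prefix + "secret")
    };
  }
}
{code}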

> support cross-region operations in S3a
> --
>
> Key: HADOOP-13336
> URL: https://issues.apache.org/jira/browse/HADOOP-13336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Minor
>
> S3a now supports different regions, by way of declaring the endpoint —but you 
> can't do things like read in one region, write back in another (e.g. a distcp 
> backup), because only one region can be specified in a configuration.
> If s3a supported region declaration in the URL, e.g. s3a://b1.frankfurt or 
> s3a://b2.seoul, then this would be possible.
> Swift does this with a full filesystem binding/config: endpoints, username, 
> etc., in the XML file. Would we need to do that much? It'd be simpler 
> initially to use a domain suffix of a URL to set the region of a bucket from 
> the domain and have the AWS library sort the details out itself, maybe with 
> some config options for working with non-AWS infra.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13483) file-create should throw error rather than overwrite directories

2016-08-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15417099#comment-15417099
 ] 

Hadoop QA commented on HADOOP-13483:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
49s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-tools/hadoop-aliyun in HADOOP-12756 has 8 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m  9s{color} | {color:orange} hadoop-tools/hadoop-aliyun: The patch generated 
1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
11s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12823218/HADOOP-13483-HADOOP-12756.002.patch
 |
| JIRA Issue | HADOOP-13483 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9d13db55440c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-12756 / 8346f922 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10228/artifact/patchprocess/branch-findbugs-hadoop-tools_hadoop-aliyun-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10228/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aliyun.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10228/testReport/ |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10228/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.




[jira] [Commented] (HADOOP-13483) file-create should throw error rather than overwrite directories

2016-08-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15417080#comment-15417080
 ] 

Hadoop QA commented on HADOOP-13483:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
43s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 3s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-tools/hadoop-aliyun in HADOOP-12756 has 8 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} HADOOP-12756 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 10s{color} | {color:orange} hadoop-tools/hadoop-aliyun: The patch generated 
1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
11s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12823213/HADOOP-13483-HADOOP-12756.001.patch
 |
| JIRA Issue | HADOOP-13483 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5a13889f304c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-12756 / 8346f922 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10227/artifact/patchprocess/branch-findbugs-hadoop-tools_hadoop-aliyun-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10227/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aliyun.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10227/testReport/ |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10227/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.




[jira] [Updated] (HADOOP-13483) file-create should throw error rather than overwrite directories

2016-08-11 Thread uncleGen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

uncleGen updated HADOOP-13483:
--
Attachment: HADOOP-13483-HADOOP-12756.002.patch

> file-create should throw error rather than overwrite directories
> 
>
> Key: HADOOP-13483
> URL: https://issues.apache.org/jira/browse/HADOOP-13483
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: uncleGen
>Assignee: uncleGen
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13483-HADOOP-12756.002.patch, 
> HADOOP-13483.001.patch
>
>
> similar to [HADOOP-13188|https://issues.apache.org/jira/browse/HADOOP-13188]
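
The expected semantics, sketched as a contract-style test body (the {{fs}}
binding, paths, and JUnit asserts are assumed from the contract test setup):

{code:borderStyle=solid}
Path dir = new Path("/test/existing-dir");
assertTrue(fs.mkdirs(dir));
try {
  // Creating a file over an existing directory must fail, even with
  // overwrite=true.
  fs.create(dir, true);
  fail("expected FileAlreadyExistsException");
} catch (FileAlreadyExistsException expected) {
  // correct behaviour: a directory is never silently overwritten
}
{code}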



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13483) file-create should throw error rather than overwrite directories

2016-08-11 Thread uncleGen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

uncleGen updated HADOOP-13483:
--
Attachment: (was: HADOOP-13483-HADOOP-12756.001.patch)

> file-create should throw error rather than overwrite directories
> 
>
> Key: HADOOP-13483
> URL: https://issues.apache.org/jira/browse/HADOOP-13483
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: uncleGen
>Assignee: uncleGen
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13483-HADOOP-12756.002.patch, 
> HADOOP-13483.001.patch
>
>
> similar to [HADOOP-13188|https://issues.apache.org/jira/browse/HADOOP-13188]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13483) file-create should throw error rather than overwrite directories

2016-08-11 Thread uncleGen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

uncleGen updated HADOOP-13483:
--
Attachment: HADOOP-13483-HADOOP-12756.001.patch

> file-create should throw error rather than overwrite directories
> 
>
> Key: HADOOP-13483
> URL: https://issues.apache.org/jira/browse/HADOOP-13483
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: uncleGen
>Assignee: uncleGen
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13483-HADOOP-12756.001.patch, 
> HADOOP-13483.001.patch
>
>
> similar to [HADOOP-13188|https://issues.apache.org/jira/browse/HADOOP-13188]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11588) Benchmark framework and test for erasure coders

2016-08-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15416692#comment-15416692
 ] 

Hudson commented on HADOOP-11588:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #10262 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10262/])
HADOOP-11588. Benchmark framework and test for erasure coders. (kai.zheng: rev 
8fbb57fbd903a838684fa87cf15767d13695e4ed)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/rawcoder/TestRawErasureCoderBenchmark.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawDecoderLegacy.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/erasurecode/rawcoder/RawErasureCoderBenchmark.java


> Benchmark framework and test for erasure coders
> ---
>
> Key: HADOOP-11588
> URL: https://issues.apache.org/jira/browse/HADOOP-11588
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Rui Li
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-11588-HDFS-7285.2.patch, HADOOP-11588-v1.patch, 
> HADOOP-11588.3.patch, HADOOP-11588.4.patch, HADOOP-11588.5.patch, 
> HADOOP-11588.6.patch, HADOOP-11588.7.patch, HADOOP-11588.8.patch
>
>
> Given that more than one erasure coder is implemented for a code scheme, we 
> need a benchmark and tests to help evaluate which one performs best in a 
> given environment. This issue implements the benchmark framework.
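
A minimal illustration of the kind of measurement such a framework performs
(the {{Coder}} interface below is a hypothetical stand-in, not the actual raw
coder API):

{code:borderStyle=solid}
// Hypothetical stand-in for an erasure coder under test.
interface Coder {
  void encode(byte[][] inputs, byte[][] outputs);
}

// Average cost of one encode call over many rounds, in nanoseconds.
static long avgEncodeNanos(Coder coder, byte[][] in, byte[][] out, int rounds) {
  long start = System.nanoTime();
  for (int i = 0; i < rounds; i++) {
    coder.encode(in, out);
  }
  return (System.nanoTime() - start) / rounds;
}
{code}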



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11588) Benchmark framework and test for erasure coders

2016-08-11 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15416689#comment-15416689
 ] 

Rui Li commented on HADOOP-11588:
-

Thanks [~drankye] for the review :)

> Benchmark framework and test for erasure coders
> ---
>
> Key: HADOOP-11588
> URL: https://issues.apache.org/jira/browse/HADOOP-11588
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Rui Li
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-11588-HDFS-7285.2.patch, HADOOP-11588-v1.patch, 
> HADOOP-11588.3.patch, HADOOP-11588.4.patch, HADOOP-11588.5.patch, 
> HADOOP-11588.6.patch, HADOOP-11588.7.patch, HADOOP-11588.8.patch
>
>
> Given that more than one erasure coder is implemented for a code scheme, we 
> need a benchmark and tests to help evaluate which one performs best in a 
> given environment. This issue implements the benchmark framework.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11588) Benchmark framework and test for erasure coders

2016-08-11 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11588:
---
Hadoop Flags: Reviewed

> Benchmark framework and test for erasure coders
> ---
>
> Key: HADOOP-11588
> URL: https://issues.apache.org/jira/browse/HADOOP-11588
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Rui Li
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-11588-HDFS-7285.2.patch, HADOOP-11588-v1.patch, 
> HADOOP-11588.3.patch, HADOOP-11588.4.patch, HADOOP-11588.5.patch, 
> HADOOP-11588.6.patch, HADOOP-11588.7.patch, HADOOP-11588.8.patch
>
>
> Given that more than one erasure coder is implemented for a code scheme, we 
> need a benchmark and tests to help evaluate which one performs best in a 
> given environment. This issue implements the benchmark framework.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11588) Benchmark framework and test for erasure coders

2016-08-11 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11588:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha1
   Status: Resolved  (was: Patch Available)

The latest patch LGTM and +1. Have committed to 3.0.0-alpha1 and trunk 
branches. Thanks [~lirui] for the contribution!

> Benchmark framework and test for erasure coders
> ---
>
> Key: HADOOP-11588
> URL: https://issues.apache.org/jira/browse/HADOOP-11588
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Rui Li
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-11588-HDFS-7285.2.patch, HADOOP-11588-v1.patch, 
> HADOOP-11588.3.patch, HADOOP-11588.4.patch, HADOOP-11588.5.patch, 
> HADOOP-11588.6.patch, HADOOP-11588.7.patch, HADOOP-11588.8.patch
>
>
> Given that more than one erasure coder is implemented for a code scheme, we 
> need a benchmark and tests to help evaluate which one performs best in a 
> given environment. This issue implements the benchmark framework.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13419) Fix javadoc warnings by JDK8 in hadoop-common package

2016-08-11 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15416657#comment-15416657
 ] 

Masatake Iwasaki commented on HADOOP-13419:
---

bq. HADOOP-13369 is a JIRA for fixing mainly javadoc warnings.

Yeah, but you are fixing compiler warnings in addition to javadoc warnings in 
001. It would be more consistent to file another JIRA for the compiler 
warnings and fix them there.


> Fix javadoc warnings by JDK8 in hadoop-common package
> -
>
> Key: HADOOP-13419
> URL: https://issues.apache.org/jira/browse/HADOOP-13419
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: HADOOP-13419.01.patch
>
>
> Fix compile warnings generated after migrating to JDK8.
> This is a subtask of HADOOP-13369.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13419) Fix javadoc warnings by JDK8 in hadoop-common package

2016-08-11 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15416655#comment-15416655
 ] 

Masatake Iwasaki commented on HADOOP-13419:
---

Thanks for working on this, [~lewuathe].

I got 3 javadoc warnings on today's trunk. Can you fix warnings about package 
comments too?

{noformat}
[WARNING] javadoc: warning - Multiple sources of package comments found for 
package "org.apache.hadoop.io.retry"
[WARNING] javadoc: warning - Multiple sources of package comments found for 
package "org.apache.hadoop.ipc"
[WARNING] 
/home/iwasakims/srcs/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java:2720:
 warning - @param argument "src" is not a parameter name.
{noformat}
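
For the third warning, the usual fix is to make the {{@param}} tag name match
an actual parameter of the method; a generic illustration (not the actual
FileContext signature):

{code:borderStyle=solid}
/**
 * Renames {@code src} to {@code dst}.
 *
 * @param src the path to rename
 * @param dst the destination path
 */
void rename(Path src, Path dst) throws IOException;
{code}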


> Fix javadoc warnings by JDK8 in hadoop-common package
> -
>
> Key: HADOOP-13419
> URL: https://issues.apache.org/jira/browse/HADOOP-13419
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: HADOOP-13419.01.patch
>
>
> Fix compile warnings generated after migrating to JDK8.
> This is a subtask of HADOOP-13369.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11588) Benchmark framework and test for erasure coders

2016-08-11 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15416649#comment-15416649
 ] 

Rui Li commented on HADOOP-11588:
-

The test failure is unrelated and cannot be reproduced locally.

> Benchmark framework and test for erasure coders
> ---
>
> Key: HADOOP-11588
> URL: https://issues.apache.org/jira/browse/HADOOP-11588
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Rui Li
> Attachments: HADOOP-11588-HDFS-7285.2.patch, HADOOP-11588-v1.patch, 
> HADOOP-11588.3.patch, HADOOP-11588.4.patch, HADOOP-11588.5.patch, 
> HADOOP-11588.6.patch, HADOOP-11588.7.patch, HADOOP-11588.8.patch
>
>
> Given that more than one erasure coder is implemented for a code scheme, we 
> need a benchmark and tests to help evaluate which one performs best in a 
> given environment. This issue implements the benchmark framework.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11588) Benchmark framework and test for erasure coders

2016-08-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15416642#comment-15416642
 ] 

Hadoop QA commented on HADOOP-11588:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 51s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12823175/HADOOP-11588.8.patch |
| JIRA Issue | HADOOP-11588 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 14c62d283a55 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a428d4f |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10226/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10226/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10226/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Benchmark framework and test for erasure coders
> ---
>
> Key: HADOOP-11588
> URL: https://issues.apache.org/jira/browse/HADOOP-11588
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Rui Li
>