[jira] [Assigned] (HADOOP-13618) IllegalArgumentException when accessing Swift object with name containing space character

2016-10-10 Thread Yulei Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yulei Li reassigned HADOOP-13618:
-

Assignee: Yulei Li

> IllegalArgumentException when accessing Swift object with name containing 
> space character
> -
>
> Key: HADOOP-13618
> URL: https://issues.apache.org/jira/browse/HADOOP-13618
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/swift
>Affects Versions: 2.6.0
> Environment: Linux EL6
>Reporter: Steve Yang
>Assignee: Yulei Li
> Attachments: avro_test.zip
>
>
> We are using Spark and hadoop-openstack-2.6.0.jar 
> (compile('org.apache.hadoop:hadoop-openstack:2.6.0')) to access Oracle 
> Storage Service which is Swift-based:
> DataFrame df = 
> hiveCtx.read().format("com.databricks.spark.csv").option(...).load(objectName);
> When accessing a Swift URL like "swift://Linda.oracleswift/non-matching 
> records.csv" where the object name "non-matching records.csv" contains a 
> space character, the following exception is thrown:
> 2016-08-23 15:56:03 DEBUG SwiftNativeFileSystem:126 - SwiftFileSystem 
> initialized
> java.lang.IllegalArgumentException: Illegal character in path at index 13: 
> /non-matching records.csv
> at java.net.URI.create(URI.java:859)
> at 
> org.apache.hadoop.fs.swift.util.SwiftObjectPath.<init>(SwiftObjectPath.java:59)
> at 
> org.apache.hadoop.fs.swift.util.SwiftObjectPath.fromPath(SwiftObjectPath.java:183)
> at 
> org.apache.hadoop.fs.swift.util.SwiftObjectPath.fromPath(SwiftObjectPath.java:145)
> at 
> org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystemStore.toObjectPath(SwiftNativeFileSystemStore.java:434)
> at 
> org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystemStore.getObjectMetadata(SwiftNativeFileSystemStore.java:211)
> at 
> org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystemStore.getObjectMetadata(SwiftNativeFileSystemStore.java:181)
> at 
> org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem.getFileStatus(SwiftNativeFileSystem.java:173)
> at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:64)
> at org.apache.hadoop.fs.Globber.doGlob(Globber.java:272)
> at org.apache.hadoop.fs.Globber.glob(Globber.java:151)
> at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1653)
> at 
> org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:259)
> ...
> Apparently it is complaining about the space character. However, checking the 
> debug messages logged just before this error is raised, we can see:
> 2016-08-23 15:56:03 DEBUG SwiftNativeFileSystem:122 - Initializing 
> SwiftNativeFileSystem against URI 
> swift://Linda.oracleswift/non-matching%20records.csv and working dir 
> swift://Linda.oracleswift/user/syang
> 2016-08-23 15:56:03 DEBUG RestClientBindings:141 - Filesystem 
> swift://Linda.oracleswift/non-matching%20records.csv is using configuration 
> keys fs.swift.service.oracleswift
> ...
> The space character has already been encoded into "%20", so it seems the 
> Swift URL entering SwiftNativeFileSystem is properly encoded.
> Because of this error, any Swift object whose name contains a space 
> character (and maybe the slash '/' character as well?) cannot be accessed.
> As an additional data point, if we first encode the object name ("non-matching 
> records.csv" => "non-matching%20records.csv") before giving it to the OpenStack 
> Swift API, a different error is raised. This time the path separator '/' after 
> the container name 'Linda' somehow got encoded by 
> SwiftNativeFileSystemStore:
> 2016-08-23 10:56:41 DEBUG SwiftRestClient:1731 - Status code = 400
> 2016-08-23 10:56:41 DEBUG SwiftRestClient:1445 - Method HEAD on 
> https://storage.oraclecorp.com/v1/Storage-dfisher/Linda%2Fnon-matching%20records.csv
>  failed, status code: 400, status line: HTTP/1.1 400 Bad Request
> BadRequest: Bad request against 
> https://storage.oraclecorp.com/v1/Storage-dfisher/Linda%2Fnon-matching%20records.csv
>  HEAD 
> https://storage.oraclecorp.com/v1/Storage-dfisher/Linda%2Fnon-matching%20records.csv
>  => 400
> at 
> org.apache.hadoop.fs.swift.http.SwiftRestClient.buildException(SwiftRestClient.java:1456)
> at 
> org.apache.hadoop.fs.swift.http.SwiftRestClient.perform(SwiftRestClient.java:1403)
> at 
> org.apache.hadoop.fs.swift.http.SwiftRestClient.headRequest(SwiftRestClient.java:1016)
> at 
> org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystemStore.stat(SwiftNativeFileSystemStore.java:257)
> at 
> org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystemStore.getObjectMetadata(SwiftNativeFileSystemStore.java:212)
> at 
> org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystemStore.getObjectMetadata(SwiftNativeFileSystemStore.java:181)
> at 
> 
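
What the trace suggests is that SwiftObjectPath hands a raw, already-decoded 
object name to java.net.URI: URI.create() requires an already-encoded string 
and rejects the space, while the multi-argument URI constructor quotes illegal 
characters itself. A minimal JDK-only sketch of the difference (illustrative, 
not the actual SwiftObjectPath code):

{code}
import java.net.URI;
import java.net.URISyntaxException;

public class SwiftPathEncodingDemo {
  public static void main(String[] args) throws URISyntaxException {
    String path = "/non-matching records.csv";

    // Mirrors the stack trace: URI.create() expects an encoded string and
    // throws "Illegal character in path at index 13" on the space.
    try {
      URI.create(path);
    } catch (IllegalArgumentException e) {
      System.out.println("URI.create failed: " + e.getMessage());
    }

    // The multi-argument constructor percent-encodes the path itself,
    // yielding /non-matching%20records.csv.
    URI encoded = new URI(null, null, path, null);
    System.out.println("Encoded: " + encoded.toASCIIString());
  }
}
{code}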

[jira] [Commented] (HADOOP-13486) Method invocation in log can be replaced by variable because the variable's toString method contain more info

2016-10-10 Thread Yulei Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15564289#comment-15564289
 ] 

Yulei Li commented on HADOOP-13486:
---

Is it necessary to include the port? The client's port may change when 
establishing a new connection, so I don't think we should change this.

> Method invocation in log can be replaced by variable because the variable's 
> toString method contain more info 
> --
>
> Key: HADOOP-13486
> URL: https://issues.apache.org/jira/browse/HADOOP-13486
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Nemo Chen
>  Labels: easyfix, easytest
>
> Similar to the fix in HADOOP-6419, in file:
> hadoop-rel-release-2.7.2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
> {code}
> Connection c = (Connection)key.attachment();
> ...
> LOG.info(Thread.currentThread().getName() + ": readAndProcess from client " + 
> c.getHostAddress() + " threw exception [" + e + "]", (e instanceof 
> WrappedRpcServerException) ? null : e);
> ...
> {code}
> In class Connection, the toString method already includes both 
> getHostAddress() and remotePort:
> {code}
> public String toString() {
>   return getHostAddress() + ":" + remotePort; 
> }
> {code}
> Therefore c.getHostAddress() should be replaced by c, both for simplicity and 
> for the extra information.
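
As a sketch of the suggested change (written here for illustration; this is not 
an attached patch), the log line would simply pass the connection object and 
let Connection.toString() supply both the host and the port:

{code}
LOG.info(Thread.currentThread().getName() + ": readAndProcess from client " +
    c + " threw exception [" + e + "]",
    (e instanceof WrappedRpcServerException) ? null : e);
{code}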



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13502) Rename/split fs.contract.is-blobstore flag used by contract tests.

2016-10-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15564255#comment-15564255
 ] 

Hadoop QA commented on HADOOP-13502:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m  
6s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
39s{color} | {color:green} root: The patch generated 0 new + 19 unchanged - 1 
fixed = 19 total (was 20) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
22s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}101m  0s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
29s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}199m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
 |
|   | hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13502 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832572/HADOOP-13502-trunk.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux bc4d72ba58d4 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 96b1266 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| 

[jira] [Commented] (HADOOP-13702) Add a new instrumented read-write lock

2016-10-10 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15564228#comment-15564228
 ] 

Jingcheng Du commented on HADOOP-13702:
---

Thanks for the comments [~xiaochen]!

{{InstrumentedLock}}, which is also in branch-2.8, now lives in HDFS, so I moved 
{{InstrumentedReadLock}} and {{InstrumentedWriteLock}} back to HDFS to keep them 
aligned with {{InstrumentedLock}}. I can move the read-write lock to COMMON in 
the next patch.
It is true that extending {{InstrumentedLock}} would avoid the duplicated code, 
but then the read-write lock code could no longer live in COMMON, since 
{{InstrumentedLock}} is in HDFS.
Also, wouldn't it be more straightforward to use an 
InstrumentedReadLock/InstrumentedWriteLock as the ReadLock/WriteLock, and to use 
the readLock/writeLock methods in InstrumentedReadWriteLock to obtain them? :)
Please advise. Thanks a lot.
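
For reference, a minimal, self-contained sketch of the pattern under discussion 
(hypothetical names, not the attached patch): a read-write lock whose 
readLock()/writeLock() accessors return instrumented views that report when a 
lock was held longer than a threshold:

{code}
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwLockSketch {
  /** Minimal instrumented view over a Lock; only lock()/unlock() shown. */
  public static class TimedLock {
    private final Lock delegate;
    private final String name;
    private final long thresholdMs;
    // Per-thread acquire time, since many threads may hold the read lock.
    private final ThreadLocal<Long> acquiredAt = new ThreadLocal<>();

    TimedLock(Lock delegate, String name, long thresholdMs) {
      this.delegate = delegate;
      this.name = name;
      this.thresholdMs = thresholdMs;
    }

    public void lock() {
      delegate.lock();
      acquiredAt.set(System.currentTimeMillis());
    }

    public void unlock() {
      long heldMs = System.currentTimeMillis() - acquiredAt.get();
      delegate.unlock();
      if (heldMs >= thresholdMs) {
        System.err.println(name + " lock held for " + heldMs + " ms");
      }
    }
  }

  private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
  private final TimedLock readLock;
  private final TimedLock writeLock;

  public RwLockSketch(long thresholdMs) {
    readLock = new TimedLock(rw.readLock(), "read", thresholdMs);
    writeLock = new TimedLock(rw.writeLock(), "write", thresholdMs);
  }

  // Mirrors the readLock()/writeLock() accessors discussed above.
  public TimedLock readLock() { return readLock; }
  public TimedLock writeLock() { return writeLock; }
}
{code}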

> Add a new instrumented read-write lock
> --
>
> Key: HADOOP-13702
> URL: https://issues.apache.org/jira/browse/HADOOP-13702
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Attachments: HDFS-10924-2.patch, HDFS-10924-3.patch, 
> HDFS-10924-4.patch, HDFS-10924-5.patch, HDFS-10924.patch
>
>
> Add a new instrumented read-write lock in hadoop common, so that the 
> HDFS-9668 can use this to improve the locking in FsDatasetImpl



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13705) Revert HADOOP-13534 Remove unused TrashPolicy#getInstance and initialize code

2016-10-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15564104#comment-15564104
 ] 

Hadoop QA commented on HADOOP-13705:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  6m 46s{color} 
| {color:red} root generated 1 new + 708 unchanged - 0 fixed = 709 total (was 
708) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 22s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 3 new + 81 unchanged - 0 fixed = 84 total (was 81) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 30s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13705 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832577/HADOOP-13705.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 96c0620a0add 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 96b1266 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10725/artifact/patchprocess/diff-compile-javac-root.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10725/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10725/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10725/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10725/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HADOOP-13700) Remove unthrown IOException from TrashPolicy#initialize and #getInstance signatures

2016-10-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15564094#comment-15564094
 ] 

Hadoop QA commented on HADOOP-13700:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 4 unchanged - 1 fixed = 5 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 57s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13700 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832578/HADOOP-13700.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3e18fbb5f933 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 96b1266 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10726/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10726/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10726/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10726/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.




[jira] [Updated] (HADOOP-13700) Remove unthrown IOException from TrashPolicy#initialize and #getInstance signatures

2016-10-10 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13700:
-
Summary: Remove unthrown IOException from TrashPolicy#initialize and 
#getInstance signatures  (was: Incompatible changes in TrashPolicy )

> Remove unthrown IOException from TrashPolicy#initialize and #getInstance 
> signatures
> ---
>
> Key: HADOOP-13700
> URL: https://issues.apache.org/jira/browse/HADOOP-13700
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Andrew Wang
>Priority: Critical
> Attachments: HADOOP-13700.001.patch
>
>
> TrashPolicy is marked as public & evolving, but its public API, specifically 
> TrashPolicy.getInstance() has been changed in an incompatible way. 
> 1) The path parameter is removed in 3.0
> 2) A new IOException is thrown in 3.0



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13700) Incompatible changes in TrashPolicy

2016-10-10 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13700:
-
Fix Version/s: (was: 3.0.0-alpha2)
Affects Version/s: 2.8.0
 Target Version/s: 2.8.0, 3.0.0-alpha2  (was: 3.0.0-alpha2)
   Status: Patch Available  (was: Open)

> Incompatible changes in TrashPolicy 
> 
>
> Key: HADOOP-13700
> URL: https://issues.apache.org/jira/browse/HADOOP-13700
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0-alpha1, 2.8.0
>Reporter: Haibo Chen
>Assignee: Andrew Wang
>Priority: Critical
> Attachments: HADOOP-13700.001.patch
>
>
> TrashPolicy is marked as public & evolving, but its public API, specifically 
> TrashPolicy.getInstance() has been changed in an incompatible way. 
> 1) The path parameter is removed in 3.0
> 2) A new IOException is thrown in 3.0



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13700) Incompatible changes in TrashPolicy

2016-10-10 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13700:
-
Attachment: HADOOP-13700.001.patch

Trivial patch attached to remove "throws IOException" from the new methods in 
TrashPolicy, since they don't throw IOException.
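
Sketched as a before/after on the method shape (illustrative signatures only, 
not the exact patch hunk):

{code}
// Before: the checked exception is declared but never actually thrown.
public static TrashPolicy getInstance(Configuration conf, FileSystem fs)
    throws IOException { ... }

// After: callers no longer need a pointless try/catch.
public static TrashPolicy getInstance(Configuration conf, FileSystem fs) { ... }
{code}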

> Incompatible changes in TrashPolicy 
> 
>
> Key: HADOOP-13700
> URL: https://issues.apache.org/jira/browse/HADOOP-13700
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Andrew Wang
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13700.001.patch
>
>
> TrashPolicy is marked as public & evolving, but its public API, specifically 
> TrashPolicy.getInstance() has been changed in an incompatible way. 
> 1) The path parameter is removed in 3.0
> 2) A new IOException is thrown in 3.0



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13705) Revert HADOOP-13534 Remove unused TrashPolicy#getInstance and initialize code

2016-10-10 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13705:
-
Attachment: HADOOP-13705.001.patch

Patch attached. Clean revert.

> Revert HADOOP-13534 Remove unused TrashPolicy#getInstance and initialize code
> -
>
> Key: HADOOP-13705
> URL: https://issues.apache.org/jira/browse/HADOOP-13705
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HADOOP-13705.001.patch
>
>
> Per discussion on HADOOP-13700, I'd like to revert HADOOP-13534. It removes a 
> deprecated API, but the 2.x line does not have a release with the new 
> replacement API. This places a burden on downstream applications.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13705) Revert HADOOP-13534 Remove unused TrashPolicy#getInstance and initialize code

2016-10-10 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13705:
-
Status: Patch Available  (was: Open)

> Revert HADOOP-13534 Remove unused TrashPolicy#getInstance and initialize code
> -
>
> Key: HADOOP-13705
> URL: https://issues.apache.org/jira/browse/HADOOP-13705
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HADOOP-13705.001.patch
>
>
> Per discussion on HADOOP-13700, I'd like to revert HADOOP-13534. It removes a 
> deprecated API, but the 2.x line does not have a release with the new 
> replacement API. This places a burden on downstream applications.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13705) Revert HADOOP-13534 Remove unused TrashPolicy#getInstance and initialize code

2016-10-10 Thread Andrew Wang (JIRA)
Andrew Wang created HADOOP-13705:


 Summary: Revert HADOOP-13534 Remove unused TrashPolicy#getInstance 
and initialize code
 Key: HADOOP-13705
 URL: https://issues.apache.org/jira/browse/HADOOP-13705
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0-alpha1
Reporter: Andrew Wang
Assignee: Andrew Wang


Per discussion on HADOOP-13700, I'd like to revert HADOOP-13534. It removes a 
deprecated API, but the 2.x line does not have a release with the new 
replacement API. This places a burden on downstream applications.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13700) Incompatible changes in TrashPolicy

2016-10-10 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563950#comment-15563950
 ] 

Andrew Wang commented on HADOOP-13700:
--

Regarding auditing incompatible changes, I revved the JACC patch at 
HADOOP-13583 recently, and would appreciate a review to get that in. Running 
the tool and looking at the output would also be helpful.

> Incompatible changes in TrashPolicy 
> 
>
> Key: HADOOP-13700
> URL: https://issues.apache.org/jira/browse/HADOOP-13700
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Andrew Wang
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
>
> TrashPolicy is marked as public & evolving, but its public API, specifically 
> TrashPolicy.getInstance() has been changed in an incompatible way. 
> 1) The path parameter is removed in 3.0
> 2) A new IOException is thrown in 3.0



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13700) Incompatible changes in TrashPolicy

2016-10-10 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563943#comment-15563943
 ] 

Andrew Wang commented on HADOOP-13700:
--

Thanks for the discussion Steve, Allen. Here's my proposal:

* We revert HADOOP-13534. I can open a new JIRA to do this for changelog 
purposes, since it's been released in 3.0.0-alpha1.
* We remove the "throws IOException" from the initialize and getInstance 
methods added in HDFS-8831. This is similar to HDFS-9799. This also gets 
backported to branch-2 and branch-2.8; HDFS-8831 hasn't made it into a release 
yet, so this seems safe.

> Incompatible changes in TrashPolicy 
> 
>
> Key: HADOOP-13700
> URL: https://issues.apache.org/jira/browse/HADOOP-13700
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Andrew Wang
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
>
> TrashPolicy is marked as public & evolving, but its public API, specifically 
> TrashPolicy.getInstance() has been changed in an incompatible way. 
> 1) The path parameter is removed in 3.0
> 2) A new IOException is thrown in 3.0



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13534) Remove unused TrashPolicy#getInstance and initialize code

2016-10-10 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13534:
-
Fix Version/s: 3.0.0-alpha1

> Remove unused TrashPolicy#getInstance and initialize code
> -
>
> Key: HADOOP-13534
> URL: https://issues.apache.org/jira/browse/HADOOP-13534
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Zhe Zhang
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-13534.002.patch, HDFS-9785.001.patch
>
>
> A follow-on from HDFS-8831: the {{getInstance}} and {{initialize}} APIs that 
> take a Path are no longer used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13502) Rename/split fs.contract.is-blobstore flag used by contract tests.

2016-10-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13502:
---
Attachment: HADOOP-13502-trunk.004.patch

[~ste...@apache.org], have you had a chance to try the hadoop-openstack tests 
again with this patch?  I commented a few days ago saying that I couldn't repro 
the failure that you saw.

Also, I am now attaching a separate patch file for trunk.  The only difference 
is the omission of s3.xml, which does not exist on trunk.

> Rename/split fs.contract.is-blobstore flag used by contract tests.
> --
>
> Key: HADOOP-13502
> URL: https://issues.apache.org/jira/browse/HADOOP-13502
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HADOOP-13502-branch-2.001.patch, 
> HADOOP-13502-branch-2.002.patch, HADOOP-13502-branch-2.003.patch, 
> HADOOP-13502-branch-2.004.patch, HADOOP-13502-trunk.004.patch
>
>
> The {{fs.contract.is-blobstore}} flag guards against execution of several 
> contract tests to account for known limitations with blob stores.  However, 
> the name is not entirely accurate, because it's still possible that a file 
> system implemented against a blob store could pass those tests, depending on 
> whether or not the implementation matches the semantics of HDFS.  This issue 
> proposes to rename the flag or split it into different flags with different 
> definitions for the semantics covered by the current flag.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13309) Document S3A known limitations in file ownership and permission model.

2016-10-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13309:
---
Status: Patch Available  (was: Open)

> Document S3A known limitations in file ownership and permission model.
> --
>
> Key: HADOOP-13309
> URL: https://issues.apache.org/jira/browse/HADOOP-13309
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
>
> S3A does not match the implementation of HDFS in its handling of file 
> ownership and permissions.  Fundamental S3 limitations prevent it.  This is a 
> frequent source of confusion for end users.  This issue proposes to document 
> these known limitations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13309) Document S3A known limitations in file ownership and permission model.

2016-10-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13309:
---
Target Version/s: 2.8.0  (was: 2.9.0)

> Document S3A known limitations in file ownership and permission model.
> --
>
> Key: HADOOP-13309
> URL: https://issues.apache.org/jira/browse/HADOOP-13309
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
>
> S3A does not match the implementation of HDFS in its handling of file 
> ownership and permissions.  Fundamental S3 limitations prevent it.  This is a 
> frequent source of confusion for end users.  This issue proposes to document 
> these known limitations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13309) Document S3A known limitations in file ownership and permission model.

2016-10-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563811#comment-15563811
 ] 

ASF GitHub Bot commented on HADOOP-13309:
-

GitHub user cnauroth opened a pull request:

https://github.com/apache/hadoop/pull/138

HADOOP-13309: Document S3A known limitations in file ownership and pe…

…rmission model.

Summary:
* Update file system specification to describe that object stores may have 
a different authorization model than HDFS and traditional file systems.
* Update hadoop-aws documentation to warn that S3A will return stub 
information for metadata related to ownership and permissions.  I wrote this 
documentation under the assumption that the HADOOP-12774 change gets finished, 
so that one will have to be committed first.
* Also update a few cosmetic things near the part of the hadoop-aws 
document that I changed.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cnauroth/hadoop-1 HADOOP-13309

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/138.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #138


commit 5c2a05463523e4b101eea08611e036315c1bd63a
Author: Chris Nauroth 
Date:   2016-10-10T22:51:21Z

HADOOP-13309: Document S3A known limitations in file ownership and 
permission model.




> Document S3A known limitations in file ownership and permission model.
> --
>
> Key: HADOOP-13309
> URL: https://issues.apache.org/jira/browse/HADOOP-13309
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
>
> S3A does not match the implementation of HDFS in its handling of file 
> ownership and permissions.  Fundamental S3 limitations prevent it.  This is a 
> frequent source of confusion for end users.  This issue proposes to document 
> these known limitations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13609) Refine credential provider related codes for AliyunOss integration

2016-10-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563798#comment-15563798
 ] 

Hudson commented on HADOOP-13609:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10583 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10583/])
HADOOP-13609. Refine credential provider related codes for AliyunOss 
(kai.zheng: rev 9cd47602576cd01a905e27642b685905a88eee72)
* (add) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunCredentialsProvider.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSOutputStream.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/TemporaryAliyunCredentialsProvider.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSInputStream.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSTemporaryCredentials.java
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunCredentials.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemStore.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSUtils.java


> Refine credential provider related codes for AliyunOss integration
> --
>
> Key: HADOOP-13609
> URL: https://issues.apache.org/jira/browse/HADOOP-13609
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Kai Zheng
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13609-HADOOP-12756.001.patch, 
> HADOOP-13609-HADOOP-12756.002.patch, HADOOP-13609-HADOOP-12756.003.patch, 
> HADOOP-13609-HADOOP-12756.004.patch
>
>
> Looking at the AliyunOSS integration code, some findings:
> 1. {{TemporaryAliyunCredentialsProvider}} could be better named;
> 2. TemporaryAliyunCredentialsProvider shares much code with 
> {{AliyunOSSUtils#getCredentialsProvider}}, and the duplication can be resolved;
> 3. {{AliyunOSSUtils#getPassword}} is rather confusing, as it is used to get 
> things other than passwords.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13529) Do some code refactoring

2016-10-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563799#comment-15563799
 ] 

Hudson commented on HADOOP-13529:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10583 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10583/])
HADOOP-13529. Do some code refactoring. Contributed by Genmao Yu. (mingfei.shi: 
rev d33e928fbeb1764a724c8f3c051bb0d8be82bbff)
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSOutputStream.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSInputStream.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestOSSContractDispCp.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
* (add) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestOSSContractGetFileStatus.java
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestOSSContractRootDir.java
* (edit) hadoop-tools/hadoop-aliyun/pom.xml
* (edit) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestOSSFileSystemContract.java
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestOSSFileSystemStore.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/OSSContract.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSUtils.java


> Do some code refactoring
> 
>
> Key: HADOOP-13529
> URL: https://issues.apache.org/jira/browse/HADOOP-13529
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
>  Labels: reviewed
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13529-HADOOP-12756.001.patch, 
> HADOOP-13529-HADOOP-12756.002.patch, HADOOP-13529-HADOOP-12756.003.patch, 
> HADOOP-13529-HADOOP-12756.004.patch, HADOOP-13529-HADOOP-12756.005.patch
>
>
> 1. argument and variable naming
> 2. abstract utility class
> 3. add some comments
> 4. adjust some configuration
> 5. fix TODOs
> 6. remove unnecessary comments
> 7. some bug fixes
> {code}
> bug in copyDir
> {code}
> 8. add some unit tests



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13651) S3Guard: S3AFileSystem Integration with MetadataStore

2016-10-10 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563782#comment-15563782
 ] 

Aaron Fabbri edited comment on HADOOP-13651 at 10/10/16 10:47 PM:
--

Minor status update, since this JIRA has a long gestation period. I'm working 
on this now.  So far I have code for:

- New config values: {{fs.s3a.metadatastore.authoratitive}}, and 
{{fs.s3a.metadatastore.impl}}.
- getFileStatus()
- listStatus()
- rename()
- delete()
- mkdirs()
- copyFromLocalFile()
- copyFile()

What remains for this jira:
- create().  Figuring out the OutputStream plumbing now 
- More testing.

What I'd like to do as separate jiras (because I favor smaller code reviews):
- Delete tracking
- Retries (i.e. eventual consistency retry policy).  Would love to see this in 
isolation since it is non-trivial.

I'm inserting TODO comments as I go at key locations for those two items.

Interesting things about my approach so far:

I'm trying to minimize changes to {{S3AFileSystem}}
   - diff stat so far: {quote}
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
   | 116 ++--
{quote}
   - I introduce a "metadatastore s3a helper/glue" class S3Guard, which is a 
bunch of static helper functions so far.
   - I introduce {{NullMetadataStore}}, which is a no-op metadata store.  The 
goal was to simplify the S3AFileSystem changes (always call the MetadataStore, 
don't care if it is a no-op), but I also like that it further clarifies the 
{{MetadataStore}} semantics.  It turns out S3AFileSystem still sometimes wants 
to know if there is no MetadataStore, to avoid allocating stuff that isn't 
needed.  Seems like an ok tradeoff, but I'll let folks comment when I post the 
v1 patch.

I'm trying to keep PathMetadata simple: either you have a PathMetadata, 
including an S3AFileStatus, or you don't.  There are some spots where it would 
be convenient to just record "this path exists, but we don't have metadata 
yet" (e.g. create() -> OutputStream.close() -> S3AFileSystem.writeFinished(); 
at that point I don't have a FileStatus), but that would complicate the 
S3AFileSystem logic.  We'll see.
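
For readers skimming the digest, a tiny sketch of the no-op-store idea 
mentioned above (hypothetical interface, not the eventual S3Guard code): the 
file system always calls a MetadataStore, and the null implementation simply 
does nothing, so no "is a store configured?" branching is needed on the hot 
paths:

{code}
/** Sketch of the null-object pattern behind a NullMetadataStore. */
interface MetadataStoreSketch {
  void put(String path);    // record that metadata exists for a path
  String get(String path);  // null when nothing is known about the path
}

/** No-op implementation: always safe to call, never stores anything. */
class NullMetadataStoreSketch implements MetadataStoreSketch {
  @Override public void put(String path) { /* deliberately empty */ }
  @Override public String get(String path) { return null; }
}
{code}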




> S3Guard: S3AFileSystem Integration with MetadataStore
> -
>
> Key: HADOOP-13651
> URL: https://issues.apache.org/jira/browse/HADOOP-13651
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>
> Modify S3AFileSystem et al. to optionally use a MetadataStore for metadata 
> consistency and caching.
> Implementation should have minimal overhead when no MetadataStore is 
> configured.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, 

[jira] [Commented] (HADOOP-13481) User end documents for Aliyun OSS FileSystem

2016-10-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563795#comment-15563795
 ] 

Hudson commented on HADOOP-13481:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10583 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10583/])
HADOOP-13481. User documents for Aliyun OSS FileSystem. Contributed by 
(mingfei.shi: rev e671a0f52b5488b8453e1a3258ea5e6477995648)
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
* (add) 
hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md


> User end documents for Aliyun OSS FileSystem
> 
>
> Key: HADOOP-13481
> URL: https://issues.apache.org/jira/browse/HADOOP-13481
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
>Priority: Minor
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13481-HADOOP-12756.001.patch, 
> HADOOP-13481-HADOOP-12756.002.patch, HADOOP-13481-HADOOP-12756.003.patch, 
> HADOOP-13481-HADOOP-12756.004.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13591) Unit test failure in TestOSSContractGetFileStatus and TestOSSContractRootDir

2016-10-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563796#comment-15563796
 ] 

Hudson commented on HADOOP-13591:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10583 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10583/])
HADOOP-13591. Unit test failure in TestOSSContractGetFileStatus and (kai.zheng: 
rev 08b37603d9c0be67c4e0790c1ad266551ef21f5e)
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSUtils.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSTestUtils.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
* (edit) hadoop-tools/hadoop-aliyun/src/test/resources/contract/aliyun-oss.xml


> Unit test failure in TestOSSContractGetFileStatus and TestOSSContractRootDir 
> -
>
> Key: HADOOP-13591
> URL: https://issues.apache.org/jira/browse/HADOOP-13591
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13591-HADOOP-12756.001.patch, 
> HADOOP-13591-HADOOP-12756.002.patch, HADOOP-13591-HADOOP-12756.003.patch, 
> HADOOP-13591-HADOOP-12756.004.patch, HADOOP-13591-HADOOP-12756.005.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13483) file-create should throw error rather than overwrite directories

2016-10-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563790#comment-15563790
 ] 

Hudson commented on HADOOP-13483:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10583 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10583/])
HADOOP-13483. File create should throw error rather than overwrite 
(mingfei.shi: rev bd2d97adeea55bf2c7e4ab475bcc90f3a14e751a)
* (edit) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestOSSContractCreate.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java


> file-create should throw error rather than overwrite directories
> 
>
> Key: HADOOP-13483
> URL: https://issues.apache.org/jira/browse/HADOOP-13483
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13483-HADOOP-12756.002.patch, 
> HADOOP-13483-HADOOP-12756.003.patch, HADOOP-13483-HADOOP-12756.004.patch, 
> HADOOP-13483-HADOOP-12756.005.patch, HADOOP-13483.001.patch
>
>
> similar to [HADOOP-13188|https://issues.apache.org/jira/browse/HADOOP-13188]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13491) fix several warnings from findbugs

2016-10-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563793#comment-15563793
 ] 

Hudson commented on HADOOP-13491:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10583 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10583/])
HADOOP-13491. Fix several warnings from findbugs. Contributed by Genmao 
(mingfei.shi: rev 4d84c814fcaf074022593c057d8f8dec4cd461fa)
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSOutputStream.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSUtils.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSInputStream.java


> fix several warnings from findbugs
> --
>
> Key: HADOOP-13491
> URL: https://issues.apache.org/jira/browse/HADOOP-13491
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13491-HADOOP-12756.001.patch, 
> HADOOP-13491-HADOOP-12756.002.patch, HADOOP-13491-HADOOP-12756.003.patch, 
> HADOOP-13491-HADOOP-12756.004.patch
>
>
> {code:title=Bad practice Warnings|borderStyle=solid}
> Code  Warning
> RR    org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long) ignores 
> result of java.io.InputStream.skip(long)
> Bug type SR_NOT_CHECKED
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.seek(long)
> Called method java.io.InputStream.skip(long)
> At AliyunOSSInputStream.java:[line 235]
> RR    
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject() 
> ignores result of java.io.FileInputStream.skip(long)
> Bug type SR_NOT_CHECKED
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream
> In method 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.multipartUploadObject()
> Called method java.io.FileInputStream.skip(long)
> At AliyunOSSOutputStream.java:[line 177]
> RV    Exceptional return value of java.io.File.delete() ignored in 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close()
> Bug type RV_RETURN_VALUE_IGNORED_BAD_PRACTICE
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream
> In method org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream.close()
> Called method java.io.File.delete()
> At AliyunOSSOutputStream.java:[line 116]
> {code}
> {code:title=Multithreaded correctness Warnings|borderStyle=solid}
> Code  Warning
> IS    Inconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining; locked 
> 90% of time
> Bug type IS2_INCONSISTENT_SYNC
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.partRemaining
> Synchronized 90% of the time
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Synchronized access at AliyunOSSInputStream.java:[line 106]
> Synchronized access at AliyunOSSInputStream.java:[line 168]
> Synchronized access at AliyunOSSInputStream.java:[line 189]
> Synchronized access at AliyunOSSInputStream.java:[line 188]
> Synchronized access at AliyunOSSInputStream.java:[line 188]
> Synchronized access at AliyunOSSInputStream.java:[line 190]
> Synchronized access at AliyunOSSInputStream.java:[line 113]
> Synchronized access at AliyunOSSInputStream.java:[line 131]
> Synchronized access at AliyunOSSInputStream.java:[line 131]
> IS    Inconsistent synchronization of 
> org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position; locked 66% of 
> time
> Bug type IS2_INCONSISTENT_SYNC
> In class org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream
> Field org.apache.hadoop.fs.aliyun.oss.AliyunOSSInputStream.position
> Synchronized 66% of the time
> Unsynchronized access at AliyunOSSInputStream.java:[line 232]
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Unsynchronized access at AliyunOSSInputStream.java:[line 234]
> Unsynchronized access at AliyunOSSInputStream.java:[line 235]
> Unsynchronized access at AliyunOSSInputStream.java:[line 236]
> Unsynchronized access at AliyunOSSInputStream.java:[line 245]
> Synchronized access at AliyunOSSInputStream.java:[line 222]
> Synchronized access at AliyunOSSInputStream.java:[line 105]
> Synchronized access at AliyunOSSInputStream.java:[line 167]
> Synchronized access at AliyunOSSInputStream.java:[line 169]
> Synchronized access at AliyunOSSInputStream.java:[line 187]
> Synchronized access 
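
For context on the SR_NOT_CHECKED warnings above: {{InputStream.skip(long)}} may
skip fewer bytes than requested, so the usual remedy is to loop on the returned
count. A minimal sketch, assuming a free-standing helper (not the committed
patch; treating a non-positive return as end of stream is a simplification):

{code:java}
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

final class SkipFully {
  // Loop until exactly 'bytes' bytes are skipped; skip() returns the number
  // actually skipped, which may be smaller than requested.
  static void skipFully(InputStream in, long bytes) throws IOException {
    long remaining = bytes;
    while (remaining > 0) {
      long skipped = in.skip(remaining);
      if (skipped <= 0) {
        // Simplification: treat a non-positive return as end of stream.
        throw new EOFException("Failed to skip " + remaining + " more bytes");
      }
      remaining -= skipped;
    }
  }
}
{code}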

[jira] [Commented] (HADOOP-13610) Clean up AliyunOss integration tests

2016-10-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563786#comment-15563786
 ] 

Hudson commented on HADOOP-13610:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10583 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10583/])
HADOOP-13610. Clean up AliyunOss integration tests. Contributed by (kai.zheng: 
rev a1940464a498d1e662e5c3843f2d31ce63ec726b)
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestOSSContractGetFileStatus.java
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemStore.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestOSSFileSystemContract.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestOSSContractMkdir.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestOSSInputStream.java
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractMkdir.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestOSSTemporaryCredentials.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestOSSFileSystemStore.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestOSSOutputStream.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestOSSContractSeek.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestOSSContractDelete.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestOSSContractDispCp.java
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractRootDir.java
* (add) hadoop-tools/hadoop-aliyun/src/test/resources/contract/aliyun-oss.xml
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSTemporaryCredentials.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/OSSContract.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestOSSContractRename.java
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSOutputStream.java
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractOpen.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/OSSTestUtils.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestOSSContractRootDir.java
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractSeek.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestOSSContractCreate.java
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractCreate.java
* (delete) hadoop-tools/hadoop-aliyun/src/test/resources/contract/oss.xml
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractDelete.java
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractDispCp.java
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSInputStream.java
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/AliyunOSSContract.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestOSSContractOpen.java
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractGetFileStatus.java
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractRename.java
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSTestUtils.java


> Clean up AliyunOss integration tests
> 
>
> Key: HADOOP-13610
> URL: https://issues.apache.org/jira/browse/HADOOP-13610
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Kai Zheng
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13610-HADOOP-12756.001.patch
>
>
> Noticed some cleanup can be done to the tests, mainly following the 
> conventions used for the others (Azure). 

[jira] [Assigned] (HADOOP-13309) Document S3A known limitations in file ownership and permission model.

2016-10-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth reassigned HADOOP-13309:
--

Assignee: Chris Nauroth

> Document S3A known limitations in file ownership and permission model.
> --
>
> Key: HADOOP-13309
> URL: https://issues.apache.org/jira/browse/HADOOP-13309
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
>
> S3A does not match the implementation of HDFS in its handling of file 
> ownership and permissions.  Fundamental S3 limitations prevent it.  This is a 
> frequent source of confusion for end users.  This issue proposes to document 
> these known limitations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13309) Document S3A known limitations in file ownership and permission model.

2016-10-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13309:
---
Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-11694

> Document S3A known limitations in file ownership and permission model.
> --
>
> Key: HADOOP-13309
> URL: https://issues.apache.org/jira/browse/HADOOP-13309
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Priority: Minor
>
> S3A does not match the implementation of HDFS in its handling of file 
> ownership and permissions.  Fundamental S3 limitations prevent it.  This is a 
> frequent source of confusion for end users.  This issue proposes to document 
> these known limitations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13499) Support session credentials for authenticating with Aliyun

2016-10-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563785#comment-15563785
 ] 

Hudson commented on HADOOP-13499:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10583 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10583/])
HADOOP-13499. Support session credentials for authenticating with (mingfei.shi: 
rev 6bb741b9f811d3a1c0ce4ecc91a78ac47513bb8e)
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
* (add) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/TemporaryAliyunCredentialsProvider.java
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestOSSTemporaryCredentials.java


> Support session credentials for authenticating with Aliyun
> --
>
> Key: HADOOP-13499
> URL: https://issues.apache.org/jira/browse/HADOOP-13499
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
>Priority: Minor
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13499-HADOOP-12756.001.patch, 
> HADOOP-13499-HADOOP-12756.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13498) the number of multi-part upload parts should not be bigger than 10000

2016-10-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563781#comment-15563781
 ] 

Hudson commented on HADOOP-13498:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10583 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10583/])
HADOOP-13498. The number of multi-part upload part should not bigger 
(mingfei.shi: rev cdb77110e77b70ed0c1125b2a6a422a8c7c28ec7)
* (edit) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestOSSOutputStream.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSOutputStream.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSUtils.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java


> the number of multi-part upload parts should not be bigger than 10000
> -
>
> Key: HADOOP-13498
> URL: https://issues.apache.org/jira/browse/HADOOP-13498
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13498-HADOOP-12756.001.patch, 
> HADOOP-13498-HADOOP-12756.002.patch, HADOOP-13498-HADOOP-12756.003.patch, 
> HADOOP-13498-HADOOP-12756.004.patch
>
>
> We should not just throw an exception when the 10000-part limit of a 
> multi-part upload is exceeded; we should guarantee that any object can be 
> uploaded, no matter how big it is. 
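
To make the "any size" guarantee concrete: one hedged approach (names and
strategy are assumptions, not the committed patch) is to grow the part size
whenever the configured size would push the part count past the cap:

{code:java}
final class PartSize {
  private static final long MAX_PARTS = 10000;

  // Return a part size that keeps the part count at or under MAX_PARTS,
  // growing the configured size only when the object is large enough to
  // need it.
  static long adjustPartSize(long contentLength, long configuredPartSize) {
    long partSize = configuredPartSize;
    if (contentLength / partSize >= MAX_PARTS) {
      partSize = contentLength / MAX_PARTS + 1; // round up past the cap
    }
    return partSize;
  }
}
{code}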



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13309) Document S3A known limitations in file ownership and permission model.

2016-10-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13309:
---
Issue Type: Bug  (was: Sub-task)
Parent: (was: HADOOP-13204)

> Document S3A known limitations in file ownership and permission model.
> --
>
> Key: HADOOP-13309
> URL: https://issues.apache.org/jira/browse/HADOOP-13309
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Chris Nauroth
>Priority: Minor
>
> S3A does not match the implementation of HDFS in its handling of file 
> ownership and permissions.  Fundamental S3 limitations prevent it.  This is a 
> frequent source of confusion for end users.  This issue proposes to document 
> these known limitations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13651) S3Guard: S3AFileSystem Integration with MetadataStore

2016-10-10 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563782#comment-15563782
 ] 

Aaron Fabbri commented on HADOOP-13651:
---

Minor status update, since this JIRA has a long gestation period. I'm working 
on this now.  So far I have code for:

- New config values: {{fs.s3a.metadatastore.authoritative}} and 
{{fs.s3a.metadatastore.impl}}.
- getFileStatus()
- listStatus()
- rename()
- delete()
- mkdirs()
- copyFromLocalFile()
- copyFile()

What remains for this jira:
- create().  Figuring out the OutputStream plumbing now 
- More testing.

What I'd like to do as separate jiras (because I favor smaller code reviews):
- Delete tracking
- Retries (i.e. eventual consistency retry policy).  Would love to see this in 
isolation since it is non-trivial.

I'm inserting TODO comments as I go at key locations for those two items.

Interesting things about my approach so far:

I'm trying to minimize changes to {{S3AFileSystem}}
   - diff stat so far: {quote}
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
   | 116 ++--
{quote}
   - I introduce a "metadatastore s3a helper/glue" glass S3Guard which is a 
bunch of static helper functions, so far.
   - I introduce {{NullMetadataStore}}, which is a no-op metadata store. The 
goal was to simplify the S3AFileSystem changes (always call the MetadataStore, 
without caring whether it is a no-op), but I also like that it further 
clarifies {{MetadataStore}} semantics. It turns out S3AFileSystem still 
sometimes wants to know whether there is no MetadataStore, to avoid allocating 
things that aren't needed. That seems like an OK tradeoff, but I'll let folks 
comment when I post the v1 patch.
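
To make the null-object idea concrete, a purely illustrative sketch follows.
The real {{MetadataStore}} interface in this JIRA is still under review, so the
method names and the {{PathMetadata}} stub here are assumptions:

{code:java}
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;

// Minimal stand-in for the PathMetadata described above (assumption).
class PathMetadata {
  private final FileStatus fileStatus;

  PathMetadata(FileStatus fileStatus) {
    this.fileStatus = fileStatus;
  }

  FileStatus getFileStatus() {
    return fileStatus;
  }
}

// Assumed shape of the metadata store contract; the real interface may differ.
interface MetadataStore {
  void put(PathMetadata meta);
  PathMetadata get(Path path);
  void delete(Path path);
}

// No-op store: the file system can call it unconditionally, and every lookup
// simply misses, which keeps the caller's logic uniform.
class NullMetadataStore implements MetadataStore {
  @Override
  public void put(PathMetadata meta) { /* no-op */ }

  @Override
  public PathMetadata get(Path path) {
    return null; // always a miss
  }

  @Override
  public void delete(Path path) { /* no-op */ }
}
{code}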

I'm trying to keep PathMetadata simple: either you have a PathMetadata, 
including an S3AFileStatus, or you don't. There are some spots where it would 
be convenient to just record "this path exists, but we don't have metadata 
yet" (e.g. create() -> OutputStream.close() -> S3AFileSystem.writeFinished(); 
at that point I don't have a FileStatus), but that would complicate 
S3AFileSystem logic. We'll see.


> S3Guard: S3AFileSystem Integration with MetadataStore
> -
>
> Key: HADOOP-13651
> URL: https://issues.apache.org/jira/browse/HADOOP-13651
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>
> Modify S3AFileSystem et al. to optionally use a MetadataStore for metadata 
> consistency and caching.
> Implementation should have minimal overhead when no MetadataStore is 
> configured.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-10-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563778#comment-15563778
 ] 

Hudson commented on HADOOP-12756:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10583 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10583/])
HADOOP-12756. Incorporate Aliyun OSS file system implementation. (mingfei.shi: 
rev a5d5342228050a778b20e95adf7885bdba39985d)
* (add) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestOSSContractDelete.java
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestOSSContractRename.java
* (add) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSOutputStream.java
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/OSSContract.java
* (edit) hadoop-tools/pom.xml
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestOSSInputStream.java
* (add) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSUtils.java
* (add) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/package-info.java
* (add) hadoop-tools/hadoop-aliyun/src/test/resources/contract/oss.xml
* (add) hadoop-tools/hadoop-aliyun/pom.xml
* (edit) hadoop-project/pom.xml
* (add) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/Constants.java
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestOSSContractMkdir.java
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestOSSOutputStream.java
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestOSSFileSystemContract.java
* (edit) hadoop-tools/hadoop-tools-dist/pom.xml
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestOSSContractSeek.java
* (add) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSInputStream.java
* (add) hadoop-tools/hadoop-aliyun/src/test/resources/core-site.xml
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestOSSContractOpen.java
* (edit) .gitignore
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestOSSContractCreate.java
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/OSSTestUtils.java
* (add) hadoop-tools/hadoop-aliyun/src/test/resources/log4j.properties
* (add) hadoop-tools/hadoop-aliyun/dev-support/findbugs-exclude.xml


> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: shimingfei
>Assignee: shimingfei
> Fix For: HADOOP-12756, 3.0.0-alpha2
>
> Attachments: Aliyun-OSS-integration-v2.pdf, 
> Aliyun-OSS-integration.pdf, HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, 
> HADOOP-12756.007.patch, HADOOP-12756.008.patch, HADOOP-12756.009.patch, 
> HADOOP-12756.010.patch, HCFS User manual.md, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read and write data on OSS without any code 
> change, narrowing the gap between the user’s application and its data storage, 
> as has been done for S3 in Hadoop. 
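
As a hedged example of the "simple configuration" claim (the property names
below are assumptions based on the hadoop-aliyun module and should be checked
against its index.md documentation):

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OssQuickStart {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Assumed property names; endpoint and credentials are placeholders.
    conf.set("fs.oss.impl",
        "org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem");
    conf.set("fs.oss.endpoint", "oss-cn-hangzhou.aliyuncs.com");
    conf.set("fs.oss.accessKeyId", "<access-key-id>");
    conf.set("fs.oss.accessKeySecret", "<access-key-secret>");
    try (FileSystem fs = FileSystem.get(URI.create("oss://mybucket/"), conf)) {
      for (FileStatus st : fs.listStatus(new Path("/"))) {
        System.out.println(st.getPath());
      }
    }
  }
}
{code}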



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13701) AbstractContractRootDirectoryTest can fail when handling delete "/"

2016-10-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563775#comment-15563775
 ] 

Hudson commented on HADOOP-13701:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10583 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10583/])
HADOOP-13701. AbstractContractRootDirectoryTest can fail when handling 
(kai.zheng: rev c31b5e61b1f09949548116309218a2b3e9c0beda)
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java


> AbstractContractRootDirectoryTest can fail when handling delete "/"
> ---
>
> Key: HADOOP-13701
> URL: https://issues.apache.org/jira/browse/HADOOP-13701
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13701-HADOOP-12756.001.patch, 
> HADOOP-13701-HADOOP-12756.002.patch, HADOOP-13701-HADOOP-12756.003.patch
>
>
> AbstractContractRootDirectoryTest in hadoop-aliyun failed with the patch in 
> HADOOP-12977. aliyun-oss also needs to support {{rm -rf "/"}}.
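
To make the expected semantics concrete, a hedged sketch (illustrative only,
not the committed patch): a recursive delete of the root removes its children
but leaves the root directory itself in place.

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

final class RootDelete {
  // delete("/", recursive): empty the root, but the root itself remains.
  static boolean deleteRoot(FileSystem fs, boolean recursive)
      throws IOException {
    Path root = new Path("/");
    FileStatus[] children = fs.listStatus(root);
    if (!recursive && children.length > 0) {
      return false; // non-recursive delete of a non-empty root must fail
    }
    for (FileStatus child : children) {
      fs.delete(child.getPath(), true);
    }
    return true;
  }
}
{code}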



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13634) Some configuration in Aliyun doc has been outdated

2016-10-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563771#comment-15563771
 ] 

Hudson commented on HADOOP-13634:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10583 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10583/])
HADOOP-13634. Some configuration in doc has been outdated. Contributed 
(kai.zheng: rev 26d5df390cf976dcc1d17fc68d0fed789dc34e84)
* (edit) 
hadoop-tools/hadoop-aliyun/src/site/markdown/tools/hadoop-aliyun/index.md


> Some configuration in Aliyun doc has been outdated
> --
>
> Key: HADOOP-13634
> URL: https://issues.apache.org/jira/browse/HADOOP-13634
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13634-HADOOP-12756.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13624) Rename TestAliyunOSSContractDispCp

2016-10-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563773#comment-15563773
 ] 

Hudson commented on HADOOP-13624:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10583 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10583/])
HADOOP-13624. Rename TestAliyunOSSContractDispCp. Contributed by Genmao 
(kai.zheng: rev 22af6f8db3a44cd51514b4851b99adcfad42751d)
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystemStore.java
* (delete) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractDispCp.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSInputStream.java
* (add) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/contract/TestAliyunOSSContractDistCp.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemStore.java


> Rename TestAliyunOSSContractDispCp
> --
>
> Key: HADOOP-13624
> URL: https://issues.apache.org/jira/browse/HADOOP-13624
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Kai Zheng
>Assignee: Genmao Yu
> Fix For: HADOOP-12756
>
> Attachments: HADOOP-13624-HADOOP-12756.001.patch
>
>
> It should be TestAliyunOSSContractDistCp.java instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12774) s3a should use UGI.getCurrentUser.getShortname() for username

2016-10-10 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563662#comment-15563662
 ] 

Chris Nauroth commented on HADOOP-12774:


Steve, I entered some comments on the pull request.  In addition, Checkstyle 
flagged a few things that can be cleaned up.  I needed to submit a pre-commit 
run manually at builds.apache.org to get that last report.

> s3a should use UGI.getCurrentUser.getShortname() for username
> -
>
> Key: HADOOP-12774
> URL: https://issues.apache.org/jira/browse/HADOOP-12774
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> S3a uses {{System.getProperty("user.name")}} to get the username for the 
> homedir. This is wrong, as it doesn't work on a YARN app where the identity 
> is set by HADOOP_USER_NAME, or in a doAs clause.
> Obviously, {{UGI.getCurrentUser.getShortname()}} provides that name, 
> everywhere. 
> This is a simple change in the source, though testing is harder ... probably 
> best to try in a doAs
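
A minimal sketch of the proposed replacement; the wrapper class is an
assumption, while {{UserGroupInformation#getShortUserName()}} is the actual API
behind the JIRA's {{getShortname()}} shorthand:

{code:java}
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

final class HomeDirUser {
  // Resolve the username from the Hadoop login identity instead of the JVM
  // system property, so HADOOP_USER_NAME and doAs() contexts are honoured.
  static String username() throws IOException {
    return UserGroupInformation.getCurrentUser().getShortUserName();
  }
}
{code}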



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12774) s3a should use UGI.getCurrentUser.getShortname() for username

2016-10-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563653#comment-15563653
 ] 

ASF GitHub Bot commented on HADOOP-12774:
-

Github user cnauroth commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/136#discussion_r82690355
  
--- Diff: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileStatus.java
 ---
@@ -32,18 +32,24 @@
 @InterfaceStability.Evolving
 public class S3AFileStatus extends FileStatus {
   private boolean isEmptyDirectory;
+  private final String owner;
 
   // Directories
-  public S3AFileStatus(boolean isdir, boolean isemptydir, Path path) {
+  public S3AFileStatus(boolean isdir,
+  boolean isemptydir,
+  Path path,
+  String owner) {
 super(0, isdir, 1, 0, 0, path);
 isEmptyDirectory = isemptydir;
+this.owner = owner;
   }
 
   // Files
   public S3AFileStatus(long length, long modification_time, Path path,
-  long blockSize) {
+  long blockSize, String owner) {
 super(length, false, 1, blockSize, modification_time, path);
 isEmptyDirectory = false;
+this.owner = owner;
--- End diff --

Similar to the earlier comment:

this.setOwner(owner);
this.setGroup(owner);



> s3a should use UGI.getCurrentUser.getShortname() for username
> -
>
> Key: HADOOP-12774
> URL: https://issues.apache.org/jira/browse/HADOOP-12774
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> S3a uses {{System.getProperty("user.name")}} to get the username for the 
> homedir. This is wrong, as it doesn't work on a YARN app where the identity 
> is set by HADOOP_USER_NAME, or in a doAs clause.
> Obviously, {{UGI.getCurrentUser.getShortname()}} provides that name, 
> everywhere. 
> This is a simple change in the source, though testing is harder ... probably 
> best to try in a doAs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12774) s3a should use UGI.getCurrentUser.getShortname() for username

2016-10-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563644#comment-15563644
 ] 

ASF GitHub Bot commented on HADOOP-12774:
-

Github user cnauroth commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/136#discussion_r82688916
  
--- Diff: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileStatus.java
 ---
@@ -52,7 +58,16 @@ public boolean isEmptyDirectory() {
 
   @Override
   public String getOwner() {
-return System.getProperty("user.name");
+return owner;
+  }
+
+  /**
+   * The group of an S3A entry is the same as the owner
+   * @return the owner.
+   */
+  @Override
+  public String getGroup() {
--- End diff --

The override of `getGroup` could be removed too.


> s3a should use UGI.getCurrentUser.getShortname() for username
> -
>
> Key: HADOOP-12774
> URL: https://issues.apache.org/jira/browse/HADOOP-12774
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> S3a uses {{System.getProperty("user.name")}} to get the username for the 
> homedir. This is wrong, as it doesn't work on a YARN app where the identity 
> is set by HADOOP_USER_NAME, or in a doAs clause.
> Obviously, {{UGI.getCurrentUser.getShortname()}} provides that name, 
> everywhere. 
> This is a simple change in the source, though testing is harder ... probably 
> best to try in a doAs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12774) s3a should use UGI.getCurrentUser.getShortname() for username

2016-10-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563645#comment-15563645
 ] 

ASF GitHub Bot commented on HADOOP-12774:
-

Github user cnauroth commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/136#discussion_r82688403
  
--- Diff: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileStatus.java
 ---
@@ -32,18 +32,24 @@
 @InterfaceStability.Evolving
 public class S3AFileStatus extends FileStatus {
   private boolean isEmptyDirectory;
+  private final String owner;
--- End diff --

I think we can achieve this change without adding a member variable in the 
subclass.  (See more specific notes to follow.)  Removing this member variable 
would reduce memory footprint.  Admittedly, the memory cost is probably not 
significant, but as we start thinking about the possibility of caching 
`FileStatus` instances client-side for things like S3Guard, then the 
per-instance memory cost of each `FileStatus` could become more significant.


> s3a should use UGI.getCurrentUser.getShortname() for username
> -
>
> Key: HADOOP-12774
> URL: https://issues.apache.org/jira/browse/HADOOP-12774
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> S3a uses {{System.getProperty("user.name")}} to get the username for the 
> homedir. This is wrong, as it doesn't work on a YARN app where the identity 
> is set by HADOOP_USER_NAME, or in a doAs clause.
> Obviously, {{UGI.getCurrentUser.getShortname()}} provides that name, 
> everywhere. 
> This is a simple change in the source, though testing is harder ... probably 
> best to try in a doAs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12774) s3a should use UGI.getCurrentUser.getShortname() for username

2016-10-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563643#comment-15563643
 ] 

ASF GitHub Bot commented on HADOOP-12774:
-

Github user cnauroth commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/136#discussion_r82689328
  
--- Diff: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AConfiguration.java
 ---
@@ -42,8 +42,11 @@
 
 import java.io.File;
 import java.net.URI;
+import java.security.PrivilegedAction;
--- End diff --

Unused import?


> s3a should use UGI.getCurrentUser.getShortname() for username
> -
>
> Key: HADOOP-12774
> URL: https://issues.apache.org/jira/browse/HADOOP-12774
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> S3a uses {{System.getProperty("user.name")}} to get the username for the 
> homedir. This is wrong, as it doesn't work on a YARN app where the identity 
> is set by HADOOP_USER_NAME, or in a doAs clause.
> Obviously, {{UGI.getCurrentUser.getShortname()}} provides that name, 
> everywhere. 
> This is a simple change in the source, though testing is harder ... probably 
> best to try in a doAs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12774) s3a should use UGI.getCurrentUser.getShortname() for username

2016-10-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563642#comment-15563642
 ] 

ASF GitHub Bot commented on HADOOP-12774:
-

Github user cnauroth commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/136#discussion_r82688875
  
--- Diff: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileStatus.java
 ---
@@ -52,7 +58,16 @@ public boolean isEmptyDirectory() {
 
   @Override
   public String getOwner() {
--- End diff --

If we implement the above comments, then we can completely remove the 
override of `getOwner` here.


> s3a should use UGI.getCurrentUser.getShortname() for username
> -
>
> Key: HADOOP-12774
> URL: https://issues.apache.org/jira/browse/HADOOP-12774
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> S3a uses {{System.getProperty("user.name")}} to get the username for the 
> homedir. This is wrong, as it doesn't work on a YARN app where the identity 
> is set by HADOOP_USER_NAME, or in a doAs clause.
> Obviously, {{UGI.getCurrentUser.getShortname()}} provides that name, 
> everywhere. 
> This is a simple change in the source, though testing is harder ... probably 
> best to try in a doAs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12774) s3a should use UGI.getCurrentUser.getShortname() for username

2016-10-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563646#comment-15563646
 ] 

ASF GitHub Bot commented on HADOOP-12774:
-

Github user cnauroth commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/136#discussion_r82688624
  
--- Diff: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileStatus.java
 ---
@@ -32,18 +32,24 @@
 @InterfaceStability.Evolving
 public class S3AFileStatus extends FileStatus {
   private boolean isEmptyDirectory;
+  private final String owner;
 
   // Directories
-  public S3AFileStatus(boolean isdir, boolean isemptydir, Path path) {
+  public S3AFileStatus(boolean isdir,
+  boolean isemptydir,
+  Path path,
+  String owner) {
 super(0, isdir, 1, 0, 0, path);
 isEmptyDirectory = isemptydir;
+this.owner = owner;
--- End diff --

Consider removing the member variable and changing this line of code to:

this.setOwner(owner);
this.setGroup(owner);

(These are `protected` methods in the base class.)


> s3a should use UGI.getCurrentUser.getShortname() for username
> -
>
> Key: HADOOP-12774
> URL: https://issues.apache.org/jira/browse/HADOOP-12774
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> S3a uses {{System.getProperty("user.name")}} to get the username for the 
> homedir. This is wrong, as it doesn't work on a YARN app where the identity 
> is set by HADOOP_USER_NAME, or in a doAs clause.
> Obviously, {{UGI.getCurrentUser.getShortname()}} provides that name, 
> everywhere. 
> This is a simple change in the source, though testing is harder ... probably 
> best to try in a doAs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-10-10 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12756:
---
Release Note: 
Aliyun OSS is widely used among China’s cloud users, and this work implemented a 
new Hadoop-compatible filesystem, AliyunOSSFileSystem, with the oss scheme, 
similar to the s3a and azure support.


  was:
Aliyun OSS is widely used among China’s cloud users, and this work implemented a 
new Hadoop-compatible filesystem, {{AliyunOSSFileSystem}}, with the {{oss}} 
scheme, similar to the s3a and azure support.



> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: shimingfei
>Assignee: shimingfei
> Fix For: HADOOP-12756, 3.0.0-alpha2
>
> Attachments: Aliyun-OSS-integration-v2.pdf, 
> Aliyun-OSS-integration.pdf, HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, 
> HADOOP-12756.007.patch, HADOOP-12756.008.patch, HADOOP-12756.009.patch, 
> HADOOP-12756.010.patch, HCFS User manual.md, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read and write data on OSS without any code 
> change, narrowing the gap between the user’s application and its data storage, 
> as has been done for S3 in Hadoop. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-10-10 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12756:
---
Fix Version/s: 3.0.0-alpha2
 Release Note: 
Aliyun OSS is widely used among China’s cloud users, and this work implemented a 
new Hadoop-compatible filesystem, {{AliyunOSSFileSystem}}, with the {{oss}} 
scheme, similar to the s3a and azure support.


Added a release note.

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: shimingfei
>Assignee: shimingfei
> Fix For: HADOOP-12756, 3.0.0-alpha2
>
> Attachments: Aliyun-OSS-integration-v2.pdf, 
> Aliyun-OSS-integration.pdf, HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, 
> HADOOP-12756.007.patch, HADOOP-12756.008.patch, HADOOP-12756.009.patch, 
> HADOOP-12756.010.patch, HCFS User manual.md, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read and write data on OSS without any code 
> change, narrowing the gap between the user’s application and its data storage, 
> as has been done for S3 in Hadoop. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13584) hadoop-aliyun: merge HADOOP-12756 branch back

2016-10-10 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng resolved HADOOP-13584.

Resolution: Fixed

I have merged the branch to trunk via {{git merge --no-ff}} according to the 
vote result and the discussions in HADOOP-12756.

> hadoop-aliyun: merge HADOOP-12756 branch back
> -
>
> Key: HADOOP-13584
> URL: https://issues.apache.org/jira/browse/HADOOP-13584
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: shimingfei
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13584.001.patch, HADOOP-13584.002.patch, 
> HADOOP-13584.003.patch, HADOOP-13584.004.patch, HADOOP-13584.005.patch
>
>
> We have finished a round of improvement over the HADOOP-12756 branch, which 
> intends to incorporate Aliyun OSS support in Hadoop. This feature provides 
> basic support for data access to Aliyun OSS from Hadoop applications.
> In the implementation, we follow the style of the S3 support in Hadoop. 
> Besides that, we also provide FileSystem contract tests against a real Aliyun 
> OSS environment; by simple configuration, they can be enabled/disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12774) s3a should use UGI.getCurrentUser.getShortname() for username

2016-10-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563592#comment-15563592
 ] 

Hadoop QA commented on HADOOP-12774:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
33s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 3 
new + 9 unchanged - 0 fixed = 12 total (was 9) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_111. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HADOOP-12774 |
| GITHUB PR | https://github.com/apache/hadoop/pull/136 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux dd5983efa5c3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / de30f13 |
| Default Java | 1.7.0_111 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_101 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_111 |
| findbugs | v3.0.0 |
| 

[jira] [Commented] (HADOOP-12756) Incorporate Aliyun OSS file system implementation

2016-10-10 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563593#comment-15563593
 ] 

Kai Zheng commented on HADOOP-12756:


I have merged the branch to trunk via {{git merge --no-ff}} according to the 
vote result and above discussions. Thanks all for your nice support!

> Incorporate Aliyun OSS file system implementation
> -
>
> Key: HADOOP-12756
> URL: https://issues.apache.org/jira/browse/HADOOP-12756
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: HADOOP-12756
>Reporter: shimingfei
>Assignee: shimingfei
> Fix For: HADOOP-12756
>
> Attachments: Aliyun-OSS-integration-v2.pdf, 
> Aliyun-OSS-integration.pdf, HADOOP-12756-v02.patch, HADOOP-12756.003.patch, 
> HADOOP-12756.004.patch, HADOOP-12756.005.patch, HADOOP-12756.006.patch, 
> HADOOP-12756.007.patch, HADOOP-12756.008.patch, HADOOP-12756.009.patch, 
> HADOOP-12756.010.patch, HCFS User manual.md, OSS integration.pdf
>
>
> Aliyun OSS is widely used among China’s cloud users, but currently it is not 
> easy to access data stored on OSS from a user’s Hadoop/Spark application, 
> because Hadoop has no native support for OSS.
> This work aims to integrate Aliyun OSS with Hadoop. With simple configuration, 
> Spark/Hadoop applications can read and write data on OSS without any code 
> change, narrowing the gap between the user’s application and its data storage, 
> as has been done for S3 in Hadoop. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13703) S3ABlockOutputStream to pass Yetus & Jenkins

2016-10-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563589#comment-15563589
 ] 

Hadoop QA commented on HADOOP-13703:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 24 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 4m 0s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 9s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 23s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 21s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 27s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 26s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 29s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 20s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 24s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 29s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 6m 29s{color} | {color:red} root-jdk1.7.0_111 with JDK v1.7.0_111 generated 1 new + 949 unchanged - 1 fixed = 950 total (was 950) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 28s{color} | {color:orange} root: The patch generated 16 new + 47 unchanged - 3 fixed = 63 total (was 50) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 62 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 54s{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 29s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m

[jira] [Updated] (HADOOP-12667) s3a: Support createNonRecursive API

2016-10-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12667:
---
Release Note: S3A now provides a working implementation of the 
FileSystem#createNonRecursive method.
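For anyone landing here from the release note, a minimal sketch of calling the API. This is illustrative only, not taken from the patch; the s3a URI, buffer/replication/block-size values are placeholders:
{code}
// Sketch only: createNonRecursive() fails if the parent directory is absent,
// unlike create(), which creates missing parent directories.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CreateNonRecursiveExample {
  public static void main(String[] args) throws Exception {
    Path file = new Path("s3a://example-bucket/dir/part-00000");  // placeholder URI
    FileSystem fs = file.getFileSystem(new Configuration());
    try (FSDataOutputStream out = fs.createNonRecursive(
        file, true /* overwrite */, 4096, (short) 1, 64 * 1024 * 1024, null)) {
      out.write("hello".getBytes("UTF-8"));
    }
  }
}
{code}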

> s3a: Support createNonRecursive API
> ---
>
> Key: HADOOP-12667
> URL: https://issues.apache.org/jira/browse/HADOOP-12667
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Fix For: 2.8.0
>
> Attachments: HADOOP-12667-branch-2-002.patch, 
> HADOOP-12667-branch-2-003.patch, HADOOP-12667-branch-2-004.patch, 
> HADOOP-12667.001.patch
>
>
> HBase and other clients rely on the createNonRecursive API, which was 
> recently un-deprecated. S3A currently does not support it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13686) Adding additional unit test for Trash (I)

2016-10-10 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563561#comment-15563561
 ] 

Xiaoyu Yao edited comment on HADOOP-13686 at 10/10/16 9:29 PM:
---

Thanks [~Weiwei Yang] for working on this. The patch looks good to me overall. 
Here is some early feedback.

1. testMoveEmptyDirToTrash

Can we add a helper method that takes a FileSystem obj as a parameter, so that 
this test can exercise Trash not only against the raw file system but also 
against other HCFS implementations?

Can we further verify that the only directory under trash is the empty 
directory?

verifyDefaultPolicyIntervalValues
{{FileSystem fs = null;}} can be removed.

2. testTrashPermission
Can we add a helper method that takes a FileSystem obj as a parameter, so that 
this test can exercise Trash not only against the raw file system but also 
against other HCFS implementations?

3. NIT: Can we use try-with-resources to simplify the logic? (see the sketch 
below)
{code}
try {
  // ...
} finally {
  if (fs != null) {
    fs.close();
  }
}
{code}
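For illustration, the try-with-resources form this nit is suggesting might look like the following; {{conf}} and the body are placeholders for whatever the test actually does:
{code}
// FileSystem implements Closeable, so it can sit in the resource clause;
// close() then runs automatically, replacing the explicit finally block.
try (FileSystem fs = FileSystem.getLocal(conf)) {
  // ... exercise Trash against fs ...
}
{code}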

4. Let's move AuditableTrashPolicy/AuditableCheckpoints into a separate file 
for reuse with HDFS-10922.

5. NIT: AuditableCheckpoints can be a static class, but I would suggest 
declaring the member variables/methods non-static. This avoids issues when 
running multiple AuditableTrashPolicy instances.




was (Author: xyao):
Thanks [~Weiwei Yang] for working on this. The patch looks good to me overall. 
Here is some early feedback.

1. testMoveEmptyDirToTrash

Can we add a helper method that takes a FileSystem obj as a parameter, so that 
this test can exercise Trash not only against the raw file system but also 
against other HCFS implementations?

Can we further verify that the only directory under trash is the empty 
directory?

verifyDefaultPolicyIntervalValues
{{FileSystem fs = null;}} can be removed.

2. testTrashPermission
Can we add a helper method that takes a FileSystem obj as a parameter, so that 
this test can exercise Trash not only against the raw file system but also 
against other HCFS implementations?

3. NIT: Can we use try-with-resources to simplify the logic?
{code}
try {
  // ...
} finally {
  if (fs != null) {
    fs.close();
  }
}
{code}


4. NIT: AuditableCheckpoints can be a static inner class, but I would suggest 
declaring the member variables/methods non-static. This avoids issues when 
running multiple AuditableTrashPolicy instances.

> Adding additional unit test for Trash (I)
> -
>
> Key: HADOOP-13686
> URL: https://issues.apache.org/jira/browse/HADOOP-13686
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Xiaoyu Yao
>Assignee: Weiwei Yang
> Attachments: HADOOP-13686.01.patch
>
>
> This ticket is opened to track adding the following unit tests in 
> hadoop-common. 
> #test users can delete their own trash directory
> #test users can delete an empty directory and the directory is moved to trash
> #test fs.trash.interval with invalid values such as 0 or negative



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13252) Tune S3A provider plugin mechanism

2016-10-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13252:
---
Release Note: S3A now supports configuration of multiple credential 
provider classes for authenticating to S3.  These are loaded and queried in 
sequence for a valid set of credentials.  For more details, refer to the 
description of the fs.s3a.aws.credentials.provider configuration property or 
the S3A documentation page.
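As an illustration only (the particular chain below is an assumption, not something stated in this release note), a provider list can be configured programmatically:
{code}
// Providers are tried in order until one yields a valid set of credentials.
// Class names assumed from hadoop-aws and the AWS SDK of this era.
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
conf.set("fs.s3a.aws.credentials.provider",
    "com.amazonaws.auth.EnvironmentVariableCredentialsProvider,"
    + "org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider");
{code}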

> Tune S3A provider plugin mechanism
> --
>
> Key: HADOOP-13252
> URL: https://issues.apache.org/jira/browse/HADOOP-13252
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13252-006.patch, HADOOP-13252-007.patch, 
> HADOOP-13252-branch-2-001.patch, HADOOP-13252-branch-2-003.patch, 
> HADOOP-13252-branch-2-004.patch, HADOOP-13252-branch-2-005.patch
>
>
> We've now got some fairly complex auth mechanisms going on: hadoop config, 
> KMS, env vars, "none". If something isn't working, it's going to be a lot 
> harder to debug.
> Review and tune the S3A provider plugin point:
> * add logging of what's going on in s3 auth to help debug problems
> * make a whole chain of logins expressible
> * allow the anonymous credentials to be included in the list
> * review and update documentation.
> I propose *carefully* adding some debug messages to identify which auth 
> provider is doing the auth, so we can see if the env vars were kicking in, 
> sysprops, etc.
> What we mustn't do is leak any secrets: this should be identifying whether 
> properties and env vars are set, not what their values are. I don't believe 
> that this will generate a security risk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12774) s3a should use UGI.getCurrentUser.getShortname() for username

2016-10-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12774:
---
Assignee: Steve Loughran  (was: Chris Nauroth)

> s3a should use UGI.getCurrentUser.getShortname() for username
> -
>
> Key: HADOOP-12774
> URL: https://issues.apache.org/jira/browse/HADOOP-12774
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> S3a uses {{System.getProperty("user.name")}} to get the username for the 
> homedir. This is wrong, as it doesn't work on a YARN app where the identity 
> is set by HADOOP_USER_NAME, or in a doAs clause.
> Obviously, {{UGI.getCurrentUser.getShortname()}} provides that name, 
> everywhere. 
> This is a simple change in the source, though testing is harder ... probably 
> best to try in a doAs
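A minimal sketch of the change being described; the helper method is hypothetical, and note the actual method on UserGroupInformation is spelled {{getShortUserName()}}:
{code}
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

static String homeDirUser() throws IOException {
  // Wrong: System.getProperty("user.name") ignores HADOOP_USER_NAME and doAs.
  // Right: resolve the effective Hadoop identity, which works everywhere:
  return UserGroupInformation.getCurrentUser().getShortUserName();
}
{code}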



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-12774) s3a should use UGI.getCurrentUser.getShortname() for username

2016-10-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth reassigned HADOOP-12774:
--

Assignee: Chris Nauroth  (was: Steve Loughran)

> s3a should use UGI.getCurrentUser.getShortname() for username
> -
>
> Key: HADOOP-12774
> URL: https://issues.apache.org/jira/browse/HADOOP-12774
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Chris Nauroth
>
> S3a uses {{System.getProperty("user.name")}} to get the username for the 
> homedir. This is wrong, as it doesn't work on a YARN app where the identity 
> is set by HADOOP_USER_NAME, or in a doAs clause.
> Obviously, {{UGI.getCurrentUser.getShortname()}} provides that name, 
> everywhere. 
> This is a simple change in the source, though testing is harder ... probably 
> best to try in a doAs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13447) Refactor S3AFileSystem to support introduction of separate metadata repository and tests.

2016-10-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13447:
---
Target Version/s: 2.8.0  (was: 2.9.0)
   Fix Version/s: (was: 2.9.0)
  2.8.0

I cherry-picked this to branch-2.8.

> Refactor S3AFileSystem to support introduction of separate metadata 
> repository and tests.
> -
>
> Key: HADOOP-13447
> URL: https://issues.apache.org/jira/browse/HADOOP-13447
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13447-HADOOP-13446.001.patch, 
> HADOOP-13447-HADOOP-13446.002.patch, HADOOP-13447.003.patch, 
> HADOOP-13447.004.patch, HADOOP-13447.005.patch
>
>
> The scope of this issue is to refactor the existing {{S3AFileSystem}} into 
> multiple coordinating classes.  The goal of this refactoring is to separate 
> the {{FileSystem}} API binding from the AWS SDK integration, make code 
> maintenance easier while we're making changes for S3Guard, and make it easier 
> to mock some implementation details so that tests can simulate eventual 
> consistency behavior in a deterministic way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13686) Adding additional unit test for Trash (I)

2016-10-10 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563561#comment-15563561
 ] 

Xiaoyu Yao commented on HADOOP-13686:
-

Thanks [~Weiwei Yang] for working on this. The patch looks good to me overall. 
Here is some early feedback.

1. testMoveEmptyDirToTrash

Can we add a helper method that takes a FileSystem obj as a parameter, so that 
this test can exercise Trash not only against the raw file system but also 
against other HCFS implementations?

Can we further verify that the only directory under trash is the empty 
directory?

verifyDefaultPolicyIntervalValues
{{FileSystem fs = null;}} can be removed.

2. testTrashPermission
Can we add a helper method that takes a FileSystem obj as a parameter, so that 
this test can exercise Trash not only against the raw file system but also 
against other HCFS implementations?

3. NIT: Can we use try-with-resources to simplify the logic?
{code}
try {
  // ...
} finally {
  if (fs != null) {
    fs.close();
  }
}
{code}


4. NIT: AuditableCheckpoints can be a static inner class, but I would suggest 
declaring the member variables/methods non-static. This avoids issues when 
running multiple AuditableTrashPolicy instances.

> Adding additional unit test for Trash (I)
> -
>
> Key: HADOOP-13686
> URL: https://issues.apache.org/jira/browse/HADOOP-13686
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Xiaoyu Yao
>Assignee: Weiwei Yang
> Attachments: HADOOP-13686.01.patch
>
>
> This ticket is opened to track adding the following unit tests in 
> hadoop-common. 
> #test users can delete their own trash directory
> #test users can delete an empty directory and the directory is moved to trash
> #test fs.trash.interval with invalid values such as 0 or negative



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13208) S3A listFiles(recursive=true) to do a bulk listObjects instead of walking the pseudo-tree of directories

2016-10-10 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563553#comment-15563553
 ] 

Chris Nauroth edited comment on HADOOP-13208 at 10/10/16 9:22 PM:
--

I cherry-picked this to branch-2.8 for inclusion in 2.8.0.


was (Author: cnauroth):
I cherry-picked this to branch-2.8 for inclusion 2.8.0.

> S3A listFiles(recursive=true) to do a bulk listObjects instead of walking the 
> pseudo-tree of directories
> 
>
> Key: HADOOP-13208
> URL: https://issues.apache.org/jira/browse/HADOOP-13208
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13208-branch-2-001.patch, 
> HADOOP-13208-branch-2-007.patch, HADOOP-13208-branch-2-008.patch, 
> HADOOP-13208-branch-2-009.patch, HADOOP-13208-branch-2-010.patch, 
> HADOOP-13208-branch-2-011.patch, HADOOP-13208-branch-2-012.patch, 
> HADOOP-13208-branch-2-017.patch, HADOOP-13208-branch-2-018.patch, 
> HADOOP-13208-branch-2-019.patch, HADOOP-13208-branch-2-020.patch, 
> HADOOP-13208-branch-2-021.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> A major cost in split calculation against object stores turns out to be listing 
> the directory tree itself. That's because against S3, it takes S3A two HEADs 
> and two lists to list the content of any directory path (2 HEADs + 1 list for 
> getFileStatus(); the next list to query the contents).
> Listing a directory could be improved slightly by combining the final two 
> listings. However, a listing of a directory tree will still be 
> O(directories). In contrast, a recursive {{listFiles()}} operation should be 
> implementable by a bulk listing of all descendant paths; one List operation 
> per thousand descendants. 
> As the result of this call is an iterator, the ongoing listing can be 
> implemented within the iterator itself.
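A quick sketch of the consumer side of this API; the bucket path is a placeholder and {{fs}} is assumed to be a FileSystem bound to it:
{code}
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

// The bulk listing hides behind the RemoteIterator, so callers can start
// processing results before the full listing has been fetched.
RemoteIterator<LocatedFileStatus> it =
    fs.listFiles(new Path("s3a://example-bucket/data"), true /* recursive */);
while (it.hasNext()) {
  LocatedFileStatus status = it.next();
  System.out.println(status.getPath() + " " + status.getLen());
}
{code}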



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13446) Support running isolated unit tests separate from AWS integration tests.

2016-10-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13446:
---
Target Version/s: 2.8.0  (was: 2.9.0)
   Fix Version/s: (was: 2.9.0)
  2.8.0

I cherry-picked this to branch-2.8.

> Support running isolated unit tests separate from AWS integration tests.
> 
>
> Key: HADOOP-13446
> URL: https://issues.apache.org/jira/browse/HADOOP-13446
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13446-HADOOP-13345.001.patch, 
> HADOOP-13446-HADOOP-13345.002.patch, HADOOP-13446-HADOOP-13345.003.patch, 
> HADOOP-13446-branch-2.006.patch, HADOOP-13446.004.patch, 
> HADOOP-13446.005.patch, HADOOP-13446.006.patch
>
>
> Currently, the hadoop-aws module only runs Surefire if AWS credentials have 
> been configured.  This implies that all tests must run integrated with the 
> AWS back-end.  It also means that no tests run as part of ASF pre-commit.  
> This issue proposes for the hadoop-aws module to support running isolated 
> unit tests without integrating with AWS.  This will benefit S3Guard, because 
> we expect the need for isolated mock-based testing to simulate eventual 
> consistency behavior.  It also benefits hadoop-aws in general by allowing 
> pre-commit to do something more valuable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13208) S3A listFiles(recursive=true) to do a bulk listObjects instead of walking the pseudo-tree of directories

2016-10-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13208:
---
Affects Version/s: (was: 2.8.0)
 Target Version/s: 2.8.0  (was: 2.9.0)
Fix Version/s: (was: 2.9.0)
   2.8.0

I cherry-picked this to branch-2.8 for inclusion in 2.8.0.

> S3A listFiles(recursive=true) to do a bulk listObjects instead of walking the 
> pseudo-tree of directories
> 
>
> Key: HADOOP-13208
> URL: https://issues.apache.org/jira/browse/HADOOP-13208
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13208-branch-2-001.patch, 
> HADOOP-13208-branch-2-007.patch, HADOOP-13208-branch-2-008.patch, 
> HADOOP-13208-branch-2-009.patch, HADOOP-13208-branch-2-010.patch, 
> HADOOP-13208-branch-2-011.patch, HADOOP-13208-branch-2-012.patch, 
> HADOOP-13208-branch-2-017.patch, HADOOP-13208-branch-2-018.patch, 
> HADOOP-13208-branch-2-019.patch, HADOOP-13208-branch-2-020.patch, 
> HADOOP-13208-branch-2-021.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> A major cost in split calculation against object stores turns out to be listing 
> the directory tree itself. That's because against S3, it takes S3A two HEADs 
> and two lists to list the content of any directory path (2 HEADs + 1 list for 
> getFileStatus(); the next list to query the contents).
> Listing a directory could be improved slightly by combining the final two 
> listings. However, a listing of a directory tree will still be 
> O(directories). In contrast, a recursive {{listFiles()}} operation should be 
> implementable by a bulk listing of all descendant paths; one List operation 
> per thousand descendants. 
> As the result of this call is an iterator, the ongoing listing can be 
> implemented within the iterator itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13024) Distcp with -delete feature on raw data not implemented

2016-10-10 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563391#comment-15563391
 ] 

Jing Zhao commented on HADOOP-13024:


Thanks for working on this, [~mavinmar...@gmail.com]. The patch looks good to 
me. Just two minor comments:
# Semantically the new getNonePath method only needs a boolean input to 
indicate whether "/NONE" should be presented as a reserved raw path, so it's 
better to change its signature to "Path getNonePath(boolean)". Also, both {{new 
Path("/NONE")}} and {{new Path(DistCpConstants.HDFS_RESERVED_RAW_DIRECTORY_NAME 
+ "/NONE")}} can be defined as static constants so that we can avoid creating 
new objects (see the sketch after this list).
# Is it possible to add a unit test for this?
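A rough sketch of that shape; the constant names and the boolean parameter name are hypothetical, not from the patch:
{code}
// Hypothetical constants/signature for the suggestion above.
private static final Path NONE_PATH = new Path("/NONE");
private static final Path RAW_NONE_PATH =
    new Path(DistCpConstants.HDFS_RESERVED_RAW_DIRECTORY_NAME + "/NONE");

static Path getNonePath(boolean useReservedRaw) {
  // Reuse the constants instead of building a new Path on every call.
  return useReservedRaw ? RAW_NONE_PATH : NONE_PATH;
}
{code}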

> Distcp with -delete feature on raw data not implemented
> ---
>
> Key: HADOOP-13024
> URL: https://issues.apache.org/jira/browse/HADOOP-13024
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Mavin Martin
>Assignee: Mavin Martin
> Attachments: HADOOP-13024.patch, HADOOP-13024.patch, 
> HADOOP-13024.patch.3, HADOOP-13024.patch.4, HADOOP-13024.patch.5
>
>
> When doing distcp of raw data using the -delete feature, the following bug appears.
> {code}
> [root@xxx bin]# hadoop distcp -delete -update /.reserved/raw/tmp/a 
> /.reserved/raw/tmp/b
> 16/04/14 02:54:01 ERROR tools.DistCp: Exception encountered
> java.io.IOException: DistCp failure: Job job_xxx has failed: Job commit 
> failed: org.apache.hadoop.tools.CopyListing$InvalidInputException: The source 
> path 'hdfs://nn/.reserved/raw/tmp/b' starts with /.reserved/raw but the 
> target path 'hdfs://nn/NONE' does not. Either all or none of the paths must 
> have this prefix.
> at 
> org.apache.hadoop.tools.SimpleCopyListing.validatePaths(SimpleCopyListing.java:141)
> at 
> org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:85)
> at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:90)
> at 
> org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
> at 
> org.apache.hadoop.tools.mapred.CopyCommitter.deleteMissing(CopyCommitter.java:244)
> at 
> org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:94)
> at 
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.handleJobCommit(CommitterEventHandler.java:274)
> at 
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.run(CommitterEventHandler.java:237)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> at org.apache.hadoop.tools.DistCp.execute(DistCp.java:187)
> at org.apache.hadoop.tools.DistCp.run(DistCp.java:122)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.tools.DistCp.main(DistCp.java:429)
> {code}
> The issue is not with the distributed copy; the issue is that when it tries to 
> delete things in the target that no longer exist in the source, it 
> revalidates to make sure NONE is in the /.reserved/raw domain.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13669) KMS Server should log exceptions before throwing

2016-10-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563381#comment-15563381
 ] 

Hudson commented on HADOOP-13669:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10580 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10580/])
HADOOP-13669. KMS Server should log exceptions before throwing. (xiao: rev 
65912e4027548868ebefd8ee36eb00fa889704a7)
* (edit) 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMS.java


> KMS Server should log exceptions before throwing
> 
>
> Key: HADOOP-13669
> URL: https://issues.apache.org/jira/browse/HADOOP-13669
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Suraj Acharya
>  Labels: supportability
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13369.2.patch, HADOOP-13369.patch, 
> HADOOP-13369.patch.1
>
>
> In some recent investigation, it turns out that when KMS throws an exception 
> (into tomcat), it's not logged anywhere and we can only see the exception 
> message from the client side, but not the stack trace. Logging the stack 
> trace would help debugging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10225) Publish Maven javadoc and sources artifacts with Hadoop releases.

2016-10-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563324#comment-15563324
 ] 

ASF GitHub Bot commented on HADOOP-10225:
-

Github user lewismc commented on the issue:

https://github.com/apache/hadoop/pull/137
  
Just squashed all the commits so it is much cleaner.


> Publish Maven javadoc and sources artifacts with Hadoop releases.
> -
>
> Key: HADOOP-10225
> URL: https://issues.apache.org/jira/browse/HADOOP-10225
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Affects Versions: 3.0.0-alpha1
>Reporter: Lewis John McGibbney
>Assignee: Lewis John McGibbney
>  Labels: hadoop, javadoc, maven, sources
> Attachments: HADOOP-10225.patch
>
>
> Right now Maven javadoc and sources artifacts do not accompany Hadoop 
> releases within Maven central. This means that one needs to check out source 
> code to DEBUG aspects of the codebase... this is not user friendly.
> The build script(s) should be amended to accommodate publication of javadoc 
> and sources artifacts alongside pom and jar artifacts. 
> Some history on this conversation can be seen below
> http://s.apache.org/7qR



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13699) Configuration does not substitute multiple references to the same var

2016-10-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563320#comment-15563320
 ] 

Hudson commented on HADOOP-13699:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10579 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10579/])
HADOOP-13699. Configuration does not substitute multiple references to (wang: 
rev 03060075c53a2cecfbf5f60b6fc77afecf64ace5)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfiguration.java


> Configuration does not substitute multiple references to the same var
> -
>
> Key: HADOOP-13699
> URL: https://issues.apache.org/jira/browse/HADOOP-13699
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13699.001.patch
>
>
> Config var loop detection was originally introduced by HADOOP-6871. Due to 
> cycle detection changes in the trunk patch for HADOOP-11506, multiple 
> references to the same variable are no longer resolved, e.g.
> {noformat}
> somekey = "${otherkey} ${otherkey}"
> {noformat}
> This loop detection business is fragile, expensive, and not in branch-2, so 
> let's reduce it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10225) Publish Maven javadoc and sources artifacts with Hadoop releases.

2016-10-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563311#comment-15563311
 ] 

ASF GitHub Bot commented on HADOOP-10225:
-

Github user lewismc commented on the issue:

https://github.com/apache/hadoop/pull/137
  
@umbrant CC


> Publish Maven javadoc and sources artifacts with Hadoop releases.
> -
>
> Key: HADOOP-10225
> URL: https://issues.apache.org/jira/browse/HADOOP-10225
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Affects Versions: 3.0.0-alpha1
>Reporter: Lewis John McGibbney
>Assignee: Lewis John McGibbney
>  Labels: hadoop, javadoc, maven, sources
> Attachments: HADOOP-10225.patch
>
>
> Right now Maven javadoc and sources artifacts do not accompany Hadoop 
> releases within Maven central. This means that one needs to check out source 
> code to DEBUG aspects of the codebase... this is not user friendly.
> The build script(s) should be amended to accommodate publication of javadoc 
> and sources artifacts alongside pom and jar artifacts. 
> Some history on this conversation can be seen below
> http://s.apache.org/7qR



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10225) Publish Maven javadoc and sources artifacts with Hadoop releases.

2016-10-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563310#comment-15563310
 ] 

ASF GitHub Bot commented on HADOOP-10225:
-

GitHub user lewismc opened a pull request:

https://github.com/apache/hadoop/pull/137

HADOOP-10225

Hi Folks,
This PR is an attempt to address 
https://issues.apache.org/jira/browse/HADOOP-10225.
The patch can be tested by running
```
mvn release:clean release:prepare -DautoVersionSubmodules=true -DdryRun=true
```
Please let me know once it has been tested and I can squash commits into 
one.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/lewismc/hadoop HADOOP-10225

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/137.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #137


commit 66c8970c1bf85cd273680a8518278183937ae1f8
Author: Lewis John McGibbney 
Date:   2016-10-10T18:24:51Z

HADOOP-10225 Publish Maven javadoc and sources artifacts with Hadoop 
releases.

commit 239a155445b06e36f513c62e9639ebe9b75e57c1
Author: Lewis John McGibbney 
Date:   2016-10-10T18:35:00Z

HADOOP-10225 Publish Maven javadoc and sources artifacts with Hadoop 
releases.

commit 6ff407a3c2177a0cd49c8127e3dc5856e9bf7969
Author: Lewis John McGibbney 
Date:   2016-10-10T18:39:12Z

HADOOP-10225 Publish Maven javadoc and sources artifacts with Hadoop 
releases.

commit 793e3bcaa2ae53a89727abdd1b27ad12f5b83e25
Author: Lewis John McGibbney 
Date:   2016-10-10T18:40:18Z

HADOOP-10225 Publish Maven javadoc and sources artifacts with Hadoop 
releases.

commit fa1c8fe97b7984eaa9e31d585302d0bbd9ae76c1
Author: Lewis John McGibbney 
Date:   2016-10-10T18:45:10Z

HADOOP-10225 Publish Maven javadoc and sources artifacts with Hadoop 
releases.

commit 116c0e9f5e226d91ad6c454e6440d4531f475ed1
Author: Lewis John McGibbney 
Date:   2016-10-10T18:47:11Z

HADOOP-10225 Publish Maven javadoc and sources artifacts with Hadoop 
releases.

commit 5b61fe48c873c09534741d0651f9d50a5fce8282
Author: Lewis John McGibbney 
Date:   2016-10-10T19:36:29Z

HADOOP-10225 Publish Maven javadoc and sources artifacts with Hadoop 
releases.

commit 8861e0c7998806dcb25ddd35292fc2ea62bce866
Author: Lewis John McGibbney 
Date:   2016-10-10T19:38:30Z

HADOOP-10225 Publish Maven javadoc and sources artifacts with Hadoop 
releases.

commit 266c349edd4ba7e1cf5260037514d73f376f1ffe
Author: Lewis John McGibbney 
Date:   2016-10-10T19:42:16Z

HADOOP-10225 Publish Maven javadoc and sources artifacts with Hadoop 
releases.

commit 70108252fa3c683cd050ccd1d32bc4d59a453d21
Author: Lewis John McGibbney 
Date:   2016-10-10T19:44:46Z

HADOOP-10225 Publish Maven javadoc and sources artifacts with Hadoop 
releases.

commit 9aef393538496c935a62564ee3677101be06c32e
Author: Lewis John McGibbney 
Date:   2016-10-10T19:49:48Z

HADOOP-10225 Publish Maven javadoc and sources artifacts with Hadoop 
releases.

commit 03fc979b9d93d9b0100c9e5c90218284de3cbaae
Author: Lewis John McGibbney 
Date:   2016-10-10T19:51:57Z

HADOOP-10225 Publish Maven javadoc and sources artifacts with Hadoop 
releases.

commit 9f031f063ccf50cbd2cdc6fc77fec3b631170c07
Author: Lewis John McGibbney 
Date:   2016-10-10T19:53:30Z

HADOOP-10225 Publish Maven javadoc and sources artifacts with Hadoop 
releases.




> Publish Maven javadoc and sources artifacts with Hadoop releases.
> -
>
> Key: HADOOP-10225
> URL: https://issues.apache.org/jira/browse/HADOOP-10225
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Affects Versions: 3.0.0-alpha1
>Reporter: Lewis John McGibbney
>Assignee: Lewis John McGibbney
>  Labels: hadoop, javadoc, maven, sources
> Attachments: HADOOP-10225.patch
>
>
> Right now Maven javadoc and sources artifacts do not accompany Hadoop 
> releases within Maven central. This means that one needs to check out source 
> code to DEBUG aspects of the codebase... this is not user friendly.
> The build script(s) should be amended to accommodate publication of javadoc 
> and sources artifacts alongside pom and jar artifacts. 
> Some history on this conversation can be seen below
> http://s.apache.org/7qR



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Updated] (HADOOP-13669) KMS Server should log exceptions before throwing

2016-10-10 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13669:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-2 and branch-2.8.
Thanks [~sacharya] for the contribution!

> KMS Server should log exceptions before throwing
> 
>
> Key: HADOOP-13669
> URL: https://issues.apache.org/jira/browse/HADOOP-13669
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Reporter: Xiao Chen
>Assignee: Suraj Acharya
>  Labels: supportability
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13369.2.patch, HADOOP-13369.patch, 
> HADOOP-13369.patch.1
>
>
> In some recent investigation, it turns out that when KMS throws an exception 
> (into tomcat), it's not logged anywhere and we can only see the exception 
> message from the client side, but not the stack trace. Logging the stack 
> trace would help debugging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13081) add the ability to create multiple UGIs/subjects from one kerberos login

2016-10-10 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563221#comment-15563221
 ] 

Sergey Shelukhin commented on HADOOP-13081:
---

Sorry, I've posted this in the wrong JIRA apparently:

The scenario is like this: we accept work on behalf of clients that is, 
generally speaking, authorized on a higher level (those are fragments of Hive 
jobs right now, except unlike MR they all run in-process, and we are also 
making the external client, which is the crux of the matter). In the normal 
case, the service doing the auth (HiveServer2 in the case of Hive) gathers the 
tokens and passes them on to the service running the fragment; the external 
client may supply some tokens too. However, apparently for some clients it's 
difficult (or not implemented yet) to gather tokens, so in cases of perimeter 
security we want to be able to configure access in such a way that they can 
access all of HDFS (for example; it could be some other service that their 
code touched that we have no idea about, hypothetically). The reasoning is 
that if the work item has passed through the authorization that our service 
does, they don't care about HDFS security any more. In that case, our service 
would log in from a keytab and run their item in that context. However, we 
neither want to require a super-user that is able to access all possible 
services (e.g. HBase), nor disable HDFS security altogether. So, the user work 
items would access HDFS (or HBase or whatever) as a user with lots of access, 
by design, and access other services via tokens.
This feature is off by default, obviously, and the auth of their code talking 
to services is based entirely on tokens by default.
I understand running as such a user is not an ideal situation, but it is 
apparently a valid scenario for some cases.
So, what we do now is create a master UGI/Subject; for every task, if this is 
enabled, we clone that via reflection and add the tokens. We haven't 
extensively tested this yet since the external client is not production ready, 
but it appears to work in some tests.

I hope this makes sense; feel free to ask for clarification.
We are using reflection to get the subject and construct the UGI from the subject.
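For concreteness, a rough sketch of the clone-and-add-tokens step described above; how the Subject is pulled out of the master UGI is elided (done via reflection in our code), and the helper name is made up:
{code}
import java.io.IOException;
import javax.security.auth.Subject;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;

// Build a per-task UGI that shares the master's Kerberos credentials
// but carries its own token set.
static UserGroupInformation cloneWithTokens(Subject master,
    Iterable<Token<?>> taskTokens) throws IOException {
  Subject copy = new Subject(false,
      master.getPrincipals(),
      master.getPublicCredentials(),
      master.getPrivateCredentials());
  UserGroupInformation ugi = UserGroupInformation.getUGIFromSubject(copy);
  for (Token<?> t : taskTokens) {
    ugi.addToken(t);
  }
  return ugi;
}
{code}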


> add the ability to create multiple UGIs/subjects from one kerberos login
> 
>
> Key: HADOOP-13081
> URL: https://issues.apache.org/jira/browse/HADOOP-13081
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13081.01.patch, HADOOP-13081.02.patch, 
> HADOOP-13081.02.patch, HADOOP-13081.03.patch, HADOOP-13081.03.patch, 
> HADOOP-13081.patch
>
>
> We have a scenario where we log in with kerberos as a certain user for some 
> tasks, but also want to add tokens to the resulting UGI that would be 
> specific to each task. We don't want to authenticate with kerberos for every 
> task.
> I am not sure how this can be accomplished with the existing UGI interface. 
> Perhaps some clone method would be helpful, similar to createProxyUser minus 
> the proxy stuff; or it could just relogin anew from ticket cache. 
> getUGIFromTicketCache seems like the best option in existing code, but there 
> doesn't appear to be a consistent way of handling ticket cache location - the 
> above method, that I only see called in test, is using a config setting that 
> is not used anywhere else, and the env variable for the location that is used 
> in the main ticket cache related methods is not set uniformly on all paths - 
> therefore, trying to find the correct ticket cache and passing it via the 
> config setting to getUGIFromTicketCache seems even hackier than doing the 
> clone via reflection ;) Moreover, getUGIFromTicketCache ignores the user 
> parameter on the main path - it logs a warning for multiple principals and 
> then logs in with first available.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-13066) UserGroupInformation.loginWithKerberos/getLoginUser is not thread-safe

2016-10-10 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HADOOP-13066:
--
Comment: was deleted

(was: The scenario is like this: we accept work on behalf of clients that is, 
generally speaking, authorized on a higher level (those are fragments of Hive 
jobs right now, except unlike MR they all run in-process, and we are also 
making the external client, which is the crux of the matter). In the normal 
case, the service doing the auth (HiveServer2 in the case of Hive) gathers the 
tokens and passes them on to the service running the fragment; the external 
client may supply some tokens too. However, apparently for some clients it's 
difficult (or not implemented yet) to gather tokens, so in cases of perimeter 
security we want to be able to configure access in such a way that they can 
access all of HDFS (for example; it could be some other service that their 
code touched that we have no idea about, hypothetically). The reasoning is 
that if the work item has passed through the authorization that our service 
does, they don't care about HDFS security any more. In that case, our service 
would log in from a keytab and run their item in that context. However, we 
neither want to require a super-user that is able to access all possible 
services (e.g. HBase), nor disable HDFS security altogether. So, the user work 
items would access HDFS (or HBase or whatever) as a user with lots of access, 
by design, and access other services via tokens.
This feature is off by default, obviously, and the auth of their code talking 
to services is based entirely on tokens by default.
I understand running as such a user is not an ideal situation, but it is 
apparently a valid scenario for some cases.
So, what we do now is create a master UGI/Subject; for every task, if this is 
enabled, we clone that via reflection and add the tokens. We haven't 
extensively tested this yet since the external client is not production ready, 
but it appears to work in some tests.

I hope this makes sense; feel free to ask for clarification.
We are using reflection to get the subject and construct the UGI from the subject.)

> UserGroupInformation.loginWithKerberos/getLoginUser is not thread-safe
> --
>
> Key: HADOOP-13066
> URL: https://issues.apache.org/jira/browse/HADOOP-13066
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Sergey Shelukhin
>
> When calling loginFromKerberos, a static variable is set up with the result. 
> If someone logs in as a different user from a different thread, the call to 
> getLoginUser will not return the correct UGI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13066) UserGroupInformation.loginWithKerberos/getLoginUser is not thread-safe

2016-10-10 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563218#comment-15563218
 ] 

Sergey Shelukhin commented on HADOOP-13066:
---

Sorry, I commented on the wrong jira

> UserGroupInformation.loginWithKerberos/getLoginUser is not thread-safe
> --
>
> Key: HADOOP-13066
> URL: https://issues.apache.org/jira/browse/HADOOP-13066
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Sergey Shelukhin
>
> When calling loginFromKerberos, a static variable is set up with the result. 
> If someone logs in as a different user from a different thread, the call to 
> getLoginUser will not return the correct UGI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-13600) S3a rename() to copy files in a directory in parallel

2016-10-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-13600:
---

Assignee: Steve Loughran

> S3a rename() to copy files in a directory in parallel
> -
>
> Key: HADOOP-13600
> URL: https://issues.apache.org/jira/browse/HADOOP-13600
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> Currently a directory rename does a one-by-one copy, making the request 
> O(files * data). If the copy operations were launched in parallel, the 
> duration of the copy may be reducible to the duration of the longest copy. 
> For a directory with many files, this will be significant.
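A hypothetical sketch of the parallel form; {{copyOneObject}} stands in for the existing single-object COPY logic, and the pool size is arbitrary:
{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Launch all per-file copies at once, then wait; total time tends toward
// the longest single copy rather than the sum of all copies.
void parallelCopy(List<String> srcKeys, String srcDir, String dstDir)
    throws InterruptedException, ExecutionException {
  ExecutorService pool = Executors.newFixedThreadPool(16);
  try {
    List<Future<?>> pending = new ArrayList<>();
    for (String key : srcKeys) {
      String dst = dstDir + key.substring(srcDir.length());
      pending.add(pool.submit(() -> copyOneObject(key, dst)));
    }
    for (Future<?> f : pending) {
      f.get();  // propagate the first copy failure, if any
    }
  } finally {
    pool.shutdown();
  }
}

// Placeholder for the existing one-object copy call.
void copyOneObject(String srcKey, String dstKey) { /* ... */ }
{code}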



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13700) Incompatible changes in TrashPolicy

2016-10-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563206#comment-15563206
 ] 

Steve Loughran commented on HADOOP-13700:
-


> 2.8.0 anniversary

yeah, might be time to fork off 2.9 and release that

> Incompatible changes in TrashPolicy 
> 
>
> Key: HADOOP-13700
> URL: https://issues.apache.org/jira/browse/HADOOP-13700
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Andrew Wang
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
>
> TrashPolicy is marked as public & evolving, but its public API, specifically 
> TrashPolicy.getInstance() has been changed in an incompatible way. 
> 1) The path parameter is removed in 3.0
> 2) A new IOException is thrown in 3.0



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13699) Configuration does not substitute multiple references to the same var

2016-10-10 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HADOOP-13699.
--
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha2

Committed to trunk, thanks for reviewing Xiao!

> Configuration does not substitute multiple references to the same var
> -
>
> Key: HADOOP-13699
> URL: https://issues.apache.org/jira/browse/HADOOP-13699
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13699.001.patch
>
>
> Config var loop detection was originally introduced by HADOOP-6871. Due to 
> cycle detection changes in the trunk patch for HADOOP-11506, multiple 
> references to the same variable are no longer resolved, e.g.
> {noformat}
> somekey = "${otherkey} ${otherkey}"
> {noformat}
> This loop detection business is fragile, expensive, and not in branch-2, so 
> let's reduce it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13699) Configuration does not substitute multiple references to the same var

2016-10-10 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13699:
-
Release Note: This changes the config var cycle detection introduced in 
3.0.0-alpha1 by HADOOP-6871 such that it detects single-variable but not 
multi-variable loops. This also fixes resolution of multiple specifications of 
the same variable in a config value.
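A small illustration of the restored behavior (this is a sketch, not a test from the patch):
{code}
import org.apache.hadoop.conf.Configuration;

// With the fix, repeated references to one variable all resolve.
Configuration conf = new Configuration(false);  // skip default resources
conf.set("otherkey", "value");
conf.set("somekey", "${otherkey} ${otherkey}");
System.out.println(conf.get("somekey"));  // expected: "value value"
{code}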

> Configuration does not substitute multiple references to the same var
> -
>
> Key: HADOOP-13699
> URL: https://issues.apache.org/jira/browse/HADOOP-13699
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Critical
> Attachments: HADOOP-13699.001.patch
>
>
> Config var loop detection was originally introduced by HADOOP-6871. Due to 
> cycle detection changes in the trunk patch for HADOOP-11506, multiple 
> references to the same variable are no longer resolved, e.g.
> {noformat}
> somekey = "${otherkey} ${otherkey}"
> {noformat}
> This loop detection business is fragile, expensive, and not in branch-2, so 
> let's reduce it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-6871) When the value of a configuration key is set to its unresolved form, it causes the IllegalStateException in Configuration.get() stating that substitution depth is too

2016-10-10 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563200#comment-15563200
 ] 

Andrew Wang commented on HADOOP-6871:
-

We hit an issue during testing involving multiple specifications of the same 
variable, for which I posted a patch at HADOOP-13699.

This slims down the cycle detection significantly, and doesn't try to find 
multi-variable loops. This is because doing that accurately is a lot more work 
in performance-sensitive code, and is overkill for the use case mentioned here 
of a variable pointing at itself.

Please comment over at HADOOP-13699 if you feel differently, thanks!

> When the value of a configuration key is set to its unresolved form, it 
> causes the IllegalStateException in Configuration.get() stating that 
> substitution depth is too large.
> -
>
> Key: HADOOP-6871
> URL: https://issues.apache.org/jira/browse/HADOOP-6871
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.0.0-alpha1
>Reporter: Arvind Prabhakar
>Assignee: Arvind Prabhakar
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-6871-1.patch, HADOOP-6871-2.patch, 
> HADOOP-6871-3.patch, HADOOP-6871.patch
>
>
> When a configuration value is set to its unresolved expression string, it 
> leads to recursive substitution attempts in 
> {{Configuration.substituteVars(String)}} method until the max substitution 
> check kicks in and raises an IllegalStateException indicating that the 
> substitution depth is too large. For example, the configuration key 
> "{{foobar}}" with a value set to "{{$\{foobar\}}}" will cause this behavior. 
> While this is not a usual use case, it can happen in build environments where 
> a property value is not specified and yet being passed into the test 
> mechanism leading to failures due to this limitation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13703) S3ABlockOutputStream to pass Yetus & Jenkins

2016-10-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13703:

Status: Patch Available  (was: Open)

> S3ABlockOutputStream to pass Yetus & Jenkins
> 
>
> Key: HADOOP-13703
> URL: https://issues.apache.org/jira/browse/HADOOP-13703
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13560-branch-2-010.patch, 
> HADOOP-13560-branch-2-011.patch
>
>
> The HADOOP-13560 patches and PR have got Yetus confused. This patch is purely 
> to do test runs.
> h1. All discourse must continue to take place in HADOOP-13560 and/or the Pull 
> Request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13703) S3ABlockOutputStream to pass Yetus & Jenkins

2016-10-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13703:

Attachment: HADOOP-13560-branch-2-011.patch

Patch 011; address checkstyle warnings as well as can be done.
* mark some package scoped/inner classes as final
* chop down lines where appropriate
* rename some variables, and even when private final, wrap access from 
subclasses in accessors. (needless, IMO)

Not done, hence checkstyle will still complain about the following; I don't 
intend to address these.
* chop javadoc lines with link/crossref entries > 80 chars; checkstyle is 
mistaken there.
* an "unused" import which is actually used in the javadocs
* use of tests named {{test_040_PositionedReadHugeFile()}} and 
{{test_050_readHugeFile()}} in {{AbstractSTestS3AHugeFiles}}. This class has a 
test runner which runs the tests in alphabetical order; they must run in 
sequence. The naming scheme is designed to achieve this, and to highlight that 
the numbering scheme here is special.
* use of {{_1MB}} and {{_1KB}} constants in the test file. They're sizes, I 
like them like that, and it is only a test file.

> S3ABlockOutputStream to pass Yetus & Jenkins
> 
>
> Key: HADOOP-13703
> URL: https://issues.apache.org/jira/browse/HADOOP-13703
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13560-branch-2-010.patch, 
> HADOOP-13560-branch-2-011.patch
>
>
> The HADOOP-13560 patches and PR have got Yetus confused. This patch is purely 
> to do test runs.
> h1. All discourse must continue to take place in HADOOP-13560 and/or the Pull 
> Request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13703) S3ABlockOutputStream to pass Yetus & Jenkins

2016-10-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13703:

Status: Open  (was: Patch Available)

> S3ABlockOutputStream to pass Yetus & Jenkins
> 
>
> Key: HADOOP-13703
> URL: https://issues.apache.org/jira/browse/HADOOP-13703
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13560-branch-2-010.patch
>
>
> The HADOOP-13560 patches and PR have got Yetus confused. This patch is purely 
> to do test runs.
> h1. All discourse must continue to take place in HADOOP-13560 and/or the Pull 
> Request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13702) Add a new instrumented read-write lock

2016-10-10 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15563114#comment-15563114
 ] 

Xiao Chen commented on HADOOP-13702:


bq. This follows guidance for System.nanoTime.
Thanks [~chris.douglas] for the good catch!

Also, it seems the fix has moved back to HDFS; we should make sure the 
component is consistent with the final change before check-in.
[~jingcheng...@intel.com], thanks for revving. Why can't the new 
{{InstrumentedReadLock}} and {{InstrumentedWriteLock}} reuse 
{{InstrumentedLock}}?

> Add a new instrumented read-write lock
> --
>
> Key: HADOOP-13702
> URL: https://issues.apache.org/jira/browse/HADOOP-13702
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Attachments: HDFS-10924-2.patch, HDFS-10924-3.patch, 
> HDFS-10924-4.patch, HDFS-10924-5.patch, HDFS-10924.patch
>
>
> Add a new instrumented read-write lock in hadoop common, so that the 
> HDFS-9668 can use this to improve the locking in FsDatasetImpl



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13697) LogLevel#main throws exception if no arguments provided

2016-10-10 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15562922#comment-15562922
 ] 

Mingliang Liu commented on HADOOP-13697:


[~jojochuang] Would you review/commit the v1 patch? Thanks.

> LogLevel#main throws exception if no arguments provided
> ---
>
> Key: HADOOP-13697
> URL: https://issues.apache.org/jira/browse/HADOOP-13697
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HADOOP-13697.000.patch, HADOOP-13697.001.patch
>
>
> {code}
> root@b9ab37566005:/# hadoop daemonlog
> Usage: General options are:
>   [-getlevel <host:port> <classname> [-protocol (http|https)]
>   [-setlevel <host:port> <classname> <level> [-protocol (http|https)]
> Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: 
> No arguments specified
>   at org.apache.hadoop.log.LogLevel$CLI.parseArguments(LogLevel.java:138)
>   at org.apache.hadoop.log.LogLevel$CLI.run(LogLevel.java:106)
>   at org.apache.hadoop.log.LogLevel.main(LogLevel.java:70)
> {code}
> I think we can catch the exception in the main method, and log an error 
> message instead of throwing the stack trace, which may frustrate users.
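
A minimal sketch of that proposed fix, assuming the {{CLI}} tool class and 
{{USAGES}} string already present in {{LogLevel}}; exact names and the exit 
code are illustrative:
{code}
// Hypothetical revision of LogLevel.main(): catch the argument error,
// print a short message plus the usage text, and exit non-zero,
// instead of letting the stack trace reach the user.
public static void main(String[] args) throws Exception {
  CLI cli = new CLI(new Configuration());
  try {
    System.exit(cli.run(args));
  } catch (HadoopIllegalArgumentException e) {
    System.err.println(e.getMessage());
    System.err.println(USAGES);
    System.exit(-1);
  }
}
{code}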



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12090) minikdc-related unit tests fail consistently on some platforms

2016-10-10 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15562880#comment-15562880
 ] 

John Zhuge commented on HADOOP-12090:
-

By default Linux automatically adjusts the socket buffer size, starting from the 
default value; see [tcp(7)|http://man7.org/linux/man-pages/man7/tcp.7.html]. 
The three values in {{tcp_rmem}} or {{tcp_wmem}} are: min, default, max.
{noformat}
[jzhuge@jzhuge-ubuntu hadoop2]((8fca972...))$ cat /proc/sys/net/ipv4/tcp_rmem
4096    87380   6291456
[jzhuge@jzhuge-ubuntu hadoop2]((8fca972...))$ cat /proc/sys/net/ipv4/tcp_wmem
4096    16384   4194304
[jzhuge@jzhuge-ubuntu hadoop2]((8fca972...))$ cat 
/proc/sys/net/ipv4/tcp_moderate_rcvbuf 
1
{noformat}
Setting {{SO_SNDBUF}} and {{SO_RCVBUF}} will turn off auto adjustment. The max 
for {{SO_RCVBUF}} or {{SO_SNDBUF}} is limited by 
{{/proc/sys/net/core/rmem_max}} or {{/proc/sys/net/core/wmem_max}}.
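
As an illustration of the Java side of this (not part of any patch; the sizes 
are arbitrary), setting either buffer explicitly is what turns autotuning off 
for that socket:
{code}
import java.net.Socket;

// Sketch: an unset buffer size leaves kernel autotuning on; calling
// setReceiveBufferSize()/setSendBufferSize() sets SO_RCVBUF/SO_SNDBUF
// and disables autotuning for this socket. The kernel also caps the
// value at net.core.rmem_max / net.core.wmem_max.
public class SocketBufferSketch {
  public static void main(String[] args) throws Exception {
    try (Socket s = new Socket()) {
      System.out.println("autotuned rcvbuf: " + s.getReceiveBufferSize());
      s.setReceiveBufferSize(128 * 1024); // SO_RCVBUF; autotuning now off
      s.setSendBufferSize(128 * 1024);    // SO_SNDBUF; autotuning now off
      System.out.println("explicit rcvbuf: " + s.getReceiveBufferSize());
    }
  }
}
{code}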

> minikdc-related unit tests fail consistently on some platforms
> --
>
> Key: HADOOP-12090
> URL: https://issues.apache.org/jira/browse/HADOOP-12090
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, test
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-12090.001.patch, HADOOP-12090.002.patch
>
>
> On some platforms all unit tests that use minikdc fail consistently. Those 
> tests include TestKMS, TestSaslDataTransfer, 
> TestTimelineAuthenticationFilter, etc.
> Typical failures on the unit tests:
> {noformat}
> java.lang.AssertionError: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Cannot get a 
> KDC reply)
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$8$4.run(TestKMS.java:1154)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS$8$4.run(TestKMS.java:1145)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1645)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.doAs(TestKMS.java:261)
>   at 
> org.apache.hadoop.crypto.key.kms.server.TestKMS.access$100(TestKMS.java:76)
> {noformat}
> The errors that cause this failure on the KDC server on the minikdc are a 
> NullPointerException:
> {noformat}
> org.apache.mina.filter.codec.ProtocolDecoderException: 
> java.lang.NullPointerException: message (Hexdump: ...)
>   at 
> org.apache.mina.filter.codec.ProtocolCodecFilter.messageReceived(ProtocolCodecFilter.java:234)
>   at 
> org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:434)
>   at 
> org.apache.mina.core.filterchain.DefaultIoFilterChain.access$1200(DefaultIoFilterChain.java:48)
>   at 
> org.apache.mina.core.filterchain.DefaultIoFilterChain$EntryImpl$1.messageReceived(DefaultIoFilterChain.java:802)
>   at 
> org.apache.mina.core.filterchain.IoFilterAdapter.messageReceived(IoFilterAdapter.java:120)
>   at 
> org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:434)
>   at 
> org.apache.mina.core.filterchain.DefaultIoFilterChain.fireMessageReceived(DefaultIoFilterChain.java:426)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoProcessor.read(AbstractPollingIoProcessor.java:604)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoProcessor.process(AbstractPollingIoProcessor.java:564)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoProcessor.process(AbstractPollingIoProcessor.java:553)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoProcessor.access$400(AbstractPollingIoProcessor.java:57)
>   at 
> org.apache.mina.core.polling.AbstractPollingIoProcessor$Processor.run(AbstractPollingIoProcessor.java:892)
>   at 
> org.apache.mina.util.NamePreservingRunnable.run(NamePreservingRunnable.java:65)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException: message
>   at 
> org.apache.mina.filter.codec.AbstractProtocolDecoderOutput.write(AbstractProtocolDecoderOutput.java:44)
>   at 
> org.apache.directory.server.kerberos.protocol.codec.MinaKerberosDecoder.decode(MinaKerberosDecoder.java:65)
>   at 
> org.apache.mina.filter.codec.ProtocolCodecFilter.messageReceived(ProtocolCodecFilter.java:224)
>   ... 15 more
> {noformat}



--
This message was sent by Atlassian JIRA

[jira] [Commented] (HADOOP-13672) Extract out jackson calls into an overrideable method in DelegationTokenAuthenticationHandler

2016-10-10 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15562789#comment-15562789
 ] 

Xiao Chen commented on HADOOP-13672:


[~ichattopadhyaya] [~noble.paul] could you please answer Steve's questions 
here? Thanks

> Extract out jackson calls into an overrideable method in 
> DelegationTokenAuthenticationHandler
> -
>
> Key: HADOOP-13672
> URL: https://issues.apache.org/jira/browse/HADOOP-13672
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Minor
> Attachments: HADOOP-13672.patch, HADOOP-13672.patch
>
>
> In Apache Solr, we use hadoop-auth for delegation tokens. However, because of 
> the following lines, we need to import Jackson (old version).
> https://github.com/apache/hadoop/blob/branch-2.7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java#L279
> If we could extract out the calls to ObjectMapper into another method, so that 
> in Solr we could override it to do the Map -> JSON conversion using noggit, 
> it would be helpful.
> Reference: SOLR-9542
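
A minimal sketch of the extraction being requested (names are illustrative): 
move the {{ObjectMapper}} call behind a protected method that a subclass can 
override with another JSON writer such as noggit.
{code}
import java.io.IOException;
import java.io.Writer;
import java.util.Map;
import org.codehaus.jackson.map.ObjectMapper;

// Sketch: the handler writes its response map through an overridable
// hook. The default keeps today's jackson behaviour; Solr's subclass
// could override writeJson() to use noggit instead.
public class JsonWriterHookSketch {
  protected void writeJson(Map<String, Object> map, Writer writer)
      throws IOException {
    ObjectMapper jsonMapper = new ObjectMapper();
    jsonMapper.writeValue(writer, map);
  }
}
{code}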



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13704) S3A getContentSummary() to move to listFiles(recursive) to count children; instrument use

2016-10-10 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13704:
---

 Summary: S3A getContentSummary() to move to listFiles(recursive) 
to count children; instrument use
 Key: HADOOP-13704
 URL: https://issues.apache.org/jira/browse/HADOOP-13704
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.8.0
Reporter: Steve Loughran


Hive and a bit of Spark use {{getContentSummary()}} to get some summary stats of 
a filesystem. This is very expensive on S3A (and any other object store), 
especially as the base implementation does the recursive tree walk.

Because of HADOOP-13208, we have a full enumeration of files under a path 
without directory costs. S3A can/should switch to this to speed up those 
places where the operation is called (see the sketch after the list below).

Also
* API call needs FS spec and contract tests
* S3A could instrument invocation, so as to enable real-world popularity to be 
measured
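
A sketch of the proposed switch, using the public 
{{FileSystem.listFiles(path, recursive)}} API; directory counting and the 
{{ContentSummary}} construction are elided, so this is an outline rather than 
the eventual S3A implementation:
{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

// Sketch: one recursive listFiles() enumeration replaces the base
// class's per-directory treewalk; on S3A this is a paged LIST of the
// key space rather than a getFileStatus()/listStatus() call per entry.
public class ContentSummarySketch {
  static long[] fileCountAndBytes(FileSystem fs, Path path) throws IOException {
    long files = 0, bytes = 0;
    RemoteIterator<LocatedFileStatus> it = fs.listFiles(path, true);
    while (it.hasNext()) {
      LocatedFileStatus status = it.next();
      files++;
      bytes += status.getLen();
    }
    return new long[] {files, bytes};
  }
}
{code}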




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13700) Incompatible changes in TrashPolicy

2016-10-10 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15562734#comment-15562734
 ] 

Allen Wittenauer commented on HADOOP-13700:
---

bq. without any shipping ASF Hadoop release

Ahh. I see!  Good catch. Timeline is important here.  2.8.0 was supposed to be 
out months and months ago. With that branch hitting its one-year anniversary 
soon and the increasing likelihood of it being abandoned/effectively DOA, yeah, 
I can see that we might want a different approach.  We should probably audit 
all of the 2.8.0 deprecations vs. the 3.0.0 removals to see how bad the damage 
currently is.

> Incompatible changes in TrashPolicy 
> 
>
> Key: HADOOP-13700
> URL: https://issues.apache.org/jira/browse/HADOOP-13700
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Andrew Wang
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
>
> TrashPolicy is marked as public & evolving, but its public API, specifically 
> TrashPolicy.getInstance(), has been changed in an incompatible way:
> 1) The path parameter is removed in 3.0
> 2) A new IOException is thrown in 3.0
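
Sketched from the two points above (the signatures are reconstructed from the 
description, so treat them as illustrative), the source-level break looks like:
{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.TrashPolicy;

// Sketch of the incompatibility. A caller written for 2.x uses the
// three-argument form; on 3.0.0-alpha1 that overload is gone and the
// two-argument replacement can throw IOException, so neither method
// compiles against the other release line.
public class TrashPolicyCompatSketch {
  static TrashPolicy forBranch2(Configuration conf, FileSystem fs, Path home) {
    return TrashPolicy.getInstance(conf, fs, home);   // 2.x API only
  }

  static TrashPolicy forTrunk(Configuration conf, FileSystem fs)
      throws IOException {
    return TrashPolicy.getInstance(conf, fs);         // 3.0 API only
  }
}
{code}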



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13702) Add a new instrumented read-write lock

2016-10-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15562716#comment-15562716
 ] 

Hadoop QA commented on HADOOP-13702:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 14s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestLargeBlockReport |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.qjournal.client.TestQuorumJournalManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13702 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832485/HDFS-10924-5.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 19c218730351 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cef61d5 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10721/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10721/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10721/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add a new instrumented read-write lock
> --
>
> Key: HADOOP-13702
> URL: 

[jira] [Updated] (HADOOP-13702) Add a new instrumented read-write lock

2016-10-10 Thread Jingcheng Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingcheng Du updated HADOOP-13702:
--
Attachment: HDFS-10924-5.patch

Re-attach the patch V5.

> Add a new instrumented read-write lock
> --
>
> Key: HADOOP-13702
> URL: https://issues.apache.org/jira/browse/HADOOP-13702
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Attachments: HDFS-10924-2.patch, HDFS-10924-3.patch, 
> HDFS-10924-4.patch, HDFS-10924-5.patch, HDFS-10924.patch
>
>
> Add a new instrumented read-write lock in hadoop common, so that the 
> HDFS-9668 can use this to improve the locking in FsDatasetImpl



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13703) S3ABlockOutputStream to pass Yetus & Jenkins

2016-10-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15562424#comment-15562424
 ] 

Hadoop QA commented on HADOOP-13703:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 24 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
39s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
26s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
27s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
24s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
21s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
24s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
33s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 29s{color} | {color:orange} root: The patch generated 33 new + 47 unchanged 
- 3 fixed = 80 total (was 50) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 62 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-tools_hadoop-aws-jdk1.8.0_101 with JDK 
v1.8.0_101 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-tools_hadoop-aws-jdk1.7.0_111 with JDK 
v1.7.0_111 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 14s{color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_111. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| 

[jira] [Updated] (HADOOP-13702) Add a new instrumented read-write lock

2016-10-10 Thread Jingcheng Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingcheng Du updated HADOOP-13702:
--
Attachment: (was: HDFS-10924-5.patch)

> Add a new instrumented read-write lock
> --
>
> Key: HADOOP-13702
> URL: https://issues.apache.org/jira/browse/HADOOP-13702
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Attachments: HDFS-10924-2.patch, HDFS-10924-3.patch, 
> HDFS-10924-4.patch, HDFS-10924.patch
>
>
> Add a new instrumented read-write lock in hadoop common, so that the 
> HDFS-9668 can use this to improve the locking in FsDatasetImpl



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13702) Add a new instrumented read-write lock

2016-10-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15562193#comment-15562193
 ] 

Hadoop QA commented on HADOOP-13702:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 28s{color} | {color:orange} root: The patch generated 2 new + 0 unchanged - 
0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 11s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m  7s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}141m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestLargeBlockReport |
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13702 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12832445/HDFS-10924-5.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7f7b131b780f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / af50da3 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10719/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10719/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit | 

[jira] [Updated] (HADOOP-13703) S3ABlockOutputStream to pass Yetus & Jenkins

2016-10-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13703:

Attachment: HADOOP-13560-branch-2-010.patch

Patch 010; last PR update

> S3ABlockOutputStream to pass Yetus & Jenkins
> 
>
> Key: HADOOP-13703
> URL: https://issues.apache.org/jira/browse/HADOOP-13703
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
> Attachments: HADOOP-13560-branch-2-010.patch
>
>
> The HADOOP-13560 patches and PR have got Yetus confused. This patch is purely 
> to do test runs.
> h1. All discourse must continue to take place in HADOOP-13560 and/or the Pull 
> Request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13703) S3ABlockOutputStream to pass Yetus & Jenkins

2016-10-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13703:

Assignee: Steve Loughran
  Status: Patch Available  (was: Open)

> S3ABlockOutputStream to pass Yetus & Jenkins
> 
>
> Key: HADOOP-13703
> URL: https://issues.apache.org/jira/browse/HADOOP-13703
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13560-branch-2-010.patch
>
>
> The HADOOP-13560 patches and PR have got Yetus confused. This patch is purely 
> to do test runs.
> h1. All discourse must continue to take place in HADOOP-13560 and/or the Pull 
> Request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes

2016-10-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15562124#comment-15562124
 ] 

Steve Loughran edited comment on HADOOP-13560 at 10/10/16 12:14 PM:


Moving patch testing to HADOOP-13703


was (Author: ste...@apache.org):
Moving patch testing to HADOOP-13204

> S3ABlockOutputStream to support huge (many GB) file writes
> --
>
> Key: HADOOP-13560
> URL: https://issues.apache.org/jira/browse/HADOOP-13560
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13560-branch-2-001.patch, 
> HADOOP-13560-branch-2-002.patch, HADOOP-13560-branch-2-003.patch, 
> HADOOP-13560-branch-2-004.patch
>
>
> An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights 
> that metadata isn't copied on large copies.
> 1. Add a test to do that large copy/rename and verify that the copy really 
> works
> 2. Verify that metadata makes it over.
> Verifying large file rename is important on its own, as it is needed for very 
> large commit operations for committers using rename



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes

2016-10-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15562124#comment-15562124
 ] 

Steve Loughran commented on HADOOP-13560:
-

Moving patch testing to HADOOP-13204

> S3ABlockOutputStream to support huge (many GB) file writes
> --
>
> Key: HADOOP-13560
> URL: https://issues.apache.org/jira/browse/HADOOP-13560
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13560-branch-2-001.patch, 
> HADOOP-13560-branch-2-002.patch, HADOOP-13560-branch-2-003.patch, 
> HADOOP-13560-branch-2-004.patch
>
>
> An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights 
> that metadata isn't copied on large copies.
> 1. Add a test to do that large copy/rename and verify that the copy really 
> works
> 2. Verify that metadata makes it over.
> Verifying large file rename is important on its own, as it is needed for very 
> large commit operations for committers using rename



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13703) S3ABlockOutputStream to pass Yetus & Jenkins

2016-10-10 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13703:
---

 Summary: S3ABlockOutputStream to pass Yetus & Jenkins
 Key: HADOOP-13703
 URL: https://issues.apache.org/jira/browse/HADOOP-13703
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.7.3
Reporter: Steve Loughran


The HADOOP-13560 patches and PR have got Yetus confused. This patch is purely to 
do test runs.

h1. All discourse must continue to take place in HADOOP-13560 and/or the Pull 
Request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-12387) Improve S3AFastOutputStream memory management

2016-10-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-12387.
-
Resolution: Duplicate
  Assignee: Steve Loughran  (was: Thomas Demoor)

> Improve S3AFastOutputStream memory management
> -
>
> Key: HADOOP-12387
> URL: https://issues.apache.org/jira/browse/HADOOP-12387
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Thomas Demoor
>Assignee: Steve Loughran
>
> The use of ByteArrayOutputStream causes unexpected memory copies that causes 
> unintuitive memory usage for certain combinations of the configuration 
> parameters. Replacing it by more fine grained memory buffer management would 
> fix this, reduce memory usage and eliminate a memory copy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes

2016-10-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13560:

Status: Patch Available  (was: Open)

Yetus isn't picking this up. As well as resubmitting, I'm going to create 
another JIRA with the patches coming in as .patch files, rather than a PR.

> S3ABlockOutputStream to support huge (many GB) file writes
> --
>
> Key: HADOOP-13560
> URL: https://issues.apache.org/jira/browse/HADOOP-13560
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13560-branch-2-001.patch, 
> HADOOP-13560-branch-2-002.patch, HADOOP-13560-branch-2-003.patch, 
> HADOOP-13560-branch-2-004.patch
>
>
> An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights 
> that metadata isn't copied on large copies.
> 1. Add a test to do that large copy/rename and verify that the copy really 
> works
> 2. Verify that metadata makes it over.
> Verifying large file rename is important on its own, as it is needed for very 
> large commit operations for committers using rename



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes

2016-10-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13560:

Priority: Major  (was: Minor)

> S3ABlockOutputStream to support huge (many GB) file writes
> --
>
> Key: HADOOP-13560
> URL: https://issues.apache.org/jira/browse/HADOOP-13560
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13560-branch-2-001.patch, 
> HADOOP-13560-branch-2-002.patch, HADOOP-13560-branch-2-003.patch, 
> HADOOP-13560-branch-2-004.patch
>
>
> An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights 
> that metadata isn't copied on large copies.
> 1. Add a test to do that large copy/rename and verify that the copy really 
> works
> 2. Verify that metadata makes it over.
> Verifying large file rename is important on its own, as it is needed for very 
> large commit operations for committers using rename



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes

2016-10-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13560:

Status: Open  (was: Patch Available)

> S3ABlockOutputStream to support huge (many GB) file writes
> --
>
> Key: HADOOP-13560
> URL: https://issues.apache.org/jira/browse/HADOOP-13560
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13560-branch-2-001.patch, 
> HADOOP-13560-branch-2-002.patch, HADOOP-13560-branch-2-003.patch, 
> HADOOP-13560-branch-2-004.patch
>
>
> An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights 
> that metadata isn't copied on large copies.
> 1. Add a test to do that large copy/rename and verify that the copy really 
> works
> 2. Verify that metadata makes it over.
> Verifying large file rename is important on its own, as it is needed for very 
> large commit operations for committers using rename



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13696) change hadoop-common dependency scope of jsch to provided.

2016-10-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15562107#comment-15562107
 ] 

Hudson commented on HADOOP-13696:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10577 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10577/])
HADOOP-13696. change hadoop-common dependency scope of jsch to provided. 
(stevel: rev cef61d505e289f074130cc3981c20f7692437cee)
* (edit) hadoop-common-project/hadoop-common/pom.xml


> change hadoop-common dependency scope of jsch to provided.
> --
>
> Key: HADOOP-13696
> URL: https://issues.apache.org/jira/browse/HADOOP-13696
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Yuanbo Liu
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13696.001.patch
>
>
> The dependency on jsch in Hadoop common is "compile", so it gets everywhere 
> downstream. Marking it as "provided" would mean that it would only be needed 
> by those programs which wanted the SFTP filesystem, and, if they wanted to 
> use a different jsch version, there'd be no maven problems
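
The change itself is a one-line scope edit in {{hadoop-common/pom.xml}}; a 
sketch of the resulting dependency block (version management omitted, so treat 
this as the shape of the change rather than the exact diff):
{code}
<dependency>
  <groupId>com.jcraft</groupId>
  <artifactId>jsch</artifactId>
  <scope>provided</scope>
</dependency>
{code}
Downstream projects that want the SFTP filesystem then declare their own jsch 
dependency, at whatever version suits them.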



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12774) s3a should use UGI.getCurrentUser.getShortname() for username

2016-10-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15562069#comment-15562069
 ] 

ASF GitHub Bot commented on HADOOP-12774:
-

Github user steveloughran commented on the issue:

https://github.com/apache/hadoop/pull/136
  
 Patch 002: create a fake user, create an FS and verify that the user and 
owner on the root path are that username


> s3a should use UGI.getCurrentUser.getShortname() for username
> -
>
> Key: HADOOP-12774
> URL: https://issues.apache.org/jira/browse/HADOOP-12774
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> S3A uses {{System.getProperty("user.name")}} to get the username for the 
> homedir. This is wrong, as it doesn't work on a YARN app where the identity 
> is set by HADOOP_USER_NAME, or in a doAs clause.
> Obviously, {{UGI.getCurrentUser.getShortname()}} provides that name, 
> everywhere. 
> This is a simple change in the source, though testing is harder ... probably 
> best to try in a doAs
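
A minimal sketch of the substitution (the UGI method is spelled 
{{getShortUserName()}} in the API):
{code}
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

// Sketch: derive the home-directory username from the Hadoop login
// identity, which honours HADOOP_USER_NAME and doAs contexts, instead
// of the JVM-level user.name property.
public class HomeDirUserSketch {
  static String homeDirUser() throws IOException {
    // old: System.getProperty("user.name")
    return UserGroupInformation.getCurrentUser().getShortUserName();
  }
}
{code}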



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


