[jira] [Commented] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final

2017-02-10 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15862300#comment-15862300
 ] 

Junping Du commented on HADOOP-13866:
-

Hi folks, sorry for coming in late. HADOOP-14043 sounds like reasonable work, but 
I doubt it will happen soon (it has no assignee so far). Upgrading Netty from 
4.0.23 to 4.1.1 or higher should be OK for branch-2, just as we upgraded from 
Netty 3 to Netty 4 in hadoop 2.7, shouldn't it? We have made similar upgrades to 
other dependency jars. If so, I think we can go ahead without worrying about 
shading if this really blocks HBase 2.0 - if HBase 2.0 is a stable release, it 
should at least work with the latest stable Hadoop release. Thoughts?
BTW, 2.8.0 is currently blocked by several other issues, like YARN-6143, 
HDFS-11379, etc. So if this patch can go in within the next week, we can still 
get it into the 2.8.0 release and make HBase 2.0 work with Hadoop 2.8.0. :)

> Upgrade netty-all to 4.1.1.Final
> 
>
> Key: HADOOP-13866
> URL: https://issues.apache.org/jira/browse/HADOOP-13866
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch, 
> HADOOP-13866.v3.patch, HADOOP-13866.v4.patch, HADOOP-13866.v6.patch, 
> HADOOP-13866.v7.patch, HADOOP-13866.v8.patch, HADOOP-13866.v8.patch, 
> HADOOP-13866.v8.patch
>
>
> netty-all 4.1.1.Final is a stable release which we should upgrade to.
> See the bottom of HADOOP-12927 for related discussion.
> This issue was discovered because hbase 2.0 uses netty 4.1.1.Final.
> When launching a mapreduce job from hbase, 
> /grid/0/hadoop/yarn/local/usercache/hbase/appcache/application_1479850535804_0008/container_e01_1479850535804_0008_01_05/mr-framework/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar
> (from hdfs) is ahead of the 4.1.1.Final jar (from hbase) on the classpath,
> resulting in the following exception:
> {code}
> 2016-12-01 20:17:26,678 WARN [Default-IPC-NioEventLoopGroup-1-1] 
> io.netty.util.concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete()
> java.lang.NoSuchMethodError: 
> io.netty.buffer.ByteBuf.retainedDuplicate()Lio/netty/buffer/ByteBuf;
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:272)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:262)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
> at 
> io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
> {code}
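For anyone debugging a similar mixed-Netty classpath, a small diagnostic sketch 
(not part of any patch; the class and method names come from the stack trace 
above) can confirm which jar {{ByteBuf}} was loaded from and whether it is a 
4.1.x class:
{code}
import io.netty.buffer.ByteBuf;

public class NettyClasspathCheck {
  public static void main(String[] args) {
    // Which jar did ByteBuf actually come from?
    System.out.println(
        ByteBuf.class.getProtectionDomain().getCodeSource().getLocation());
    try {
      // retainedDuplicate() only exists in netty 4.1.x, so its presence
      // distinguishes a 4.1.x jar from a 4.0.x jar at runtime.
      ByteBuf.class.getMethod("retainedDuplicate");
      System.out.println("netty 4.1.x (retainedDuplicate present)");
    } catch (NoSuchMethodException e) {
      System.out.println("netty 4.0.x (retainedDuplicate missing)");
    }
  }
}
{code}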



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14075) chown doesn't work with usernames containing '\' character

2017-02-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15862209#comment-15862209
 ] 

Hadoop QA commented on HADOOP-14075:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
2s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
41s{color} | {color:green} root: The patch generated 0 new + 196 unchanged - 1 
fixed = 196 total (was 197) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 21s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}203m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.viewfs.TestViewFileSystemLocalFileSystem |
|   | hadoop.fs.viewfs.TestViewFsTrash |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14075 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852171/HADOOP-14075.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6ec3b64e472a 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 07a5184 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 

[jira] [Work started] (HADOOP-14075) chown doesn't work with usernames containing '\' character

2017-02-10 Thread Attila Bukor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-14075 started by Attila Bukor.
-
> chown doesn't work with usernames containing '\' character
> --
>
> Key: HADOOP-14075
> URL: https://issues.apache.org/jira/browse/HADOOP-14075
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Attila Bukor
>Assignee: Attila Bukor
> Attachments: HADOOP-14075.001.patch, HADOOP-14075.002.patch
>
>
> Usernames containing a backslash (e.g. down-level logon names) seem to work 
> fine with Hadoop, except for chown.
> {code}
> $ HADOOP_USER_NAME="FOOBAR\\testuser" hdfs dfs -mkdir /test/testfile1
> $ hdfs dfs -ls /test
> Found 1 items
> drwxrwxr-x   - FOOBAR\testuser supergroup  0 2017-02-10 12:49 
> /test/testfile1
> $ HADOOP_USER_NAME="testuser" hdfs dfs -mkdir /test/testfile2
> $ HADOOP_USER_NAME="hdfs" hdfs dfs -chown "FOOBAR\\testuser" /test/testfile2
> -chown: 'FOOBAR\testuser' does not match expected pattern for [owner][:group].
> Usage: hadoop fs [generic options] -chown [-R] [OWNER][:[GROUP]] PATH...
> $
> {code}






[jira] [Updated] (HADOOP-14075) chown doesn't work with usernames containing '\' character

2017-02-10 Thread Attila Bukor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Bukor updated HADOOP-14075:
--
Attachment: HADOOP-14075.002.patch

> chown doesn't work with usernames containing '\' character
> --
>
> Key: HADOOP-14075
> URL: https://issues.apache.org/jira/browse/HADOOP-14075
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Attila Bukor
>Assignee: Attila Bukor
> Attachments: HADOOP-14075.001.patch, HADOOP-14075.002.patch
>
>
> Usernames containing a backslash (e.g. down-level logon names) seem to work 
> fine with Hadoop, except for chown.
> {code}
> $ HADOOP_USER_NAME="FOOBAR\\testuser" hdfs dfs -mkdir /test/testfile1
> $ hdfs dfs -ls /test
> Found 1 items
> drwxrwxr-x   - FOOBAR\testuser supergroup  0 2017-02-10 12:49 
> /test/testfile1
> $ HADOOP_USER_NAME="testuser" hdfs dfs -mkdir /test/testfile2
> $ HADOOP_USER_NAME="hdfs" hdfs dfs -chown "FOOBAR\\testuser" /test/testfile2
> -chown: 'FOOBAR\testuser' does not match expected pattern for [owner][:group].
> Usage: hadoop fs [generic options] -chown [-R] [OWNER][:[GROUP]] PATH...
> $
> {code}






[jira] [Updated] (HADOOP-14075) chown doesn't work with usernames containing '\' character

2017-02-10 Thread Attila Bukor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Bukor updated HADOOP-14075:
--
Status: Patch Available  (was: In Progress)

> chown doesn't work with usernames containing '\' character
> --
>
> Key: HADOOP-14075
> URL: https://issues.apache.org/jira/browse/HADOOP-14075
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Attila Bukor
>Assignee: Attila Bukor
> Attachments: HADOOP-14075.001.patch, HADOOP-14075.002.patch
>
>
> Usernames containing a backslash (e.g. down-level logon names) seem to work 
> fine with Hadoop, except for chown.
> {code}
> $ HADOOP_USER_NAME="FOOBAR\\testuser" hdfs dfs -mkdir /test/testfile1
> $ hdfs dfs -ls /test
> Found 1 items
> drwxrwxr-x   - FOOBAR\testuser supergroup  0 2017-02-10 12:49 
> /test/testfile1
> $ HADOOP_USER_NAME="testuser" hdfs dfs -mkdir /test/testfile2
> $ HADOOP_USER_NAME="hdfs" hdfs dfs -chown "FOOBAR\\testuser" /test/testfile2
> -chown: 'FOOBAR\testuser' does not match expected pattern for [owner][:group].
> Usage: hadoop fs [generic options] -chown [-R] [OWNER][:[GROUP]] PATH...
> $
> {code}






[jira] [Updated] (HADOOP-14075) chown doesn't work with usernames containing '\' character

2017-02-10 Thread Attila Bukor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Bukor updated HADOOP-14075:
--
Status: Open  (was: Patch Available)

> chown doesn't work with usernames containing '\' character
> --
>
> Key: HADOOP-14075
> URL: https://issues.apache.org/jira/browse/HADOOP-14075
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Attila Bukor
>Assignee: Attila Bukor
> Attachments: HADOOP-14075.001.patch
>
>
> Usernames containing a backslash (e.g. down-level logon names) seem to work 
> fine with Hadoop, except for chown.
> {code}
> $ HADOOP_USER_NAME="FOOBAR\\testuser" hdfs dfs -mkdir /test/testfile1
> $ hdfs dfs -ls /test
> Found 1 items
> drwxrwxr-x   - FOOBAR\testuser supergroup  0 2017-02-10 12:49 
> /test/testfile1
> $ HADOOP_USER_NAME="testuser" hdfs dfs -mkdir /test/testfile2
> $ HADOOP_USER_NAME="hdfs" hdfs dfs -chown "FOOBAR\\testuser" /test/testfile2
> -chown: 'FOOBAR\testuser' does not match expected pattern for [owner][:group].
> Usage: hadoop fs [generic options] -chown [-R] [OWNER][:[GROUP]] PATH...
> $
> {code}






[jira] [Commented] (HADOOP-14075) chown doesn't work with usernames containing '\' character

2017-02-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15862072#comment-15862072
 ] 

Hadoop QA commented on HADOOP-14075:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
59s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 38s{color} | {color:orange} root: The patch generated 1 new + 196 unchanged 
- 1 fixed = 197 total (was 197) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
47s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 52s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}169m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.mover.TestMover |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14075 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852123/HADOOP-14075.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux dc99d5dd35fa 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 07a5184 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11609/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11609/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11609/testReport/ |
| 

[jira] [Created] (HADOOP-14076) Allow Configuration to be persisted given path to file

2017-02-10 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-14076:
---

 Summary: Allow Configuration to be persisted given path to file
 Key: HADOOP-14076
 URL: https://issues.apache.org/jira/browse/HADOOP-14076
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ted Yu


Currently Configuration has the following methods for persistence:
{code}
  public void writeXml(OutputStream out) throws IOException {

  public void writeXml(Writer out) throws IOException {
{code}
Adding an API for persisting to a file given its path would be useful:
{code}
  public void writeXml(String path) throws IOException {
{code}

Background: I recently worked on exporting Configuration to a file using JNI.
Without the proposed API, I resorted to a trick such as the following:
http://www.kfu.com/~nsayer/Java/jni-filedesc.html
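A minimal sketch of the proposed overload, assuming it simply delegates to the 
existing {{OutputStream}} variant (sketch only, not a committed API):
{code}
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical addition to org.apache.hadoop.conf.Configuration:
public void writeXml(String path) throws IOException {
  try (OutputStream out = new FileOutputStream(path)) {
    writeXml(out);  // delegate to the existing OutputStream overload
  }
}
{code}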






[jira] [Commented] (HADOOP-14075) chown doesn't work with usernames containing '\' character

2017-02-10 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15861914#comment-15861914
 ] 

Wei-Chiu Chuang commented on HADOOP-14075:
--

Looks good to me. I am actually surprised chown/chgrp allows user/group names 
with spaces on Windows, though.

[~aw] could you take a look at this? I want to get your opinion on extending 
the allowed character set for user names.
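For context, a minimal sketch of the kind of pattern change being discussed, 
assuming the shell validates owner names with a {{java.util.regex}} character 
class like the hypothetical one below (the actual pattern in the patch may 
differ):
{code}
import java.util.regex.Pattern;

public class ChownPatternSketch {
  // Hypothetical "before": word characters plus . @ / - per name part.
  private static final Pattern OLD = Pattern.compile("^[\\w.@/-]+$");
  // Hypothetical "after": the same class extended with an escaped
  // backslash so down-level logon names like FOOBAR\testuser match.
  private static final Pattern NEW = Pattern.compile("^[\\w.@/\\\\-]+$");

  public static void main(String[] args) {
    String owner = "FOOBAR\\testuser";
    System.out.println(OLD.matcher(owner).matches()); // false
    System.out.println(NEW.matcher(owner).matches()); // true
  }
}
{code}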

> chown doesn't work with usernames containing '\' character
> --
>
> Key: HADOOP-14075
> URL: https://issues.apache.org/jira/browse/HADOOP-14075
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Attila Bukor
>Assignee: Attila Bukor
> Attachments: HADOOP-14075.001.patch
>
>
> Usernames containing a backslash (e.g. down-level logon names) seem to work 
> fine with Hadoop, except for chown.
> {code}
> $ HADOOP_USER_NAME="FOOBAR\\testuser" hdfs dfs -mkdir /test/testfile1
> $ hdfs dfs -ls /test
> Found 1 items
> drwxrwxr-x   - FOOBAR\testuser supergroup  0 2017-02-10 12:49 
> /test/testfile1
> $ HADOOP_USER_NAME="testuser" hdfs dfs -mkdir /test/testfile2
> $ HADOOP_USER_NAME="hdfs" hdfs dfs -chown "FOOBAR\\testuser" /test/testfile2
> -chown: 'FOOBAR\testuser' does not match expected pattern for [owner][:group].
> Usage: hadoop fs [generic options] -chown [-R] [OWNER][:[GROUP]] PATH...
> $
> {code}






[jira] [Commented] (HADOOP-14026) start-build-env.sh: invalid docker image name

2017-02-10 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15861887#comment-15861887
 ] 

Daniel Templeton commented on HADOOP-14026:
---

LGTM.  +1.  I'll wait a little while to commit to give [~aw] a chance to 
respond.

> start-build-env.sh: invalid docker image name
> -
>
> Key: HADOOP-14026
> URL: https://issues.apache.org/jira/browse/HADOOP-14026
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Gergő Pásztor
>Assignee: Gergő Pásztor
> Attachments: HADOOP-14026_v1.patch, HADOOP-14026_v2.patch
>
>
> start-build-env.sh uses the current user name to generate a docker image 
> name. But the current user name can contain non-English characters and 
> uppercase letters (after all, this is usually the name/nickname of the owner). 
> Neither is supported in docker image names, so the script will fail.






[jira] [Assigned] (HADOOP-14075) chown doesn't work with usernames containing '\' character

2017-02-10 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee reassigned HADOOP-14075:
---

Assignee: Attila Bukor

> chown doesn't work with usernames containing '\' character
> --
>
> Key: HADOOP-14075
> URL: https://issues.apache.org/jira/browse/HADOOP-14075
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Attila Bukor
>Assignee: Attila Bukor
> Attachments: HADOOP-14075.001.patch
>
>
> Usernames containing a backslash (e.g. down-level logon names) seem to work 
> fine with Hadoop, except for chown.
> {code}
> $ HADOOP_USER_NAME="FOOBAR\\testuser" hdfs dfs -mkdir /test/testfile1
> $ hdfs dfs -ls /test
> Found 1 items
> drwxrwxr-x   - FOOBAR\testuser supergroup  0 2017-02-10 12:49 
> /test/testfile1
> $ HADOOP_USER_NAME="testuser" hdfs dfs -mkdir /test/testfile2
> $ HADOOP_USER_NAME="hdfs" hdfs dfs -chown "FOOBAR\\testuser" /test/testfile2
> -chown: 'FOOBAR\testuser' does not match expected pattern for [owner][:group].
> Usage: hadoop fs [generic options] -chown [-R] [OWNER][:[GROUP]] PATH...
> $
> {code}






[jira] [Updated] (HADOOP-14075) chown doesn't work with usernames containing '\' character

2017-02-10 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-14075:

Status: Patch Available  (was: Open)

> chown doesn't work with usernames containing '\' character
> --
>
> Key: HADOOP-14075
> URL: https://issues.apache.org/jira/browse/HADOOP-14075
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Attila Bukor
> Attachments: HADOOP-14075.001.patch
>
>
> Usernames containing a backslash (e.g. down-level logon names) seem to work 
> fine with Hadoop, except for chown.
> {code}
> $ HADOOP_USER_NAME="FOOBAR\\testuser" hdfs dfs -mkdir /test/testfile1
> $ hdfs dfs -ls /test
> Found 1 items
> drwxrwxr-x   - FOOBAR\testuser supergroup  0 2017-02-10 12:49 
> /test/testfile1
> $ HADOOP_USER_NAME="testuser" hdfs dfs -mkdir /test/testfile2
> $ HADOOP_USER_NAME="hdfs" hdfs dfs -chown "FOOBAR\\testuser" /test/testfile2
> -chown: 'FOOBAR\testuser' does not match expected pattern for [owner][:group].
> Usage: hadoop fs [generic options] -chown [-R] [OWNER][:[GROUP]] PATH...
> $
> {code}






[jira] [Updated] (HADOOP-14075) chown doesn't work with usernames containing '\' character

2017-02-10 Thread Attila Bukor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Bukor updated HADOOP-14075:
--
Attachment: HADOOP-14075.001.patch

> chown doesn't work with usernames containing '\' character
> --
>
> Key: HADOOP-14075
> URL: https://issues.apache.org/jira/browse/HADOOP-14075
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Attila Bukor
> Attachments: HADOOP-14075.001.patch
>
>
> Usernames containing a backslash (e.g. down-level logon names) seem to work 
> fine with Hadoop, except for chown.
> {code}
> $ HADOOP_USER_NAME="FOOBAR\\testuser" hdfs dfs -mkdir /test/testfile1
> $ hdfs dfs -ls /test
> Found 1 items
> drwxrwxr-x   - FOOBAR\testuser supergroup  0 2017-02-10 12:49 
> /test/testfile1
> $ HADOOP_USER_NAME="testuser" hdfs dfs -mkdir /test/testfile2
> $ HADOOP_USER_NAME="hdfs" hdfs dfs -chown "FOOBAR\\testuser" /test/testfile2
> -chown: 'FOOBAR\testuser' does not match expected pattern for [owner][:group].
> Usage: hadoop fs [generic options] -chown [-R] [OWNER][:[GROUP]] PATH...
> $
> {code}






[jira] [Created] (HADOOP-14075) chown doesn't work with usernames containing '\' character

2017-02-10 Thread Attila Bukor (JIRA)
Attila Bukor created HADOOP-14075:
-

 Summary: chown doesn't work with usernames containing '\' character
 Key: HADOOP-14075
 URL: https://issues.apache.org/jira/browse/HADOOP-14075
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Attila Bukor


Usernames containing a backslash (e.g. down-level logon names) seem to work fine 
with Hadoop, except for chown.

{code}
$ HADOOP_USER_NAME="FOOBAR\\testuser" hdfs dfs -mkdir /test/testfile1
$ hdfs dfs -ls /test
Found 1 items
drwxrwxr-x   - FOOBAR\testuser supergroup  0 2017-02-10 12:49 
/test/testfile1
$ HADOOP_USER_NAME="testuser" hdfs dfs -mkdir /test/testfile2
$ HADOOP_USER_NAME="hdfs" hdfs dfs -chown "FOOBAR\\testuser" /test/testfile2
-chown: 'FOOBAR\testuser' does not match expected pattern for [owner][:group].
Usage: hadoop fs [generic options] -chown [-R] [OWNER][:[GROUP]] PATH...
$
{code}






[jira] [Commented] (HADOOP-14035) Reduce fair call queue backoff's impact on clients

2017-02-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15861747#comment-15861747
 ] 

Hadoop QA commented on HADOOP-14035:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 296 unchanged - 1 fixed = 298 total (was 297) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 55s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14035 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852102/HADOOP-14035.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 49a066216cfc 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 07a5184 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11608/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11608/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11608/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11608/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Reduce fair call queue backoff's impact on clients
> --
>
> Key: HADOOP-14035
>   

[jira] [Commented] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2017-02-10 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15861704#comment-15861704
 ] 

Lei (Eddy) Xu commented on HADOOP-13075:


Hi, [~moist]

The latest patch cannot be compiled. You need to change the file names of 
{{ITestS3AEncryptionBlockOutputStream.java}} and {{ITestS3AEncryption.java}} to 
be consistent with the classes they contain.



> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---
>
> Key: HADOOP-13075
> URL: https://issues.apache.org/jira/browse/HADOOP-13075
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Andrew Olson
>Assignee: Steve Moist
> Attachments: HADOOP-13075-001.patch, HADOOP-13075-002.patch, 
> HADOOP-13075-003.patch, HADOOP-13075-branch2.002.patch
>
>
> S3 provides 3 types of server-side encryption [1],
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of which the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support in aws-java-sdk already available it should be fairly 
> straightforward [6],[7] to support the other two types of SSE with some 
> additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html
> [6] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java
> [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html
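To illustrate [6] and [7] above, a hedged sketch of the corresponding 
aws-java-sdk calls (bucket, key, and key material are placeholders; the 
eventual fs.s3a property names are up to the patch):
{code}
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.SSEAwsKeyManagementParams;
import com.amazonaws.services.s3.model.SSECustomerKey;
import java.io.File;

public class SseSketch {
  public static void main(String[] args) {
    AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
    File data = new File("/tmp/data.bin");

    // SSE-KMS: server-side encryption with an AWS KMS-managed key.
    s3.putObject(new PutObjectRequest("my-bucket", "kms-object", data)
        .withSSEAwsKeyManagementParams(
            new SSEAwsKeyManagementParams("my-kms-key-id")));

    // SSE-C: server-side encryption with a customer-provided key
    // (a base64-encoded 256-bit key supplied with every request).
    s3.putObject(new PutObjectRequest("my-bucket", "ssec-object", data)
        .withSSECustomerKey(new SSECustomerKey("base64-encoded-256-bit-key")));
  }
}
{code}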






[jira] [Commented] (HADOOP-14071) S3a: Failed to reset the request input stream

2017-02-10 Thread Seth Fitzsimmons (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15861684#comment-15861684
 ] 

Seth Fitzsimmons commented on HADOOP-14071:
---

This is short-haul (EC2 in us-east-1 to S3 in us-standard).
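The ResetException quoted below points at 
{{request.getRequestClientOptions().setReadLimit(int)}}. For reference, a 
hedged sketch of that workaround applied to a part-upload request (bucket and 
key names are placeholders, and the 104857600-byte part size is taken from the 
log below; this is illustrative, not a recommended value):
{code}
import com.amazonaws.services.s3.model.UploadPartRequest;
import java.io.File;

public class ReadLimitSketch {
  static UploadPartRequest newPart(String uploadId, int partNumber, File block) {
    UploadPartRequest part = new UploadPartRequest()
        .withBucketName("my-bucket")        // placeholder
        .withKey("2017/planet-170206.orc")
        .withUploadId(uploadId)
        .withPartNumber(partNumber)
        .withFile(block);
    // Per the exception text: raise the SDK's mark/reset buffer so the
    // request input stream can be rewound on a retry; it must cover the
    // largest part size plus one byte.
    part.getRequestClientOptions().setReadLimit(104857600 + 1);
    return part;
  }
}
{code}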

> S3a: Failed to reset the request input stream
> -
>
> Key: HADOOP-14071
> URL: https://issues.apache.org/jira/browse/HADOOP-14071
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha2
>Reporter: Seth Fitzsimmons
>
> When using the patch from HADOOP-14028, I fairly consistently get {{Failed to 
> reset the request input stream}} exceptions. They're more likely to occur the 
> larger the file that's being written (70GB in the extreme case, but it needs 
> to be one file).
> {code}
> 2017-02-10 04:21:43 WARN S3ABlockOutputStream:692 - Transfer failure of block 
> FileBlock{index=416, 
> destFile=/tmp/hadoop-root/s3a/s3ablock-0416-4228067786955989475.tmp, 
> state=Upload, dataSize=11591473, limit=104857600}
> 2017-02-10 04:21:43 WARN S3AInstrumentation:777 - Closing output stream 
> statistics while data is still marked as pending upload in 
> OutputStreamStatistics{blocksSubmitted=416, blocksInQueue=0, blocksActive=0, 
> blockUploadsCompleted=416, blockUploadsFailed=3, 
> bytesPendingUpload=209747761, bytesUploaded=43317747712, blocksAllocated=416, 
> blocksReleased=416, blocksActivelyAllocated=0, 
> exceptionsInMultipartFinalize=0, transferDuration=1389936 ms, 
> queueDuration=519 ms, averageQueueTime=1 ms, totalUploadDuration=1390455 ms, 
> effectiveBandwidth=3.1153649497466657E7 bytes/s}
> at org.apache.hadoop.fs.s3a.S3AUtils.extractException(S3AUtils.java:200)
> at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:128)
> Exception in thread "main" org.apache.hadoop.fs.s3a.AWSClientIOException: 
> Multi-part upload with id 
> 'Xx.ezqT5hWrY1W92GrcodCip88i8rkJiOcom2nuUAqHtb6aQX__26FYh5uYWKlRNX5vY5ktdmQWlOovsbR8CLmxUVmwFkISXxDRHeor8iH9nPhI3OkNbWJJBLrvB3xLUuLX0zvGZWo7bUrAKB6IGxA--'
>  to 2017/planet-170206.orc on 2017/planet-170206.orc: 
> com.amazonaws.ResetException: Failed to reset the request input stream; If 
> the request involves an input stream, the maximum stream buffer size can be 
> configured via request.getRequestClientOptions().setReadLimit(int): Failed to 
> reset the request input stream; If the request involves an input stream, the 
> maximum stream buffer size can be configured via 
> request.getRequestClientOptions().setReadLimit(int)
> at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.waitForAllPartUploads(S3ABlockOutputStream.java:539)
> at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.access$100(S3ABlockOutputStream.java:456)
> at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:351)
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
> at org.apache.orc.impl.PhysicalFsWriter.close(PhysicalFsWriter.java:221)
> at org.apache.orc.impl.WriterImpl.close(WriterImpl.java:2827)
> at net.mojodna.osm2orc.standalone.OsmPbf2Orc.convert(OsmPbf2Orc.java:296)
> at net.mojodna.osm2orc.Osm2Orc.main(Osm2Orc.java:47)
> Caused by: com.amazonaws.ResetException: Failed to reset the request input 
> stream; If the request involves an input stream, the maximum stream buffer 
> size can be configured via request.getRequestClientOptions().setReadLimit(int)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.resetRequestInputStream(AmazonHttpClient.java:1221)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1042)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:948)
> at 
> org.apache.hadoop.fs.s3a.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:222)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:635)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:618)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:661)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:573)
> at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:445)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4041)
> at 
> com.amazonaws.services.s3.AmazonS3Client.doUploadPart(AmazonS3Client.java:3041)
> at 
> com.amazonaws.services.s3.AmazonS3Client.uploadPart(AmazonS3Client.java:3026)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.uploadPart(S3AFileSystem.java:1114)
> at 
> 

[jira] [Commented] (HADOOP-13398) prevent user classes from loading classes in the parent classpath with ApplicationClassLoader

2017-02-10 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15861672#comment-15861672
 ] 

Sangjin Lee commented on HADOOP-13398:
--

Can I get some feedback or review on this one? Perhaps I could write a short 
document that explains the changes? I know this is not a common (or popular) 
area for review, but it'd go a long way if I could get some interest in this. 
Thanks!

> prevent user classes from loading classes in the parent classpath with 
> ApplicationClassLoader
> -
>
> Key: HADOOP-13398
> URL: https://issues.apache.org/jira/browse/HADOOP-13398
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Critical
> Attachments: HADOOP-13398-HADOOP-13070.01.patch, 
> HADOOP-13398-HADOOP-13070.02.patch, HADOOP-13398-HADOOP-13070.03.patch, 
> HADOOP-13398-HADOOP-13070.04.patch
>
>
> Today, a user class is able to trigger loading a class from Hadoop's 
> dependencies, with or without the use of {{ApplicationClassLoader}}. This 
> creates an implicit dependency from users' code on Hadoop's dependencies and, 
> as a result, dependency conflicts.
> We should modify {{ApplicationClassLoader}} to prevent a user class from 
> loading a class from the parent classpath.
> This should also cover resource loading (including 
> {{ClassLoader.getResources()}} and as a corollary {{ServiceLoader}}).
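For reviewers skimming, a hedged sketch of the kind of restriction described 
(this assumes a child-first loader with a system-class allowlist; the real 
{{ApplicationClassLoader}} logic is more involved):
{code}
import java.net.URL;
import java.net.URLClassLoader;
import java.util.List;

// Sketch only: a child-first loader that delegates to the parent solely
// for an explicit "system classes" allowlist, so user code cannot
// implicitly pick up Hadoop's dependencies from the parent classpath.
public class IsolatingClassLoader extends URLClassLoader {
  private final List<String> systemPrefixes;

  public IsolatingClassLoader(URL[] urls, ClassLoader parent,
                              List<String> systemPrefixes) {
    super(urls, parent);
    this.systemPrefixes = systemPrefixes;
  }

  @Override
  protected Class<?> loadClass(String name, boolean resolve)
      throws ClassNotFoundException {
    for (String prefix : systemPrefixes) {
      if (name.startsWith(prefix)) {      // e.g. "java.", "org.apache.hadoop."
        return super.loadClass(name, resolve);
      }
    }
    // Child-first with no parent fallback: if the user classpath does not
    // contain the class, fail instead of leaking a parent-classpath class.
    Class<?> c = findLoadedClass(name);
    if (c == null) {
      c = findClass(name);                // throws ClassNotFoundException
    }
    if (resolve) {
      resolveClass(c);
    }
    return c;
  }
}
{code}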






[jira] [Updated] (HADOOP-14035) Reduce fair call queue backoff's impact on clients

2017-02-10 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-14035:
-
Status: Patch Available  (was: Open)

> Reduce fair call queue backoff's impact on clients
> --
>
> Key: HADOOP-14035
> URL: https://issues.apache.org/jira/browse/HADOOP-14035
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-14035.patch
>
>
> When fcq backoff is enabled and an abusive client overflows the call queue, 
> its connection is closed, as well as subsequent good client connections.   
> Disconnects are very disruptive, esp. to multi-threaded clients with multiple 
> outstanding requests, or clients w/o a retry proxy (ex. datanodes).
> Until the abusive user is downgraded to a lower priority queue, 
> disconnect/reconnect mayhem occurs which significantly degrades performance.  
> Server metrics look good despite horrible client latency.
> The fcq should utilize selective ipc disconnects to avoid pushback 
> disconnecting good clients.






[jira] [Commented] (HADOOP-14055) SwiftRestClient includes pass length in exception if auth fails

2017-02-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15861657#comment-15861657
 ] 

Hudson commented on HADOOP-14055:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11232 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11232/])
HADOOP-14055. SwiftRestClient includes pass length in exception if auth (arp: 
rev 2b7a7bbe0f2ad0b3434c4dcf1f60051920d5b532)
* (edit) 
hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/auth/PasswordCredentials.java


> SwiftRestClient includes pass length in exception if auth fails 
> 
>
> Key: HADOOP-14055
> URL: https://issues.apache.org/jira/browse/HADOOP-14055
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Marcell Hegedus
>Assignee: Marcell Hegedus
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HADOOP-14055.01.patch, HADOOP-14055.02.patch
>
>
> SwiftRestClient.exec(M method) throws SwiftAuthenticationFailedException if 
> auth fails, and its message will contain the password length, which may leak 
> into logs.
> The fix is trivial.






[jira] [Updated] (HADOOP-14035) Reduce fair call queue backoff's impact on clients

2017-02-10 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-14035:
-
Attachment: HADOOP-14035.patch

Wrapped the rpc server exception + retriable into a CallQueueOverflowException. 
It's an IllegalStateException to conform to the BlockingQueue API.

CallQueueManager conforms to the BlockingQueue interface. Backoff logic is 
pushed down from the ipc server into the CQM. The CQM's put decides whether to 
call the managed queue's put or add based on backoff.

The Server simply calls CQM.put, catches overflow exceptions, unwraps the 
RpcServerException/RetriableException, and rethrows to leverage the prior 
changes to the ipc layer that selectively close connections.

The FCQ's put remains unchanged. Add, which the CQM calls if backoff is 
enabled, offers to all queues and throws an overflow exception on overflow. For 
the lowest-priority calls, the overflow retriable closes the connection; for 
non-lowest-priority calls, it leaves the connection open.
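A hedged sketch of the put/add split described above (field and method names 
follow the comment, not necessarily the actual patch):
{code}
// Sketch only, not the committed patch. With backoff disabled, put()
// blocks like a normal BlockingQueue.put(); with backoff enabled it uses
// add(), whose overflow surfaces as a CallQueueOverflowException (an
// IllegalStateException, per the BlockingQueue contract) that the ipc
// Server catches, unwraps, and uses to decide whether to disconnect.
public void put(E call) throws InterruptedException {
  if (backoffEnabled) {
    queue.add(call);   // non-blocking; may throw CallQueueOverflowException
  } else {
    queue.put(call);   // block until space is available
  }
}
{code}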

> Reduce fair call queue backoff's impact on clients
> --
>
> Key: HADOOP-14035
> URL: https://issues.apache.org/jira/browse/HADOOP-14035
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 2.7.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HADOOP-14035.patch
>
>
> When fcq backoff is enabled and an abusive client overflows the call queue, 
> its connection is closed, as well as subsequent good client connections.   
> Disconnects are very disruptive, esp. to multi-threaded clients with multiple 
> outstanding requests, or clients w/o a retry proxy (ex. datanodes).
> Until the abusive user is downgraded to a lower priority queue, 
> disconnect/reconnect mayhem occurs which significantly degrades performance.  
> Server metrics look good despite horrible client latency.
> The fcq should utilize selective ipc disconnects to avoid pushback 
> disconnecting good clients.






[jira] [Commented] (HADOOP-14055) SwiftRestClient includes pass length in exception if auth fails

2017-02-10 Thread Marcell Hegedus (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15861644#comment-15861644
 ] 

Marcell Hegedus commented on HADOOP-14055:
--

Thanks [~arpitagarwal]

> SwiftRestClient includes pass length in exception if auth fails 
> 
>
> Key: HADOOP-14055
> URL: https://issues.apache.org/jira/browse/HADOOP-14055
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Marcell Hegedus
>Assignee: Marcell Hegedus
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HADOOP-14055.01.patch, HADOOP-14055.02.patch
>
>
> SwiftRestClient.exec(M method) throws SwiftAuthenticationFailedException if 
> auth fails, and its message will contain the password length, which may leak 
> into logs.
> The fix is trivial.






[jira] [Updated] (HADOOP-14068) Add integration test version of TestMetadataStore for DynamoDB

2017-02-10 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14068:
---
Attachment: HADOOP-14068-HADOOP-13345.001.patch

Attaching a first patch that extends TestDynamoDBMetadataStore and allows the 
same tests to run against the real DynamoDB service.

I still have a few things to clean up, but at this point all the tests pass in 
the real environment too. They run against a local instance for plain unit 
tests, and against the remote instance for integration tests (note that the 
latter takes about 10 minutes because it drops and recreates the table for 
every test).

> Add integration test version of TestMetadataStore for DynamoDB
> --
>
> Key: HADOOP-14068
> URL: https://issues.apache.org/jira/browse/HADOOP-14068
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14068-HADOOP-13345.001.patch
>
>
> I tweaked TestDynamoDBMetadataStore to run against the actual Amazon DynamoDB 
> service (as opposed to the "local" edition). Several tests failed because of 
> minor variations in behavior. I think the differences that are clearly 
> possible are enough to warrant extending that class as an ITest (but 
> obviously keeping the existing test so 99% of the coverage remains even 
> when not configured for actual DynamoDB usage).






[jira] [Updated] (HADOOP-14055) SwiftRestClient includes pass length in exception if auth fails

2017-02-10 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-14055:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks for the contribution 
[~marcellhegedus].

> SwiftRestClient includes pass length in exception if auth fails 
> 
>
> Key: HADOOP-14055
> URL: https://issues.apache.org/jira/browse/HADOOP-14055
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Marcell Hegedus
>Assignee: Marcell Hegedus
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: HADOOP-14055.01.patch, HADOOP-14055.02.patch
>
>
> SwiftRestClient.exec(M method) throws SwiftAuthenticationFailedException if 
> auth fails, and its message will contain the password length, which may leak 
> into logs.
> The fix is trivial.






[jira] [Resolved] (HADOOP-14074) --idle_query_timeout does not work in Impala

2017-02-10 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton resolved HADOOP-14074.
---
Resolution: Invalid
  Assignee: Daniel Templeton

Please file this issue at https://issues.cloudera.org/browse/IMPALA

> --idle_query_timeout does not work in Impala
> 
>
> Key: HADOOP-14074
> URL: https://issues.apache.org/jira/browse/HADOOP-14074
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: site, 1.3.0
> Environment: Oracle's Big data appliance using CDH 5.7.0
>Reporter: Naveen Kumar
>Assignee: Daniel Templeton
>
> When a user submits a query from Hue Impala, the query resides in memory even 
> after the user gets the results. In this case a simple select * query has 
> been running for 13 hours and using some memory. When these queries are 
> killed manually, the memory is freed up. To avoid this I set 
> --idle_query_timeout=1800, but I don't see any change after configuring this 
> parameter.






[jira] [Created] (HADOOP-14074) --idle_query_timeout does not work in Impala

2017-02-10 Thread Naveen Kumar (JIRA)
Naveen Kumar created HADOOP-14074:
-

 Summary: --idle_query_timeout does not work in Impala
 Key: HADOOP-14074
 URL: https://issues.apache.org/jira/browse/HADOOP-14074
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: site, 1.3.0
 Environment: Oracle's Big data appliance using CDH 5.7.0
Reporter: Naveen Kumar


When a user submits a query from Hue Impala, the query resides in memory even 
after the user gets the results. In this case a simple select * query has been 
running for 13 hours and using some memory. When these queries are killed 
manually, the memory is freed up. To avoid this I set --idle_query_timeout=1800, 
but I don't see any change after configuring this parameter.






[jira] [Updated] (HADOOP-14073) Document default HttpServer2 servlets

2017-02-10 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14073:

Description: 
Since many components (NN Web UI, YARN RM/JH, KMS, HttpFS, etc) now use 
HttpServer2, which provides the default servlets /conf, /jmx, /logLevel, 
/stacks, /logs, and /static, it'd be nice to have an independent markdown doc 
describing authentication and authorization for these servlets. The docs for 
the related components can then just link to this markdown doc.

Good summary of HTTP servlets by [~yuanbo] for HADOOP-13119: 
https://issues.apache.org/jira/browse/HADOOP-13119?focusedCommentId=15635132&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15635132.

I also made a poor attempt in 
https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm#L1086-L1129
 and 
https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/site/markdown/ServerSetup.md.vm#L153-L197.

  was:
Since many components (NN Web UI, YARN AHS, KMS, HttpFS, etc) now use 
HttpServer2, which provides the default servlets /conf, /jmx, /logLevel, 
/stacks, /logs, and /static, it'd be nice to have an independent markdown doc 
describing authentication and authorization of these servlets. Related 
components can just link to this markdown doc.

Good summary of HTTP servlets by [~yuanbo] for HADOOP-13119: 
https://issues.apache.org/jira/browse/HADOOP-13119?focusedCommentId=15635132&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15635132.

I also made a poor attempt in 
https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm#L1086-L1129.


> Document default HttpServer2 servlets
> -
>
> Key: HADOOP-14073
> URL: https://issues.apache.org/jira/browse/HADOOP-14073
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.0.0-alpha3
>Reporter: John Zhuge
>Priority: Minor
>
> Since many components (NN Web UI, YARN RM/JH, KMS, HttpFS, etc) now use 
> HttpServer2, which provides the default servlets /conf, /jmx, /logLevel, 
> /stacks, /logs, and /static, it'd be nice to have an independent markdown doc 
> describing authentication and authorization for these servlets. The docs for 
> the related components can then just link to this markdown doc.
> Good summary of HTTP servlets by [~yuanbo] for HADOOP-13119: 
> https://issues.apache.org/jira/browse/HADOOP-13119?focusedCommentId=15635132=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15635132.
> I also made a poor attempt in 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm#L1086-L1129
>  and 
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/site/markdown/ServerSetup.md.vm#L153-L197.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14073) Document default HttpServer2 servlets

2017-02-10 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14073:
---

 Summary: Document default HttpServer2 servlets
 Key: HADOOP-14073
 URL: https://issues.apache.org/jira/browse/HADOOP-14073
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 3.0.0-alpha3
Reporter: John Zhuge
Priority: Minor


Since many components (NN Web UI, YARN AHS, KMS, HttpFS, etc) now use 
HttpServer2, which provides the default servlets /conf, /jmx, /logLevel, 
/stacks, /logs, and /static, it'd be nice to have an independent markdown doc 
describing authentication and authorization for these servlets. Related 
components can just link to this markdown doc.

Good summary of HTTP servlets by [~yuanbo] for HADOOP-13119: 
https://issues.apache.org/jira/browse/HADOOP-13119?focusedCommentId=15635132=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15635132.

I also made a poor attempt in 
https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-kms/src/site/markdown/index.md.vm#L1086-L1129.
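
For illustration, hedged examples of hitting these default servlets with curl 
(host and port assumed from a default NameNode web UI; any HttpServer2-based 
daemon exposes the same paths):

{code}
curl http://localhost:50070/conf      # effective configuration
curl http://localhost:50070/jmx       # JMX beans as JSON
curl http://localhost:50070/stacks    # thread stack dump
curl "http://localhost:50070/logLevel?log=org.apache.hadoop.hdfs.StateChange"
{code}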



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14060) KMS /logs servlet should have access control

2017-02-10 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861424#comment-15861424
 ] 

John Zhuge commented on HADOOP-14060:
-

Good summary of HTTP servlets by [~yuanbo] for HADOOP-13119: 
https://issues.apache.org/jira/browse/HADOOP-13119?focusedCommentId=15635132=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15635132

> KMS /logs servlet should have access control
> 
>
> Key: HADOOP-14060
> URL: https://issues.apache.org/jira/browse/HADOOP-14060
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.0.0-alpha3
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> HADOOP-14047 makes KMS call {{HttpServer2#setACL}}. Access control works fine 
> for /conf, /jmx, /logLevel, and /stacks, but not for /logs.
> The code in {{AdminAuthorizedServlet#doGet}} for /logs and 
> {{ConfServlet#doGet}} for /conf is quite similar. This makes me believe that 
> /logs should be subject to the same access control, as intended by the 
> original developer.
> IMHO this could either be my misconfiguration or there is a bug somewhere in 
> {{HttpServer2}}.
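
A minimal sketch of the call in question, assuming {{setACL}} takes an 
{{AccessControlList}} (the user/group values are placeholders):

{code}
// Hedged sketch: build an ACL of admin users/groups and apply it to the
// HttpServer2 instance. Access control is then enforced for /conf, /jmx,
// /logLevel and /stacks, and per this report should cover /logs as well.
// "kms supergroup" = user kms, group supergroup (placeholders).
AccessControlList acl = new AccessControlList("kms supergroup");
httpServer.setACL(acl);
{code}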



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14071) S3a: Failed to reset the request input stream

2017-02-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861412#comment-15861412
 ] 

Steve Loughran commented on HADOOP-14071:
-

That AWS SDK issue does look relevant.

For the specific case of the data source being a file, we could have the MPU 
request use that directly, rather than opening it ourselves. It will need some 
changes in the code, as currently the BlockOutputStream assumes the source is 
always some input stream... it'll have to support the option of a File and, if 
supplied, prefer that as the upload source.
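
A minimal sketch of the idea with the AWS SDK v1 API ({{bucket}}, {{key}}, 
{{uploadId}}, {{partNumber}} and {{blockFile}} are placeholders; this is not 
the actual S3ABlockOutputStream change):

{code}
// When the block is backed by a file, hand the File itself to the SDK so a
// failed part upload can be replayed from the file, instead of relying on
// mark/reset over an input stream we opened ourselves.
UploadPartRequest request = new UploadPartRequest()
    .withBucketName(bucket)
    .withKey(key)
    .withUploadId(uploadId)
    .withPartNumber(partNumber)
    .withFile(blockFile)
    .withPartSize(blockFile.length());
{code}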

> S3a: Failed to reset the request input stream
> -
>
> Key: HADOOP-14071
> URL: https://issues.apache.org/jira/browse/HADOOP-14071
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha2
>Reporter: Seth Fitzsimmons
>
> When using the patch from HADOOP-14028, I fairly consistently get {{Failed to 
> reset the request input stream}} exceptions. They're more likely to occur the 
> larger the file that's being written (70GB in the extreme case, but it needs 
> to be one file).
> {code}
> 2017-02-10 04:21:43 WARN S3ABlockOutputStream:692 - Transfer failure of block 
> FileBlock{index=416, 
> destFile=/tmp/hadoop-root/s3a/s3ablock-0416-4228067786955989475.tmp, 
> state=Upload, dataSize=11591473, limit=104857600}
> 2017-02-10 04:21:43 WARN S3AInstrumentation:777 - Closing output stream 
> statistics while data is still marked as pending upload in 
> OutputStreamStatistics{blocksSubmitted=416, blocksInQueue=0, blocksActive=0, 
> blockUploadsCompleted=416, blockUploadsFailed=3, 
> bytesPendingUpload=209747761, bytesUploaded=43317747712, blocksAllocated=416, 
> blocksReleased=416, blocksActivelyAllocated=0, 
> exceptionsInMultipartFinalize=0, transferDuration=1389936 ms, 
> queueDuration=519 ms, averageQueueTime=1 ms, totalUploadDuration=1390455 ms, 
> effectiveBandwidth=3.1153649497466657E7 bytes/s}
> at org.apache.hadoop.fs.s3a.S3AUtils.extractException(S3AUtils.java:200)
> at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:128)
> Exception in thread "main" org.apache.hadoop.fs.s3a.AWSClientIOException: 
> Multi-part upload with id 
> 'Xx.ezqT5hWrY1W92GrcodCip88i8rkJiOcom2nuUAqHtb6aQX__26FYh5uYWKlRNX5vY5ktdmQWlOovsbR8CLmxUVmwFkISXxDRHeor8iH9nPhI3OkNbWJJBLrvB3xLUuLX0zvGZWo7bUrAKB6IGxA--'
>  to 2017/planet-170206.orc on 2017/planet-170206.orc: 
> com.amazonaws.ResetException: Failed to reset the request input stream; If 
> the request involves an input stream, the maximum stream buffer size can be 
> configured via request.getRequestClientOptions().setReadLimit(int): Failed to 
> reset the request input stream; If the request involves an input stream, the 
> maximum stream buffer size can be configured via 
> request.getRequestClientOptions().setReadLimit(int)
> at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.waitForAllPartUploads(S3ABlockOutputStream.java:539)
> at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.access$100(S3ABlockOutputStream.java:456)
> at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:351)
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
> at org.apache.orc.impl.PhysicalFsWriter.close(PhysicalFsWriter.java:221)
> at org.apache.orc.impl.WriterImpl.close(WriterImpl.java:2827)
> at net.mojodna.osm2orc.standalone.OsmPbf2Orc.convert(OsmPbf2Orc.java:296)
> at net.mojodna.osm2orc.Osm2Orc.main(Osm2Orc.java:47)
> Caused by: com.amazonaws.ResetException: Failed to reset the request input 
> stream; If the request involves an input stream, the maximum stream buffer 
> size can be configured via request.getRequestClientOptions().setReadLimit(int)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.resetRequestInputStream(AmazonHttpClient.java:1221)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1042)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:948)
> at 
> org.apache.hadoop.fs.s3a.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:222)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:635)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:618)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:661)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:573)
> at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:445)
> at 

[jira] [Commented] (HADOOP-14071) S3a: Failed to reset the request input stream

2017-02-10 Thread Thomas Demoor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861404#comment-15861404
 ] 

Thomas Demoor commented on HADOOP-14071:


For the ByteArrayInputStream & ByteBufferInputStream cases I don't think we 
currently call {{request.getRequestClientOptions().setReadLimit}}. My 
understanding is that, based on the above, we should do this. Is that correct?
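
A hedged sketch of that call ({{partSize}} is a placeholder for the block size 
in bytes, assumed to fit in an int; the SDK docs suggest a read limit of the 
buffer size plus one):

{code}
// Tell the SDK how far back mark/reset may need to rewind, so a retried
// part upload can replay the whole memory-backed part.
UploadPartRequest request = new UploadPartRequest();
request.getRequestClientOptions().setReadLimit(partSize + 1);
{code}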


> S3a: Failed to reset the request input stream
> -
>
> Key: HADOOP-14071
> URL: https://issues.apache.org/jira/browse/HADOOP-14071
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha2
>Reporter: Seth Fitzsimmons
>
> When using the patch from HADOOP-14028, I fairly consistently get {{Failed to 
> reset the request input stream}} exceptions. They're more likely to occur the 
> larger the file that's being written (70GB in the extreme case, but it needs 
> to be one file).
> {code}
> 2017-02-10 04:21:43 WARN S3ABlockOutputStream:692 - Transfer failure of block 
> FileBlock{index=416, 
> destFile=/tmp/hadoop-root/s3a/s3ablock-0416-4228067786955989475.tmp, 
> state=Upload, dataSize=11591473, limit=104857600}
> 2017-02-10 04:21:43 WARN S3AInstrumentation:777 - Closing output stream 
> statistics while data is still marked as pending upload in 
> OutputStreamStatistics{blocksSubmitted=416, blocksInQueue=0, blocksActive=0, 
> blockUploadsCompleted=416, blockUploadsFailed=3, 
> bytesPendingUpload=209747761, bytesUploaded=43317747712, blocksAllocated=416, 
> blocksReleased=416, blocksActivelyAllocated=0, 
> exceptionsInMultipartFinalize=0, transferDuration=1389936 ms, 
> queueDuration=519 ms, averageQueueTime=1 ms, totalUploadDuration=1390455 ms, 
> effectiveBandwidth=3.1153649497466657E7 bytes/s}
> at org.apache.hadoop.fs.s3a.S3AUtils.extractException(S3AUtils.java:200)
> at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:128)
> Exception in thread "main" org.apache.hadoop.fs.s3a.AWSClientIOException: 
> Multi-part upload with id 
> 'Xx.ezqT5hWrY1W92GrcodCip88i8rkJiOcom2nuUAqHtb6aQX__26FYh5uYWKlRNX5vY5ktdmQWlOovsbR8CLmxUVmwFkISXxDRHeor8iH9nPhI3OkNbWJJBLrvB3xLUuLX0zvGZWo7bUrAKB6IGxA--'
>  to 2017/planet-170206.orc on 2017/planet-170206.orc: 
> com.amazonaws.ResetException: Failed to reset the request input stream; If 
> the request involves an input stream, the maximum stream buffer size can be 
> configured via request.getRequestClientOptions().setReadLimit(int): Failed to 
> reset the request input stream; If the request involves an input stream, the 
> maximum stream buffer size can be configured via 
> request.getRequestClientOptions().setReadLimit(int)
> at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.waitForAllPartUploads(S3ABlockOutputStream.java:539)
> at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.access$100(S3ABlockOutputStream.java:456)
> at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:351)
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
> at org.apache.orc.impl.PhysicalFsWriter.close(PhysicalFsWriter.java:221)
> at org.apache.orc.impl.WriterImpl.close(WriterImpl.java:2827)
> at net.mojodna.osm2orc.standalone.OsmPbf2Orc.convert(OsmPbf2Orc.java:296)
> at net.mojodna.osm2orc.Osm2Orc.main(Osm2Orc.java:47)
> Caused by: com.amazonaws.ResetException: Failed to reset the request input 
> stream; If the request involves an input stream, the maximum stream buffer 
> size can be configured via request.getRequestClientOptions().setReadLimit(int)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.resetRequestInputStream(AmazonHttpClient.java:1221)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1042)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:948)
> at 
> org.apache.hadoop.fs.s3a.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:222)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:635)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:618)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:661)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:573)
> at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:445)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4041)
> at 
> com.amazonaws.services.s3.AmazonS3Client.doUploadPart(AmazonS3Client.java:3041)
> at 
> 

[jira] [Commented] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2017-02-10 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861390#comment-15861390
 ] 

Eric Yang commented on HADOOP-13119:


Yuanbo, I would recommend opening a new JIRA for the new problem that you 
found.  The original JIRA did not mention /jmx, and there are good reasons to 
keep /jmx readable by system users only.  For example, the Ambari metrics 
system is supposed to have access to /jmx to collect stats.  ams does not 
impersonate when accessing /jmx; in this case, it should fall back to reporting 
ams as the remote user.  Some links should be treated differently when they are 
system-facing vs user-facing.  Let's not mutate this JIRA for the newly found 
use case.  Thank you

> Web UI error accessing links which need authorization when Kerberos
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>  Labels: security
> Fix For: 2.7.4, 3.0.0-alpha2, 2.8.1
>
> Attachments: HADOOP-13119.001.patch, HADOOP-13119.002.patch, 
> HADOOP-13119.003.patch, HADOOP-13119.004.patch, HADOOP-13119.005.patch, 
> HADOOP-13119.005.patch, screenshot-1.png
>
>
> Using Hadoop in secure mode:
> log in as a KDC user, kinit,
> start Firefox with Kerberos enabled,
> access http://localhost:50070/logs/,
> and get 403 authorization errors.
> Only the hdfs user can access the logs.
> As a user, I would expect to be able to follow the logs link in the web 
> interface.
> Same results if using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> So:
> 1. either don't show links that only the hdfs user is able to access, or
> 2. provide a mechanism to add users to the web application realm.
> 3. Note that we pass authentication, so the issue is authorization for 
> /logs/.
> I suspect the /logs/ path is secured in the web descriptor, so by default 
> users don't have access to secure paths.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2017-02-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-13119.
---
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha2

Re-resolving this, as it was committed to 3.0.0-alpha2 as well despite that 
version missing from the fix field.  Since it's already been committed and 
released, we can't revert it or re-open this JIRA.

You'll need to open a new JIRA with a code fix.

> Web UI error accessing links which need authorization when Kerberos
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>  Labels: security
> Fix For: 2.7.4, 2.8.1, 3.0.0-alpha2
>
> Attachments: HADOOP-13119.001.patch, HADOOP-13119.002.patch, 
> HADOOP-13119.003.patch, HADOOP-13119.004.patch, HADOOP-13119.005.patch, 
> HADOOP-13119.005.patch, screenshot-1.png
>
>
> Using Hadoop in secure mode:
> log in as a KDC user, kinit,
> start Firefox with Kerberos enabled,
> access http://localhost:50070/logs/,
> and get 403 authorization errors.
> Only the hdfs user can access the logs.
> As a user, I would expect to be able to follow the logs link in the web 
> interface.
> Same results if using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> So:
> 1. either don't show links that only the hdfs user is able to access, or
> 2. provide a mechanism to add users to the web application realm.
> 3. Note that we pass authentication, so the issue is authorization for 
> /logs/.
> I suspect the /logs/ path is secured in the web descriptor, so by default 
> users don't have access to secure paths.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work stopped] (HADOOP-13767) Aliyun connection broken when idle more than 1 minute or open longer than 3 hours

2017-02-10 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-13767 stopped by Genmao Yu.
--
> Aliyun connection broken when idle more than 1 minute or open longer than 3 hours
> ---
>
> Key: HADOOP-13767
> URL: https://issues.apache.org/jira/browse/HADOOP-13767
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha3
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-13767) Aliyun connection broken when idle more than 1 minute or open longer than 3 hours

2017-02-10 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-13767 started by Genmao Yu.
--
> Aliyun connection broken when idle more than 1 minute or open longer than 3 hours
> ---
>
> Key: HADOOP-13767
> URL: https://issues.apache.org/jira/browse/HADOOP-13767
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha3
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13769) AliyunOSS: update oss sdk version

2017-02-10 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861314#comment-15861314
 ] 

Genmao Yu commented on HADOOP-13769:


cc [~drankye]

> AliyunOSS: update oss sdk version
> -
>
> Key: HADOOP-13769
> URL: https://issues.apache.org/jira/browse/HADOOP-13769
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-13769.001.patch, HADOOP-13769.002.patch
>
>
>  -AliyunOSS object inputstream.close() will read the remaining bytes of the 
> OSS object, potentially transferring a lot of bytes from OSS that are 
> discarded.-
> Just update the OSS SDK version. It fixes many bugs, including this 
> {{inputstream.close()}} performance issue.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14072) AliyunOSS: Failed to read from stream when seek beyond the download size.

2017-02-10 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861311#comment-15861311
 ] 

Genmao Yu commented on HADOOP-14072:


cc [~drankye]

> AliyunOSS: Failed to read from stream when seek beyond the download size.
> -
>
> Key: HADOOP-14072
> URL: https://issues.apache.org/jira/browse/HADOOP-14072
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Attachments: HADOOP-14072.001.patch
>
>
> {code}
> public synchronized void seek(long pos) throws IOException {
> checkNotClosed();
> if (position == pos) {
>   return;
> } else if (pos > position && pos < position + partRemaining) {
>   AliyunOSSUtils.skipFully(wrappedStream, pos - position);
>   position = pos;
> } else {
>   reopen(pos);
> }
>   }
> {code}
> In the seek function, we need to update partRemaining when the seek 
> position falls within the already-downloaded part.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14072) AliyunOSS: Failed to read from stream when seek beyond the download size.

2017-02-10 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861309#comment-15861309
 ] 

Genmao Yu commented on HADOOP-14072:


Result of the hadoop-aliyun unit tests:

{code}
---
 T E S T S
---

---
 T E S T S
---
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate
Tests run: 10, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 8.349 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.045 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDelete
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDistCp
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 150.439 sec - 
in org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDistCp
Running 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractGetFileStatus
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.429 sec - 
in org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractGetFileStatus
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.245 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractMkdir
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.22 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractOpen
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.439 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRename
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRootDir
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.276 sec - in 
org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRootDir
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractSeek
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.108 sec - 
in org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractSeek
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunCredentials
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.19 sec - in 
org.apache.hadoop.fs.aliyun.oss.TestAliyunCredentials
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.106 sec - 
in org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemStore
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 61.443 sec - in 
org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemStore
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSInputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.97 sec - in 
org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSInputStream
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSOutputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.467 sec - in 
org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSOutputStream

Results :

Tests run: 141, Failures: 0, Errors: 0, Skipped: 2
{code}

> AliyunOSS: Failed to read from stream when seek beyond the download size.
> -
>
> Key: HADOOP-14072
> URL: https://issues.apache.org/jira/browse/HADOOP-14072
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Attachments: HADOOP-14072.001.patch
>
>
> {code}
> public synchronized void seek(long pos) throws IOException {
> checkNotClosed();
> if (position == pos) {
>   return;
> } else if (pos > position && pos < position + partRemaining) {
>   AliyunOSSUtils.skipFully(wrappedStream, pos - position);
>   position = pos;
> } else {
>   reopen(pos);
> }
>   }
> {code}
> In the seek function, we need to update partRemaining when the seek 
> position falls within the already-downloaded part.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13769) AliyunOSS: update oss sdk version

2017-02-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861303#comment-15861303
 ] 

Hadoop QA commented on HADOOP-13769:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
6s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13769 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852052/HADOOP-13769.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux f661a2bbb244 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3a0a0a4 |
| Default Java | 1.8.0_121 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11607/testReport/ |
| modules | C: hadoop-project U: hadoop-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11607/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> AliyunOSS: update oss sdk version
> -
>
> Key: HADOOP-13769
> URL: https://issues.apache.org/jira/browse/HADOOP-13769
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-13769.001.patch, HADOOP-13769.002.patch
>
>
>  -AliyunOSS object inputstream.close() will read the remaining bytes of the 
> OSS object, potentially transferring a lot of bytes from OSS that are 
> discarded.-
> Just update the OSS SDK version. It fixes many bugs, including this 
> {{inputstream.close()}} performance issue.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: 

[jira] [Commented] (HADOOP-14072) AliyunOSS: Failed to read from stream when seek beyond the download size.

2017-02-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861282#comment-15861282
 ] 

Hadoop QA commented on HADOOP-14072:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
11s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14072 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852049/HADOOP-14072.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7de4c4fd71fa 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3a0a0a4 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11606/testReport/ |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11606/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> AliyunOSS: Failed to read from stream when seek beyond the download size.
> -
>
> Key: HADOOP-14072
> URL: https://issues.apache.org/jira/browse/HADOOP-14072
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Attachments: HADOOP-14072.001.patch
>
>
> {code}
> public synchronized void seek(long pos) throws IOException {
> checkNotClosed();
> if (position == pos) {
>  

[jira] [Updated] (HADOOP-14072) AliyunOSS: Failed to read from stream when seek beyond the download size.

2017-02-10 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-14072:
---
Description: 
{code}
public synchronized void seek(long pos) throws IOException {
checkNotClosed();
if (position == pos) {
  return;
} else if (pos > position && pos < position + partRemaining) {
  AliyunOSSUtils.skipFully(wrappedStream, pos - position);
  position = pos;
} else {
  reopen(pos);
}
  }
{code}

In the seek function, we need to update partRemaining when the seek position 
falls within the already-downloaded part.
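
A minimal sketch of the kind of fix implied here (keeping partRemaining in sync 
with the bytes skipped; this is a reading of the report, not the committed 
patch):

{code}
public synchronized void seek(long pos) throws IOException {
  checkNotClosed();
  if (position == pos) {
    return;
  } else if (pos > position && pos < position + partRemaining) {
    long len = pos - position;
    AliyunOSSUtils.skipFully(wrappedStream, len);
    position = pos;
    partRemaining -= len;  // the downloaded part now has fewer bytes left
  } else {
    reopen(pos);
  }
}
{code}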

> AliyunOSS: Failed to read from stream when seek beyond the download size.
> -
>
> Key: HADOOP-14072
> URL: https://issues.apache.org/jira/browse/HADOOP-14072
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Attachments: HADOOP-14072.001.patch
>
>
> {code}
> public synchronized void seek(long pos) throws IOException {
> checkNotClosed();
> if (position == pos) {
>   return;
> } else if (pos > position && pos < position + partRemaining) {
>   AliyunOSSUtils.skipFully(wrappedStream, pos - position);
>   position = pos;
> } else {
>   reopen(pos);
> }
>   }
> {code}
> In the seek function, we need to update partRemaining when the seek 
> position falls within the already-downloaded part.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13769) AliyunOSS: update oss sdk version

2017-02-10 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13769:
---
Attachment: HADOOP-13769.002.patch

> AliyunOSS: update oss sdk version
> -
>
> Key: HADOOP-13769
> URL: https://issues.apache.org/jira/browse/HADOOP-13769
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-13769.001.patch, HADOOP-13769.002.patch
>
>
>  -AliyunOSS object inputstream.close() will read the remaining bytes of the 
> OSS object, potentially transferring a lot of bytes from OSS that are 
> discarded.-
> Just update the OSS SDK version. It fixes many bugs, including this 
> {{inputstream.close()}} performance issue.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13769) AliyunOSS: update oss sdk version

2017-02-10 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13769:
---
Description: 
 -AliyunOSS object inputstream.close() will read the remaining bytes of the OSS 
object, potentially transferring a lot of bytes from OSS that are discarded.-

Just update the OSS SDK version. It fixes many bugs, including this 
{{inputstream.close()}} performance issue.

  was:
 -AliyunOSS object inputstream.close() will read the remaining bytes of the OSS 
object, potentially transferring a lot of bytes from OSS that are discarded.-

Just update the OSS SDK version.


> AliyunOSS: update oss sdk version
> -
>
> Key: HADOOP-13769
> URL: https://issues.apache.org/jira/browse/HADOOP-13769
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-13769.001.patch
>
>
>  -AliyunOSS object inputstream.close() will read the remaining bytes of the 
> OSS object, potentially transferring a lot of bytes from OSS that are 
> discarded.-
> Just update the OSS SDK version. It fixes many bugs, including this 
> {{inputstream.close()}} performance issue.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13769) AliyunOSS: update oss sdk version

2017-02-10 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13769:
---
Description: 
 -AliyunOSS object inputstream.close() will read the remaining bytes of the OSS 
object, potentially transferring a lot of bytes from OSS that are discarded.-

Just update the OSS SDK version.

  was:AliyunOSS object inputstream.close() will read the remaining bytes of the 
OSS object, potentially transferring a lot of bytes from OSS that are discarded.


> AliyunOSS: update oss sdk version
> -
>
> Key: HADOOP-13769
> URL: https://issues.apache.org/jira/browse/HADOOP-13769
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-13769.001.patch
>
>
>  -AliyunOSS object inputstream.close() will read the remaining bytes of the 
> OSS object, potentially transferring a lot of bytes from OSS that are 
> discarded.-
> Just update the OSS SDK version.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13769) AliyunOSS: update oss sdk version

2017-02-10 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-13769:
---
Summary: AliyunOSS: update oss sdk version  (was: AliyunOSS: improve 
performance on close)

> AliyunOSS: update oss sdk version
> -
>
> Key: HADOOP-13769
> URL: https://issues.apache.org/jira/browse/HADOOP-13769
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-13769.001.patch
>
>
> AliyunOSS object inputstream.close() will read the remaining bytes of the OSS 
> object, potentially transferring a lot of bytes from OSS that are discarded.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14072) AliyunOSS: Failed to read from stream when seek beyond the download size.

2017-02-10 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-14072:
---
Status: Patch Available  (was: In Progress)

> AliyunOSS: Failed to read from stream when seek beyond the download size.
> -
>
> Key: HADOOP-14072
> URL: https://issues.apache.org/jira/browse/HADOOP-14072
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Attachments: HADOOP-14072.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14072) AliyunOSS: Failed to read from stream when seek beyond the download size.

2017-02-10 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Genmao Yu updated HADOOP-14072:
---
Attachment: HADOOP-14072.001.patch

> AliyunOSS: Failed to read from stream when seek beyond the download size.
> -
>
> Key: HADOOP-14072
> URL: https://issues.apache.org/jira/browse/HADOOP-14072
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Attachments: HADOOP-14072.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-14072) AliyunOSS: Failed to read from stream when seek beyond the download size.

2017-02-10 Thread Genmao Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-14072 started by Genmao Yu.
--
> AliyunOSS: Failed to read from stream when seek beyond the download size.
> -
>
> Key: HADOOP-14072
> URL: https://issues.apache.org/jira/browse/HADOOP-14072
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14072) AliyunOSS: Failed to read from stream when seek beyond the download size.

2017-02-10 Thread Genmao Yu (JIRA)
Genmao Yu created HADOOP-14072:
--

 Summary: AliyunOSS: Failed to read from stream when seek beyond 
the download size.
 Key: HADOOP-14072
 URL: https://issues.apache.org/jira/browse/HADOOP-14072
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/oss
Affects Versions: 3.0.0-alpha2
Reporter: Genmao Yu
Assignee: Genmao Yu






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14071) S3a: Failed to reset the request input stream

2017-02-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15861215#comment-15861215
 ] 

Steve Loughran commented on HADOOP-14071:
-


What's happened is that the HTTP connection failed (is this long haul?), the 
AWS SDK tried to reset the stream pointer (using mark/reset), and the buffer 
couldn't go back that far.


We saw this before; I thought I'd eliminated it by not buffering the input 
stream, but instead sending the file input stream up directly. I'll review 
that code.

It may be that we need to address this differently, simply by recognising the 
specific exception and retrying the send of the block. That is: we implement 
the retry logic.
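
A hedged sketch of that retry approach ({{uploadBlockOnce}} and {{maxRetries}} 
are placeholders, not the actual S3A code):

{code}
// Catch the SDK's ResetException on a part upload and resend the whole block
// ourselves, rather than relying on the SDK's mark/reset replay.
int attempts = 0;
while (true) {
  try {
    uploadBlockOnce(block);  // placeholder: builds and sends one UploadPartRequest
    break;
  } catch (ResetException e) {
    if (++attempts > maxRetries) {
      throw e;  // give up and surface the failure
    }
    // the block is file-backed, so the next attempt re-reads it from the start
  }
}
{code}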

> S3a: Failed to reset the request input stream
> -
>
> Key: HADOOP-14071
> URL: https://issues.apache.org/jira/browse/HADOOP-14071
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha2
>Reporter: Seth Fitzsimmons
>
> When using the patch from HADOOP-14028, I fairly consistently get {{Failed to 
> reset the request input stream}} exceptions. They're more likely to occur the 
> larger the file that's being written (70GB in the extreme case, but it needs 
> to be one file).
> {code}
> 2017-02-10 04:21:43 WARN S3ABlockOutputStream:692 - Transfer failure of block 
> FileBlock{index=416, 
> destFile=/tmp/hadoop-root/s3a/s3ablock-0416-4228067786955989475.tmp, 
> state=Upload, dataSize=11591473, limit=104857600}
> 2017-02-10 04:21:43 WARN S3AInstrumentation:777 - Closing output stream 
> statistics while data is still marked as pending upload in 
> OutputStreamStatistics{blocksSubmitted=416, blocksInQueue=0, blocksActive=0, 
> blockUploadsCompleted=416, blockUploadsFailed=3, 
> bytesPendingUpload=209747761, bytesUploaded=43317747712, blocksAllocated=416, 
> blocksReleased=416, blocksActivelyAllocated=0, 
> exceptionsInMultipartFinalize=0, transferDuration=1389936 ms, 
> queueDuration=519 ms, averageQueueTime=1 ms, totalUploadDuration=1390455 ms, 
> effectiveBandwidth=3.1153649497466657E7 bytes/s}
> at org.apache.hadoop.fs.s3a.S3AUtils.extractException(S3AUtils.java:200)
> at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:128)
> Exception in thread "main" org.apache.hadoop.fs.s3a.AWSClientIOException: 
> Multi-part upload with id 
> 'Xx.ezqT5hWrY1W92GrcodCip88i8rkJiOcom2nuUAqHtb6aQX__26FYh5uYWKlRNX5vY5ktdmQWlOovsbR8CLmxUVmwFkISXxDRHeor8iH9nPhI3OkNbWJJBLrvB3xLUuLX0zvGZWo7bUrAKB6IGxA--'
>  to 2017/planet-170206.orc on 2017/planet-170206.orc: 
> com.amazonaws.ResetException: Failed to reset the request input stream; If 
> the request involves an input stream, the maximum stream buffer size can be 
> configured via request.getRequestClientOptions().setReadLimit(int): Failed to 
> reset the request input stream; If the request involves an input stream, the 
> maximum stream buffer size can be configured via 
> request.getRequestClientOptions().setReadLimit(int)
> at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.waitForAllPartUploads(S3ABlockOutputStream.java:539)
> at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.access$100(S3ABlockOutputStream.java:456)
> at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:351)
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
> at org.apache.orc.impl.PhysicalFsWriter.close(PhysicalFsWriter.java:221)
> at org.apache.orc.impl.WriterImpl.close(WriterImpl.java:2827)
> at net.mojodna.osm2orc.standalone.OsmPbf2Orc.convert(OsmPbf2Orc.java:296)
> at net.mojodna.osm2orc.Osm2Orc.main(Osm2Orc.java:47)
> Caused by: com.amazonaws.ResetException: Failed to reset the request input 
> stream; If the request involves an input stream, the maximum stream buffer 
> size can be configured via request.getRequestClientOptions().setReadLimit(int)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.resetRequestInputStream(AmazonHttpClient.java:1221)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1042)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:948)
> at 
> org.apache.hadoop.fs.s3a.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:222)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:635)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:618)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:661)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:573)
> at 

[jira] [Commented] (HADOOP-14058) Fix NativeS3FileSystemContractBaseTest#testDirWithDifferentMarkersWorks

2017-02-10 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15860998#comment-15860998
 ] 

Yiqun Lin commented on HADOOP-14058:


{quote}
what is important is that you need to confirm that you've tested against an 
object store:
{quote}
Hi Steve, I am not able to test against an object store locally right now. 
Feel free to assign this JIRA to yourself and move it forward.

> Fix NativeS3FileSystemContractBaseTest#testDirWithDifferentMarkersWorks
> ---
>
> Key: HADOOP-14058
> URL: https://issues.apache.org/jira/browse/HADOOP-14058
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>  Labels: s3
> Attachments: HADOOP-14058.001.patch, 
> HADOOP-14058-HADOOP-13345.001.patch
>
>
> In NativeS3FileSystemContractBaseTest#testDirWithDifferentMarkersWorks, 
> {code}
>   else if (i == 3) {
> // test both markers
> store.storeEmptyFile(base + "_$folder$");
> store.storeEmptyFile(base + "/dir_$folder$");
> store.storeEmptyFile(base + "/");
> store.storeEmptyFile(base + "/dir/");
>   }
> {code}
> the above test code is not executed. In the following code:
> {code}
> for (int i = 0; i < 3; i++) {
> {code}
> < should be <=.
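
For clarity, the corrected bound (so the i == 3 branch that stores both marker 
styles actually runs):

{code}
// was: for (int i = 0; i < 3; i++), which skips the i == 3 case entirely
for (int i = 0; i <= 3; i++) {
{code}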



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13119) Web UI error accessing links which need authorization when Kerberos

2017-02-10 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15860906#comment-15860906
 ] 

Yuanbo Liu commented on HADOOP-13119:
-

It would be great if a committer could help me revert my patch so that I can 
provide a new patch for this issue. Thanks in advance!

> Web UI error accessing links which need authorization when Kerberos
> ---
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>Assignee: Yuanbo Liu
>  Labels: security
> Fix For: 2.7.4, 2.8.1
>
> Attachments: HADOOP-13119.001.patch, HADOOP-13119.002.patch, 
> HADOOP-13119.003.patch, HADOOP-13119.004.patch, HADOOP-13119.005.patch, 
> HADOOP-13119.005.patch, screenshot-1.png
>
>
> Using Hadoop in secure mode:
> log in as a KDC user, kinit,
> start Firefox with Kerberos enabled,
> access http://localhost:50070/logs/,
> and get 403 authorization errors.
> Only the hdfs user can access the logs.
> As a user, I would expect to be able to follow the logs link in the web 
> interface.
> Same results if using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> So:
> 1. either don't show links that only the hdfs user is able to access, or
> 2. provide a mechanism to add users to the web application realm.
> 3. Note that we pass authentication, so the issue is authorization for 
> /logs/.
> I suspect the /logs/ path is secured in the web descriptor, so by default 
> users don't have access to secure paths.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org