[jira] [Updated] (HADOOP-14062) ApplicationMasterProtocolPBClientImpl.allocate fails with EOFException when RPC privacy is enabled

2017-02-14 Thread Steven Rand (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Rand updated HADOOP-14062:
-
Attachment: HADOOP-14062-branch-2.8.0.005.patch

> ApplicationMasterProtocolPBClientImpl.allocate fails with EOFException when 
> RPC privacy is enabled
> --
>
> Key: HADOOP-14062
> URL: https://issues.apache.org/jira/browse/HADOOP-14062
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Steven Rand
>Priority: Critical
> Attachments: HADOOP-14062.001.patch, HADOOP-14062.002.patch, 
> HADOOP-14062-branch-2.8.0.004.patch, HADOOP-14062-branch-2.8.0.005.patch, 
> yarn-rm-log.txt
>
>
> When privacy is enabled for RPC (hadoop.rpc.protection = privacy), 
> {{ApplicationMasterProtocolPBClientImpl.allocate}} sometimes (but not always) 
> fails with an EOFException. I've reproduced this with Spark 2.0.2 built 
> against latest branch-2.8 and with a simple distcp job on latest branch-2.8.
> Steps to reproduce using distcp:
> 1. Set hadoop.rpc.protection equal to privacy
> 2. Write data to HDFS. I did this with Spark as follows: 
> {code}
> sc.parallelize(1 to (5*1024*1024)).map(k => Seq(k, 
> org.apache.commons.lang.RandomStringUtils.random(1024, 
> "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWxyZ0123456789")).mkString("|")).toDF().repartition(100).write.parquet("hdfs:///tmp/testData")
> {code}
> 3. Attempt to distcp that data to another location in HDFS. For example:
> {code}
> hadoop distcp -Dmapreduce.framework.name=yarn hdfs:///tmp/testData 
> hdfs:///tmp/testDataCopy
> {code}
> I observed this error in the ApplicationMaster's syslog:
> {code}
> 2016-12-19 19:13:50,097 INFO [eventHandlingThread] 
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer 
> setup for JobId: job_1482189777425_0004, File: 
> hdfs://:8020/tmp/hadoop-yarn/staging//.staging/job_1482189777425_0004/job_1482189777425_0004_1.jhist
> 2016-12-19 19:13:51,004 INFO [RMCommunicator Allocator] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before 
> Scheduling: PendingReds:0 ScheduledMaps:4 ScheduledReds:0 AssignedMaps:0 
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0 
> HostLocal:0 RackLocal:0
> 2016-12-19 19:13:51,031 INFO [RMCommunicator Allocator] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() 
> for application_1482189777425_0004: ask=1 release= 0 newContainers=0 
> finishedContainers=0 resourcelimit= knownNMs=3
> 2016-12-19 19:13:52,043 INFO [RMCommunicator Allocator] 
> org.apache.hadoop.io.retry.RetryInvocationHandler: Exception while invoking 
> ApplicationMasterProtocolPBClientImpl.allocate over null. Retrying after 
> sleeping for 3ms.
> java.io.EOFException: End of File Exception between local host is: 
> "/"; destination host is: "":8030; 
> : java.io.EOFException; For more details see:  
> http://wiki.apache.org/hadoop/EOFException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>   at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:801)
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1486)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1428)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1338)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
>   at com.sun.proxy.$Proxy80.allocate(Unknown Source)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:77)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:398)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
>   at 
> 

[jira] [Commented] (HADOOP-13924) Update checkstyle and checkstyle plugin version to handle indentation of JDK8 Lambdas

2017-02-14 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15867395#comment-15867395
 ] 

Akira Ajisaka commented on HADOOP-13924:


Sorry, I committed this with the wrong JIRA number.
{noformat}
commit 1e11080b7825a2d0bafce91432009f585b7b5d21
Author: Akira Ajisaka 
Date:   Wed Feb 15 16:33:30 2017 +0900

HADOOP-13942. Update checkstyle and checkstyle plugin version to handle 
indentation of JDK8 Lambdas.
{noformat}
HADOOP-13942 should be HADOOP-13924.


> Update checkstyle and checkstyle plugin version to handle indentation of JDK8 
> Lambdas
> -
>
> Key: HADOOP-13924
> URL: https://issues.apache.org/jira/browse/HADOOP-13924
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Akira Ajisaka
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-13924.01.patch
>
>
> Have seen this lately from Jenkins runs. Propose adding the following to the 
> maven checkstyle plugin configuration to better handle this. 
> {code}
> <dependency>
>   <groupId>com.puppycrawl.tools</groupId>
>   <artifactId>checkstyle</artifactId>
>   <version>7.3</version>
> </dependency>
> {code}
> {code}
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2049:
>   if (failedVolumes.size() > 0) {: 'if' have incorrect indentation 
> level 10, expected level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2050:
> LOG.warn("checkDiskErrorAsync callback got {} failed volumes: 
> {}",: 'if' child have incorrect indentation level 12, expected level should 
> be 10.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2052:
>   } else {: 'if rcurly' have incorrect indentation level 10, expected 
> level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2053:
> LOG.debug("checkDiskErrorAsync: no volume failures detected");: 
> 'else' child have incorrect indentation level 12, expected level should be 10.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2054:
>   }: 'else rcurly' have incorrect indentation level 10, expected 
> level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2055:
>   lastDiskErrorCheck = Time.monotonicNow();: 'block' child have 
> incorrect indentation level 10, expected level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2056:
>   handleVolumeFailures(failedVolumes);: 'block' child have incorrect 
> indentation level 10, expected level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2057:
> });: 'block rcurly' have incorrect indentation level 8, expected 
> level should be 6.
> {code}
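For context, the checkstyle output above is complaining about a JDK8 lambda in 
DataNode.java. Reconstructed from the flagged lines and simplified (the 
registering method name is an assumption, not a quote from the file), the shape 
in question is roughly:
{code}
// Simplified reconstruction of the lambda flagged above (DataNode.java:2049-2057);
// the older checkstyle mis-computes the expected indentation inside lambda bodies.
volumeChecker.checkDiskErrorAsync((healthyVolumes, failedVolumes) -> {
  if (failedVolumes.size() > 0) {
    LOG.warn("checkDiskErrorAsync callback got {} failed volumes: {}",
        failedVolumes.size(), failedVolumes);
  } else {
    LOG.debug("checkDiskErrorAsync: no volume failures detected");
  }
  lastDiskErrorCheck = Time.monotonicNow();
  handleVolumeFailures(failedVolumes);
});
{code}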






[jira] [Updated] (HADOOP-13924) Update checkstyle and checkstyle plugin version to handle indentation of JDK8 Lambdas

2017-02-14 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13924:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha3
   Status: Resolved  (was: Patch Available)

Committed this to trunk. Thanks [~xyao] for the review!

> Update checkstyle and checkstyle plugin version to handle indentation of JDK8 
> Lambdas
> -
>
> Key: HADOOP-13924
> URL: https://issues.apache.org/jira/browse/HADOOP-13924
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Akira Ajisaka
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-13924.01.patch
>
>
> Have seen this lately from Jenkins runs. Propose adding the following to the 
> maven checkstyle plugin configuration to better handle this. 
> {code}
> <dependency>
>   <groupId>com.puppycrawl.tools</groupId>
>   <artifactId>checkstyle</artifactId>
>   <version>7.3</version>
> </dependency>
> {code}
> {code}
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2049:
>   if (failedVolumes.size() > 0) {: 'if' have incorrect indentation 
> level 10, expected level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2050:
> LOG.warn("checkDiskErrorAsync callback got {} failed volumes: 
> {}",: 'if' child have incorrect indentation level 12, expected level should 
> be 10.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2052:
>   } else {: 'if rcurly' have incorrect indentation level 10, expected 
> level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2053:
> LOG.debug("checkDiskErrorAsync: no volume failures detected");: 
> 'else' child have incorrect indentation level 12, expected level should be 10.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2054:
>   }: 'else rcurly' have incorrect indentation level 10, expected 
> level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2055:
>   lastDiskErrorCheck = Time.monotonicNow();: 'block' child have 
> incorrect indentation level 10, expected level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2056:
>   handleVolumeFailures(failedVolumes);: 'block' child have incorrect 
> indentation level 10, expected level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2057:
> });: 'block rcurly' have incorrect indentation level 8, expected 
> level should be 6.
> {code}






[jira] [Updated] (HADOOP-13924) Update checkstyle and checkstyle plugin version to handle indentation of JDK8 Lambdas

2017-02-14 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13924:
---
Summary: Update checkstyle and checkstyle plugin version to handle 
indentation of JDK8 Lambdas  (was: Update checkstyle plugin to handle 
indentation of JDK8 Lambdas)

> Update checkstyle and checkstyle plugin version to handle indentation of JDK8 
> Lambdas
> -
>
> Key: HADOOP-13924
> URL: https://issues.apache.org/jira/browse/HADOOP-13924
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Akira Ajisaka
> Attachments: HADOOP-13924.01.patch
>
>
> Have seen this lately from Jenkins runs. Propose adding the following to the 
> maven checkstyle plugin configuration to better handle this. 
> {code}
> <dependency>
>   <groupId>com.puppycrawl.tools</groupId>
>   <artifactId>checkstyle</artifactId>
>   <version>7.3</version>
> </dependency>
> {code}
> {code}
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2049:
>   if (failedVolumes.size() > 0) {: 'if' have incorrect indentation 
> level 10, expected level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2050:
> LOG.warn("checkDiskErrorAsync callback got {} failed volumes: 
> {}",: 'if' child have incorrect indentation level 12, expected level should 
> be 10.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2052:
>   } else {: 'if rcurly' have incorrect indentation level 10, expected 
> level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2053:
> LOG.debug("checkDiskErrorAsync: no volume failures detected");: 
> 'else' child have incorrect indentation level 12, expected level should be 10.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2054:
>   }: 'else rcurly' have incorrect indentation level 10, expected 
> level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2055:
>   lastDiskErrorCheck = Time.monotonicNow();: 'block' child have 
> incorrect indentation level 10, expected level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2056:
>   handleVolumeFailures(failedVolumes);: 'block' child have incorrect 
> indentation level 10, expected level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2057:
> });: 'block rcurly' have incorrect indentation level 8, expected 
> level should be 6.
> {code}






[jira] [Commented] (HADOOP-13924) Update checkstyle plugin to handle indentation of JDK8 Lambdas

2017-02-14 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15867315#comment-15867315
 ] 

Xiaoyu Yao commented on HADOOP-13924:
-

[~ajisakaa], thanks for the confirmation. +1

> Update checkstyle plugin to handle indentation of JDK8 Lambdas
> --
>
> Key: HADOOP-13924
> URL: https://issues.apache.org/jira/browse/HADOOP-13924
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Akira Ajisaka
> Attachments: HADOOP-13924.01.patch
>
>
> Have seen this lately from Jenkins runs. Propose adding the following to the 
> maven checkstyle plugin configuration to better handle this. 
> {code}
> <dependency>
>   <groupId>com.puppycrawl.tools</groupId>
>   <artifactId>checkstyle</artifactId>
>   <version>7.3</version>
> </dependency>
> {code}
> {code}
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2049:
>   if (failedVolumes.size() > 0) {: 'if' have incorrect indentation 
> level 10, expected level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2050:
> LOG.warn("checkDiskErrorAsync callback got {} failed volumes: 
> {}",: 'if' child have incorrect indentation level 12, expected level should 
> be 10.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2052:
>   } else {: 'if rcurly' have incorrect indentation level 10, expected 
> level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2053:
> LOG.debug("checkDiskErrorAsync: no volume failures detected");: 
> 'else' child have incorrect indentation level 12, expected level should be 10.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2054:
>   }: 'else rcurly' have incorrect indentation level 10, expected 
> level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2055:
>   lastDiskErrorCheck = Time.monotonicNow();: 'block' child have 
> incorrect indentation level 10, expected level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2056:
>   handleVolumeFailures(failedVolumes);: 'block' child have incorrect 
> indentation level 10, expected level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2057:
> });: 'block rcurly' have incorrect indentation level 8, expected 
> level should be 6.
> {code}






[jira] [Commented] (HADOOP-14077) Improve the patch of HADOOP-13119

2017-02-14 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15867172#comment-15867172
 ] 

Eric Yang commented on HADOOP-14077:


Should we be concerned about the test regression?

https://builds.apache.org/job/PreCommit-HADOOP-Build/11615/testReport/org.apache.hadoop.net/TestDNS/testNullDnsServer/

There are problems with the style check; could you fix the spacing? Thanks.

> Improve the patch of HADOOP-13119
> -
>
> Key: HADOOP-14077
> URL: https://issues.apache.org/jira/browse/HADOOP-14077
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HADOOP-14077.001.patch, HADOOP-14077.002.patch
>
>
> For some links (such as "/jmx" and "/stack"), blocking the request in the 
> filter chain due to an impersonation issue is not friendly for users. For 
> example, suppose user "sam" is not allowed to be impersonated by user "knox", 
> and the link "/jmx" doesn't require any user authorization by default; it 
> only requires user "knox" to authenticate. In this case it's not right to 
> block the access in the SPNEGO filter. We intend to check the impersonation 
> permission only when the request's "getRemoteUser" method is used, so that 
> links like "/jmx" and "/stack" are not blocked by mistake.






[jira] [Commented] (HADOOP-13924) Update checkstyle plugin to handle indentation of JDK8 Lambdas

2017-02-14 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15867092#comment-15867092
 ] 

Akira Ajisaka commented on HADOOP-13924:


bq. One question: does the newer checkstyle version already contain a rule to 
cover the length exception for package and import?
Yes, it does. The rule was added by this commit: 
https://github.com/checkstyle/checkstyle/commit/9a39d19a31f06c8614d33fcc9c3f7654ec9cdd9f

> Update checkstyle plugin to handle indentation of JDK8 Lambdas
> --
>
> Key: HADOOP-13924
> URL: https://issues.apache.org/jira/browse/HADOOP-13924
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Akira Ajisaka
> Attachments: HADOOP-13924.01.patch
>
>
> Have seen this lately from Jenkins runs. Propose adding the following to the 
> maven checkstyle plugin configuration to better handle this. 
> {code}
> <dependency>
>   <groupId>com.puppycrawl.tools</groupId>
>   <artifactId>checkstyle</artifactId>
>   <version>7.3</version>
> </dependency>
> {code}
> {code}
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2049:
>   if (failedVolumes.size() > 0) {: 'if' have incorrect indentation 
> level 10, expected level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2050:
> LOG.warn("checkDiskErrorAsync callback got {} failed volumes: 
> {}",: 'if' child have incorrect indentation level 12, expected level should 
> be 10.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2052:
>   } else {: 'if rcurly' have incorrect indentation level 10, expected 
> level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2053:
> LOG.debug("checkDiskErrorAsync: no volume failures detected");: 
> 'else' child have incorrect indentation level 12, expected level should be 10.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2054:
>   }: 'else rcurly' have incorrect indentation level 10, expected 
> level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2055:
>   lastDiskErrorCheck = Time.monotonicNow();: 'block' child have 
> incorrect indentation level 10, expected level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2056:
>   handleVolumeFailures(failedVolumes);: 'block' child have incorrect 
> indentation level 10, expected level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2057:
> });: 'block rcurly' have incorrect indentation level 8, expected 
> level should be 6.
> {code}






[jira] [Commented] (HADOOP-13665) Erasure Coding codec should support fallback coder

2017-02-14 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15867080#comment-15867080
 ] 

Kai Sasaki commented on HADOOP-13665:
-

[~jojochuang] Very sorry for the late response, and thanks for pinging. 

{quote}
So adding a new codec like NATIVE_XOR_CODEC_NAME doesn't make sense.
{quote}
I'll remove the native codec configuration to make this transparent. But how 
can we then specify the native coder? If the Java implementation is always 
preferred over the native coder, there is no way to use the native coder, I 
think. Does that mean falling back between coders is unnecessary?
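For reference, the CryptoCodec-style convention mentioned in the description 
could look roughly like the sketch below: the property lists coder factories 
in preference order, and the first one that loads wins. The property key and 
factory names are illustrative assumptions, not the committed API.
{code}
// Illustrative sketch only; key and class names are assumptions.
private RawErasureCoderFactory createCoderFactory(Configuration conf) {
  String[] factories = conf.getTrimmedStrings(
      "io.erasurecode.codec.rs.rawcoders",          // assumed key
      "NativeRSRawErasureCoderFactory", "RSRawErasureCoderFactory");
  for (String name : factories) {
    try {
      // First factory that loads wins; the native one fails to load when
      // ISA-L support is absent, and we fall through to pure Java.
      return (RawErasureCoderFactory)
          conf.getClassByName(name).newInstance();
    } catch (Throwable t) {
      LOG.warn("Erasure coder factory {} unavailable, trying next", name, t);
    }
  }
  throw new IllegalArgumentException("No usable erasure coder factory found");
}
{code}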

> Erasure Coding codec should support fallback coder
> --
>
> Key: HADOOP-13665
> URL: https://issues.apache.org/jira/browse/HADOOP-13665
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Wei-Chiu Chuang
>Assignee: Kai Sasaki
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-13665.01.patch, HADOOP-13665.02.patch, 
> HADOOP-13665.03.patch, HADOOP-13665.04.patch
>
>
> The current EC codec supports a single coder only (by default pure Java 
> implementation). If the native coder is specified but is unavailable, it 
> should fallback to pure Java implementation.
> One possible solution is to follow the convention of existing Hadoop native 
> codec, such as transport encryption (see {{CryptoCodec.java}}). It supports 
> fallback by specifying two or multiple coders as the value of property, and 
> loads coders in order.






[jira] [Commented] (HADOOP-14076) Allow Configuration to be persisted given path to file

2017-02-14 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866893#comment-15866893
 ] 

Chris Douglas commented on HADOOP-14076:


I think Steve is suggesting that this belongs with the application code, not as 
part of Hadoop.

> Allow Configuration to be persisted given path to file
> --
>
> Key: HADOOP-14076
> URL: https://issues.apache.org/jira/browse/HADOOP-14076
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ted Yu
>
> Currently Configuration has the following methods for persistence:
> {code}
>   public void writeXml(OutputStream out) throws IOException {
>   public void writeXml(Writer out) throws IOException {
> {code}
> Adding API for persisting to file given path would be useful:
> {code}
>   public void writeXml(String path) throws IOException {
> {code}
> Background: I recently worked on exporting Configuration to a file using JNI.
> Without the proposed API, I resorted to some trick such as the following:
> http://www.kfu.com/~nsayer/Java/jni-filedesc.html
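If this stays in application code as suggested, a thin wrapper over the 
existing overloads is enough. A minimal sketch (the helper class name is made 
up):
{code}
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.apache.hadoop.conf.Configuration;

final class ConfigurationFiles {
  /** Persist conf as XML to the given path via the existing public API. */
  static void writeXml(Configuration conf, String path) throws IOException {
    try (OutputStream out = Files.newOutputStream(Paths.get(path))) {
      conf.writeXml(out);
    }
  }
}
{code}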






[jira] [Commented] (HADOOP-14041) CLI command to prune old metadata

2017-02-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866676#comment-15866676
 ] 

Hadoop QA commented on HADOOP-14041:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
57s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
44s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
35s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
31s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
41s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 12s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 45s{color} 
| {color:red} hadoop-aws in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
|   | hadoop.fs.s3a.s3guard.TestDynamoDBMetadataStore |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14041 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852640/HADOOP-14041-HADOOP-13345.005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 8a2f8aebbd12 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / 2c3f575 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11625/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit | 

[jira] [Resolved] (HADOOP-13486) Method invocation in log can be replaced by variable because the variable's toString method contain more info

2017-02-14 Thread Attila Bukor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Bukor resolved HADOOP-13486.
---
Resolution: Duplicate

> Method invocation in log can be replaced by variable because the variable's 
> toString method contain more info 
> --
>
> Key: HADOOP-13486
> URL: https://issues.apache.org/jira/browse/HADOOP-13486
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Nemo Chen
>Assignee: Attila Bukor
>  Labels: easyfix, easytest
>
> Similar to the fix in HADOOP-6419, in file:
> hadoop-rel-release-2.7.2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
> {code}
> Connection c = (Connection)key.attachment();
> ...
> LOG.info(Thread.currentThread().getName() + ": readAndProcess from client " + 
> c.getHostAddress() + " threw exception [" + e + "]", (e instanceof 
> WrappedRpcServerException) ? null : e);
> ...
> {code}
> in class Connection, the toString method contains both getHostAddress() and 
> remotePort
> {code}
> public String toString() {
>   return getHostAddress() + ":" + remotePort; 
> }
> {code}
> Therefore c.getHostAddress() should be replaced by c: it is simpler, and the 
> toString output carries more information.
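Concretely, the simplification proposed above would change the log call to 
something like this sketch:
{code}
LOG.info(Thread.currentThread().getName() + ": readAndProcess from client "
    + c + " threw exception [" + e + "]",
    (e instanceof WrappedRpcServerException) ? null : e);
{code}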






[jira] [Commented] (HADOOP-14062) ApplicationMasterProtocolPBClientImpl.allocate fails with EOFException when RPC privacy is enabled

2017-02-14 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866489#comment-15866489
 ] 

Jian He commented on HADOOP-14062:
--

[~Steven Rand], for the test case, instead of copying all of TestAMRMClient, 
could you add one test inside TestAMRMClient that does only what is minimally 
required? Also, please split the long comment into two lines, as it exceeds 
the usual 80-column limit.
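A sketch of the kind of minimal test being asked for; the setup helpers and 
fields are assumptions about the existing TestAMRMClient, not actual code:
{code}
// Hypothetical minimal test: enable RPC privacy, then exercise allocate()
// enough times to hit the intermittent EOFException path.
@Test(timeout = 60000)
public void testAllocateWithRpcPrivacy() throws Exception {
  conf.set("hadoop.rpc.protection", "privacy");
  createClusterAndStartApplication();  // assumed setup helper
  for (int i = 0; i < 10; i++) {
    amClient.allocate(0.1f);           // must not throw EOFException
  }
}
{code}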

> ApplicationMasterProtocolPBClientImpl.allocate fails with EOFException when 
> RPC privacy is enabled
> --
>
> Key: HADOOP-14062
> URL: https://issues.apache.org/jira/browse/HADOOP-14062
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Steven Rand
>Priority: Critical
> Attachments: HADOOP-14062.001.patch, HADOOP-14062.002.patch, 
> HADOOP-14062-branch-2.8.0.004.patch, yarn-rm-log.txt
>
>
> When privacy is enabled for RPC (hadoop.rpc.protection = privacy), 
> {{ApplicationMasterProtocolPBClientImpl.allocate}} sometimes (but not always) 
> fails with an EOFException. I've reproduced this with Spark 2.0.2 built 
> against latest branch-2.8 and with a simple distcp job on latest branch-2.8.
> Steps to reproduce using distcp:
> 1. Set hadoop.rpc.protection equal to privacy
> 2. Write data to HDFS. I did this with Spark as follows: 
> {code}
> sc.parallelize(1 to (5*1024*1024)).map(k => Seq(k, 
> org.apache.commons.lang.RandomStringUtils.random(1024, 
> "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWxyZ0123456789")).mkString("|")).toDF().repartition(100).write.parquet("hdfs:///tmp/testData")
> {code}
> 3. Attempt to distcp that data to another location in HDFS. For example:
> {code}
> hadoop distcp -Dmapreduce.framework.name=yarn hdfs:///tmp/testData 
> hdfs:///tmp/testDataCopy
> {code}
> I observed this error in the ApplicationMaster's syslog:
> {code}
> 2016-12-19 19:13:50,097 INFO [eventHandlingThread] 
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer 
> setup for JobId: job_1482189777425_0004, File: 
> hdfs://:8020/tmp/hadoop-yarn/staging//.staging/job_1482189777425_0004/job_1482189777425_0004_1.jhist
> 2016-12-19 19:13:51,004 INFO [RMCommunicator Allocator] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before 
> Scheduling: PendingReds:0 ScheduledMaps:4 ScheduledReds:0 AssignedMaps:0 
> AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0 
> HostLocal:0 RackLocal:0
> 2016-12-19 19:13:51,031 INFO [RMCommunicator Allocator] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() 
> for application_1482189777425_0004: ask=1 release= 0 newContainers=0 
> finishedContainers=0 resourcelimit= knownNMs=3
> 2016-12-19 19:13:52,043 INFO [RMCommunicator Allocator] 
> org.apache.hadoop.io.retry.RetryInvocationHandler: Exception while invoking 
> ApplicationMasterProtocolPBClientImpl.allocate over null. Retrying after 
> sleeping for 3ms.
> java.io.EOFException: End of File Exception between local host is: 
> "/"; destination host is: "":8030; 
> : java.io.EOFException; For more details see:  
> http://wiki.apache.org/hadoop/EOFException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>   at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:801)
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1486)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1428)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1338)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
>   at com.sun.proxy.$Proxy80.allocate(Unknown Source)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:77)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:398)
>   at 
> 

[jira] [Updated] (HADOOP-14041) CLI command to prune old metadata

2017-02-14 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14041:
---
Attachment: HADOOP-14041-HADOOP-13345.005.patch

> CLI command to prune old metadata
> -
>
> Key: HADOOP-14041
> URL: https://issues.apache.org/jira/browse/HADOOP-14041
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14041-HADOOP-13345.001.patch, 
> HADOOP-14041-HADOOP-13345.002.patch, HADOOP-14041-HADOOP-13345.003.patch, 
> HADOOP-14041-HADOOP-13345.004.patch, HADOOP-14041-HADOOP-13345.005.patch
>
>
> Add a CLI command that allows users to specify an age at which to prune 
> metadata that hasn't been modified for an extended period of time. Since the 
> primary use-case targeted at the moment is list consistency, it would make 
> sense (especially when authoritative=false) to prune metadata that is 
> expected to have become consistent a long time ago.
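Under the hood, such a command would presumably translate the age into a 
cutoff timestamp and ask the metadata store to drop older entries. A rough 
sketch (the prune() API is an assumption based on this discussion, since this 
very patch is what would introduce it):
{code}
// Sketch: prune metadata entries not modified in the last N days.
long cutoff = System.currentTimeMillis() - TimeUnit.DAYS.toMillis(days);
metadataStore.prune(cutoff);  // assumed MetadataStore method
{code}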






[jira] [Commented] (HADOOP-14041) CLI command to prune old metadata

2017-02-14 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866447#comment-15866447
 ] 

Sean Mackrory commented on HADOOP-14041:


I missed the javadoc issue locally. The hadoop-common failures are not related. 
The hadoop-aws failure is something I've seen a lot locally and have mentioned 
elsewhere, but it seems no one else was seeing it, and occasionally I don't see 
it either (no idea how; we use FileStatus all over S3Guard). Removing the 
assertion and not casting to S3AFileStatus in that function makes everything 
work nicely. Has no one else seen this failure?

I'll upload a new patch that addresses the javadoc oversight.
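The change described would look roughly like this sketch (accessor names are 
assumptions):
{code}
// Before: assert plus a hard cast to S3AFileStatus.
// After: tolerate plain FileStatus entries in the metadata store.
FileStatus status = pathMetadata.getFileStatus();   // assumed accessor
if (status instanceof S3AFileStatus) {
  S3AFileStatus s3aStatus = (S3AFileStatus) status;
  // S3A-specific handling (e.g. the empty-directory flag) only here.
}
{code}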

> CLI command to prune old metadata
> -
>
> Key: HADOOP-14041
> URL: https://issues.apache.org/jira/browse/HADOOP-14041
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14041-HADOOP-13345.001.patch, 
> HADOOP-14041-HADOOP-13345.002.patch, HADOOP-14041-HADOOP-13345.003.patch, 
> HADOOP-14041-HADOOP-13345.004.patch
>
>
> Add a CLI command that allows users to specify an age at which to prune 
> metadata that hasn't been modified for an extended period of time. Since the 
> primary use-case targeted at the moment is list consistency, it would make 
> sense (especially when authoritative=false) to prune metadata that is 
> expected to have become consistent a long time ago.






[jira] [Commented] (HADOOP-13665) Erasure Coding codec should support fallback coder

2017-02-14 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866426#comment-15866426
 ] 

Wei-Chiu Chuang commented on HADOOP-13665:
--

Ping. [~lewuathe], thanks again for the patch. This is a pretty important piece 
for making this feature more supportable. Would you be able to revive this work?

> Erasure Coding codec should support fallback coder
> --
>
> Key: HADOOP-13665
> URL: https://issues.apache.org/jira/browse/HADOOP-13665
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Wei-Chiu Chuang
>Assignee: Kai Sasaki
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-13665.01.patch, HADOOP-13665.02.patch, 
> HADOOP-13665.03.patch, HADOOP-13665.04.patch
>
>
> The current EC codec supports a single coder only (by default pure Java 
> implementation). If the native coder is specified but is unavailable, it 
> should fallback to pure Java implementation.
> One possible solution is to follow the convention of existing Hadoop native 
> codec, such as transport encryption (see {{CryptoCodec.java}}). It supports 
> fallback by specifying two or multiple coders as the value of property, and 
> loads coders in order.






[jira] [Commented] (HADOOP-14075) chown doesn't work with usernames containing '\' character

2017-02-14 Thread Attila Bukor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866387#comment-15866387
 ] 

Attila Bukor commented on HADOOP-14075:
---

Okay, I created a JIRA for unifying the handling of the allowed characters in 
usernames and groupnames, and converted this one into a sub-task of it.

> chown doesn't work with usernames containing '\' character
> --
>
> Key: HADOOP-14075
> URL: https://issues.apache.org/jira/browse/HADOOP-14075
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.6.0
>Reporter: Attila Bukor
>Assignee: Attila Bukor
> Attachments: HADOOP-14075.001.patch, HADOOP-14075.002.patch
>
>
> Usernames containing a backslash (e.g. down-level logon names) seem to work 
> fine with Hadoop, except for chown.
> {code}
> $ HADOOP_USER_NAME="FOOBAR\\testuser" hdfs dfs -mkdir /test/testfile1
> $ hdfs dfs -ls /test
> Found 1 items
> drwxrwxr-x   - FOOBAR\testuser supergroup  0 2017-02-10 12:49 
> /test/testfile1
> $ HADOOP_USER_NAME="testuser" hdfs dfs -mkdir /test/testfile2
> $ HADOOP_USER_NAME="hdfs" hdfs dfs -chown "FOOBAR\\testuser" /test/testfile2
> -chown: 'FOOBAR\testuser' does not match expected pattern for [owner][:group].
> Usage: hadoop fs [generic options] -chown [-R] [OWNER][:[GROUP]] PATH...
> $
> {code}
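The -chown error above comes from an owner/group whitelist pattern. Purely for 
illustration (this is not the actual Hadoop regex), a restrictive pattern along 
these lines would reject the backslash:
{code}
// Illustrative only; NOT the actual Hadoop pattern.
Pattern allowed = Pattern.compile("^[A-Za-z_][A-Za-z0-9._-]*$");
System.out.println(allowed.matcher("FOOBAR\\testuser").matches()); // false
System.out.println(allowed.matcher("testuser").matches());         // true
{code}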






[jira] [Created] (HADOOP-14082) Unify allowed characters in username and groupname

2017-02-14 Thread Attila Bukor (JIRA)
Attila Bukor created HADOOP-14082:
-

 Summary: Unify allowed characters in username and groupname
 Key: HADOOP-14082
 URL: https://issues.apache.org/jira/browse/HADOOP-14082
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Attila Bukor
Assignee: Attila Bukor


Various utilities have their own specific set of allowed characters (e.g. chown 
doesn't allow '\', although logging in with a username containing one and then 
creating a file will work properly with the correct owner). This should be 
unified.






[jira] [Updated] (HADOOP-14075) chown doesn't work with usernames containing '\' character

2017-02-14 Thread Attila Bukor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Bukor updated HADOOP-14075:
--
Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-14082

> chown doesn't work with usernames containing '\' character
> --
>
> Key: HADOOP-14075
> URL: https://issues.apache.org/jira/browse/HADOOP-14075
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.6.0
>Reporter: Attila Bukor
>Assignee: Attila Bukor
> Attachments: HADOOP-14075.001.patch, HADOOP-14075.002.patch
>
>
> Usernames containing a backslash (e.g. down-level logon names) seem to work 
> fine with Hadoop, except for chown.
> {code}
> $ HADOOP_USER_NAME="FOOBAR\\testuser" hdfs dfs -mkdir /test/testfile1
> $ hdfs dfs -ls /test
> Found 1 items
> drwxrwxr-x   - FOOBAR\testuser supergroup  0 2017-02-10 12:49 
> /test/testfile1
> $ HADOOP_USER_NAME="testuser" hdfs dfs -mkdir /test/testfile2
> $ HADOOP_USER_NAME="hdfs" hdfs dfs -chown "FOOBAR\\testuser" /test/testfile2
> -chown: 'FOOBAR\testuser' does not match expected pattern for [owner][:group].
> Usage: hadoop fs [generic options] -chown [-R] [OWNER][:[GROUP]] PATH...
> $
> {code}






[jira] [Commented] (HADOOP-14041) CLI command to prune old metadata

2017-02-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866357#comment-15866357
 ] 

Hadoop QA commented on HADOOP-14041:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
58s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
43s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
47s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
32s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
42s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-tools_hadoop-aws generated 1 new + 0 unchanged 
- 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 14s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 45s{color} 
| {color:red} hadoop-aws in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 89m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
|   | hadoop.fs.s3a.s3guard.TestDynamoDBMetadataStore |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14041 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852602/HADOOP-14041-HADOOP-13345.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux c658e2598be4 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / 2c3f575 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11624/artifact/patchprocess/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
| unit | 

[jira] [Commented] (HADOOP-11794) distcp can copy blocks in parallel

2017-02-14 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866331#comment-15866331
 ] 

Yongjun Zhang commented on HADOOP-11794:


Thanks for the positive feedback [~Lu Tao]!

Hi [~mithun] and [~atm], would you please help take a look at the latest 
patch to see if all your comments are addressed?

Thanks.


> distcp can copy blocks in parallel
> --
>
> Key: HADOOP-11794
> URL: https://issues.apache.org/jira/browse/HADOOP-11794
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 0.21.0
>Reporter: dhruba borthakur
>Assignee: Yongjun Zhang
> Attachments: HADOOP-11794.001.patch, HADOOP-11794.002.patch, 
> HADOOP-11794.003.patch, HADOOP-11794.004.patch, HADOOP-11794.005.patch, 
> HADOOP-11794.006.patch, HADOOP-11794.007.patch, MAPREDUCE-2257.patch
>
>
> The minimum unit of work for a distcp task is a file. We have files that are 
> greater than 1 TB with a block size of 1 GB. If we use distcp to copy these 
> files, the tasks either take a very, very long time or eventually fail. A 
> better way for distcp would be to copy all the source blocks in parallel, and 
> then stitch the blocks back into files at the destination via the HDFS concat 
> API (HDFS-222).
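The stitch step mentioned above maps to the HDFS concat API. A minimal sketch 
of that final step (paths are illustrative, and it assumes the destination 
filesystem is HDFS):
{code}
// After block ranges are copied as separate part files, merge them in place:
DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
dfs.concat(new Path("/dst/file.part0"),
    new Path[] { new Path("/dst/file.part1"), new Path("/dst/file.part2") });
{code}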






[jira] [Commented] (HADOOP-14028) S3A block output streams don't delete temporary files in multipart uploads

2017-02-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866263#comment-15866263
 ] 

Steve Loughran commented on HADOOP-14028:
-

Regarding marker/stream rollback, Thomas [~Thomas Demoor] has suggested calling 
{{getRequestClientOptions().setReadLimit()}} for the memory block streams, to 
say "rollback as far as you want". Thomas: is this what you are thinking? Set 
the limit to the block size to indicate that the stream can be rolled back over 
the entire block:

{code}
putObjectRequest.getRequestClientOptions().setReadLimit(size);
{code}
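One caveat worth noting, as an assumption drawn from the AWS SDK's own 
ResetException guidance rather than anything verified in this patch: the 
mark/reset buffer generally needs to be one byte larger than the data actually 
streamed, so the call might need to be:
{code}
// readLimit one byte past the part size, per AWS SDK guidance (assumption).
putObjectRequest.getRequestClientOptions().setReadLimit(blockSize + 1);
{code}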

> S3A block output streams don't delete temporary files in multipart uploads
> --
>
> Key: HADOOP-14028
> URL: https://issues.apache.org/jira/browse/HADOOP-14028
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: JDK 8 + ORC 1.3.0 + hadoop-aws 3.0.0-alpha2
>Reporter: Seth Fitzsimmons
>Assignee: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-14028-branch-2-001.patch, 
> HADOOP-14028-branch-2.8-002.patch, HADOOP-14028-branch-2.8-003.patch, 
> HADOOP-14028-branch-2.8-004.patch
>
>
> I have `fs.s3a.fast.upload` enabled with 3.0.0-alpha2 (it's exactly what I 
> was looking for after running into the same OOM problems) and don't see it 
> cleaning up the disk-cached blocks.
> I'm generating a ~50GB file on an instance with ~6GB free when the process 
> starts. My expectation is that local copies of the blocks would be deleted 
> after those parts finish uploading, but I'm seeing more than 15 blocks in 
> /tmp (and none of them have been deleted thus far).
> I see that DiskBlock deletes temporary files when closed, but is it closed 
> after individual blocks have finished uploading or when the entire file has 
> been fully written to the FS (full upload completed, including all parts)?
> As a temporary workaround to avoid running out of space, I'm listing files, 
> sorting by atime, and deleting anything older than the first 20: `ls -ut | 
> tail -n +21 | xargs rm`
> Steve Loughran says:
> > They should be deleted as soon as the upload completes; the close() call 
> > that the AWS httpclient makes on the input stream triggers the deletion. 
> > Though there aren't tests for it, as I recall.






[jira] [Resolved] (HADOOP-14071) S3a: Failed to reset the request input stream

2017-02-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-14071.
-
Resolution: Duplicate

> S3a: Failed to reset the request input stream
> -
>
> Key: HADOOP-14071
> URL: https://issues.apache.org/jira/browse/HADOOP-14071
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.0.0-alpha2
>Reporter: Seth Fitzsimmons
>
> When using the patch from HADOOP-14028, I fairly consistently get {{Failed to 
> reset the request input stream}} exceptions. They're more likely to occur the 
> larger the file that's being written (70GB in the extreme case, but it needs 
> to be one file).
> {code}
> 2017-02-10 04:21:43 WARN S3ABlockOutputStream:692 - Transfer failure of block 
> FileBlock{index=416, 
> destFile=/tmp/hadoop-root/s3a/s3ablock-0416-4228067786955989475.tmp, 
> state=Upload, dataSize=11591473, limit=104857600}
> 2017-02-10 04:21:43 WARN S3AInstrumentation:777 - Closing output stream 
> statistics while data is still marked as pending upload in 
> OutputStreamStatistics{blocksSubmitted=416, blocksInQueue=0, blocksActive=0, 
> blockUploadsCompleted=416, blockUploadsFailed=3, 
> bytesPendingUpload=209747761, bytesUploaded=43317747712, blocksAllocated=416, 
> blocksReleased=416, blocksActivelyAllocated=0, 
> exceptionsInMultipartFinalize=0, transferDuration=1389936 ms, 
> queueDuration=519 ms, averageQueueTime=1 ms, totalUploadDuration=1390455 ms, 
> effectiveBandwidth=3.1153649497466657E7 bytes/s}
> at org.apache.hadoop.fs.s3a.S3AUtils.extractException(S3AUtils.java:200)
> at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:128)
> Exception in thread "main" org.apache.hadoop.fs.s3a.AWSClientIOException: 
> Multi-part upload with id 
> 'Xx.ezqT5hWrY1W92GrcodCip88i8rkJiOcom2nuUAqHtb6aQX__26FYh5uYWKlRNX5vY5ktdmQWlOovsbR8CLmxUVmwFkISXxDRHeor8iH9nPhI3OkNbWJJBLrvB3xLUuLX0zvGZWo7bUrAKB6IGxA--'
>  to 2017/planet-170206.orc on 2017/planet-170206.orc: 
> com.amazonaws.ResetException: Failed to reset the request input stream; If 
> the request involves an input stream, the maximum stream buffer size can be 
> configured via request.getRequestClientOptions().setReadLimit(int): Failed to 
> reset the request input stream; If the request involves an input stream, the 
> maximum stream buffer size can be configured via 
> request.getRequestClientOptions().setReadLimit(int)
> at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.waitForAllPartUploads(S3ABlockOutputStream.java:539)
> at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload.access$100(S3ABlockOutputStream.java:456)
> at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:351)
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
> at org.apache.orc.impl.PhysicalFsWriter.close(PhysicalFsWriter.java:221)
> at org.apache.orc.impl.WriterImpl.close(WriterImpl.java:2827)
> at net.mojodna.osm2orc.standalone.OsmPbf2Orc.convert(OsmPbf2Orc.java:296)
> at net.mojodna.osm2orc.Osm2Orc.main(Osm2Orc.java:47)
> Caused by: com.amazonaws.ResetException: Failed to reset the request input 
> stream; If the request involves an input stream, the maximum stream buffer 
> size can be configured via request.getRequestClientOptions().setReadLimit(int)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.resetRequestInputStream(AmazonHttpClient.java:1221)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1042)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:948)
> at 
> org.apache.hadoop.fs.s3a.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:222)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:635)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:618)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:661)
> at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:573)
> at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:445)
> at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4041)
> at 
> com.amazonaws.services.s3.AmazonS3Client.doUploadPart(AmazonS3Client.java:3041)
> at 
> com.amazonaws.services.s3.AmazonS3Client.uploadPart(AmazonS3Client.java:3026)
> at org.apache.hadoop.fs.s3a.S3AFileSystem.uploadPart(S3AFileSystem.java:1114)
> at 
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream$MultiPartUpload$1.call(S3ABlockOutputStream.java:501)
> at 
> 
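For reference, the readLimit hint in the exception message maps onto the SDK's 
per-request options; a hedged sketch against the AWS SDK for Java 1.x, with all 
variable names illustrative:
{code}
import com.amazonaws.services.s3.model.UploadPartRequest;

// Raise the mark/reset limit past the part size so the SDK can rewind
// the stream and retry a failed part upload.
UploadPartRequest part = new UploadPartRequest()
    .withBucketName(bucket)
    .withKey(key)
    .withUploadId(uploadId)
    .withPartNumber(partNumber)
    .withInputStream(blockStream)
    .withPartSize(partSize);
part.getRequestClientOptions().setReadLimit((int) partSize + 1);
{code}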

[jira] [Commented] (HADOOP-14071) S3a: Failed to reset the request input stream

2017-02-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866256#comment-15866256
 ] 

Steve Loughran commented on HADOOP-14071:
-

Can we move discussion to HADOOP-14208, and I'll resolve this as a dupe? That keeps 
the discussion in one place.


[jira] [Commented] (HADOOP-13904) DynamoDBMetadataStore to handle DDB throttling failures through retry policy

2017-02-14 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866260#comment-15866260
 ] 

Sean Mackrory commented on HADOOP-13904:


{quote}This patch essentially keeps existing exception behavior but just slows 
down batch work resubmittal. So I think it is an improvement, but we may have 
to add a higher-level retry loop for the ProvisionedThroughputExceededException 
case. Why they don't just return all items as unprocessed is beyond me.{quote}

I'm of the opinion that we should be catching that one. Catching it seems required 
to handle the documented behavior correctly, even though we haven't seen that 
specific edge case. Everything else sounds good to me...

> DynamoDBMetadataStore to handle DDB throttling failures through retry policy
> 
>
> Key: HADOOP-13904
> URL: https://issues.apache.org/jira/browse/HADOOP-13904
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13904-HADOOP-13345.001.patch, 
> HADOOP-13904-HADOOP-13345.002.patch
>
>
> When you overload DDB, you get error messages warning of throttling, [as 
> documented by 
> AWS|http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html#Programming.Errors.MessagesAndCodes]
> Reduce load on DDB by doing a table lookup before the create, then, in table 
> create/delete operations and in get/put actions, recognise the error codes 
> and retry using an appropriate retry policy (exponential backoff + ultimate 
> failure) 
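A hedged sketch of that retry policy (exponential backoff ending in ultimate 
failure); the names are illustrative rather than S3Guard's actual code:
{code}
import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException;

class DdbRetrySketch {
  // Hypothetical stand-in for a DDB get/put/create or table create/delete call.
  void doDynamoOperation() { }

  // Retry with exponential backoff, rethrowing on the final attempt
  // ("ultimate failure").
  void withBackoff(int maxAttempts) throws InterruptedException {
    long delayMs = 100;
    for (int attempt = 1; ; attempt++) {
      try {
        doDynamoOperation();
        return;
      } catch (ProvisionedThroughputExceededException e) {
        if (attempt >= maxAttempts) {
          throw e;              // ultimate failure
        }
        Thread.sleep(delayMs);
        delayMs *= 2;           // exponential backoff
      }
    }
  }
}
{code}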






[jira] [Commented] (HADOOP-13924) Update checkstyle plugin to handle indentation of JDK8 Lambdas

2017-02-14 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866225#comment-15866225
 ] 

Xiaoyu Yao commented on HADOOP-13924:
-

Thanks [~ajisakaa] for working on this. The change looks good to me. 
One question: does the newer checkstyle version already contain a rule to cover 
the line-length exception for package and import statements?

> Update checkstyle plugin to handle indentation of JDK8 Lambdas
> --
>
> Key: HADOOP-13924
> URL: https://issues.apache.org/jira/browse/HADOOP-13924
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Akira Ajisaka
> Attachments: HADOOP-13924.01.patch
>
>
> Have seen this lately from Jenkins runs. I propose adding the following to the 
> Maven checkstyle plugin configuration to better handle this. 
> {code}
> <dependency>
>   <groupId>com.puppycrawl.tools</groupId>
>   <artifactId>checkstyle</artifactId>
>   <version>7.3</version>
> </dependency>
> {code}
> {code}
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2049:
>   if (failedVolumes.size() > 0) {: 'if' have incorrect indentation 
> level 10, expected level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2050:
> LOG.warn("checkDiskErrorAsync callback got {} failed volumes: 
> {}",: 'if' child have incorrect indentation level 12, expected level should 
> be 10.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2052:
>   } else {: 'if rcurly' have incorrect indentation level 10, expected 
> level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2053:
> LOG.debug("checkDiskErrorAsync: no volume failures detected");: 
> 'else' child have incorrect indentation level 12, expected level should be 10.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2054:
>   }: 'else rcurly' have incorrect indentation level 10, expected 
> level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2055:
>   lastDiskErrorCheck = Time.monotonicNow();: 'block' child have 
> incorrect indentation level 10, expected level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2056:
>   handleVolumeFailures(failedVolumes);: 'block' child have incorrect 
> indentation level 10, expected level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2057:
> });: 'block rcurly' have incorrect indentation level 8, expected 
> level should be 6.
> {code}






[jira] [Commented] (HADOOP-14058) Fix NativeS3FileSystemContractBaseTest#testDirWithDifferentMarkersWorks

2017-02-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866150#comment-15866150
 ] 

Hudson commented on HADOOP-14058:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11245 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11245/])
HADOOP-14058. Fix (aajisaka: rev b9f8491252f5a23a91a1d695d748556a0fd803ae)
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3native/NativeS3FileSystemContractBaseTest.java


> Fix NativeS3FileSystemContractBaseTest#testDirWithDifferentMarkersWorks
> ---
>
> Key: HADOOP-14058
> URL: https://issues.apache.org/jira/browse/HADOOP-14058
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>  Labels: s3
> Fix For: 2.9.0, 2.8.1, 3.0.0-alpha3
>
> Attachments: HADOOP-14058.001.patch, 
> HADOOP-14058-HADOOP-13345.001.patch
>
>
> In NativeS3FileSystemContractBaseTest#testDirWithDifferentMarkersWorks, 
> {code}
>   else if (i == 3) {
> // test both markers
> store.storeEmptyFile(base + "_$folder$");
> store.storeEmptyFile(base + "/dir_$folder$");
> store.storeEmptyFile(base + "/");
> store.storeEmptyFile(base + "/dir/");
>   }
> {code}
> the above test code is not executed. In the following code:
> {code}
> for (int i = 0; i < 3; i++) {
> {code}
> < should be <=.






[jira] [Updated] (HADOOP-14041) CLI command to prune old metadata

2017-02-14 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14041:
---
Status: Patch Available  (was: Open)

> CLI command to prune old metadata
> -
>
> Key: HADOOP-14041
> URL: https://issues.apache.org/jira/browse/HADOOP-14041
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14041-HADOOP-13345.001.patch, 
> HADOOP-14041-HADOOP-13345.002.patch, HADOOP-14041-HADOOP-13345.003.patch, 
> HADOOP-14041-HADOOP-13345.004.patch
>
>
> Add a CLI command that allows users to specify an age at which to prune 
> metadata that hasn't been modified for an extended period of time. Since the 
> primary use-case targeted at the moment is list consistency, it would make 
> sense (especially when authoritative=false) to prune metadata that is 
> expected to have become consistent a long time ago.
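A sketch of the pruning rule described above; the method shape, 'store', and 
listAllEntries() are hypothetical stand-ins, not the patch's actual API:
{code}
import java.io.IOException;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.s3a.s3guard.PathMetadata;

// Sketch only: delete metadata entries not modified within the given age.
void prune(long pruneAgeDays) throws IOException {
  long cutoff = System.currentTimeMillis() - TimeUnit.DAYS.toMillis(pruneAgeDays);
  for (PathMetadata entry : listAllEntries()) {   // hypothetical iterator
    FileStatus status = entry.getFileStatus();
    if (status.getModificationTime() < cutoff) {
      store.delete(status.getPath());
    }
  }
}
{code}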






[jira] [Commented] (HADOOP-14040) Use shaded aws-sdk uber-JAR

2017-02-14 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866124#comment-15866124
 ] 

Sean Mackrory commented on HADOOP-14040:


I'd like to +1 this - I think we should note the deprecation warnings and fix them 
soon, but not necessarily before upgrading. I tried it on the s3 branch last week 
in various US regions and had no problems, and I think that, all else being equal, 
it would be better to upgrade. Some of the DynamoDB APIs S3Guard is using were 
added relatively recently, and I think it's quite likely there are issues that 
have been fixed in more recent SDKs.

> Use shaded aws-sdk uber-JAR
> ---
>
> Key: HADOOP-14040
> URL: https://issues.apache.org/jira/browse/HADOOP-14040
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14040-branch-2-001.patch
>
>
> AWS SDK now has a (v. large) uberjar shading all dependencies
> This ensures that AWS dependency changes (e.g. JSON) don't cause problems 
> downstream in things like HBase, enabling backporting if desired.
> This will also let us address the org.json "don't be evil" licensing problem: 
> this SDK version doesn't have those files.






[jira] [Commented] (HADOOP-13826) S3A Deadlock in multipart copy due to thread pool limits.

2017-02-14 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866130#comment-15866130
 ] 

Sean Mackrory commented on HADOOP-13826:


Just pinging - it would be nice for this to get settled soon.

> S3A Deadlock in multipart copy due to thread pool limits.
> -
>
> Key: HADOOP-13826
> URL: https://issues.apache.org/jira/browse/HADOOP-13826
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HADOOP-13826.001.patch, HADOOP-13826.002.patch, 
> HADOOP-13826.003.patch, HADOOP-13826.004.patch
>
>
> In testing HIVE-15093 we have encountered deadlocks in the s3a connector. The 
> TransferManager javadocs 
> (http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManager.html)
>  explain how this is possible:
> {quote}It is not recommended to use a single threaded executor or a thread 
> pool with a bounded work queue as control tasks may submit subtasks that 
> can't complete until all sub tasks complete. Using an incorrectly configured 
> thread pool may cause a deadlock (I.E. the work queue is filled with control 
> tasks that can't finish until subtasks complete but subtasks can't execute 
> because the queue is filled).{quote}
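The javadoc's advice amounts to giving TransferManager an unbounded work queue; a 
minimal sketch against the AWS SDK 1.x (this illustrates the javadoc's 
recommendation, not the fix ultimately taken in this issue; 's3Client' is an 
already-constructed AmazonS3 client):
{code}
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import com.amazonaws.services.s3.transfer.TransferManager;

// A fixed pool with an unbounded queue: submitted subtasks are never
// rejected or stuck behind the control tasks that spawned them.
ThreadPoolExecutor pool = new ThreadPoolExecutor(
    10, 10, 0L, TimeUnit.MILLISECONDS,
    new LinkedBlockingQueue<Runnable>());
TransferManager tm = new TransferManager(s3Client, pool);
{code}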






[jira] [Commented] (HADOOP-13868) New defaults for S3A multi-part configuration

2017-02-14 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866128#comment-15866128
 ] 

Sean Mackrory commented on HADOOP-13868:


Just pinging on this - I'd like to resolve it soon.

> New defaults for S3A multi-part configuration
> -
>
> Key: HADOOP-13868
> URL: https://issues.apache.org/jira/browse/HADOOP-13868
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.7.0, 3.0.0-alpha1
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-13868.001.patch, HADOOP-13868.002.patch, 
> optimizing-multipart-s3a.sh
>
>
> I've been looking at a big performance regression when writing to S3 from 
> Spark that appears to have been introduced with HADOOP-12891.
> In the Amazon SDK, the default threshold for multi-part copies is 320x the 
> threshold for multi-part uploads (and the block size is 20x bigger), so I 
> don't think it's necessarily wise for us to have them be the same.
> I did some quick tests and it seems to me the sweet spot when multi-part 
> copies start being faster is around 512MB. It wasn't as significant, but 
> using 104857600 (Amazon's default) for the blocksize was also slightly better.
> I propose we do the following, although they're independent decisions:
> (1) Split the configuration. Ideally, I'd like to have 
> fs.s3a.multipart.copy.threshold and fs.s3a.multipart.upload.threshold (and 
> corresponding properties for the block size). But then there's the question 
> of what to do with the existing fs.s3a.multipart.* properties. Deprecation? 
> Leave it as a short-hand for configuring both (that's overridden by the more 
> specific properties?).
> (2) Consider increasing the default values. In my tests, 256 MB seemed to be 
> where multipart uploads came into their own, and 512 MB was where multipart 
> copies started outperforming the alternative. Would be interested to hear 
> what other people have seen.
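For concreteness, a sketch of what the proposed split could look like; the 
.upload/.copy property names are hypothetical (proposal (1) above), while 
fs.s3a.multipart.threshold is the existing shared property:
{code}
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// Existing shared property, which today governs both uploads and copies:
conf.setLong("fs.s3a.multipart.threshold", 512L * 1024 * 1024);
// Proposed split (property names hypothetical, per (1) above):
conf.setLong("fs.s3a.multipart.upload.threshold", 256L * 1024 * 1024);
conf.setLong("fs.s3a.multipart.copy.threshold", 512L * 1024 * 1024);
{code}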






[jira] [Updated] (HADOOP-14058) Fix NativeS3FileSystemContractBaseTest#testDirWithDifferentMarkersWorks

2017-02-14 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14058:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha3
   2.8.1
   2.9.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-2, and branch-2.8. Thanks [~linyiqun] for the 
contribution and thanks [~ste...@apache.org] for the comments.







[jira] [Updated] (HADOOP-14041) CLI command to prune old metadata

2017-02-14 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14041:
---
Attachment: HADOOP-14041-HADOOP-13345.004.patch

{quote}Minor nit: I would make sleep happen between batches (not before the 
first).{quote}

I went with this because the first batch immediately follows a large request, 
so it would be appropriate to pause for a breath anyway.

The 25ms delay is entirely arbitrary - if other folks have opinions on a better 
default, I'd love to have some reasoning for what we go with. If anything I 
suspect it should probably be longer.

Attaching a patch that addresses all other feedback from [~fabbri].







[jira] [Commented] (HADOOP-14068) Add integration test version of TestMetadataStore for DynamoDB

2017-02-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15866021#comment-15866021
 ] 

Hadoop QA commented on HADOOP-14068:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HADOOP-14068 does not apply to HADOOP-13345. Rebase required? 
Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-14068 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852596/HADOOP-14068-HADOOP-13345.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11623/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add integration test version of TestMetadataStore for DynamoDB
> --
>
> Key: HADOOP-14068
> URL: https://issues.apache.org/jira/browse/HADOOP-14068
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14068-HADOOP-13345.001.patch, 
> HADOOP-14068-HADOOP-13345.002.patch
>
>
> I tweaked TestDynamoDBMetadataStore to run against the actual Amazon DynamoDB 
> service (as opposed to the "local" edition). Several tests failed because of 
> minor variations in behavior. I think the differences that are clearly 
> possible are enough to warrant extending that class as an ITest (but 
> obviously keeping the existing test so 99% of the coverage remains even 
> when not configured for actual DynamoDB usage).






[jira] [Updated] (HADOOP-14068) Add integration test version of TestMetadataStore for DynamoDB

2017-02-14 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14068:
---
Attachment: HADOOP-14068-HADOOP-13345.002.patch

I think I've addressed everything I was unhappy with in the previous patch, mainly 
the hard-coded bucket, etc. The reason the 2 verifyTable* tests were previously 
failing for me turned out to be exactly that: if you've configured S3Guard to use 
a permanent table rather than naming it after the bucket you're using for tests, 
those tests make assumptions that are no longer true. So I think it's good that 
the tests inherit more from the configuration after this change.







[jira] [Updated] (HADOOP-14068) Add integration test version of TestMetadataStore for DynamoDB

2017-02-14 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14068:
---
Status: Patch Available  (was: Open)







[jira] [Commented] (HADOOP-14076) Allow Configuration to be persisted given path to file

2017-02-14 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865940#comment-15865940
 ] 

Ted Yu commented on HADOOP-14076:
-

Do you have a suggestion for where the helper method should reside?

> Allow Configuration to be persisted given path to file
> --
>
> Key: HADOOP-14076
> URL: https://issues.apache.org/jira/browse/HADOOP-14076
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ted Yu
>
> Currently Configuration has the following methods for persistence:
> {code}
>   public void writeXml(OutputStream out) throws IOException {
>   public void writeXml(Writer out) throws IOException {
> {code}
> Adding API for persisting to file given path would be useful:
> {code}
>   public void writeXml(String path) throws IOException {
> {code}
> Background: I recently worked on exporting Configuration to a file using JNI.
> Without the proposed API, I resorted to some trick such as the following:
> http://www.kfu.com/~nsayer/Java/jni-filedesc.html






[jira] [Commented] (HADOOP-14076) Allow Configuration to be persisted given path to file

2017-02-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865739#comment-15865739
 ] 

Steve Loughran commented on HADOOP-14076:
-

This is a very special case. Why not add a helper Java method like 
{{saveConfig(String path, Configuration conf)}} somewhere?
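A sketch of what such a helper could look like; its placement and exact signature 
are the open questions here:
{code}
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import org.apache.hadoop.conf.Configuration;

// Hypothetical helper: persist a Configuration to a local path using the
// existing writeXml(OutputStream) API.
public static void saveConfig(String path, Configuration conf) throws IOException {
  try (OutputStream out = new FileOutputStream(path)) {
    conf.writeXml(out);
  }
}
{code}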







[jira] [Commented] (HADOOP-14071) S3a: Failed to reset the request input stream

2017-02-14 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865735#comment-15865735
 ] 

Steve Loughran commented on HADOOP-14071:
-

OK. I'm redoing the HADOOP-14028 patch with the File ref being passed down to 
AWS. Due to some technical issues ("laptop is toast") my dev time is somewhat 
limited this week, so I'm not going to give a schedule for when that will be 
available. Hopefully in the next day or two; I just haven't got Hadoop building 
locally right now.
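Sketching the direction described above - the AWS SDK 1.x accepts a File for part 
uploads, which it can reopen on a retry instead of resetting a stream; names are 
illustrative:
{code}
import com.amazonaws.services.s3.model.UploadPartRequest;

// Hand the SDK the File itself so a retried part can be re-read from disk.
UploadPartRequest part = new UploadPartRequest()
    .withBucketName(bucket)
    .withKey(key)
    .withUploadId(uploadId)
    .withPartNumber(partNumber)
    .withFile(blockFile)
    .withPartSize(partSize);
{code}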


[jira] [Updated] (HADOOP-14028) S3A block output streams don't delete temporary files in multipart uploads

2017-02-14 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14028:

Status: Open  (was: Patch Available)

> S3A block output streams don't delete temporary files in multipart uploads
> --
>
> Key: HADOOP-14028
> URL: https://issues.apache.org/jira/browse/HADOOP-14028
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: JDK 8 + ORC 1.3.0 + hadoop-aws 3.0.0-alpha2
>Reporter: Seth Fitzsimmons
>Assignee: Steve Loughran
>Priority: Critical
> Attachments: HADOOP-14028-branch-2-001.patch, 
> HADOOP-14028-branch-2.8-002.patch, HADOOP-14028-branch-2.8-003.patch, 
> HADOOP-14028-branch-2.8-004.patch
>
>
> I have `fs.s3a.fast.upload` enabled with 3.0.0-alpha2 (it's exactly what I 
> was looking for after running into the same OOM problems) and don't see it 
> cleaning up the disk-cached blocks.
> I'm generating a ~50GB file on an instance with ~6GB free when the process 
> starts. My expectation is that local copies of the blocks would be deleted 
> after those parts finish uploading, but I'm seeing more than 15 blocks in 
> /tmp (and none of them have been deleted thus far).
> I see that DiskBlock deletes temporary files when closed, but is it closed 
> after individual blocks have finished uploading or when the entire file has 
> been fully written to the FS (full upload completed, including all parts)?
> As a temporary workaround to avoid running out of space, I'm listing files, 
> sorting by atime, and deleting anything older than the first 20: `ls -ut | 
> tail -n +21 | xargs rm`
> Steve Loughran says:
> > They should be deleted as soon as the upload completes; the close() call 
> > that the AWS httpclient makes on the input stream triggers the deletion. 
> > Though there aren't tests for it, as I recall.






[jira] [Commented] (HADOOP-13945) Azure: Add Kerberos and Delegation token support to WASB client.

2017-02-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865485#comment-15865485
 ] 

Hadoop QA commented on HADOOP-13945:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
13s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 34s{color} | {color:orange} root: The patch generated 6 new + 48 unchanged - 
0 fixed = 54 total (was 48) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-azure in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 14s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
41s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 89m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13945 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852523/HADOOP-13945.4.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux ac02e5dd1129 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 71c23c9 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11622/artifact/patchprocess/diff-checkstyle-root.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11622/artifact/patchprocess/patch-javadoc-hadoop-tools_hadoop-azure.txt
 |
| unit | 

[jira] [Commented] (HADOOP-14008) Upgrade to Apache Yetus 0.4.0

2017-02-14 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865469#comment-15865469
 ] 

Akira Ajisaka commented on HADOOP-14008:


Hi [~aw], how is this issue going? Yetus 0.4.0 has been released, so we can 
upgrade now.

> Upgrade to Apache Yetus 0.4.0
> -
>
> Key: HADOOP-14008
> URL: https://issues.apache.org/jira/browse/HADOOP-14008
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>
> 0.4.0 will be out RSN. 






[jira] [Created] (HADOOP-14081) S3A: Consider avoiding array copy in S3ABlockOutputStream (ByteArrayBlock)

2017-02-14 Thread Rajesh Balamohan (JIRA)
Rajesh Balamohan created HADOOP-14081:
-

 Summary: S3A: Consider avoiding array copy in S3ABlockOutputStream 
(ByteArrayBlock)
 Key: HADOOP-14081
 URL: https://issues.apache.org/jira/browse/HADOOP-14081
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Reporter: Rajesh Balamohan
Priority: Minor


In {{S3ADataBlocks::ByteArrayBlock}}, data is copied whenever {{startUpload}} 
is called. It might be possible to directly access the byte[] array from 
ByteArrayOutputStream. 

We might have to extend ByteArrayOutputStream and add a method like 
getInputStream() that returns a ByteArrayInputStream. This would avoid the 
expensive array copy during large uploads.
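A minimal sketch of that idea; the class is hypothetical and relies on 
ByteArrayOutputStream's protected buf/count fields:
{code}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;

// Wrap the internal buffer directly instead of calling toByteArray(),
// which copies the whole array.
class DirectByteArrayOutputStream extends ByteArrayOutputStream {
  ByteArrayInputStream getInputStream() {
    return new ByteArrayInputStream(buf, 0, count);
  }
}
{code}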






[jira] [Updated] (HADOOP-14081) S3A: Consider avoiding array copy in S3ABlockOutputStream (ByteArrayBlock)

2017-02-14 Thread Rajesh Balamohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HADOOP-14081:
--
Issue Type: Sub-task  (was: Improvement)
Parent: HADOOP-13204







[jira] [Commented] (HADOOP-14058) Fix NativeS3FileSystemContractBaseTest#testDirWithDifferentMarkersWorks

2017-02-14 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865455#comment-15865455
 ] 

Akira Ajisaka commented on HADOOP-14058:


+1, ran the subclasses of NativeS3FileSystemContract 
(ITestInMemoryNativeS3FileSystemContract and 
ITestJets3tNativeS3FileSystemContract) and they passed.







[jira] [Commented] (HADOOP-13924) Update checkstyle plugin to handle indentation of JDK8 Lambdas

2017-02-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865353#comment-15865353
 ] 

Hadoop QA commented on HADOOP-13924:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  8m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
24s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  8m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m  5s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
30s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13924 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852508/HADOOP-13924.01.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux f6fb5487d943 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 71c23c9 |
| Default Java | 1.8.0_121 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11621/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11621/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11621/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-build-tools . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11621/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.




[jira] [Updated] (HADOOP-13945) Azure: Add Kerberos and Delegation token support to WASB client.

2017-02-14 Thread Santhosh G Nayak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Santhosh G Nayak updated HADOOP-13945:
--
Attachment: HADOOP-13945.4.patch

Thanks [~liuml07], I have fixed the unit test 
{{hadoop.conf.TestCommonConfigurationFields}} failure in the new patch.

> Azure: Add Kerberos and Delegation token support to WASB client.
> 
>
> Key: HADOOP-13945
> URL: https://issues.apache.org/jira/browse/HADOOP-13945
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
> Attachments: HADOOP-13945.1.patch, HADOOP-13945.2.patch, 
> HADOOP-13945.3.patch, HADOOP-13945.4.patch
>
>
> Current implementation of Azure storage client for Hadoop ({{WASB}}) does not 
> support Kerberos Authentication and FileSystem authorization, which makes it 
> unusable in secure environments with a multi-user setup. 
> To make {{WASB}} client more suitable to run in Secure environments, there 
> are 2 initiatives under way for providing the authorization (HADOOP-13930) 
> and fine grained access control (HADOOP-13863) support.
> This JIRA is created to add Kerberos and delegation token support to {{WASB}} 
> client to fetch Azure Storage SAS keys (from Remote service as discussed in 
> HADOOP-13863), which provides fine grained timed access to containers and 
> blobs. 
> For delegation token management, the proposal is to use the same REST service 
> that is being used to generate the SAS keys.


