[jira] [Updated] (HADOOP-13228) Add delegation token to the connection in DelegationTokenAuthenticator

2016-05-31 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13228:
---
Attachment: HADOOP-13228.02.patch

Thanks again Andrew for the quick response.

Patch 2 fixes checkstyle, and adds a new dummy class to verify the DT on the 
request header. (I meant to say inject :) )

I noticed TestWebDelegationToken can be cleaned up in various ways:
- no test timeouts
- every test method has {{final Server jetty = createJettyServer();}} at the 
beginning and {{jetty.stop();}} at the end, which is exactly what @Before and 
@After methods are for...
- could use {{GenericTestUtils#assertExceptionContains}} for exception message 
assertions

But I want to keep this jira small and focused, so I will file a new jira for 
the test cleanup if you agree. The new test I added avoids the issues above; a 
rough sketch of the cleanup follows.
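For concreteness, a minimal sketch of the cleanup I have in mind (hedged: it 
assumes the test's existing {{createJettyServer()}} helper and Jetty 6's 
{{org.mortbay.jetty.Server}}; the failing helper below is hypothetical, not 
part of the patch):

{code}
import static org.junit.Assert.fail;

import java.io.IOException;

import org.apache.hadoop.test.GenericTestUtils;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.mortbay.jetty.Server;

public class TestWebDelegationTokenCleanupSketch {
  private Server jetty;

  @Before
  public void setUp() throws Exception {
    jetty = createJettyServer();   // shared setup instead of repeating it per test
  }

  @After
  public void tearDown() throws Exception {
    if (jetty != null) {
      jetty.stop();                // shared teardown
    }
  }

  @Test(timeout = 60000)           // the missing test timeout
  public void testFailureMessage() throws Exception {
    try {
      runRequestThatShouldFail();
      fail("expected an exception");
    } catch (IOException ex) {
      // replaces hand-rolled message-contains checks
      GenericTestUtils.assertExceptionContains("expected message", ex);
    }
  }

  private Server createJettyServer() {
    return new Server(0);          // the real helper also wires up contexts
  }

  private void runRequestThatShouldFail() throws IOException {
    throw new IOException("expected message"); // hypothetical stand-in
  }
}
{code}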

> Add delegation token to the connection in DelegationTokenAuthenticator
> --
>
> Key: HADOOP-13228
> URL: https://issues.apache.org/jira/browse/HADOOP-13228
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13228.01.patch, HADOOP-13228.02.patch
>
>
> Following [a comment from another 
> jira|https://issues.apache.org/jira/browse/HADOOP-13155?focusedCommentId=15308715&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15308715],
>  this was created to specifically handle the delegation token 
> renewal/cancellation bug in {{DelegationTokenAuthenticatedURL}} and 
> {{DelegationTokenAuthenticator}}.






[jira] [Created] (HADOOP-13229) Document missing properties in core-default.xml

2016-05-31 Thread Ray Chiang (JIRA)
Ray Chiang created HADOOP-13229:
---

 Summary: Document missing properties in core-default.xml
 Key: HADOOP-13229
 URL: https://issues.apache.org/jira/browse/HADOOP-13229
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ray Chiang
Assignee: Ray Chiang


There are 60 properties not currently defined in core-default.xml. These 
properties should either be:

A) documented in core-default.xml, OR
B) listed as an exception (with comments, e.g. for internal use) in the 
TestCommonConfigurationFields unit test; a sketch of option (B) is below.
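A hedged sketch of what option (B) could look like, based on how 
{{TestConfigurationFieldsBase}} subclasses register exceptions (member names 
are from memory and worth checking against trunk; the skipped key is 
hypothetical):

{code}
import java.util.HashSet;

import org.apache.hadoop.conf.TestConfigurationFieldsBase;
import org.apache.hadoop.fs.CommonConfigurationKeys;

public class TestCommonConfigurationFields extends TestConfigurationFieldsBase {
  @Override
  public void initializeMemberVariables() {
    xmlFilename = "core-default.xml";
    configurationClasses = new Class[] { CommonConfigurationKeys.class };

    // Option (B): properties intentionally absent from core-default.xml,
    // each listed with a comment explaining why, e.g. internal use only.
    xmlPropsToSkipCompare = new HashSet<String>();
    xmlPropsToSkipCompare.add("hadoop.internal.example.key"); // hypothetical key
  }
}
{code}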







[jira] [Commented] (HADOOP-13211) Swift driver should have a configurable retry feature when encountering a 5xx error

2016-05-31 Thread Chen He (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309263#comment-15309263
 ] 

Chen He commented on HADOOP-13211:
--

Thank you for the reply, [~ste...@apache.org]. 

IMHO, the hadoop-openstack driver is a bridge between HDFS and the OpenStack 
object store. MR and other native Hadoop frameworks should be able to utilize 
the Hadoop IPC retry. With the increasing popularity of HDFS, other computing 
frameworks like Spark and in-memory storage systems like Tachyon also use the 
hadoop-openstack driver, and I am not sure whether the Hadoop IPC retry is 
triggered when they do.

Those frameworks retry at the task level; however, retrying a whole task can 
be far more costly than retrying at the driver level.

For the data loss concern, it is a really good catch. If the server keeps 
failing and returning 5xx, the upload will eventually fail. The object store 
is not a file system and may not guarantee file-system-level integrity, but I 
can't come up with a scenario where a retry itself causes data loss. Could you 
provide a suggestion?
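To make the proposal concrete, a minimal sketch of the retry loop I have in 
mind (the exception type and the knobs are hypothetical, not the actual 
hadoop-openstack API):

{code}
import java.io.IOException;
import java.util.concurrent.Callable;

// Minimal sketch of a configurable retry for idempotent requests.
public final class RetryOn5xxSketch {

  /** Hypothetical exception carrying the HTTP status of a failed request. */
  public static class HttpStatusException extends IOException {
    final int status;
    public HttpStatusException(int status, String msg) {
      super(msg);
      this.status = status;
    }
  }

  public static <T> T withRetries(Callable<T> op, int maxRetries, long backoffMs)
      throws Exception {
    for (int attempt = 0; ; attempt++) {
      try {
        return op.call();
      } catch (HttpStatusException e) {
        boolean serverError = e.status >= 500 && e.status < 600;
        if (!serverError || attempt >= maxRetries) {
          throw e; // non-5xx, or retries exhausted: fail as the driver does today
        }
        // record the exception and retry with a simple linear backoff
        Thread.sleep(backoffMs * (attempt + 1));
      }
    }
  }
}
{code}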

> Swift driver should have a configurable retry feature when encountering a 5xx error
> -
>
> Key: HADOOP-13211
> URL: https://issues.apache.org/jira/browse/HADOOP-13211
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/swift
>Affects Versions: 2.7.2
>Reporter: Chen He
>Assignee: Chen He
>
> In the current code, if the Swift driver meets an HTTP 5xx, it will throw an 
> exception and stop. As a driver, it would be more robust if it could retry a 
> configurable number of times before reporting failure. There are two reasons 
> that I can imagine:
> 1. If the server is really busy, it is possible that the server will drop 
> some requests to avoid a DDoS attack.
> 2. If the server is accidentally unavailable for a short period of time and 
> comes back again, we may not need to fail the whole driver. Just recording 
> the exception and retrying may be more flexible.






[jira] [Commented] (HADOOP-13219) NameNode Rpc Reader Thread crash, and cluster hang.

2016-05-31 Thread ChenFolin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309202#comment-15309202
 ] 

ChenFolin commented on HADOOP-13219:


Hi,

The Listener thread waits in Reader#addConnection(Connection conn) at 
pendingConnections.put(conn).

All handler threads wait at callQueue.take().


> NameNode Rpc Reader Thread crash, and cluster hang.
> ---
>
> Key: HADOOP-13219
> URL: https://issues.apache.org/jira/browse/HADOOP-13219
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: rpc-server
>Affects Versions: 2.5.0, 2.6.0, 2.8.0, 2.7.2, 2.6.2, 2.6.4
>Reporter: ChenFolin
>  Labels: patch
> Attachments: HADOOP-13219-3.patch, HDFS-10472-2.patch, 
> HDFS-10472.patch
>
>
> My cluster hung yesterday because the RPC server Reader threads crashed, so 
> all RPC requests timed out, including datanode heartbeats, etc.
> We can see that the method doRunLoop only catches InterruptedException and 
> IOException:
> while (running) {
>   SelectionKey key = null;
>   try {
>     // consume as many connections as currently queued to avoid
>     // unbridled acceptance of connections that starves the select
>     int size = pendingConnections.size();
>     for (int i=size; i>0; i--) {
>       Connection conn = pendingConnections.take();
>       conn.channel.register(readSelector, SelectionKey.OP_READ, conn);
>     }
>     readSelector.select();
>     Iterator<SelectionKey> iter = readSelector.selectedKeys().iterator();
>     while (iter.hasNext()) {
>       key = iter.next();
>       iter.remove();
>       if (key.isValid()) {
>         if (key.isReadable()) {
>           doRead(key);
>         }
>       }
>       key = null;
>     }
>   } catch (InterruptedException e) {
>     if (running) {  // unexpected -- log it
>       LOG.info(Thread.currentThread().getName() + " unexpectedly interrupted", e);
>     }
>   } catch (IOException ex) {
>     LOG.error("Error in Reader", ex);
>   }
> }






[jira] [Commented] (HADOOP-13219) NameNode Rpc Reader Thread crash, and cluster hang.

2016-05-31 Thread ChenFolin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309193#comment-15309193
 ] 

ChenFolin commented on HADOOP-13219:


Hi,

Because no UncaughtExceptionHandler was set for the Reader thread, I do not 
know what kind of exception it was.

I see there was no Reader thread in the NameNode JVM stack dump, and the 
NameNode GC log shows:

[ParNew: 19752861K->88868K(22118400K), 0.1060980 secs] 
89096996K->69442056K(128614400K), 0.1062910 secs] [Times: user=3.72 sys=0.00, 
real=0.10 secs] 

So I do not think it is an OutOfMemoryError.

Thanks.







[jira] [Commented] (HADOOP-13228) Add delegation token to the connection in DelegationTokenAuthenticator

2016-05-31 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309189#comment-15309189
 ] 

Andrew Wang commented on HADOOP-13228:
--

LGTM overall, this change looks nice and tight. A few minor comments about the 
test:

* I agree with you that the static is a bit gross. A new dummy auth handler 
looks like a small amount of code, so I would prefer that.
* Some of the new imports are unused.
* Is "stabbed" the right word? I think you might mean "added" or "injected" or 
something.

+1 pending though, thanks for working on this Xiao!







[jira] [Commented] (HADOOP-13228) Add delegation token to the connection in DelegationTokenAuthenticator

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309086#comment-15309086
 ] 

Hadoop QA commented on HADOOP-13228:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 30s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 10s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 24s 
{color} | {color:red} hadoop-common-project/hadoop-common: The patch generated 
3 new + 112 unchanged - 0 fixed = 115 total (was 112) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 8s {color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 1s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807297/HADOOP-13228.01.patch 
|
| JIRA Issue | HADOOP-13228 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 22cffb98e4f9 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8ceb06e |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9633/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9633/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/9633/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9633/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9633/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |



[jira] [Commented] (HADOOP-13155) Implement TokenRenewer to renew and cancel delegation tokens in KMS

2016-05-31 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309022#comment-15309022
 ] 

Xiao Chen commented on HADOOP-13155:


Thanks [~aw] for the suggestion.
Given that {{DtUtilShell}} invokes the 
{{org.apache.hadoop.security.token.Token#renew}} interface via 
{{DtFileOperations#renewTokenFile}} (and cancels in the same way), the current 
test in {{TestKMS}} covers it by calling the same interface.
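(For reference, a minimal sketch of that shared code path; illustrative only:)

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.token.Token;

// Both DtUtilShell and the TestKMS test drive renewal/cancellation through
// the generic Token API, which finds the registered TokenRenewer for the
// token's kind via ServiceLoader.
public class TokenOpsSketch {
  static void renewAndCancel(Token<?> token, Configuration conf) throws Exception {
    long newExpiration = token.renew(conf); // dispatches to the kind's renewer
    System.out.println("renewed until " + newExpiration);
    token.cancel(conf);                     // same dispatch for cancellation
  }
}
{code}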

> Implement TokenRenewer to renew and cancel delegation tokens in KMS
> ---
>
> Key: HADOOP-13155
> URL: https://issues.apache.org/jira/browse/HADOOP-13155
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13155.01.patch, HADOOP-13155.02.patch, 
> HADOOP-13155.03.patch, HADOOP-13155.04.patch, HADOOP-13155.05.patch, 
> HADOOP-13155.06.patch, HADOOP-13155.pre.patch
>
>
> Service DelegationToken (DT) renewal is done in Yarn by 
> {{org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer}},
>  where it calls {{Token#renew}} and uses ServiceLoader to find the renewer 
> class 
> ([code|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java#L382]),
>  and invokes the renew method from it.
> We seem to be missing the token renewer class in KMS / HttpFSFileSystem, and 
> hence Yarn defaults to {{TrivialRenewer}} for DTs of such kinds, resulting in 
> the token not being renewed.
> As a side note, {{HttpFSFileSystem}} does have a {{renewDelegationToken}} 
> API, but I don't see it invoked in the Hadoop code base. KMS does not have 
> any renew hook.
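For context, below is a hedged skeleton of the kind of renewer the description 
says is missing; the token kind and the renew/cancel bodies are placeholders, 
not the real KMS client API:

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenRenewer;

public class KMSTokenRenewerSketch extends TokenRenewer {
  private static final Text TOKEN_KIND = new Text("kms-dt"); // assumed kind

  @Override
  public boolean handleKind(Text kind) {
    return TOKEN_KIND.equals(kind);
  }

  @Override
  public boolean isManaged(Token<?> token) throws IOException {
    return true; // tells Yarn this token can be renewed, unlike TrivialRenewer
  }

  @Override
  public long renew(Token<?> token, Configuration conf)
      throws IOException, InterruptedException {
    throw new UnsupportedOperationException(
        "sketch only: would call the KMS client's renewDelegationToken here");
  }

  @Override
  public void cancel(Token<?> token, Configuration conf)
      throws IOException, InterruptedException {
    throw new UnsupportedOperationException(
        "sketch only: would call the KMS client's cancelDelegationToken here");
  }
}
// ServiceLoader discovery additionally requires listing the class in
// META-INF/services/org.apache.hadoop.security.token.TokenRenewer.
{code}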






[jira] [Updated] (HADOOP-13137) TraceAdmin should support Kerberized cluster

2016-05-31 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-13137:
--
  Resolution: Fixed
   Fix Version/s: 2.8.0
Target Version/s: 2.8.0
  Status: Resolved  (was: Patch Available)

> TraceAdmin should support Kerberized cluster
> 
>
> Key: HADOOP-13137
> URL: https://issues.apache.org/jira/browse/HADOOP-13137
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tracing
>Affects Versions: 2.6.0, 3.0.0-alpha1
> Environment: CDH5.5.1 cluster with Kerberos
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: Kerberos
> Fix For: 2.8.0
>
> Attachments: HADOOP-13137.001.patch, HADOOP-13137.002.patch, 
> HADOOP-13137.003.patch, HADOOP-13137.004.patch, HADOOP-13137.005.patch
>
>
> When I ran the {{hadoop trace}} command against a Kerberized NameNode, it 
> failed with the following error:
> [hdfs@weichiu-encryption-1 root]$ hadoop trace -list -host 
> weichiu-encryption-1.vpc.cloudera.com:8022
> 16/05/12 00:02:13 WARN ipc.Client: 
> Exception encountered while connecting to the server : 
> java.lang.IllegalArgumentException: Failed to specify server's Kerberos 
> principal name
> 16/05/12 00:02:13 WARN security.UserGroupInformation: 
> PriviledgedActionException as:h...@vpc.cloudera.com (auth:KERBEROS) 
> cause:java.io.IOException: java.lang.IllegalArgumentException: Failed to 
> specify server's Kerberos principal name
> Exception in thread "main" java.io.IOException: Failed on local exception: 
> java.io.IOException: java.lang.IllegalArgumentException: Failed to specify 
> server's Kerberos principal name; Host Details : local host is: 
> "weichiu-encryption-1.vpc.cloudera.com/172.26.8.185"; destination host is: 
> "weichiu-encryption-1.vpc.cloudera.com":8022;
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1470)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1403)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
>   at com.sun.proxy.$Proxy11.listSpanReceivers(Unknown Source)
>   at 
> org.apache.hadoop.tracing.TraceAdminProtocolTranslatorPB.listSpanReceivers(TraceAdminProtocolTranslatorPB.java:58)
>   at 
> org.apache.hadoop.tracing.TraceAdmin.listSpanReceivers(TraceAdmin.java:68)
>   at org.apache.hadoop.tracing.TraceAdmin.run(TraceAdmin.java:177)
>   at org.apache.hadoop.tracing.TraceAdmin.main(TraceAdmin.java:195)
> Caused by: java.io.IOException: java.lang.IllegalArgumentException: Failed to 
> specify server's Kerberos principal name
>   at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:682)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
>   at 
> org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:645)
>   at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:733)
>   at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:370)
>   at org.apache.hadoop.ipc.Client.getConnection(Client.java:1519)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1442)
>   ... 7 more
> Caused by: java.lang.IllegalArgumentException: Failed to specify server's 
> Kerberos principal name
>   at 
> org.apache.hadoop.security.SaslRpcClient.getServerPrincipal(SaslRpcClient.java:322)
>   at 
> org.apache.hadoop.security.SaslRpcClient.createSaslClient(SaslRpcClient.java:231)
>   at 
> org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:159)
>   at 
> org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:396)
>   at 
> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:555)
>   at org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:370)
>   at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:725)
>   at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:721)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
>   at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:720)
>   ... 10 more
> It is failing because {{TraceAdmin}} does not set up the property 
> {{CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY}}
> Fixing it may require some restructuring, as the NameNode principal 
> 

[jira] [Comment Edited] (HADOOP-13228) Add delegation token to the connection in DelegationTokenAuthenticator

2016-05-31 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309009#comment-15309009
 ] 

Xiao Chen edited comment on HADOOP-13228 at 6/1/16 1:20 AM:


Fix:
As discussed with [~andrew.wang], given that the querystring is deprecated, we 
don't need to support it in newly added functionality. Hence, I simply put up 
a fix to always put the DT in the request header when conducting the 3 DT ops 
(get/renew/cancel). The fix is in {{DelegationTokenAuthenticator}} because 
that's where the connection is created.

Test:
- {{TestWebDelegationToken}} seems to me the best place to test this. 
(HADOOP-13155 will also test this from an end-to-end POV.)
- {{TestWebDelegationToken}} currently creates a bunch of fake classes for 
testing. To keep the change minimal, I added a new test for using a DT, and 
added the verification logic to the fake server classes.
- Existing tests pass because 1) when the authToken is valid, no DT logic is 
triggered, and 2) when there's no DT, they fall back to the underlying auth 
handler, which is again faked.
- I added a {{verifyHeader}} flag to control whether to check the request 
header, because once we have a valid auth token we don't care about the DT 
anymore (so all existing tests don't need to verify the header). If this is 
not acceptable, I think we can also create a new DTAuthHandler stub for 
verifying this.
- Added a log in DTAuthHandler, which I think is super helpful for debugging 
this.









[jira] [Updated] (HADOOP-13228) Add delegation token to the connection in DelegationTokenAuthenticator

2016-05-31 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13228:
---
Attachment: HADOOP-13228.01.patch

Fix:
As discussed with [~andrew.wang], given that the querystring is deprecated, we 
don't need to support it in newly added functionality. Hence, I simply put up 
a fix to always put the DT in the request header when conducting the 3 DT ops 
(get/renew/cancel). The fix is in {{DelegationTokenAuthenticator}} because 
that's where the connection is created.

Test:
- {{TestWebDelegationToken}} seems to me the best place to test this. 
(HADOOP-13155 will also test this from an end-to-end POV.)
- {{TestWebDelegationToken}} currently creates a bunch of fake classes for 
testing. To keep the change minimal, I added a new test for using a DT, and 
added the verification logic to the fake server classes.
- Existing tests pass because when there's no DT, they fall back to the 
underlying auth handler, which is again faked.
- I added a {{verifyHeader}} flag to control whether to check the request 
header, because once we have a valid auth token we don't care about the DT 
anymore (so all existing tests don't need to verify the header). If this is 
not acceptable, I think we can also create a new DTAuthHandler stub for 
verifying this.
- Added a log in DTAuthHandler, which I think is super helpful for debugging 
this.







[jira] [Updated] (HADOOP-13228) Add delegation token to the connection in DelegationTokenAuthenticator

2016-05-31 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13228:
---
Status: Patch Available  (was: Open)







[jira] [Commented] (HADOOP-13137) TraceAdmin should support Kerberized cluster

2016-05-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309006#comment-15309006
 ] 

Hudson commented on HADOOP-13137:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #9891 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9891/])
HADOOP-13137. TraceAdmin should support Kerberized cluster (Wei-Chiu (cmccabe: 
rev 8ceb06e2392763726210f96bb1c176e6a9fe7b53)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tracing/TestTraceAdmin.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/TraceAdmin.java
* hadoop-common-project/hadoop-common/src/site/markdown/Tracing.md
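For readers following the commit, the essence of the fix is making the 
server's Kerberos principal available to the client configuration before the 
RPC proxy is built. A hedged sketch, not the exact patch (the principal below 
is a hypothetical example):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeys;
import org.apache.hadoop.tracing.TraceAdmin;

public class TraceAdminKerberosSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // The property the bug report says was never set up:
    conf.set(CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY,
        "hdfs/namenode.example.com@EXAMPLE.COM");
    TraceAdmin admin = new TraceAdmin();
    admin.setConf(conf);
    System.exit(admin.run(
        new String[] { "-list", "-host", "namenode.example.com:8022" }));
  }
}
{code}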



[jira] [Commented] (HADOOP-13155) Implement TokenRenewer to renew and cancel delegation tokens in KMS

2016-05-31 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309003#comment-15309003
 ] 

Allen Wittenauer commented on HADOOP-13155:
---

We should make sure this works with hadoop dtutil.







[jira] [Commented] (HADOOP-13137) TraceAdmin should support Kerberized cluster

2016-05-31 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308999#comment-15308999
 ] 

Colin Patrick McCabe commented on HADOOP-13137:
---

bq. The test failures look unrelated.

I agree-- I ran them locally, and they passed.

Thanks, [~jojochuang] and [~steve_l].  +1.


[jira] [Commented] (HADOOP-13228) Add delegation token to the connection in DelegationTokenAuthenticator

2016-05-31 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15309000#comment-15309000
 ] 

Xiao Chen commented on HADOOP-13228:


In {{DelegationTokenAuthenticatedURL#openConnection}}, we put the delegation 
token (DT) on the request header or the query string (deprecated).
However, in {{getDelegationToken}} / {{renewDelegationToken}} / 
{{cancelDelegationToken}}, we don't have such logic. Without a delegation 
token, the server side's {{DelegationTokenAuthenticationHandler}} falls back 
to authenticating via the AuthenticationHandler.
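To illustrate the gap, a hedged sketch of the missing step (the header 
constant name is an assumption to be checked against 
{{DelegationTokenAuthenticator}}):

{code}
import java.net.HttpURLConnection;
import java.net.URL;

import org.apache.hadoop.security.token.Token;

// Sketch only: attach the DT to the connection's request header, as
// openConnection() already does, before issuing a get/renew/cancel DT op.
public class DtHeaderSketch {
  static final String DELEGATION_TOKEN_HEADER = "X-Hadoop-Delegation-Token"; // assumed

  static HttpURLConnection openWithToken(URL url, Token<?> dToken)
      throws Exception {
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    if (dToken != null) {
      // mirrors DelegationTokenAuthenticatedURL#openConnection's header logic
      conn.setRequestProperty(DELEGATION_TOKEN_HEADER, dToken.encodeToUrlString());
    }
    return conn;
  }
}
{code}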







[jira] [Commented] (HADOOP-12911) Upgrade Hadoop MiniKDC with Kerby

2016-05-31 Thread Jiajia Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308996#comment-15308996
 ] 

Jiajia Li commented on HADOOP-12911:


Thanks Steve for making it clear. I will try to rebuild these projects.

> Upgrade Hadoop MiniKDC with Kerby
> -
>
> Key: HADOOP-12911
> URL: https://issues.apache.org/jira/browse/HADOOP-12911
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Jiajia Li
>Assignee: Jiajia Li
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12911-v1.patch, HADOOP-12911-v2.patch, 
> HADOOP-12911-v3.patch, HADOOP-12911-v4.patch, HADOOP-12911-v5.patch, 
> HADOOP-12911-v6.patch, HADOOP-12911-v7.patch, HaDOOP-12911-v8.patch
>
>
> As discussed on the mailing list, we'd like to introduce Apache Kerby into 
> Hadoop. Initially it's good to start by upgrading Hadoop MiniKDC with Kerby's 
> offerings. Apache Kerby (https://github.com/apache/directory-kerby), an 
> Apache Directory sub-project, is a Java Kerberos binding. It provides a 
> SimpleKDC server that borrowed ideas from MiniKDC and implements all the 
> facilities existing in MiniKDC. Currently MiniKDC depends on the old Kerberos 
> implementation in the Directory Server project, but that implementation is no 
> longer maintained. The Directory community has a plan to replace the 
> implementation using Kerby. MiniKDC can use Kerby's SimpleKDC directly to 
> avoid depending on the full Directory project. Kerby also provides nice 
> identity backends, such as the lightweight memory-based one and the very 
> simple JSON one, for easy development and test environments.






[jira] [Created] (HADOOP-13228) Add delegation token to the connection in DelegationTokenAuthenticator

2016-05-31 Thread Xiao Chen (JIRA)
Xiao Chen created HADOOP-13228:
--

 Summary: Add delegation token to the connection in 
DelegationTokenAuthenticator
 Key: HADOOP-13228
 URL: https://issues.apache.org/jira/browse/HADOOP-13228
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Xiao Chen
Assignee: Xiao Chen


Following [a comment from another 
jira|https://issues.apache.org/jira/browse/HADOOP-13155?focusedCommentId=15308715&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15308715],
 this was created to specifically handle the delegation token 
renewal/cancellation bug in {{DelegationTokenAuthenticatedURL}} and 
{{DelegationTokenAuthenticator}}.






[jira] [Updated] (HADOOP-13227) AsyncCallHandler should use a event driven architecture to handle async calls

2016-05-31 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-13227:
-
Description: This JIRA is to address [Jing 
comments|https://issues.apache.org/jira/browse/HADOOP-13226?focusedCommentId=15308630&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15308630]
 in HADOOP-13226.  (was: This JIRA is to address Jing comments in HADOOP-13226.)

> AsyncCallHandler should use a event driven architecture to handle async calls
> -
>
> Key: HADOOP-13227
> URL: https://issues.apache.org/jira/browse/HADOOP-13227
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io, ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>
> This JIRA is to address [Jing 
> comments|https://issues.apache.org/jira/browse/HADOOP-13226?focusedCommentId=15308630&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15308630]
>  in HADOOP-13226.






[jira] [Updated] (HADOOP-13227) AsyncCallHandler should use a event driven architecture to handle async calls

2016-05-31 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-13227:
-
Description: This JIRA is to address [Jing's 
comments|https://issues.apache.org/jira/browse/HADOOP-13226?focusedCommentId=15308630&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15308630]
 in HADOOP-13226.  (was: This JIRA is to address [Jing 
comments|https://issues.apache.org/jira/browse/HADOOP-13226?focusedCommentId=15308630&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15308630]
 in HADOOP-13226.)

> AsyncCallHandler should use a event driven architecture to handle async calls
> -
>
> Key: HADOOP-13227
> URL: https://issues.apache.org/jira/browse/HADOOP-13227
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io, ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>
> This JIRA is to address [Jing's 
> comments|https://issues.apache.org/jira/browse/HADOOP-13226?focusedCommentId=15308630&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15308630]
>  in HADOOP-13226.






[jira] [Updated] (HADOOP-13227) AsyncCallHandler should use a event driven architecture to handle async calls

2016-05-31 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-13227:
-
Description: This JIRA is to address Jing comments in HADOOP-13226.

> AsyncCallHandler should use a event driven architecture to handle async calls
> -
>
> Key: HADOOP-13227
> URL: https://issues.apache.org/jira/browse/HADOOP-13227
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io, ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>
> This JIRA is to address Jing comments in HADOOP-13226.






[jira] [Issue Comment Deleted] (HADOOP-13227) AsyncCallHandler should use a event driven architecture to handle async calls

2016-05-31 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-13227:
-
Comment: was deleted

(was: This JIRA is to address [Jing 
comments|https://issues.apache.org/jira/browse/HADOOP-13226?focusedCommentId=15308630&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15308630]
 in HADOOP-13226.)

> AsyncCallHandler should use a event driven architecture to handle async calls
> -
>
> Key: HADOOP-13227
> URL: https://issues.apache.org/jira/browse/HADOOP-13227
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io, ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>







[jira] [Updated] (HADOOP-13227) AsyncCallHandler should use a event driven architecture to handle async calls

2016-05-31 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-13227:
-

This JIRA is to address [Jing 
comments|https://issues.apache.org/jira/browse/HADOOP-13226?focusedCommentId=15308630&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15308630]
 in HADOOP-13226.

> AsyncCallHandler should use a event driven architecture to handle async calls
> -
>
> Key: HADOOP-13227
> URL: https://issues.apache.org/jira/browse/HADOOP-13227
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io, ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>







[jira] [Created] (HADOOP-13227) AsyncCallHandler should use a event driven architecture to handle async calls

2016-05-31 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-13227:


 Summary: AsyncCallHandler should use a event driven architecture 
to handle async calls
 Key: HADOOP-13227
 URL: https://issues.apache.org/jira/browse/HADOOP-13227
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io, ipc
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze









[jira] [Updated] (HADOOP-13226) Support async call retry and failover

2016-05-31 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-13226:
-
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Thanks Jing for reviewing the patches and the great ideas!

I have committed this.

> Support async call retry and failover
> -
>
> Key: HADOOP-13226
> URL: https://issues.apache.org/jira/browse/HADOOP-13226
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: io, ipc
>Reporter: Xiaobing Zhou
>Assignee: Tsz Wo Nicholas Sze
> Fix For: 2.8.0
>
> Attachments: h10433_20160524.patch, h10433_20160525.patch, 
> h10433_20160525b.patch, h10433_20160527.patch, h10433_20160528.patch, 
> h10433_20160528c.patch
>
>
> In the current Async DFS implementation, file system calls are invoked and 
> return a Future immediately to clients. Clients call Future#get to retrieve 
> the final results. Future#get internally invokes a chain of callbacks 
> residing in ClientNamenodeProtocolTranslatorPB, ProtobufRpcEngine and 
> ipc.Client. The callback path bypasses the original retry layer/logic 
> designed for synchronous DFS. This proposes refactoring to make retry also 
> work for Async DFS.
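A hedged sketch of the calling pattern described above; the interface below is 
a stand-in for AsyncDistributedFileSystem, whose exact API should be checked 
in trunk:

{code}
import java.io.IOException;
import java.util.concurrent.Future;

import org.apache.hadoop.fs.Path;

public class AsyncCallSketch {
  interface AsyncFs { // stand-in for AsyncDistributedFileSystem
    Future<Void> rename(Path src, Path dst) throws IOException;
  }

  static void example(AsyncFs fs) throws Exception {
    Future<Void> f = fs.rename(new Path("/a"), new Path("/b")); // returns at once
    // ... other work can overlap with the in-flight RPC ...
    f.get(); // resolves via the callback chain; this JIRA makes retry/failover
             // apply on that path as well
  }
}
{code}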






[jira] [Updated] (HADOOP-13146) Refactor RetryInvocationHandler

2016-05-31 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-13146:
-
Fix Version/s: (was: 2.9.0)
   2.8.0

Merged to 2.8.

> Refactor RetryInvocationHandler
> ---
>
> Key: HADOOP-13146
> URL: https://issues.apache.org/jira/browse/HADOOP-13146
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: c13146_20160513.patch, c13146_20160513b.patch, 
> c13146_20160514.patch, c13146_20160514b.patch, c13146_20160516.patch
>
>
> - The exception handling is quite long. It is better to refactor it into a 
> separate method.
> - The failover logic and synchronization can be moved to a new inner class.






[jira] [Commented] (HADOOP-13219) NameNode Rpc Reader Thread crash, and cluster hang.

2016-05-31 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308929#comment-15308929
 ] 

Chris Nauroth commented on HADOOP-13219:


Do you happen to know what kind of exception it was that caused the threads to 
crash?

Catching {{Throwable}} can be problematic.  Let's assume it was an 
{{OutOfMemoryError}}.  If there was a failure to allocate memory, and we catch 
the error and proceed, how do we understand what state the process is in 
currently?  What if we made partial updates to in-memory state?  Since 
{{OutOfMemoryError}} can be thrown by nearly anything, we effectively have no 
idea what state we're in at this point.  For the NameNode, the inode tree might 
be in an unusual state, and not reflected back to persistent store in fsimage 
or edit log transactions.

There is already a catch of {{OutOfMemoryError}} at another layer in the RPC 
client.  It's a bit of code I disagree with.  Some of us choose to run the 
NameNode JVM with {{-XX:OnOutOfMemoryError}} set to a command to 
self-terminate.  That's a choice that favors correctness over robustness.
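
For illustration only (this is not the actual Reader code): a loop written the 
way argued for above would keep recovering from I/O errors but let fatal errors 
propagate rather than limp on with unknowable state. Compare with the snippet 
quoted below:

{code}
// Sketch: log-and-continue for IOException, but never swallow
// RuntimeException/Error -- in-memory state may be partially updated,
// so proceeding after, say, an OutOfMemoryError is unsafe.
private void doRunLoop() {
  while (running) {
    try {
      readSelector.select();
      // ... dispatch readable keys ...
    } catch (IOException ioe) {
      LOG.error("Error in Reader", ioe);  // recoverable: keep serving
    } catch (RuntimeException | Error fatal) {
      LOG.fatal("Fatal error in Reader, terminating", fatal);
      throw fatal;  // let the thread die loudly instead of limping on
    }
  }
}
{code}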

> NameNode Rpc Reader Thread crash, and cluster hang.
> ---
>
> Key: HADOOP-13219
> URL: https://issues.apache.org/jira/browse/HADOOP-13219
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: rpc-server
>Affects Versions: 2.5.0, 2.6.0, 2.8.0, 2.7.2, 2.6.2, 2.6.4
>Reporter: ChenFolin
>  Labels: patch
> Attachments: HADOOP-13219-3.patch, HDFS-10472-2.patch, 
> HDFS-10472.patch
>
>
> My cluster hung yesterday because the RPC server Reader threads crashed, so 
> all RPC requests timed out, including datanode heartbeats, etc.
> We can see that the method doRunLoop only catches InterruptedException and 
> IOException:
> while (running) {
>   SelectionKey key = null;
>   try {
>     // consume as many connections as currently queued to avoid
>     // unbridled acceptance of connections that starves the select
>     int size = pendingConnections.size();
>     for (int i = size; i > 0; i--) {
>       Connection conn = pendingConnections.take();
>       conn.channel.register(readSelector, SelectionKey.OP_READ, conn);
>     }
>     readSelector.select();
>     Iterator<SelectionKey> iter = readSelector.selectedKeys().iterator();
>     while (iter.hasNext()) {
>       key = iter.next();
>       iter.remove();
>       if (key.isValid()) {
>         if (key.isReadable()) {
>           doRead(key);
>         }
>       }
>       key = null;
>     }
>   } catch (InterruptedException e) {
>     if (running) {  // unexpected -- log it
>       LOG.info(Thread.currentThread().getName() +
>           " unexpectedly interrupted", e);
>     }
>   } catch (IOException ex) {
>     LOG.error("Error in Reader", ex);
>   }
> }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13155) Implement TokenRenewer to renew and cancel delegation tokens in KMS

2016-05-31 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308927#comment-15308927
 ] 

Xiao Chen commented on HADOOP-13155:


Thanks Andrew for the summary. I'm working on the test for DTAuthenticator; 
it's not straightforward. I'll create a new jira, link it here, and ping you 
when ready.

> Implement TokenRenewer to renew and cancel delegation tokens in KMS
> ---
>
> Key: HADOOP-13155
> URL: https://issues.apache.org/jira/browse/HADOOP-13155
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13155.01.patch, HADOOP-13155.02.patch, 
> HADOOP-13155.03.patch, HADOOP-13155.04.patch, HADOOP-13155.05.patch, 
> HADOOP-13155.06.patch, HADOOP-13155.pre.patch
>
>
> Service DelegationToken (DT) renewal is done in Yarn by 
> {{org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer}},
>  which calls {{Token#renew}}, uses ServiceLoader to find the renewer class 
> ([code|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java#L382]),
>  and invokes the renew method on it.
> We seem to be missing the token renewer class for KMS / HttpFSFileSystem, and 
> hence Yarn defaults to {{TrivialRenewer}} for DTs of these kinds, resulting in 
> the token not being renewed.
> As a side note, {{HttpFSFileSystem}} does have a {{renewDelegationToken}} 
> API, but I don't see it invoked in the Hadoop code base. KMS does not have any 
> renew hook.
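
For readers following along, a skeleton of what such a renewer looks like (the 
class name and token kind here are assumptions; the real implementation is in 
the attached patches). The subclass is discovered by ServiceLoader through a 
{{META-INF/services/org.apache.hadoop.security.token.TokenRenewer}} entry:

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenRenewer;

// Skeleton only; the renew/cancel bodies are elided on purpose.
public class KMSDelegationTokenRenewer extends TokenRenewer {
  @Override
  public boolean handleKind(Text kind) {
    return new Text("kms-dt").equals(kind);  // assumed token kind
  }

  @Override
  public boolean isManaged(Token<?> token) throws IOException {
    return true;  // tell Yarn this renewer, not TrivialRenewer, manages it
  }

  @Override
  public long renew(Token<?> token, Configuration conf)
      throws IOException, InterruptedException {
    throw new UnsupportedOperationException("sketch: call the KMS here");
  }

  @Override
  public void cancel(Token<?> token, Configuration conf)
      throws IOException, InterruptedException {
    throw new UnsupportedOperationException("sketch: call the KMS here");
  }
}
{code}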



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13226) Support async call retry and failover

2016-05-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308905#comment-15308905
 ] 

Hudson commented on HADOOP-13226:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #9889 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9889/])
HADOOP-13226 Support async call retry and failover. (szetszwo: rev 
83f2f78c118a7e52aba5104bd97b0acedc96be7b)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestAsyncIPC.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/AsyncDistributedFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAsyncHDFSWithHA.java
* hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/AsyncCallHandler.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/CallReturn.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryPolicies.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestAsyncDFS.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryInvocationHandler.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/concurrent/AsyncGet.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/HATestUtil.java


> Support async call retry and failover
> -
>
> Key: HADOOP-13226
> URL: https://issues.apache.org/jira/browse/HADOOP-13226
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: io, ipc
>Reporter: Xiaobing Zhou
>Assignee: Tsz Wo Nicholas Sze
> Attachments: h10433_20160524.patch, h10433_20160525.patch, 
> h10433_20160525b.patch, h10433_20160527.patch, h10433_20160528.patch, 
> h10433_20160528c.patch
>
>
> In the current Async DFS implementation, file system calls are invoked and 
> immediately return a Future to clients. Clients call Future#get to retrieve 
> the final results. Future#get internally invokes a chain of callbacks residing 
> in ClientNamenodeProtocolTranslatorPB, ProtobufRpcEngine and ipc.Client. This 
> callback path bypasses the original retry layer/logic designed for synchronous 
> DFS. This issue proposes refactoring so that retry also works for Async DFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13226) Support async call retry and failover

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308876#comment-15308876
 ] 

Hadoop QA commented on HADOOP-13226:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s {color} 
| {color:red} HADOOP-13226 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806846/h10433_20160528c.patch
 |
| JIRA Issue | HADOOP-13226 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9632/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Support async call retry and failover
> -
>
> Key: HADOOP-13226
> URL: https://issues.apache.org/jira/browse/HADOOP-13226
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: io, ipc
>Reporter: Xiaobing Zhou
>Assignee: Tsz Wo Nicholas Sze
> Attachments: h10433_20160524.patch, h10433_20160525.patch, 
> h10433_20160525b.patch, h10433_20160527.patch, h10433_20160528.patch, 
> h10433_20160528c.patch
>
>
> In the current Async DFS implementation, file system calls are invoked and 
> immediately return a Future to clients. Clients call Future#get to retrieve 
> the final results. Future#get internally invokes a chain of callbacks residing 
> in ClientNamenodeProtocolTranslatorPB, ProtobufRpcEngine and ipc.Client. This 
> callback path bypasses the original retry layer/logic designed for synchronous 
> DFS. This issue proposes refactoring so that retry also works for Async DFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13226) Support async call retry and failover

2016-05-31 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-13226:
-
Summary: Support async call retry and failover  (was: Make retry also works 
well for Async DFS)

> Support async call retry and failover
> -
>
> Key: HADOOP-13226
> URL: https://issues.apache.org/jira/browse/HADOOP-13226
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: io, ipc
>Reporter: Xiaobing Zhou
>Assignee: Tsz Wo Nicholas Sze
> Attachments: h10433_20160524.patch, h10433_20160525.patch, 
> h10433_20160525b.patch, h10433_20160527.patch, h10433_20160528.patch, 
> h10433_20160528c.patch
>
>
> In the current Async DFS implementation, file system calls are invoked and 
> immediately return a Future to clients. Clients call Future#get to retrieve 
> the final results. Future#get internally invokes a chain of callbacks residing 
> in ClientNamenodeProtocolTranslatorPB, ProtobufRpcEngine and ipc.Client. This 
> callback path bypasses the original retry layer/logic designed for synchronous 
> DFS. This issue proposes refactoring so that retry also works for Async DFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Moved] (HADOOP-13226) Make retry also works well for Async DFS

2016-05-31 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze moved HDFS-10433 to HADOOP-13226:
-

Component/s: (was: hdfs)
 ipc
 io
Key: HADOOP-13226  (was: HDFS-10433)
Project: Hadoop Common  (was: Hadoop HDFS)

> Make retry also works well for Async DFS
> 
>
> Key: HADOOP-13226
> URL: https://issues.apache.org/jira/browse/HADOOP-13226
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: io, ipc
>Reporter: Xiaobing Zhou
>Assignee: Tsz Wo Nicholas Sze
> Attachments: h10433_20160524.patch, h10433_20160525.patch, 
> h10433_20160525b.patch, h10433_20160527.patch, h10433_20160528.patch, 
> h10433_20160528c.patch
>
>
> In the current Async DFS implementation, file system calls are invoked and 
> immediately return a Future to clients. Clients call Future#get to retrieve 
> the final results. Future#get internally invokes a chain of callbacks residing 
> in ClientNamenodeProtocolTranslatorPB, ProtobufRpcEngine and ipc.Client. This 
> callback path bypasses the original retry layer/logic designed for synchronous 
> DFS. This issue proposes refactoring so that retry also works for Async DFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12421) Add jitter to RetryInvocationHandler

2016-05-31 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HADOOP-12421:
---
Resolution: Not A Problem
Status: Resolved  (was: Patch Available)

> Add jitter to RetryInvocationHandler
> 
>
> Key: HADOOP-12421
> URL: https://issues.apache.org/jira/browse/HADOOP-12421
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12421-v1.patch, HADOOP-12421-v2.patch, 
> HADOOP-12421-v3.patch, HADOOP-12421-v4.patch, HADOOP-12421-v5.patch
>
>
> Calls to NN can become synchronized across a cluster during NN failover. This 
> leads to a spike in requests until things recover, making an already tricky 
> time worse.
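
For anyone landing here later, the idea was plain jittered backoff so that 
clients retrying after a failover do not stampede the NN in lockstep. An 
illustrative sketch, not the attached patch:

{code}
import java.util.concurrent.ThreadLocalRandom;

// Full-jitter backoff: each retry sleeps a uniformly random time in
// [0, min(cap, base * 2^attempt)], desynchronizing the retrying clients.
// Assumes attempt >= 0.
public final class JitteredBackoff {
  public static long backoffMillis(int attempt, long baseMillis, long capMillis) {
    long exp = baseMillis << Math.min(attempt, 30);
    if (exp < 0 || exp > capMillis) {
      exp = capMillis;  // clamp, including the shift-overflow case
    }
    return ThreadLocalRandom.current().nextLong(exp + 1);
  }
}
{code}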



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations

2016-05-31 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308798#comment-15308798
 ] 

Chris Nauroth commented on HADOOP-13171:


Steve, thank you for patch 012.  Tests are passing for me now against S3 
buckets in US-west-2.  I think this is nearly complete except for the following:
# Pre-commit against 011 reported some new Checkstyle warnings.  If the 
pre-commit run against 012 reports the same thing, then please consider 
cleaning up whatever portion of that makes sense.  The patch is already a net 
win for Checkstyle because it fixes a lot of pre-existing warnings, but some of 
those new ones look quick and easy to fix too.
# It looks like we'll need a separate patch to apply to trunk.


> Add StorageStatistics to S3A; instrument some more operations
> -
>
> Key: HADOOP-13171
> URL: https://issues.apache.org/jira/browse/HADOOP-13171
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13171-branch-2-001.patch, 
> HADOOP-13171-branch-2-002.patch, HADOOP-13171-branch-2-003.patch, 
> HADOOP-13171-branch-2-004.patch, HADOOP-13171-branch-2-005.patch, 
> HADOOP-13171-branch-2-006.patch, HADOOP-13171-branch-2-007.patch, 
> HADOOP-13171-branch-2-008.patch, HADOOP-13171-branch-2-009.patch, 
> HADOOP-13171-branch-2-010.patch, HADOOP-13171-branch-2-011.patch, 
> HADOOP-13171-branch-2-012.patch
>
>
> Add {{StorageStatistics}} support to S3A, collecting the same metrics as the 
> instrumentation, but sharing across all instances.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13155) Implement TokenRenewer to renew and cancel delegation tokens in KMS

2016-05-31 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308715#comment-15308715
 ] 

Andrew Wang commented on HADOOP-13155:
--

I talked with Xiao about this patch offline; here are our notes:

* Setting the DT in the query string is deprecated and only preserved for 
testing, so we don't need to support that in this new renewer functionality.
* We should consider splitting that bug fix out into a separate JIRA; I promise 
to quickly review and commit it, since this one depends on it. A unit test 
would be good too.
* We need to keep using the old "dfs..." config key for compatibility, so we 
can't just swap to the "hadoop..." config key. I haven't seen a situation where 
we'd want to configure these differently, since people normally have only a 
single KMS instance for the entire cluster. But compat is compat, so we think a 
setter function in DFSClientUtil will work (a sketch of the fallback is below). 
One day we will probably want per-DFS KMS configuration for cross-cluster 
distcp, in which case we'd also need this.
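
A minimal sketch of the compatibility fallback (the key names are my 
understanding; treat them as illustrative, not authoritative):

{code}
import org.apache.hadoop.conf.Configuration;

// Honor the legacy "dfs..." key; fall back to the newer "hadoop..." key
// only when the legacy one is unset.
public final class KeyProviderUriCompat {
  public static String getKeyProviderUri(Configuration conf) {
    String uri = conf.getTrimmed("dfs.encryption.key.provider.uri");
    if (uri == null || uri.isEmpty()) {
      uri = conf.getTrimmed("hadoop.security.key.provider.path");
    }
    return uri;
  }
}
{code}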

> Implement TokenRenewer to renew and cancel delegation tokens in KMS
> ---
>
> Key: HADOOP-13155
> URL: https://issues.apache.org/jira/browse/HADOOP-13155
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13155.01.patch, HADOOP-13155.02.patch, 
> HADOOP-13155.03.patch, HADOOP-13155.04.patch, HADOOP-13155.05.patch, 
> HADOOP-13155.06.patch, HADOOP-13155.pre.patch
>
>
> Service DelegationToken (DT) renewal is done in Yarn by 
> {{org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer}},
>  which calls {{Token#renew}}, uses ServiceLoader to find the renewer class 
> ([code|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java#L382]),
>  and invokes the renew method on it.
> We seem to be missing the token renewer class for KMS / HttpFSFileSystem, and 
> hence Yarn defaults to {{TrivialRenewer}} for DTs of these kinds, resulting in 
> the token not being renewed.
> As a side note, {{HttpFSFileSystem}} does have a {{renewDelegationToken}} 
> API, but I don't see it invoked in the Hadoop code base. KMS does not have any 
> renew hook.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308635#comment-15308635
 ] 

Hadoop QA commented on HADOOP-12893:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
5s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 4s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 8m 39s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 46s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 8s 
{color} | {color:red} hadoop-project in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 6s 
{color} | {color:red} hadoop-project-dist in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 5s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 1s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 23s {color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 56s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
|   | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807238/HADOOP-12893.008.patch
 |
| JIRA Issue | HADOOP-12893 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux cfd73ce2bbec 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / bca31fe |
| Default Java | 1.8.0_91 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9630/artifact/patchprocess/patch-mvninstall-hadoop-project.txt
 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9630/artifact/patchprocess/patch-mvninstall-hadoop-project-dist.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9630/artifact/patchprocess/patch-unit-root.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/9630/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9630/testReport/ |
| modules | C: hadoop-project hadoop-project-dist . hadoop-resource-bundle U: . 
|
| Console output | 

[jira] [Commented] (HADOOP-13214) sync(0); next(); yields wrong key-values on block-compressed files

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308602#comment-15308602
 ] 

Hadoop QA commented on HADOOP-13214:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
9s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 12s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 2s {color} | 
{color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 34m 48s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807242/HADOOP-13214.2.patch |
| JIRA Issue | HADOOP-13214 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 19d64bc339e6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / bca31fe |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9631/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/9631/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9631/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9631/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> sync(0); next(); yields wrong key-values on block-compressed files
> --
>
> Key: HADOOP-13214
> URL: https://issues.apache.org/jira/browse/HADOOP-13214
> 

[jira] [Updated] (HADOOP-13214) sync(0); next(); yields wrong key-values on block-compressed files

2016-05-31 Thread Illes S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Illes S updated HADOOP-13214:
-
Attachment: HADOOP-13214.2.patch

> sync(0); next(); yields wrong key-values on block-compressed files
> --
>
> Key: HADOOP-13214
> URL: https://issues.apache.org/jira/browse/HADOOP-13214
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Illes S
>  Labels: easyfix, newbie, patch
> Attachments: HADOOP-13214.2.patch, HADOOP-13214.patch
>
>
> Calling {{sync(0); next(...);}} on a block-compressed {{SequenceFile.Reader}} 
> that has already been used may not yield the key-values at the beginning, but 
> those following the previous position. The issue is caused by {{sync(0)}} not 
> releasing previously buffered keys and values. The issue was introduced by 
> HADOOP-6196.
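
A minimal reproduction sketch of the reported behavior (the path and key/value 
types are illustrative and must match the actual file):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

// Read past the first block, rewind with sync(0), and observe that next()
// may return a record from the old position instead of the start.
public class Sync0Repro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path file = new Path("/tmp/block-compressed.seq");  // assumed test file
    try (SequenceFile.Reader reader =
        new SequenceFile.Reader(conf, SequenceFile.Reader.file(file))) {
      Text key = new Text();
      Text value = new Text();
      while (reader.next(key, value)) {
        // consume records so the block buffers are populated
      }
      reader.sync(0);           // should rewind to the beginning
      reader.next(key, value);  // buggy: may yield a later record
    }
  }
}
{code}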



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13214) sync(0); next(); yields wrong key-values on block-compressed files

2016-05-31 Thread Illes S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Illes S updated HADOOP-13214:
-
Status: Patch Available  (was: Open)

> sync(0); next(); yields wrong key-values on block-compressed files
> --
>
> Key: HADOOP-13214
> URL: https://issues.apache.org/jira/browse/HADOOP-13214
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Illes S
>  Labels: easyfix, newbie, patch
> Attachments: HADOOP-13214.2.patch, HADOOP-13214.patch
>
>
> Calling {{sync(0); next(...);}} on a block-compressed {{SequenceFile.Reader}} 
> that has already been used may not yield the key-values at the beginning, but 
> those following the previous position. The issue is caused by {{sync(0)}} not 
> releasing previously buffered keys and values. The issue was introduced by 
> HADOOP-6196.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13214) sync(0); next(); yields wrong key-values on block-compressed files

2016-05-31 Thread Illes S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Illes S updated HADOOP-13214:
-
Attachment: (was: HADOOP-13214.2.patch)

> sync(0); next(); yields wrong key-values on block-compressed files
> --
>
> Key: HADOOP-13214
> URL: https://issues.apache.org/jira/browse/HADOOP-13214
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Illes S
>  Labels: easyfix, newbie, patch
> Attachments: HADOOP-13214.2.patch, HADOOP-13214.patch
>
>
> Calling {{sync(0); next(...);}} on a block-compressed {{SequenceFile.Reader}} 
> that has already been used may not yield the key-values at the beginning, but 
> those following the previous position. The issue is caused by {{sync(0)}} not 
> releasing previously buffered keys and values. The issue was introduced by 
> HADOOP-6196.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13214) sync(0); next(); yields wrong key-values on block-compressed files

2016-05-31 Thread Illes S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Illes S updated HADOOP-13214:
-
Status: Open  (was: Patch Available)

> sync(0); next(); yields wrong key-values on block-compressed files
> --
>
> Key: HADOOP-13214
> URL: https://issues.apache.org/jira/browse/HADOOP-13214
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Illes S
>  Labels: easyfix, newbie, patch
> Attachments: HADOOP-13214.2.patch, HADOOP-13214.patch
>
>
> Calling {{sync(0); next(...);}} on a block-compressed {{SequenceFile.Reader}} 
> that has already been used may not yield the key-values at the beginning, but 
> those following the previous position. The issue is caused by {{sync(0)}} not 
> releasing previously buffered keys and values. The issue was introduced by 
> HADOOP-6196.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-05-31 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-12893:
---
Attachment: HADOOP-12893.008.patch

Thanks so much [~andrew.wang] for testing out the patch and putting up a new 
rev.
I've applied the new patch 7, and am attaching patch 8, which has the following 
improvements:
- LICENSE.txt and NOTICE.txt under hadoop-resource-bundle don't need to be 
checked in
- Moved the copy-L&N logic from hadoop-resource-bundle/pom.xml to 
hadoop/pom.xml to make it work
- IIUC from http://www.apache.org/dev/licensing-howto.html#mod-notice , the MIT 
license doesn't need a copyright notice. So I grouped mockito and slf4j back, 
and removed the original Twitter copyright, which seems unnecessary.

Also, the 3 jars reported missing do not actually show up in hadoop-dist after 
{{mvn package}} and hence are not bundled, so IMO they're OK as-is.

Tested that patch 8 builds locally, and verified that the jars under 
{{hadoop-dist/target}} all contain L&N.

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, 
> HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.006.patch, 
> HADOOP-12893.007.patch, HADOOP-12893.008.patch, HADOOP-12893.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-05-31 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-12893:
---
Status: Patch Available  (was: Open)

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Xiao Chen
>Priority: Blocker
> Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, 
> HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.006.patch, 
> HADOOP-12893.007.patch, HADOOP-12893.008.patch, HADOOP-12893.01.patch
>
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12564) Upgrade JUnit3 TestCase to JUnit 4 in org.apache.hadoop.io package

2016-05-31 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-12564:
---
Fix Version/s: (was: 3.0.0-alpha1)

>  Upgrade JUnit3 TestCase to JUnit 4 in org.apache.hadoop.io package
> ---
>
> Key: HADOOP-12564
> URL: https://issues.apache.org/jira/browse/HADOOP-12564
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Dustin Cote
>Assignee: Dustin Cote
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: MAPREDUCE-6505-1.patch, MAPREDUCE-6505-2.patch, 
> MAPREDUCE-6505-3.patch, MAPREDUCE-6505-4.patch, MAPREDUCE-6505-5.patch, 
> MAPREDUCE-6505-6.patch
>
>
> Migrating just the io test cases 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13214) sync(0); next(); yields wrong key-values on block-compressed files

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308375#comment-15308375
 ] 

Hadoop QA commented on HADOOP-13214:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
53s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 31s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 2s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 31s 
{color} | {color:red} hadoop-common-project/hadoop-common: The patch generated 
4 new + 320 unchanged - 0 fixed = 324 total (was 320) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 51s 
{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 42s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807217/HADOOP-13214.2.patch |
| JIRA Issue | HADOOP-13214 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 495d766c3d5e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / bca31fe |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9629/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9629/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9629/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> sync(0); next(); yields wrong key-values on block-compressed files
> --
>
> Key: HADOOP-13214
> URL: https://issues.apache.org/jira/browse/HADOOP-13214
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Illes S
>  

[jira] [Commented] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308299#comment-15308299
 ] 

Hadoop QA commented on HADOOP-13171:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 32s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
31s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 41s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 25s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
25s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 12s 
{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s 
{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 5s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 24s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 24s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 20s 
{color} | {color:red} root: The patch generated 15 new + 92 unchanged - 48 
fixed = 107 total (was 140) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
33s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 49 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 33s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 16s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 8s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
27s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} 

[jira] [Commented] (HADOOP-13224) Grep job in Single Cluster document fails

2016-05-31 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308284#comment-15308284
 ] 

Daniel Templeton commented on HADOOP-13224:
---

+1 (non-binding).  Thanks for catching and fixing that, [~ajisakaa]!

> Grep job in Single Cluster document fails
> -
>
> Key: HADOOP-13224
> URL: https://issues.apache.org/jira/browse/HADOOP-13224
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Attachments: HADOOP-13224.01.patch
>
>
> In the single-cluster setup document, the grep job fails.
> {noformat}
> 16/06/01 00:21:10 INFO mapreduce.Job: Task Id : 
> attempt_1464707543608_0005_m_30_2, Status : FAILED
> Error: java.io.FileNotFoundException: Path is not a file: 
> /user/aajisaka/input/shellprofile.d
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:78)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:64)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:174)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1740)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:710)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:381)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:661)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2423)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2419)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1755)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2419)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
>   at 
> org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:823)
>   at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:810)
>   at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:799)
>   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1031)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:288)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:284)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:296)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:785)
>   at 
> org.apache.hadoop.mapreduce.lib.input.LineRecordReader.initialize(LineRecordReader.java:85)
>   at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:548)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:786)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1755)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): Path is 
> not a file: /user/aajisaka/input/shellprofile.d
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:78)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:64)
>   at 
> 

[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-05-31 Thread Benoit Sigoure (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308275#comment-15308275
 ] 

Benoit Sigoure commented on HADOOP-12910:
-

Yes {{Deferred}} is 3-clause BSD, see the license @ 
https://github.com/OpenTSDB/async/blob/master/COPYING

> Add new FileSystem API to support asynchronous method calls
> ---
>
> Key: HADOOP-12910
> URL: https://issues.apache.org/jira/browse/HADOOP-12910
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12910-HDFS-9924.000.patch, 
> HADOOP-12910-HDFS-9924.001.patch, HADOOP-12910-HDFS-9924.002.patch
>
>
> Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a 
> better name).  All the APIs in FutureFileSystem are the same as FileSystem 
> except that the return type is wrapped by Future, e.g.
> {code}
>   //FileSystem
>   public boolean rename(Path src, Path dst) throws IOException;
>   //FutureFileSystem
>   public Future<Boolean> rename(Path src, Path dst) throws IOException;
> {code}
> Note that FutureFileSystem does not extend FileSystem.
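
Usage would look roughly like this (class and variable names hypothetical):

{code}
// The call returns immediately; get() blocks only when the result is needed.
Future<Boolean> pending = futureFs.rename(src, dst);
doOtherUsefulWork();
boolean renamed = pending.get();
{code}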



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13214) sync(0); next(); yields wrong key-values on block-compressed files

2016-05-31 Thread Illes S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Illes S updated HADOOP-13214:
-
Status: Patch Available  (was: Open)

> sync(0); next(); yields wrong key-values on block-compressed files
> --
>
> Key: HADOOP-13214
> URL: https://issues.apache.org/jira/browse/HADOOP-13214
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Illes S
>  Labels: easyfix, newbie, patch
> Attachments: HADOOP-13214.2.patch, HADOOP-13214.patch
>
>
> Calling {{sync(0); next(...);}} on a block-compressed {{SequenceFile.Reader}} 
> that has already been used may not yield the key-values at the beginning, but 
> those following the previous position. The issue is caused by {{sync(0)}} not 
> releasing previously buffered keys and values. The issue was introduced by 
> HADOOP-6196.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13214) sync(0); next(); yields wrong key-values on block-compressed files

2016-05-31 Thread Illes S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Illes S updated HADOOP-13214:
-
Attachment: HADOOP-13214.2.patch

> sync(0); next(); yields wrong key-values on block-compressed files
> --
>
> Key: HADOOP-13214
> URL: https://issues.apache.org/jira/browse/HADOOP-13214
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Illes S
>  Labels: easyfix, newbie, patch
> Attachments: HADOOP-13214.2.patch, HADOOP-13214.patch
>
>
> Calling {{sync(0); next(...);}} on a block-compressed {{SequenceFile.Reader}} 
> that has already been used may not yield the key-values at the beginning, but 
> those following the previous position. The issue is caused by {{sync(0)}} not 
> releasing previously buffered keys and values. The issue was introduced by 
> HADOOP-6196.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13214) sync(0); next(); yields wrong key-values on block-compressed files

2016-05-31 Thread Illes S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Illes S updated HADOOP-13214:
-
Attachment: (was: HADOOP-13214.2.patch)

> sync(0); next(); yields wrong key-values on block-compressed files
> --
>
> Key: HADOOP-13214
> URL: https://issues.apache.org/jira/browse/HADOOP-13214
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Illes S
>  Labels: easyfix, newbie, patch
> Attachments: HADOOP-13214.patch
>
>
> Calling {{sync(0); next(...);}} on a block-compressed {{SequenceFile.Reader}} 
> that has already been used may not yield the key-values at the beginning, but 
> those following the previous position. The issue is caused by {{sync(0)}} not 
> releasing previously buffered keys and values. The issue was introduced by 
> HADOOP-6196.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13214) sync(0); next(); yields wrong key-values on block-compressed files

2016-05-31 Thread Illes S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Illes S updated HADOOP-13214:
-
Status: Open  (was: Patch Available)

> sync(0); next(); yields wrong key-values on block-compressed files
> --
>
> Key: HADOOP-13214
> URL: https://issues.apache.org/jira/browse/HADOOP-13214
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Illes S
>  Labels: easyfix, newbie, patch
> Attachments: HADOOP-13214.patch
>
>
> Calling {{sync(0); next(...);}} on a block-compressed {{SequenceFile.Reader}} 
> that has already been used may not yield the key-values at the beginning, but 
> those following the previous position. The issue is caused by {{sync(0)}} not 
> releasing previously buffered keys and values. The issue was introduced by 
> HADOOP-6196.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13214) sync(0); next(); yields wrong key-values on block-compressed files

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308213#comment-15308213
 ] 

Hadoop QA commented on HADOOP-13214:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} HADOOP-13214 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807214/HADOOP-13214.2.patch |
| JIRA Issue | HADOOP-13214 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9628/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> sync(0); next(); yields wrong key-values on block-compressed files
> --
>
> Key: HADOOP-13214
> URL: https://issues.apache.org/jira/browse/HADOOP-13214
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Illes S
>  Labels: easyfix, newbie, patch
> Attachments: HADOOP-13214.2.patch, HADOOP-13214.patch
>
>
> Calling {{sync(0); next(...);}} on a block-compressed {{SequenceFile.Reader}} 
> that has already been used may not yield the key-values at the beginning, but 
> those following the previous position. The issue is caused by {{sync(0)}} not 
> releasing previously buffered keys and values. The issue was introduced by 
> HADOOP-6196.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13155) Implement TokenRenewer to renew and cancel delegation tokens in KMS

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308214#comment-15308214
 ] 

Hadoop QA commented on HADOOP-13155:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 42s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 10s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 52s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 25s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 46s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 26s 
{color} | {color:red} root: The patch generated 2 new + 232 unchanged - 6 fixed 
= 234 total (was 238) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 16s {color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 28s 
{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 55s 
{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 20s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.security.token.delegation.web.TestWebDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807203/HADOOP-13155.06.patch 
|
| JIRA Issue | HADOOP-13155 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e84f95c15e43 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / bca31fe |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9624/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 

[jira] [Updated] (HADOOP-13214) sync(0); next(); yields wrong key-values on block-compressed files

2016-05-31 Thread Illes S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Illes S updated HADOOP-13214:
-
Status: Patch Available  (was: Open)

> sync(0); next(); yields wrong key-values on block-compressed files
> --
>
> Key: HADOOP-13214
> URL: https://issues.apache.org/jira/browse/HADOOP-13214
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Illes S
>  Labels: easyfix, newbie, patch
> Attachments: HADOOP-13214.2.patch, HADOOP-13214.patch
>
>
> Calling {{sync(0); next(...);}} on a block-compressed {{SequenceFile.Reader}} 
> that has already been used may not yield the key-values at the beginning, but 
> those following the previous position. The issue is caused by {{sync(0)}} not 
> releasing previously buffered keys and values. The issue was introduced by 
> HADOOP-6196.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13214) sync(0); next(); yields wrong key-values on block-compressed files

2016-05-31 Thread Illes S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Illes S updated HADOOP-13214:
-
Attachment: HADOOP-13214.2.patch

> sync(0); next(); yields wrong key-values on block-compressed files
> --
>
> Key: HADOOP-13214
> URL: https://issues.apache.org/jira/browse/HADOOP-13214
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Illes S
>  Labels: easyfix, newbie, patch
> Attachments: HADOOP-13214.2.patch, HADOOP-13214.patch
>
>
> Calling {{sync(0); next(...);}} on a block-compressed {{SequenceFile.Reader}} 
> that has already been used may not yield the key-values at the beginning, but 
> those following the previous position. The issue is caused by {{sync(0)}} not 
> releasing previously buffered keys and values. The issue was introduced by 
> HADOOP-6196.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13214) sync(0); next(); yields wrong key-values on block-compressed files

2016-05-31 Thread Illes S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Illes S updated HADOOP-13214:
-
Attachment: (was: HADOOP-13214.2.patch)

> sync(0); next(); yields wrong key-values on block-compressed files
> --
>
> Key: HADOOP-13214
> URL: https://issues.apache.org/jira/browse/HADOOP-13214
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Illes S
>  Labels: easyfix, newbie, patch
> Attachments: HADOOP-13214.2.patch, HADOOP-13214.patch
>
>
> Calling {{sync(0); next(...);}} on a block-compressed {{SequenceFile.Reader}} 
> that has already been used may not yield the key-values at the beginning, but 
> those following the previous position. The issue is caused by {{sync(0)}} not 
> releasing previously buffered keys and values. The issue was introduced by 
> HADOOP-6196.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13214) sync(0); next(); yields wrong key-values on block-compressed files

2016-05-31 Thread Illes S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Illes S updated HADOOP-13214:
-
Status: Open  (was: Patch Available)

> sync(0); next(); yields wrong key-values on block-compressed files
> --
>
> Key: HADOOP-13214
> URL: https://issues.apache.org/jira/browse/HADOOP-13214
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Illes S
>  Labels: easyfix, newbie, patch
> Attachments: HADOOP-13214.2.patch, HADOOP-13214.patch
>
>
> Calling {{sync(0); next(...);}} on a block-compressed {{SequenceFile.Reader}} 
> that has already been used may not yield the key-values at the beginning, but 
> those following the previous position. The issue is caused by {{sync(0)}} not 
> releasing previously buffered keys and values. The issue was introduced by 
> HADOOP-6196.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13214) sync(0); next(); yields wrong key-values on block-compressed files

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308194#comment-15308194
 ] 

Hadoop QA commented on HADOOP-13214:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} HADOOP-13214 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807208/HADOOP-13214.2.patch |
| JIRA Issue | HADOOP-13214 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9627/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> sync(0); next(); yields wrong key-values on block-compressed files
> --
>
> Key: HADOOP-13214
> URL: https://issues.apache.org/jira/browse/HADOOP-13214
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Illes S
>  Labels: easyfix, newbie, patch
> Attachments: HADOOP-13214.2.patch, HADOOP-13214.patch
>
>
> Calling {{sync(0); next(...);}} on a block-compressed {{SequenceFile.Reader}} 
> that has already been used may not yield the key-values at the beginning, but 
> those following the previous position. The issue is caused by {{sync(0)}} not 
> releasing previously buffered keys and values. The issue was introduced by 
> HADOOP-6196.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12291) Add support for nested groups in LdapGroupsMapping

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308186#comment-15308186
 ] 

Hadoop QA commented on HADOOP-12291:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} HADOOP-12291 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12803871/HADOOP-12291.006.patch
 |
| JIRA Issue | HADOOP-12291 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9626/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add support for nested groups in LdapGroupsMapping
> --
>
> Key: HADOOP-12291
> URL: https://issues.apache.org/jira/browse/HADOOP-12291
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Gautam Gopalakrishnan
>Assignee: Esther Kundin
>  Labels: features, patch
> Fix For: 2.8.0
>
> Attachments: HADOOP-12291.001.patch, HADOOP-12291.002.patch, 
> HADOOP-12291.003.patch, HADOOP-12291.004.patch, HADOOP-12291.005.patch, 
> HADOOP-12291.006.patch
>
>
> When using {{LdapGroupsMapping}} with Hadoop, nested groups are not 
> supported. So for example if user {{jdoe}} is part of group A which is a 
> member of group B, the group mapping currently returns only group A.
> Currently this facility is available with {{ShellBasedUnixGroupsMapping}} and 
> SSSD (or similar tools), but it would be good to have this feature as part of 
> {{LdapGroupsMapping}} directly.
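
For illustration, the nested lookup amounts to a bounded transitive closure 
over group memberships. A minimal sketch of that idea follows; the two 
{{lookup*}} helpers are hypothetical stand-ins for the LDAP searches the real 
mapping would run, not the actual {{LdapGroupsMapping}} API:

{code}
import java.util.LinkedHashSet;
import java.util.Set;

// Sketch only: breadth-first transitive closure over group memberships,
// bounded by maxLevels. The lookup* helpers are hypothetical.
public abstract class NestedGroupsSketch {
  abstract Set<String> lookupDirectGroups(String user);   // e.g. jdoe -> {A}
  abstract Set<String> lookupParentGroups(String group);  // e.g. A -> {B}

  Set<String> resolveNestedGroups(String user, int maxLevels) {
    Set<String> resolved = new LinkedHashSet<>(lookupDirectGroups(user));
    Set<String> frontier = new LinkedHashSet<>(resolved);
    for (int level = 0; level < maxLevels && !frontier.isEmpty(); level++) {
      Set<String> next = new LinkedHashSet<>();
      for (String group : frontier) {
        for (String parent : lookupParentGroups(group)) {
          if (resolved.add(parent)) {
            next.add(parent); // only expand groups not seen before
          }
        }
      }
      frontier = next;
    }
    return resolved; // for the example above: {A, B}
  }
}
{code}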



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations

2016-05-31 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13171:

Attachment: HADOOP-13171-branch-2-012.patch

Patch 012

This strips out the changes to the spec and test for the listFiles call.

It does retain the move of the relevant test utility classes (dir tree 
creation, walking) into hadoop-common, as they will be needed there; putting 
them there now means there is no need to revert any bits of that patch later.

> Add StorageStatistics to S3A; instrument some more operations
> -
>
> Key: HADOOP-13171
> URL: https://issues.apache.org/jira/browse/HADOOP-13171
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13171-branch-2-001.patch, 
> HADOOP-13171-branch-2-002.patch, HADOOP-13171-branch-2-003.patch, 
> HADOOP-13171-branch-2-004.patch, HADOOP-13171-branch-2-005.patch, 
> HADOOP-13171-branch-2-006.patch, HADOOP-13171-branch-2-007.patch, 
> HADOOP-13171-branch-2-008.patch, HADOOP-13171-branch-2-009.patch, 
> HADOOP-13171-branch-2-010.patch, HADOOP-13171-branch-2-011.patch, 
> HADOOP-13171-branch-2-012.patch
>
>
> Add {{StorageStatistics}} support to S3A, collecting the same metrics as the 
> instrumentation, but sharing across all instances.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13214) sync(0); next(); yields wrong key-values on block-compressed files

2016-05-31 Thread Illes S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Illes S updated HADOOP-13214:
-
Labels: easyfix newbie patch  (was: easyfix newbie)

> sync(0); next(); yields wrong key-values on block-compressed files
> --
>
> Key: HADOOP-13214
> URL: https://issues.apache.org/jira/browse/HADOOP-13214
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Illes S
>  Labels: easyfix, newbie, patch
> Attachments: HADOOP-13214.2.patch, HADOOP-13214.patch
>
>
> Calling {{sync(0); next(...);}} on a block-compressed {{SequenceFile.Reader}} 
> that has already been used may not yield the key-values at the beginning, but 
> those following the previous position. The issue is caused by {{sync(0)}} not 
> releasing previously buffered keys and values. The issue was introduced by 
> HADOOP-6196.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations

2016-05-31 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13171:

Status: Open  (was: Patch Available)

> Add StorageStatistics to S3A; instrument some more operations
> -
>
> Key: HADOOP-13171
> URL: https://issues.apache.org/jira/browse/HADOOP-13171
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13171-branch-2-001.patch, 
> HADOOP-13171-branch-2-002.patch, HADOOP-13171-branch-2-003.patch, 
> HADOOP-13171-branch-2-004.patch, HADOOP-13171-branch-2-005.patch, 
> HADOOP-13171-branch-2-006.patch, HADOOP-13171-branch-2-007.patch, 
> HADOOP-13171-branch-2-008.patch, HADOOP-13171-branch-2-009.patch, 
> HADOOP-13171-branch-2-010.patch, HADOOP-13171-branch-2-011.patch
>
>
> Add {{StorageStatistics}} support to S3A, collecting the same metrics as the 
> instrumentation, but sharing across all instances.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13214) sync(0); next(); yields wrong key-values on block-compressed files

2016-05-31 Thread Illes S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Illes S updated HADOOP-13214:
-
Status: Patch Available  (was: Open)

The new (.2) patch adds unit tests for the issue being fixed. Tested with 
{{hadoop-common$ mvn -Dtest=TestSequenceFile test}}. The test passes, as the 
patch also includes the fix in {{SequenceFile.java}}, but fails without it.
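
For reference, a minimal repro sketch of the behaviour under test, assuming an 
existing block-compressed SequenceFile with {{Text}} keys and values (the class 
name and file path here are illustrative, not part of the patch):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SyncZeroRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path path = new Path(args[0]); // assumed block-compressed SequenceFile
    try (SequenceFile.Reader reader =
             new SequenceFile.Reader(conf, SequenceFile.Reader.file(path))) {
      Text key = new Text();
      Text value = new Text();
      reader.next(key, value); // advance; fills the reader's internal buffers
      reader.sync(0);          // rewind to the beginning of the file
      reader.next(key, value); // without the fix this may return a stale
                               // buffered record, not the first key-value
      System.out.println(key + "\t" + value);
    }
  }
}
{code}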

> sync(0); next(); yields wrong key-values on block-compressed files
> --
>
> Key: HADOOP-13214
> URL: https://issues.apache.org/jira/browse/HADOOP-13214
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Illes S
>  Labels: easyfix, newbie
> Attachments: HADOOP-13214.2.patch, HADOOP-13214.patch
>
>
> Calling {{sync(0); next(...);}} on a block-compressed {{SequenceFile.Reader}} 
> that has already been used may not yield the key-values at the beginning, but 
> those following the previous position. The issue is caused by {{sync(0)}} not 
> releasing previously buffered keys and values. The issue was introduced by 
> HADOOP-6196.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13214) sync(0); next(); yields wrong key-values on block-compressed files

2016-05-31 Thread Illes S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Illes S updated HADOOP-13214:
-
Status: Open  (was: Patch Available)

> sync(0); next(); yields wrong key-values on block-compressed files
> --
>
> Key: HADOOP-13214
> URL: https://issues.apache.org/jira/browse/HADOOP-13214
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Illes S
>  Labels: easyfix, newbie
> Attachments: HADOOP-13214.2.patch, HADOOP-13214.patch
>
>
> Calling {{sync(0); next(...);}} on a block-compressed {{SequenceFile.Reader}} 
> that has already been used may not yield the key-values at the beginning, but 
> those following the previous position. The issue is caused by {{sync(0)}} not 
> releasing previously buffered keys and values. The issue was introduced by 
> HADOOP-6196.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13214) sync(0); next(); yields wrong key-values on block-compressed files

2016-05-31 Thread Illes S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Illes S updated HADOOP-13214:
-
Attachment: HADOOP-13214.2.patch

Added unit tests for the issue being fixed.

> sync(0); next(); yields wrong key-values on block-compressed files
> --
>
> Key: HADOOP-13214
> URL: https://issues.apache.org/jira/browse/HADOOP-13214
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Illes S
>  Labels: easyfix, newbie
> Attachments: HADOOP-13214.2.patch, HADOOP-13214.patch
>
>
> Calling {{sync(0); next(...);}} on a block-compressed {{SequenceFileReader}} 
> that has already been used may not yield the key-values at the beginning, but 
> those following the previous position. The issue is caused by {{sync(0)}} not 
> releasing previously buffered keys and values. The issue was introduced by 
> HADOOP-6196



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13214) sync(0); next(); yields wrong key-values on block-compressed files

2016-05-31 Thread Illes S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Illes S updated HADOOP-13214:
-
Description: Calling {{sync(0); next(...);}} on a block-compressed 
{{SequenceFile.Reader}} that has already been used may not yield the key-values 
at the beginning, but those following the previous position. The issue is 
caused by {{sync(0)}} not releasing previously buffered keys and values. The 
issue was introduced by HADOOP-6196.  (was: Calling {{sync(0); next(...);}} on 
a block-compressed {{SequenceFileReader}} that has already been used may not 
yield the key-values at the beginning, but those following the previous 
position. The issue is caused by {{sync(0)}} not releasing previously buffered 
keys and values. The issue was introduced by HADOOP-6196)

> sync(0); next(); yields wrong key-values on block-compressed files
> --
>
> Key: HADOOP-13214
> URL: https://issues.apache.org/jira/browse/HADOOP-13214
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Illes S
>  Labels: easyfix, newbie
> Attachments: HADOOP-13214.2.patch, HADOOP-13214.patch
>
>
> Calling {{sync(0); next(...);}} on a block-compressed {{SequenceFile.Reader}} 
> that has already been used may not yield the key-values at the beginning, but 
> those following the previous position. The issue is caused by {{sync(0)}} not 
> releasing previously buffered keys and values. The issue was introduced by 
> HADOOP-6196.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations

2016-05-31 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13171:

Status: Patch Available  (was: Open)

> Add StorageStatistics to S3A; instrument some more operations
> -
>
> Key: HADOOP-13171
> URL: https://issues.apache.org/jira/browse/HADOOP-13171
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13171-branch-2-001.patch, 
> HADOOP-13171-branch-2-002.patch, HADOOP-13171-branch-2-003.patch, 
> HADOOP-13171-branch-2-004.patch, HADOOP-13171-branch-2-005.patch, 
> HADOOP-13171-branch-2-006.patch, HADOOP-13171-branch-2-007.patch, 
> HADOOP-13171-branch-2-008.patch, HADOOP-13171-branch-2-009.patch, 
> HADOOP-13171-branch-2-010.patch, HADOOP-13171-branch-2-011.patch
>
>
> Add {{StorageStatistics}} support to S3A, collecting the same metrics as the 
> instrumentation, but sharing across all instances.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations

2016-05-31 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13171:

Attachment: HADOOP-13171-branch-2-011.patch

> Add StorageStatistics to S3A; instrument some more operations
> -
>
> Key: HADOOP-13171
> URL: https://issues.apache.org/jira/browse/HADOOP-13171
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13171-branch-2-001.patch, 
> HADOOP-13171-branch-2-002.patch, HADOOP-13171-branch-2-003.patch, 
> HADOOP-13171-branch-2-004.patch, HADOOP-13171-branch-2-005.patch, 
> HADOOP-13171-branch-2-006.patch, HADOOP-13171-branch-2-007.patch, 
> HADOOP-13171-branch-2-008.patch, HADOOP-13171-branch-2-009.patch, 
> HADOOP-13171-branch-2-010.patch, HADOOP-13171-branch-2-011.patch
>
>
> Add {{StorageStatistics}} support to S3A, collecting the same metrics as the 
> instrumentation, but sharing across all instances.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations

2016-05-31 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308136#comment-15308136
 ] 

Steve Loughran commented on HADOOP-13171:
-

Missed a commit to the ProgressableProgressListener...

> Add StorageStatistics to S3A; instrument some more operations
> -
>
> Key: HADOOP-13171
> URL: https://issues.apache.org/jira/browse/HADOOP-13171
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13171-branch-2-001.patch, 
> HADOOP-13171-branch-2-002.patch, HADOOP-13171-branch-2-003.patch, 
> HADOOP-13171-branch-2-004.patch, HADOOP-13171-branch-2-005.patch, 
> HADOOP-13171-branch-2-006.patch, HADOOP-13171-branch-2-007.patch, 
> HADOOP-13171-branch-2-008.patch, HADOOP-13171-branch-2-009.patch, 
> HADOOP-13171-branch-2-010.patch
>
>
> Add {{StorageStatistics}} support to S3A, collecting the same metrics as the 
> instrumentation, but sharing across all instances.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations

2016-05-31 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308130#comment-15308130
 ] 

Steve Loughran commented on HADOOP-13171:
-

Patch 010 (which is 009 plus a typo fix in the docs).

# Scale for directory operations has its own property 
{{scale.test.directory.count}}; set low (2) and documented.
# {{ProgressableProgressListener}}: clean-up done.
# Null checks for {{FileSystem.statistics}}: I think I'd concluded from all the 
null checks elsewhere that it could be null, so included them. But that's only 
for the input & output streams, isn't it? Pulled.
# Iterator around {{Collections.unmodifiableSet}}: nicely spotted; I hadn't 
even thought of that. Done.
# {{S3ATestUtils#createSubdirs}} javadocs: I've pulled that method into 
{{ContractTestUtils}}; looks like the cut and paste fixed it.
# Temp dir for {{testCostOfCopyFromLocalFile}}: using 
{{GenericTestUtils.getTestDir("tmp")}}.

A sketch of reading the resulting statistics follows below.
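
A minimal sketch of how a caller could read the shared statistics once this is 
in, assuming a build with the {{StorageStatistics}} API from HADOOP-13065 (the 
class name is illustrative):

{code}
import java.util.Iterator;
import org.apache.hadoop.fs.GlobalStorageStatistics;
import org.apache.hadoop.fs.StorageStatistics;
import org.apache.hadoop.fs.StorageStatistics.LongStatistic;

public class DumpStorageStats {
  public static void main(String[] args) {
    // Walk the global registry; S3A's shared statistics will show up here
    // once an S3A filesystem instance has been created in this JVM.
    Iterator<StorageStatistics> all =
        GlobalStorageStatistics.INSTANCE.iterator();
    while (all.hasNext()) {
      StorageStatistics stats = all.next();
      Iterator<LongStatistic> values = stats.getLongStatistics();
      while (values.hasNext()) {
        LongStatistic s = values.next();
        System.out.println(
            stats.getName() + ": " + s.getName() + " = " + s.getValue());
      }
    }
  }
}
{code}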


> Add StorageStatistics to S3A; instrument some more operations
> -
>
> Key: HADOOP-13171
> URL: https://issues.apache.org/jira/browse/HADOOP-13171
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13171-branch-2-001.patch, 
> HADOOP-13171-branch-2-002.patch, HADOOP-13171-branch-2-003.patch, 
> HADOOP-13171-branch-2-004.patch, HADOOP-13171-branch-2-005.patch, 
> HADOOP-13171-branch-2-006.patch, HADOOP-13171-branch-2-007.patch, 
> HADOOP-13171-branch-2-008.patch, HADOOP-13171-branch-2-009.patch, 
> HADOOP-13171-branch-2-010.patch
>
>
> Add {{StorageStatistics}} support to S3A, collecting the same metrics as the 
> instrumentation, but sharing across all instances.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations

2016-05-31 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13171:

Attachment: HADOOP-13171-branch-2-010.patch

> Add StorageStatistics to S3A; instrument some more operations
> -
>
> Key: HADOOP-13171
> URL: https://issues.apache.org/jira/browse/HADOOP-13171
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13171-branch-2-001.patch, 
> HADOOP-13171-branch-2-002.patch, HADOOP-13171-branch-2-003.patch, 
> HADOOP-13171-branch-2-004.patch, HADOOP-13171-branch-2-005.patch, 
> HADOOP-13171-branch-2-006.patch, HADOOP-13171-branch-2-007.patch, 
> HADOOP-13171-branch-2-008.patch, HADOOP-13171-branch-2-009.patch, 
> HADOOP-13171-branch-2-010.patch
>
>
> Add {{StorageStatistics}} support to S3A, collecting the same metrics as the 
> instrumentation, but sharing across all instances.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13225) Allow java to be started with numactl

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308117#comment-15308117
 ] 

Hadoop QA commented on HADOOP-13225:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
37s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 9s 
{color} | {color:red} The patch generated 11 new + 492 unchanged - 4 fixed = 
503 total (was 496) {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 
8s {color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 40s 
{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 40s 
{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 14s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:babe025 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805970/HDFS-10370-branch-2.004.patch
 |
| JIRA Issue | HADOOP-13225 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux 1597dfc804dd 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / cbf8786 |
| shellcheck | v0.4.4 |
| shellcheck | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9623/artifact/patchprocess/diff-patch-shellcheck.txt
 |
| JDK v1.7.0_101  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9623/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9623/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Allow java to be started with numactl
> -
>
> Key: HADOOP-13225
> URL: https://issues.apache.org/jira/browse/HADOOP-13225
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Reporter: Dave Marion
>Assignee: Dave Marion
> Attachments: HDFS-10370-1.patch, HDFS-10370-2.patch, 
> HDFS-10370-3.patch, HDFS-10370-branch-2.004.patch, HDFS-10370.004.patch
>
>
> Allow numactl constraints to be applied to the datanode process. The 
> implementation I have in mind involves two environment variables (enable and 
> parameters) in the datanode startup process. Basically, if enabled and 
> numactl exists on the system, then start the java process using it. Provide a 
> default set of parameters, and allow the user to override the default. Wiring 
> this up for the non-jsvc use case seems straightforward. Not sure how this 
> can be supported using jsvc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12911) Upgrade Hadoop MiniKDC with Kerby

2016-05-31 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308115#comment-15308115
 ] 

Andrew Wang commented on HADOOP-12911:
--

Given this is targeted at 3.0 and not 2.x, I think we should start by removing 
all the related dependencies. As it is, I'm not sure any of those apps are 
going to compile against 3.0.

> Upgrade Hadoop MiniKDC with Kerby
> -
>
> Key: HADOOP-12911
> URL: https://issues.apache.org/jira/browse/HADOOP-12911
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Jiajia Li
>Assignee: Jiajia Li
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12911-v1.patch, HADOOP-12911-v2.patch, 
> HADOOP-12911-v3.patch, HADOOP-12911-v4.patch, HADOOP-12911-v5.patch, 
> HADOOP-12911-v6.patch, HADOOP-12911-v7.patch, HaDOOP-12911-v8.patch
>
>
> As discussed in the mailing list, we’d like to introduce Apache Kerby into 
> Hadoop. Initially it’s good to start with upgrading Hadoop MiniKDC with Kerby 
> offerings. Apache Kerby (https://github.com/apache/directory-kerby), as an 
> Apache Directory sub project, is a Java Kerberos binding. It provides a 
> SimpleKDC server that borrowed ideas from MiniKDC and implemented all the 
> facilities existing in MiniKDC. Currently MiniKDC depends on the old Kerberos 
> implementation in the Directory Server project, but that implementation is no 
> longer maintained. The Directory community plans to replace it using Kerby. 
> MiniKDC can use Kerby's SimpleKDC directly to avoid depending on the full 
> Directory project. Kerby also provides nice identity backends, such as the 
> lightweight memory-based one and the very simple JSON one, for easy 
> development and test environments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13155) Implement TokenRenewer to renew and cancel delegation tokens in KMS

2016-05-31 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13155:
---
Attachment: HADOOP-13155.06.patch

Patch 6 just to make Jenkins happy. I know it may need further changes based on 
further discussion. :)

> Implement TokenRenewer to renew and cancel delegation tokens in KMS
> ---
>
> Key: HADOOP-13155
> URL: https://issues.apache.org/jira/browse/HADOOP-13155
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13155.01.patch, HADOOP-13155.02.patch, 
> HADOOP-13155.03.patch, HADOOP-13155.04.patch, HADOOP-13155.05.patch, 
> HADOOP-13155.06.patch, HADOOP-13155.pre.patch
>
>
> Service DelegationToken (DT) renewal is done in Yarn by 
> {{org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer}},
>  where it calls {{Token#renew}} and uses ServiceLoader to find the renewer 
> class 
> ([code|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java#L382]),
>  and invokes the renew method from it.
> We seem to be missing the token renewer class in KMS / HttpFSFileSystem, and 
> hence Yarn defaults to {{TrivialRenewer}} for DTs of such kinds, resulting in 
> the token not being renewed.
> As a side note, {{HttpFSFileSystem}} does have a {{renewDelegationToken}} 
> API, but I don't see it invoked in hadoop code base. KMS does not have any 
> renew hook.
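
For reference, a sketch of the general shape of a ServiceLoader-discovered 
renewer; the token kind and method bodies below are assumptions for 
illustration, not the actual patch. The class would be registered through a 
{{META-INF/services/org.apache.hadoop.security.token.TokenRenewer}} entry so 
{{Token#renew}} can find it:

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenRenewer;

public class KMSTokenRenewerSketch extends TokenRenewer {
  // Assumed token kind, for illustration only.
  private static final Text KIND = new Text("kms-dt");

  @Override
  public boolean handleKind(Text kind) {
    return KIND.equals(kind);
  }

  @Override
  public boolean isManaged(Token<?> token) throws IOException {
    return true; // unlike TrivialRenewer, report the token as renewable
  }

  @Override
  public long renew(Token<?> token, Configuration conf) throws IOException {
    // A real implementation would call the KMS renewal endpoint here
    // and return the new expiration time.
    throw new UnsupportedOperationException("sketch only");
  }

  @Override
  public void cancel(Token<?> token, Configuration conf) throws IOException {
    // A real implementation would call the KMS cancel endpoint here.
    throw new UnsupportedOperationException("sketch only");
  }
}
{code}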



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13132) Handle ClassCastException on AuthenticationException in LoadBalancingKMSClientProvider

2016-05-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308096#comment-15308096
 ] 

Hudson commented on HADOOP-13132:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #9888 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9888/])
HADOOP-13132. Handle ClassCastException on AuthenticationException in (wang: 
rev bca31fe276ccf7d02b13f25d43c81cca0b0b905b)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/kms/TestLoadBalancingKMSClientProvider.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/LoadBalancingKMSClientProvider.java


> Handle ClassCastException on AuthenticationException in 
> LoadBalancingKMSClientProvider
> --
>
> Key: HADOOP-13132
> URL: https://issues.apache.org/jira/browse/HADOOP-13132
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Miklos Szurap
>Assignee: Wei-Chiu Chuang
> Fix For: 2.8.0
>
> Attachments: HADOOP-13132.001.patch, HADOOP-13132.002.patch, 
> HADOOP-13132.003.patch, HADOOP-13132.004.patch
>
>
> An Oozie job with a single shell action fails (may not be important, but if 
> you need the exact details I can provide them) with an error message coming 
> from NodeManager:
> {code}
> 2016-05-10 11:10:14,290 ERROR 
> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler: Thread 
> Thread[LogAggregationService #652,5,main] threw an Exception.
> java.lang.ClassCastException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException 
> cannot be cast to java.security.GeneralSecurityException
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:189)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1419)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1521)
> at org.apache.hadoop.fs.Hdfs.createInternal(Hdfs.java:108)
> at org.apache.hadoop.fs.Hdfs.createInternal(Hdfs.java:59)
> at org.apache.hadoop.fs.AbstractFileSystem.create(AbstractFileSystem.java:577)
> at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:683)
> at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:679)
> at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
> at org.apache.hadoop.fs.FileContext.create(FileContext.java:679)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter$1.run(AggregatedLogFormat.java:382)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter$1.run(AggregatedLogFormat.java:377)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter.(AggregatedLogFormat.java:376)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.uploadLogsForContainers(AppLogAggregatorImpl.java:246)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.doAppLogAggregation(AppLogAggregatorImpl.java:456)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.run(AppLogAggregatorImpl.java:421)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService$2.run(LogAggregationService.java:384)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The unsafe cast is here:
> https://github.com/apache/hadoop/blob/2e1d0ff4e901b8313c8d71869735b94ed8bc40a0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/LoadBalancingKMSClientProvider.java#L189
> Because of this ClassCastException:
> - an uncaught exception is raised
> - we do not see the exact "caused by" exception/message
> - the oozie job fails
> - YARN logs are not reported/saved
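
One way to avoid the unchecked cast flagged above is to test the wrapped cause 
before rethrowing. A sketch of the idea, framed as the body of a 
{{catch (Exception e)}} block around the provider call (not necessarily the 
committed fix):

{code}
// Only cast when the exception really is a GeneralSecurityException;
// otherwise wrap it, preserving the original "caused by" chain instead
// of triggering a ClassCastException.
if (e instanceof GeneralSecurityException) {
  throw (GeneralSecurityException) e;
}
if (e instanceof IOException) {
  throw (IOException) e;
}
// e.g. AuthenticationException lands here
throw new IOException(e);
{code}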



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13224) Grep job in Single Cluster document fails

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308082#comment-15308082
 ] 

Hadoop QA commented on HADOOP-13224:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 9m 6s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807199/HADOOP-13224.01.patch 
|
| JIRA Issue | HADOOP-13224 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux c7a81a34b9e1 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / bca31fe |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9622/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Grep job in Single Cluster document fails
> -
>
> Key: HADOOP-13224
> URL: https://issues.apache.org/jira/browse/HADOOP-13224
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Attachments: HADOOP-13224.01.patch
>
>
> In the single cluster setup document, the grep job fails.
> {noformat}
> 16/06/01 00:21:10 INFO mapreduce.Job: Task Id : 
> attempt_1464707543608_0005_m_30_2, Status : FAILED
> Error: java.io.FileNotFoundException: Path is not a file: 
> /user/aajisaka/input/shellprofile.d
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:78)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:64)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:174)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1740)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:710)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:381)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:661)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2423)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2419)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1755)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2419)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
>   at 
> 

[jira] [Updated] (HADOOP-13171) Add StorageStatistics to S3A; instrument some more operations

2016-05-31 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13171:

Attachment: HADOOP-13171-branch-2-009.patch

> Add StorageStatistics to S3A; instrument some more operations
> -
>
> Key: HADOOP-13171
> URL: https://issues.apache.org/jira/browse/HADOOP-13171
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13171-branch-2-001.patch, 
> HADOOP-13171-branch-2-002.patch, HADOOP-13171-branch-2-003.patch, 
> HADOOP-13171-branch-2-004.patch, HADOOP-13171-branch-2-005.patch, 
> HADOOP-13171-branch-2-006.patch, HADOOP-13171-branch-2-007.patch, 
> HADOOP-13171-branch-2-008.patch, HADOOP-13171-branch-2-009.patch
>
>
> Add {{StorageStatistics}} support to S3A, collecting the same metrics as the 
> instrumentation, but sharing across all instances.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13132) Handle ClassCastException on AuthenticationException in LoadBalancingKMSClientProvider

2016-05-31 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308070#comment-15308070
 ] 

Wei-Chiu Chuang commented on HADOOP-13132:
--

Many thanks to [~andrew.wang] and [~xiaochen] for the review!

> Handle ClassCastException on AuthenticationException in 
> LoadBalancingKMSClientProvider
> --
>
> Key: HADOOP-13132
> URL: https://issues.apache.org/jira/browse/HADOOP-13132
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Miklos Szurap
>Assignee: Wei-Chiu Chuang
> Fix For: 2.8.0
>
> Attachments: HADOOP-13132.001.patch, HADOOP-13132.002.patch, 
> HADOOP-13132.003.patch, HADOOP-13132.004.patch
>
>
> An Oozie job with a single shell action fails (may not be important, but if 
> you need the exact details I can provide them) with an error message coming 
> from NodeManager:
> {code}
> 2016-05-10 11:10:14,290 ERROR 
> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler: Thread 
> Thread[LogAggregationService #652,5,main] threw an Exception.
> java.lang.ClassCastException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException 
> cannot be cast to java.security.GeneralSecurityException
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:189)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1419)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1521)
> at org.apache.hadoop.fs.Hdfs.createInternal(Hdfs.java:108)
> at org.apache.hadoop.fs.Hdfs.createInternal(Hdfs.java:59)
> at org.apache.hadoop.fs.AbstractFileSystem.create(AbstractFileSystem.java:577)
> at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:683)
> at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:679)
> at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
> at org.apache.hadoop.fs.FileContext.create(FileContext.java:679)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter$1.run(AggregatedLogFormat.java:382)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter$1.run(AggregatedLogFormat.java:377)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter.(AggregatedLogFormat.java:376)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.uploadLogsForContainers(AppLogAggregatorImpl.java:246)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.doAppLogAggregation(AppLogAggregatorImpl.java:456)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.run(AppLogAggregatorImpl.java:421)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService$2.run(LogAggregationService.java:384)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The unsafe cast is here:
> https://github.com/apache/hadoop/blob/2e1d0ff4e901b8313c8d71869735b94ed8bc40a0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/LoadBalancingKMSClientProvider.java#L189
> Because of this ClassCastException:
> - an uncaught exception is raised
> - we do not see the exact "caused by" exception/message
> - the oozie job fails
> - YARN logs are not reported/saved
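For illustration, a minimal sketch of the kind of defensive handling the fix 
calls for (a sketch only, not the committed patch; the {{callAndWrap}} helper 
and its {{Callable}} argument are assumptions):

{code}
import java.io.IOException;
import java.security.GeneralSecurityException;
import java.util.concurrent.Callable;

// Sketch only: rethrow the declared exception types unchanged and wrap
// anything else, including AuthenticationException, in an IOException,
// preserving the original cause instead of raising a ClassCastException.
static <T> T callAndWrap(Callable<T> op)
    throws IOException, GeneralSecurityException {
  try {
    return op.call();
  } catch (IOException | GeneralSecurityException e) {
    throw e;
  } catch (Exception e) {
    throw new IOException(e);
  }
}
{code}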



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13225) Allow java to be started with numactl

2016-05-31 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308066#comment-15308066
 ] 

Allen Wittenauer commented on HADOOP-13225:
---

(Hmm, I need to update that doc post-dynamic subcommands support and 
post-hadoop-tools rework.)

> Allow java to be started with numactl
> -
>
> Key: HADOOP-13225
> URL: https://issues.apache.org/jira/browse/HADOOP-13225
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Reporter: Dave Marion
>Assignee: Dave Marion
> Attachments: HDFS-10370-1.patch, HDFS-10370-2.patch, 
> HDFS-10370-3.patch, HDFS-10370-branch-2.004.patch, HDFS-10370.004.patch
>
>
> Allow numactl constraints to be applied to the datanode process. The 
> implementation I have in mind involves two environment variables (enable and 
> parameters) in the datanode startup process. Basically, if enabled and 
> numactl exists on the system, then start the java process using it. Provide a 
> default set of parameters, and allow the user to override the default. Wiring 
> this up for the non-jsvc use case seems straightforward. Not sure how this 
> can be supported using jsvc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13225) Allow java to be started with numactl

2016-05-31 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13225:
--
Status: Patch Available  (was: Open)

> Allow java to be started with numactl
> -
>
> Key: HADOOP-13225
> URL: https://issues.apache.org/jira/browse/HADOOP-13225
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Reporter: Dave Marion
>Assignee: Dave Marion
> Attachments: HDFS-10370-1.patch, HDFS-10370-2.patch, 
> HDFS-10370-3.patch, HDFS-10370-branch-2.004.patch, HDFS-10370.004.patch
>
>
> Allow numactl constraints to be applied to the datanode process. The 
> implementation I have in mind involves two environment variables (enable and 
> parameters) in the datanode startup process. Basically, if enabled and 
> numactl exists on the system, then start the java process using it. Provide a 
> default set of parameters, and allow the user to override the default. Wiring 
> this up for the non-jsvc use case seems straightforward. Not sure how this 
> can be supported using jsvc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13225) Allow java to be started with numactl

2016-05-31 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308059#comment-15308059
 ] 

Allen Wittenauer commented on HADOOP-13225:
---

bq. I made a small mention in the description about this; I'm not sure how it 
would work with jsvc.

Sorry, I missed this. I'm not sure why one couldn't just use numactl to launch 
jsvc, given that the man page states:

numactl runs processes with a specific NUMA scheduling or memory placement 
policy. *The policy is set for command and inherited by all of its children.*

I'm fairly certain that the java DN process would be considered a child of jsvc 
in this context. Things get trickier if some other method is being used (e.g., 
HADOOP_SECURE_COMMAND is defined). Chances are good that those users are either 
non-Linux or are using function overrides anyway.

Also:

* Looks like you didn't have contributor set for HADOOP. Fixed.
* I've moved the JIRA for you.
* Coding rules for bash are generally covered here: 
https://wiki.apache.org/hadoop/UnixShellScriptProgrammingGuide
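
As a rough sketch of what that inheritance argument implies for the launcher 
(variable names such as HADOOP_NUMACTL_ENABLED and HADOOP_NUMACTL_ARGS are 
placeholders here, not the names used in the attached patches):

{code}
# Sketch only: prepend numactl when enabled and present on the PATH; the
# NUMA policy it sets is inherited by jsvc's children, including the JVM.
launcher=()
if [[ "${HADOOP_NUMACTL_ENABLED}" == "true" ]] && command -v numactl >/dev/null 2>&1; then
  launcher=(numactl ${HADOOP_NUMACTL_ARGS:---interleave=all})
fi
exec "${launcher[@]}" "${JSVC}" -cp "${CLASSPATH}" "${daemon_class}" "$@"
{code}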


> Allow java to be started with numactl
> -
>
> Key: HADOOP-13225
> URL: https://issues.apache.org/jira/browse/HADOOP-13225
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Reporter: Dave Marion
>Assignee: Dave Marion
> Attachments: HDFS-10370-1.patch, HDFS-10370-2.patch, 
> HDFS-10370-3.patch, HDFS-10370-branch-2.004.patch, HDFS-10370.004.patch
>
>
> Allow numactl constraints to be applied to the datanode process. The 
> implementation I have in mind involves two environment variables (enable and 
> parameters) in the datanode startup process. Basically, if enabled and 
> numactl exists on the system, then start the java process using it. Provide a 
> default set of parameters, and allow the user to override the default. Wiring 
> this up for the non-jsvc use case seems straightforward. Not sure how this 
> can be supported using jsvc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13224) Grep job in Single Cluster document fails

2016-05-31 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-13224:
---
Status: Patch Available  (was: Open)

> Grep job in Single Cluster document fails
> -
>
> Key: HADOOP-13224
> URL: https://issues.apache.org/jira/browse/HADOOP-13224
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Attachments: HADOOP-13224.01.patch
>
>
> In single cluster setup document, the grep job fails.
> {noformat}
> 16/06/01 00:21:10 INFO mapreduce.Job: Task Id : 
> attempt_1464707543608_0005_m_30_2, Status : FAILED
> Error: java.io.FileNotFoundException: Path is not a file: 
> /user/aajisaka/input/shellprofile.d
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:78)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:64)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:174)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1740)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:710)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:381)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:661)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2423)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2419)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1755)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2419)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
>   at 
> org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:823)
>   at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:810)
>   at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:799)
>   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1031)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:288)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:284)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:296)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:785)
>   at 
> org.apache.hadoop.mapreduce.lib.input.LineRecordReader.initialize(LineRecordReader.java:85)
>   at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:548)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:786)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1755)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): Path is 
> not a file: /user/aajisaka/input/shellprofile.d
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:78)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:64)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:174)
>   at 
> 

[jira] [Updated] (HADOOP-13132) Handle ClassCastException on AuthenticationException in LoadBalancingKMSClientProvider

2016-05-31 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13132:
-
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

LGTM, committed for 2.8.0, thanks for the contribution Wei-Chiu and the review 
Xiao!

> Handle ClassCastException on AuthenticationException in 
> LoadBalancingKMSClientProvider
> --
>
> Key: HADOOP-13132
> URL: https://issues.apache.org/jira/browse/HADOOP-13132
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Miklos Szurap
>Assignee: Wei-Chiu Chuang
> Fix For: 2.8.0
>
> Attachments: HADOOP-13132.001.patch, HADOOP-13132.002.patch, 
> HADOOP-13132.003.patch, HADOOP-13132.004.patch
>
>
> An Oozie job with a single shell action fails (may not be important, but if 
> you need the exact details I can provide them) with an error message coming 
> from NodeManager:
> {code}
> 2016-05-10 11:10:14,290 ERROR 
> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler: Thread 
> Thread[LogAggregationService #652,5,main] threw an Exception.
> java.lang.ClassCastException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException 
> cannot be cast to java.security.GeneralSecurityException
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:189)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1419)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1521)
> at org.apache.hadoop.fs.Hdfs.createInternal(Hdfs.java:108)
> at org.apache.hadoop.fs.Hdfs.createInternal(Hdfs.java:59)
> at org.apache.hadoop.fs.AbstractFileSystem.create(AbstractFileSystem.java:577)
> at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:683)
> at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:679)
> at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
> at org.apache.hadoop.fs.FileContext.create(FileContext.java:679)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter$1.run(AggregatedLogFormat.java:382)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter$1.run(AggregatedLogFormat.java:377)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter.<init>(AggregatedLogFormat.java:376)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.uploadLogsForContainers(AppLogAggregatorImpl.java:246)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.doAppLogAggregation(AppLogAggregatorImpl.java:456)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.run(AppLogAggregatorImpl.java:421)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService$2.run(LogAggregationService.java:384)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The unsafe cast is here:
> https://github.com/apache/hadoop/blob/2e1d0ff4e901b8313c8d71869735b94ed8bc40a0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/LoadBalancingKMSClientProvider.java#L189
> Because of this ClassCastException:
> - an uncaught exception is raised
> - we do not see the exact "caused by" exception/message
> - the oozie job fails
> - YARN logs are not reported/saved



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13132) Handle ClassCastException on AuthenticationException in LoadBalancingKMSClientProvider

2016-05-31 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13132:
-
Summary: Handle ClassCastException on AuthenticationException in 
LoadBalancingKMSClientProvider  (was: LoadBalancingKMSClientProvider 
ClassCastException on AuthenticationException)

> Handle ClassCastException on AuthenticationException in 
> LoadBalancingKMSClientProvider
> --
>
> Key: HADOOP-13132
> URL: https://issues.apache.org/jira/browse/HADOOP-13132
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Miklos Szurap
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-13132.001.patch, HADOOP-13132.002.patch, 
> HADOOP-13132.003.patch, HADOOP-13132.004.patch
>
>
> An Oozie job with a single shell action fails (may not be important, but if 
> you need the exact details I can provide them) with an error message coming 
> from NodeManager:
> {code}
> 2016-05-10 11:10:14,290 ERROR 
> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler: Thread 
> Thread[LogAggregationService #652,5,main] threw an Exception.
> java.lang.ClassCastException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException 
> cannot be cast to java.security.GeneralSecurityException
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:189)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1419)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1521)
> at org.apache.hadoop.fs.Hdfs.createInternal(Hdfs.java:108)
> at org.apache.hadoop.fs.Hdfs.createInternal(Hdfs.java:59)
> at org.apache.hadoop.fs.AbstractFileSystem.create(AbstractFileSystem.java:577)
> at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:683)
> at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:679)
> at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
> at org.apache.hadoop.fs.FileContext.create(FileContext.java:679)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter$1.run(AggregatedLogFormat.java:382)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter$1.run(AggregatedLogFormat.java:377)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
> at 
> org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogWriter.<init>(AggregatedLogFormat.java:376)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.uploadLogsForContainers(AppLogAggregatorImpl.java:246)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.doAppLogAggregation(AppLogAggregatorImpl.java:456)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.AppLogAggregatorImpl.run(AppLogAggregatorImpl.java:421)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.LogAggregationService$2.run(LogAggregationService.java:384)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The unsafe cast is here:
> https://github.com/apache/hadoop/blob/2e1d0ff4e901b8313c8d71869735b94ed8bc40a0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/LoadBalancingKMSClientProvider.java#L189
> Because of this ClassCastException:
> - an uncaught exception is raised
> - we do not see the exact "caused by" exception/message
> - the oozie job fails
> - YARN logs are not reported/saved



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13224) Grep job in Single Cluster document fails

2016-05-31 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-13224:
---
Attachment: HADOOP-13224.01.patch

v1 patch: Copy only the xml files instead of all the files/directories to fix 
the failure and make the operation consistent with the standalone operation.
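
In command form, the documented steps would become something like this (a 
sketch of the approach; the exact wording in the patch may differ):

{code}
$ bin/hdfs dfs -mkdir -p /user/$USER
$ bin/hdfs dfs -mkdir input
$ bin/hdfs dfs -put etc/hadoop/*.xml input
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar grep input output 'dfs[a-z.]+'
{code}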

> Grep job in Single Cluster document fails
> -
>
> Key: HADOOP-13224
> URL: https://issues.apache.org/jira/browse/HADOOP-13224
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Attachments: HADOOP-13224.01.patch
>
>
> In single cluster setup document, the grep job fails.
> {noformat}
> 16/06/01 00:21:10 INFO mapreduce.Job: Task Id : 
> attempt_1464707543608_0005_m_30_2, Status : FAILED
> Error: java.io.FileNotFoundException: Path is not a file: 
> /user/aajisaka/input/shellprofile.d
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:78)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:64)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:174)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1740)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:710)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:381)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:661)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2423)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2419)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1755)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2419)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
>   at 
> org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:823)
>   at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:810)
>   at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:799)
>   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1031)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:288)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:284)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:296)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:785)
>   at 
> org.apache.hadoop.mapreduce.lib.input.LineRecordReader.initialize(LineRecordReader.java:85)
>   at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:548)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:786)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1755)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): Path is 
> not a file: /user/aajisaka/input/shellprofile.d
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:78)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:64)
>   at 
> 

[jira] [Assigned] (HADOOP-13224) Grep job in Single Cluster document fails

2016-05-31 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA reassigned HADOOP-13224:
--

Assignee: Akira AJISAKA

> Grep job in Single Cluster document fails
> -
>
> Key: HADOOP-13224
> URL: https://issues.apache.org/jira/browse/HADOOP-13224
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>
> In single cluster setup document, the grep job fails.
> {noformat}
> 16/06/01 00:21:10 INFO mapreduce.Job: Task Id : 
> attempt_1464707543608_0005_m_30_2, Status : FAILED
> Error: java.io.FileNotFoundException: Path is not a file: 
> /user/aajisaka/input/shellprofile.d
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:78)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:64)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:174)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1740)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:710)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:381)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:661)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2423)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2419)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1755)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2419)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
>   at 
> org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:823)
>   at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:810)
>   at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:799)
>   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1031)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:288)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:284)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:296)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:785)
>   at 
> org.apache.hadoop.mapreduce.lib.input.LineRecordReader.initialize(LineRecordReader.java:85)
>   at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:548)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:786)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1755)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): Path is 
> not a file: /user/aajisaka/input/shellprofile.d
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:78)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:64)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:174)
>   at 
> 

[jira] [Updated] (HADOOP-13225) Allow java to be started with numactl

2016-05-31 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13225:
--
Assignee: Dave Marion

> Allow java to be started with numactl
> -
>
> Key: HADOOP-13225
> URL: https://issues.apache.org/jira/browse/HADOOP-13225
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Reporter: Dave Marion
>Assignee: Dave Marion
> Attachments: HDFS-10370-1.patch, HDFS-10370-2.patch, 
> HDFS-10370-3.patch, HDFS-10370-branch-2.004.patch, HDFS-10370.004.patch
>
>
> Allow numactl constraints to be applied to the datanode process. The 
> implementation I have in mind involves two environment variables (enable and 
> parameters) in the datanode startup process. Basically, if enabled and 
> numactl exists on the system, then start the java process using it. Provide a 
> default set of parameters, and allow the user to override the default. Wiring 
> this up for the non-jsvc use case seems straightforward. Not sure how this 
> can be supported using jsvc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Moved] (HADOOP-13225) Allow DataNode to be started with numactl

2016-05-31 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer moved HDFS-10370 to HADOOP-13225:
--

   Assignee: (was: Dave Marion)
Component/s: (was: datanode)
 scripts
 Issue Type: New Feature  (was: Improvement)
Key: HADOOP-13225  (was: HDFS-10370)
Project: Hadoop Common  (was: Hadoop HDFS)

> Allow DataNode to be started with numactl
> -
>
> Key: HADOOP-13225
> URL: https://issues.apache.org/jira/browse/HADOOP-13225
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Reporter: Dave Marion
> Attachments: HDFS-10370-1.patch, HDFS-10370-2.patch, 
> HDFS-10370-3.patch, HDFS-10370-branch-2.004.patch, HDFS-10370.004.patch
>
>
> Allow numactl constraints to be applied to the datanode process. The 
> implementation I have in mind involves two environment variables (enable and 
> parameters) in the datanode startup process. Basically, if enabled and 
> numactl exists on the system, then start the java process using it. Provide a 
> default set of parameters, and allow the user to override the default. Wiring 
> this up for the non-jsvc use case seems straightforward. Not sure how this 
> can be supported using jsvc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13225) Allow java to be started with numactl

2016-05-31 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13225:
--
Summary: Allow java to be started with numactl  (was: Allow DataNode to be 
started with numactl)

> Allow java to be started with numactl
> -
>
> Key: HADOOP-13225
> URL: https://issues.apache.org/jira/browse/HADOOP-13225
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Reporter: Dave Marion
> Attachments: HDFS-10370-1.patch, HDFS-10370-2.patch, 
> HDFS-10370-3.patch, HDFS-10370-branch-2.004.patch, HDFS-10370.004.patch
>
>
> Allow numactl constraints to be applied to the datanode process. The 
> implementation I have in mind involves two environment variables (enable and 
> parameters) in the datanode startup process. Basically, if enabled and 
> numactl exists on the system, then start the java process using it. Provide a 
> default set of parameters, and allow the user to override the default. Wiring 
> this up for the non-jsvc use case seems straightforward. Not sure how this 
> can be supported using jsvc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13224) Grep job in Single Cluster document fails

2016-05-31 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15308019#comment-15308019
 ] 

Akira AJISAKA commented on HADOOP-13224:


HADOOP-11485 added the shellprofile.d directory to etc/hadoop, so the job 
fails: the document copies etc/hadoop to hdfs://user/username/input and uses 
hdfs://user/username/input as the input of the grep job, and grep cannot read 
that directory as input.

> Grep job in Single Cluster document fails
> -
>
> Key: HADOOP-13224
> URL: https://issues.apache.org/jira/browse/HADOOP-13224
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira AJISAKA
>
> In single cluster setup document, the grep job fails.
> {noformat}
> 16/06/01 00:21:10 INFO mapreduce.Job: Task Id : 
> attempt_1464707543608_0005_m_30_2, Status : FAILED
> Error: java.io.FileNotFoundException: Path is not a file: 
> /user/aajisaka/input/shellprofile.d
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:78)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:64)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:174)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1740)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:710)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:381)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:661)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2423)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2419)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1755)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2419)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
>   at 
> org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:823)
>   at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:810)
>   at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:799)
>   at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1031)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:288)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:284)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:296)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:785)
>   at 
> org.apache.hadoop.mapreduce.lib.input.LineRecordReader.initialize(LineRecordReader.java:85)
>   at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:548)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:786)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1755)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): Path is 
> not a file: /user/aajisaka/input/shellprofile.d
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:78)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:64)
>   at 
> 

[jira] [Created] (HADOOP-13224) Grep job in Single Cluster document fails

2016-05-31 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-13224:
--

 Summary: Grep job in Single Cluster document fails
 Key: HADOOP-13224
 URL: https://issues.apache.org/jira/browse/HADOOP-13224
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Akira AJISAKA


In single cluster setup document, the grep job fails.
{noformat}
16/06/01 00:21:10 INFO mapreduce.Job: Task Id : 
attempt_1464707543608_0005_m_30_2, Status : FAILED
Error: java.io.FileNotFoundException: Path is not a file: 
/user/aajisaka/input/shellprofile.d
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:78)
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:64)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:174)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1740)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:710)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:381)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:661)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2423)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2419)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1755)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2419)

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
at 
org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
at 
org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:823)
at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:810)
at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:799)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1031)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:288)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:284)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:296)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:785)
at 
org.apache.hadoop.mapreduce.lib.input.LineRecordReader.initialize(LineRecordReader.java:85)
at 
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:548)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:786)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1755)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
Caused by: 
org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): Path is 
not a file: /user/aajisaka/input/shellprofile.d
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:78)
at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:64)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getBlockLocations(FSDirStatAndListingOp.java:174)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1740)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:710)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:381)
at 

[jira] [Commented] (HADOOP-11820) aw jira testing, ignore

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15307985#comment-15307985
 ] 

Hadoop QA commented on HADOOP-11820:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 00s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
00s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} machack {color} | {color:blue} 0m 00s 
{color} | {color:blue} Applied YARN-5121 so that YARN native on OS X works 
{color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 03s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 31s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 55s {color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 14s 
{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 25s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 128m 03s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.crypto.TestCryptoCodec |
|   | hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec |
|   | hadoop.fs.TestSymlinkLocalFSFileContext |
|   | hadoop.fs.TestSymlinkLocalFSFileSystem |
|   | hadoop.ipc.TestRPCWaitForProxy |
|   | hadoop.net.unix.TestDomainSocket |
|   | hadoop.fs.TestEnhancedByteBufferAccess |
|   | hadoop.hdfs.client.impl.TestBlockReaderFactory |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocalLegacy |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestScrLazyPersistFiles |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.server.datanode.TestFsDatasetCacheRevocation |
|   | hadoop.hdfs.server.datanode.TestLargeBlockReport |
|   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
|   | hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitCache |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead |
|   | hadoop.hdfs.TestDFSInputStream |
|   | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.TestLeaseRecovery |
|   | hadoop.hdfs.TestParallelShortCircuitRead |
|   | hadoop.hdfs.TestParallelShortCircuitReadNoChecksum |
|   | hadoop.hdfs.TestParallelShortCircuitReadUnCached |
|   | hadoop.hdfs.TestParallelUnixDomainRead |
|   | hadoop.hdfs.TestRead |
|   | hadoop.tracing.TestTracingShortCircuitLocalRead |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806392/0001-domainsocket.patch
 |
| JIRA Issue | HADOOP-11820 |
| Optional Tests |  compile  cc  javac  unit  mvninstall  |
| uname | Darwin Gavins-Mac-mini.local 13.2.0 Darwin Kernel Version 13.2.0: Thu 
Apr 17 23:03:13 PDT 2014; root:xnu-2422.100.13~1/RELEASE_X86_64 x86_64 |
| Build tool | maven |
| Personality | 
/Users/jenkins/jenkins-home/workspace/Precommit-HADOOP-OSX/patchprocess/apache-yetus-63c5b20/precommit/personality/hadoop.sh
 |
| git revision | trunk / 93d8a7f |
| Default Java | 1.8.0_74 |
| unit | 
https://builds.apache.org/job/Precommit-HADOOP-OSX/30/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/Precommit-HADOOP-OSX/30/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/Precommit-HADOOP-OSX/30/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 

[jira] [Updated] (HADOOP-11229) JobStoryProducer is not closed upon return from Gridmix#setupDistCacheEmulation()

2016-05-31 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HADOOP-11229:

Description: 
Here is related code:
{code}
  JobStoryProducer jsp = createJobStoryProducer(traceIn, conf);
  exitCode = distCacheEmulator.setupGenerateDistCacheData(jsp);
{code}

jsp should be closed upon return from setupDistCacheEmulation().

  was:
Here is related code:
{code}
  JobStoryProducer jsp = createJobStoryProducer(traceIn, conf);
  exitCode = distCacheEmulator.setupGenerateDistCacheData(jsp);
{code}
jsp should be closed upon return from setupDistCacheEmulation().


> JobStoryProducer is not closed upon return from 
> Gridmix#setupDistCacheEmulation()
> -
>
> Key: HADOOP-11229
> URL: https://issues.apache.org/jira/browse/HADOOP-11229
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: skrho
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11229.v3.patch, HADOOP-11229_001.patch, 
> HADOOP-11229_002.patch
>
>
> Here is related code:
> {code}
>   JobStoryProducer jsp = createJobStoryProducer(traceIn, conf);
>   exitCode = distCacheEmulator.setupGenerateDistCacheData(jsp);
> {code}
> jsp should be closed upon return from setupDistCacheEmulation().
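
A minimal sketch of the close-on-all-paths pattern, reusing the names from the 
snippet above (assumes JobStoryProducer exposes a close() method):

{code}
JobStoryProducer jsp = createJobStoryProducer(traceIn, conf);
try {
  exitCode = distCacheEmulator.setupGenerateDistCacheData(jsp);
} finally {
  jsp.close();  // release the trace input even if setup throws
}
{code}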



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13131) Add tests to verify that S3A supports SSE-S3 encryption

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15307757#comment-15307757
 ] 

Hadoop QA commented on HADOOP-13131:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 39s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
38s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 33s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 20s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
23s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 16s 
{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 9s 
{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 30s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 21s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 21s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 23s 
{color} | {color:red} root: The patch generated 15 new + 91 unchanged - 48 
fixed = 106 total (was 139) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 49 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 13s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 33s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | 

[jira] [Commented] (HADOOP-12991) Conflicting default ports in DelegateToFileSystem

2016-05-31 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15307748#comment-15307748
 ] 

Kai Sasaki commented on HADOOP-12991:
-

[~brahmareddy] Could you review this when you have time?

> Conflicting default ports in DelegateToFileSystem
> -
>
> Key: HADOOP-12991
> URL: https://issues.apache.org/jira/browse/HADOOP-12991
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.2
>Reporter: Kevin Hogeland
>Assignee: Kai Sasaki
> Attachments: HADOOP-12991.01.patch
>
>
> HADOOP-12304 introduced logic to ensure that the {{DelegateToFileSystem}} 
> constructor sets the default port to -1:
> {code:title=DelegateToFileSystem.java}
>   protected DelegateToFileSystem(URI theUri, FileSystem theFsImpl,
>   Configuration conf, String supportedScheme, boolean authorityRequired)
>   throws IOException, URISyntaxException {
> super(theUri, supportedScheme, authorityRequired, 
> getDefaultPortIfDefined(theFsImpl));
> fsImpl = theFsImpl;
> fsImpl.initialize(theUri, conf);
> fsImpl.statistics = getStatistics();
>   }
>   private static int getDefaultPortIfDefined(FileSystem theFsImpl) {
> int defaultPort = theFsImpl.getDefaultPort();
> return defaultPort != 0 ? defaultPort : -1;
>   }
> {code}
> However, {{DelegateToFileSystem#getUriDefaultPort}} returns 0:
> {code:title=DelegateToFileSystem.java}
>   public int getUriDefaultPort() {
> return 0;
>   }
> {code}
> This breaks {{AbstractFileSystem#checkPath}}:
> {code:title=AbstractFileSystem.java}
> int thisPort = this.getUri().getPort(); // If using DelegateToFileSystem, 
> this is -1
> int thatPort = uri.getPort(); // This is -1 by default in java.net.URI
> if (thatPort == -1) {
>   thatPort = this.getUriDefaultPort();  // Sets thatPort to 0
> }
> if (thisPort != thatPort) {
>   throw new InvalidPathException("Wrong FS: " + path + ", expected: "
>   + this.getUri());
> }
> {code}
> Which breaks any subclasses of {{DelegateToFileSystem}} that don't specify a 
> port (S3n, Wasb(s)).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13132) LoadBalancingKMSClientProvider ClassCastException on AuthenticationException

2016-05-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15307705#comment-15307705
 ] 

Hadoop QA commented on HADOOP-13132:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 53s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 2s {color} | 
{color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 8s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
|   | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12807140/HADOOP-13132.004.patch
 |
| JIRA Issue | HADOOP-13132 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0dff262239cf 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 93d8a7f |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9620/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/9620/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9620/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9620/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> LoadBalancingKMSClientProvider ClassCastException on AuthenticationException
> 
>
> Key: HADOOP-13132
> 

[jira] [Created] (HADOOP-13223) winutils.exe is an abomination and should be killed with an axe.

2016-05-31 Thread john lilley (JIRA)
john lilley created HADOOP-13223:


 Summary: winutils.exe is an abomination and should be killed with 
an axe.
 Key: HADOOP-13223
 URL: https://issues.apache.org/jira/browse/HADOOP-13223
 Project: Hadoop Common
  Issue Type: Improvement
  Components: bin
Affects Versions: 2.6.0
 Environment: Microsoft Windows, all versions
Reporter: john lilley


winutils.exe was apparently created as a stopgap measure to allow Hadoop to 
"work" on Windows platforms, because the NativeIO libraries aren't implemented 
there.  Rather than building a DLL that makes native OS calls, the creators of 
winutils.exe must have decided that it would be more expedient to create an EXE 
to carry out file system operations in a Linux-like fashion.  Unfortunately, 
like many stopgap measures in software, this one has persisted well beyond its 
expected lifetime and usefulness.  My team creates software that runs on 
Windows and Linux, and winutils.exe is probably responsible for 20% of all 
issues we encounter, both during development and in the field.

Problem #1 with winutils.exe is that it is simply missing from many popular 
distros and/or the client-side software installation for said distros, when 
supplied, fails to install winutils.exe.  Thus, as software developers, we are 
forced to pick one version and distribute and install it with our software.

Which leads to problem #2: different builds of winutils.exe are not always 
compatible.  In particular, MapR MUST have its winutils.exe in the system path, 
but doing so 
breaks the Hadoop distro for every other Hadoop vendor.  This makes creating 
and maintaining test environments that work with all of the Hadoop distros we 
want to test unnecessarily tedious and error-prone.

Problem #3 is that the mechanism by which you inform the Hadoop client software 
where to find winutils.exe is poorly documented and fragile.  First, it can be 
in the PATH.  If it is in the PATH, that is where it is found.  However, the 
documentation, such as it is, makes no mention of this, and instead says that 
you should set the HADOOP_HOME environment variable, which does NOT override 
the winutils.exe found in your system PATH.
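
For reference, a commonly used client-side workaround (a hedged sketch based on 
how {{org.apache.hadoop.util.Shell}} locates winutils.exe in Hadoop 2.x; the 
path below is a placeholder) is to set the {{hadoop.home.dir}} system property 
before any Hadoop client class loads, since Shell consults that property ahead 
of the HADOOP_HOME environment variable:
{code:title=ClientBootstrap.java (sketch)}
public class ClientBootstrap {
  public static void main(String[] args) {
    // Shell reads the hadoop.home.dir system property first, then the
    // HADOOP_HOME environment variable; it expects bin\winutils.exe under it.
    // "C:\\hadoop" is a placeholder path, not a recommendation.
    System.setProperty("hadoop.home.dir", "C:\\hadoop");
    // ... only now touch Configuration / FileSystem / other client classes.
  }
}
{code}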

Which leads to problem #4: There is no logging that says where winutils.exe was 
actually found and loaded.  Because of this, fixing problems of finding the 
wrong winutils.exe is extremely difficult.

Problem #5 is that most of the time, such as when accessing straight up HDFS 
and YARN, one does not *need* winutils.exe.  But if it is missing, the log 
messages complain about its absence.  When we are trying to diagnose an obscure 
issue in Hadoop (of which there are many), the presence of this red herring 
leads to all sorts of time wasted until someone on the team points out that 
winutils.exe is not the problem, at least not this time.

Problem #6 is that errors and stack traces from issues involving winutils.exe 
are not helpful.  The Java stack trace ends at the ProcessBuilder call.  Only 
through bitter experience is one able to connect the dots from "ProcessBuilder 
is the last thing on the stack" to "something is wrong with winutils.exe".

Note that none of these involve running Hadoop on Windows.  They are only 
encountered when using Hadoop client libraries to access a cluster from Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13131) Add tests to verify that S3A supports SSE-S3 encryption

2016-05-31 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13131:

Status: Patch Available  (was: Open)

Tested against AWS Ireland.

> Add tests to verify that S3A supports SSE-S3 encryption
> ---
>
> Key: HADOOP-13131
> URL: https://issues.apache.org/jira/browse/HADOOP-13131
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13131-001.patch, HADOOP-13131-002.patch, 
> HADOOP-13131-003.patch, HADOOP-13131-004.patch, HADOOP-13131-005.patch, 
> HADOOP-13131-branch-2-006.patch, HADOOP-13131-branch-2-007.patch, 
> HADOOP-13171-branch-2-008.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Although S3A claims to support server-side S3 encryption (and does, if you 
> set the option), we don't have any test to verify this. Of course, as the 
> encryption is transparent, it's hard to test.
> Here's what I propose
> # a test which sets encryption = AES256; expects things to work as normal.
> # a test which sets encryption = DES and expects any operation creating a file 
> or directory to fail with a 400 "bad request" error (see the sketch below)
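
A rough sketch of that second test, assuming JUnit 4, the 
{{fs.s3a.server-side-encryption-algorithm}} key, and a placeholder bucket name; 
how the 400 response surfaces as an exception may differ, so this is 
illustrative rather than the actual patch:
{code:title=TestS3AEncryptionSketch.java}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.Test;

import static org.junit.Assert.assertTrue;
import static org.junit.Assert.fail;

public class TestS3AEncryptionSketch {

  @Test
  public void testCreateRejectsUnknownAlgorithm() throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.s3a.server-side-encryption-algorithm", "DES");
    // "s3a://test-bucket/" is a placeholder; real S3A tests read the target
    // bucket from the test configuration.
    FileSystem fs = FileSystem.newInstance(URI.create("s3a://test-bucket/"), conf);
    try {
      // S3A uploads on close(), so the 400 surfaces there if not earlier.
      fs.create(new Path("/test/unknown-sse")).close();
      fail("expected the create to be rejected with a 400 \"bad request\"");
    } catch (Exception expected) {
      assertTrue(expected.toString(), expected.toString().contains("400"));
    } finally {
      fs.close();
    }
  }
}
{code}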



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13221) s3a create() doesn't check for a parent path being a file

2016-05-31 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15307667#comment-15307667
 ] 

Steve Loughran commented on HADOOP-13221:
-

HADOOP-12667 adds a createNonRecursive operation (and test), which fails if the 
parent isn't there. The S3a create code could support this option too, albeit 
with a check for the parent existing as a directory, not a file. The current 
create() code would have to be factored out into something which both create() 
and createNonRecursive() could call, with the startup check being slightly 
different.

Notably, because {{createNonRecursive()}} explicitly checks only the immediate 
parent dir, it is guaranteed to be O(1), with 1..3 HTTP requests made of S3 to 
verify that the parent dir exists and is a directory.
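
To make the intended scan concrete, a rough sketch of the ancestor check 
(method name and placement are assumptions for illustration, not code from any 
patch):
{code:title=S3AFileSystem.java (sketch)}
  private void verifyNoAncestorIsFile(Path path) throws IOException {
    Path ancestor = path.getParent();
    while (ancestor != null && !ancestor.isRoot()) {
      try {
        FileStatus status = getFileStatus(ancestor);
        if (status.isFile()) {
          throw new FileAlreadyExistsException("Cannot create " + path
              + ": ancestor " + ancestor + " is a file");
        }
        // The first existing ancestor is a directory, so everything above it
        // must be a directory too: stop here. This keeps the scan
        // O(empty-directories) rather than O(directories).
        return;
      } catch (FileNotFoundException e) {
        // Ancestor absent: keep walking up the tree.
        ancestor = ancestor.getParent();
      }
    }
  }
{code}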

> s3a create() doesn't check for a parent path being a file
> -
>
> Key: HADOOP-13221
> URL: https://issues.apache.org/jira/browse/HADOOP-13221
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Rajesh Balamohan
>
> Seen in a code review. Notable that if true, this got by all the FS contract 
> tests —showing we missed a couple.
> {{S3AFilesystem.create()}} does not examine its parent paths to verify that 
> there does not exist one which is a file. It looks for the destination path 
> if overwrite=false (see HADOOP-13188 for issues there), but it doesn't check 
> the parent for not being a file, or the parent of that path.
> It must go up the tree, verifying that either a path does not exist, or that 
> the path is a directory. The scan can stop at the first entry which is a 
> directory, thus the operation is O(empty-directories) and not O(directories).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


