[jira] [Commented] (HADOOP-11151) failed to create (put, copyFromLocal, cp, etc.) file in encryption zone after one day running

2014-10-01 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14156093#comment-14156093
 ] 

Andrew Wang commented on HADOOP-11151:
--

I also ran apache-rat:check successfully with this patch applied, so I'm not 
sure what's up with that.

> failed to create (put, copyFromLocal, cp, etc.) file in encryption zone after 
> one day running
> -
>
> Key: HADOOP-11151
> URL: https://issues.apache.org/jira/browse/HADOOP-11151
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: zhubin
>Assignee: Arun Suresh
> Attachments: HADOOP-11151.1.patch, HADOOP-11151.2.patch, 
> HADOOP-11151.3.patch
>
>
> Enabled CFS and the KMS service in the cluster; initially it worked to 
> put/copy files into the encryption zone. But after a while (perhaps one day), 
> it fails to put/copy files into the encryption zone with the error
> java.util.concurrent.ExecutionException: java.io.IOException: HTTP status 
> [403], message [Forbidden]
> The kms.log shows the following:
> AbstractDelegationTokenSecretManager - Updating the current master key for 
> generating delegation tokens
> 2014-09-29 13:18:46,599 WARN  AuthenticationFilter - AuthenticationToken 
> ignored: org.apache.hadoop.security.authentication.util.SignerException: 
> Invalid signature
> 2014-09-29 13:18:46,599 WARN  AuthenticationFilter - Authentication 
> exception: Anonymous requests are disallowed
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Anonymous requests are disallowed
> at 
> org.apache.hadoop.security.authentication.server.PseudoAuthenticationHandler.authenticate(PseudoAuthenticationHandler.java:184)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:331)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:507)
> at 
> org.apache.hadoop.crypto.key.kms.server.KMSAuthenticationFilter.doFilter(KMSAuthenticationFilter.java:129)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
> at 
> org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:861)
> at 
> org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:606)
> at 
> org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
> at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11151) failed to create (put, copyFromLocal, cp, etc.) file in encryption zone after one day running

2014-10-01 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14156089#comment-14156089
 ] 

Andrew Wang commented on HADOOP-11151:
--

Hi Arun, thanks for revving. A few more comments:

- Nit: authRetry can be made final
- Could you comment on why authToken needs to be volatile?
- In the new if statement, I don't follow a few things. Why is the special case 
we need to handle a "success" status code rather than an error one? Also, don't 
we need to do another retry in this case? Did the operation succeed even though 
we need to refresh the authToken?
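
For readers following along, here is a minimal sketch of the 
retry-on-auth-failure pattern under discussion; the names are hypothetical 
stand-ins, not the actual patch:

{code:java}
import java.util.concurrent.Callable;

class AuthRetryingClient {
  private volatile Object authToken;  // shared across threads, hence volatile
  private final int authRetry;        // read from config once, hence final

  AuthRetryingClient(int authRetry) {
    this.authRetry = authRetry;
  }

  <T> T call(Callable<T> op) throws Exception {
    int attempts = 0;
    while (true) {
      try {
        return op.call();
      } catch (SecurityException e) {  // stand-in for an auth failure
        if (attempts++ >= authRetry) {
          throw e;
        }
        authToken = null;  // drop the possibly-stale token and retry
      }
    }
  }
}
{code}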

> failed to create (put, copyFromLocal, cp, etc.) file in encryption zone after 
> one day running
> -
>
> Key: HADOOP-11151
> URL: https://issues.apache.org/jira/browse/HADOOP-11151
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: zhubin
>Assignee: Arun Suresh
> Attachments: HADOOP-11151.1.patch, HADOOP-11151.2.patch, 
> HADOOP-11151.3.patch
>
>
> Enabled CFS and the KMS service in the cluster; initially it worked to 
> put/copy files into the encryption zone. But after a while (perhaps one day), 
> it fails to put/copy files into the encryption zone with the error
> java.util.concurrent.ExecutionException: java.io.IOException: HTTP status 
> [403], message [Forbidden]
> The kms.log shows the following:
> AbstractDelegationTokenSecretManager - Updating the current master key for 
> generating delegation tokens
> 2014-09-29 13:18:46,599 WARN  AuthenticationFilter - AuthenticationToken 
> ignored: org.apache.hadoop.security.authentication.util.SignerException: 
> Invalid signature
> 2014-09-29 13:18:46,599 WARN  AuthenticationFilter - Authentication 
> exception: Anonymous requests are disallowed
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Anonymous requests are disallowed
> at 
> org.apache.hadoop.security.authentication.server.PseudoAuthenticationHandler.authenticate(PseudoAuthenticationHandler.java:184)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:331)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:507)
> at 
> org.apache.hadoop.crypto.key.kms.server.KMSAuthenticationFilter.doFilter(KMSAuthenticationFilter.java:129)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
> at 
> org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:861)
> at 
> org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:606)
> at 
> org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
> at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10741) A lightweight WebHDFS client library

2014-10-01 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14156055#comment-14156055
 ] 

Jakob Homan commented on HADOOP-10741:
--

The REST protocol exposed by the NameNode and consumed by the WebHDFS 
FileSystem implementation is extremely valuable. It's the easiest point of 
access for non-JVM clients.  A non-oah.FileSystem consumer implementation will 
exist, whether it's in Hadoop proper or out in github limbo.  It'd be better to 
have the library here to avoid bitrot of the client, drift in implementations 
and duplicated work.  Going further (and in a future JIRA), we should look at 
codifying the server-side REST protocol through something like 
[RAML|http://raml.org/] or [Swagger|https://helloreverb.com/developers/swagger] 
so that it's easy for other systems to offer access through it, in the same way 
that other implementations of oah.FileSystem make those systems accessible.

> A lightweight WebHDFS client library
> 
>
> Key: HADOOP-10741
> URL: https://issues.apache.org/jira/browse/HADOOP-10741
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: tools
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Mohammad Kamrul Islam
>
> One of the motivations for creating WebHDFS is for applications connecting to 
> HDFS from outside the cluster.  In order to do so, users have to either
> # install Hadoop and use WebHdfsFileSystem, or
> # develop their own client using the WebHDFS REST API.
> For #1, it is very difficult to manage and unnecessarily complicated for 
> other applications since Hadoop is not a lightweight library.  For #2, it is 
> not easy to deal with security and handle transient errors.
> Therefore, we propose adding a lightweight WebHDFS client as a separate 
> library which does not depend on Common and HDFS.  The client can be packaged 
> as a standalone jar.  Other applications simply add the jar to their 
> classpath for using it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10741) A lightweight WebHDFS client library

2014-10-01 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14155967#comment-14155967
 ] 

Sanjay Radia commented on HADOOP-10741:
---


I see part of the counter-argument being that folks using REST are doing it 
for one of two reasons:
1) Protocol compatibility - this was the original motivation back when HDFS 
protocols were not compatible across some versions. This has since been fixed.
2) They want a lightweight client that is independent of any version of HDFS. 
However, as Mohammad has pointed out in his description, customers using the 
WebHDFS REST protocol find that managing failures, auth, etc. is painful, 
hence a library would help.
I can see Andrew's argument for putting it outside Hadoop Common to better 
satisfy (2). We can decide the exact mechanism for distributing this library 
later.
Note the goal of this library is *not* another FS API but a client-side library 
that wraps HDFS's REST protocol. It is a valid question whether this API should 
mimic the actual Hadoop FS API.
Mohammad, please post the patch. We will figure out the mechanism for 
distributing that library separately. Thanks.
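
As a concrete illustration of what such a library would wrap, here is a 
minimal sketch of calling the WebHDFS REST API with only the JDK; the host, 
port, and path are hypothetical, and the painful parts noted above (auth, 
failure handling) are deliberately omitted:

{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class WebHdfsListStatus {
  public static void main(String[] args) throws Exception {
    // LISTSTATUS on /tmp via the WebHDFS REST endpoint; no Hadoop jars needed.
    URL url = new URL(
        "http://namenode.example.com:50070/webhdfs/v1/tmp?op=LISTSTATUS");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("GET");
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream()))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line);  // raw JSON FileStatuses response
      }
    } finally {
      conn.disconnect();
    }
  }
}
{code}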

> A lightweight WebHDFS client library
> 
>
> Key: HADOOP-10741
> URL: https://issues.apache.org/jira/browse/HADOOP-10741
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: tools
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Mohammad Kamrul Islam
>
> One of the motivations for creating WebHDFS is for applications connecting to 
> HDFS from outside the cluster.  In order to do so, users have to either
> # install Hadoop and use WebHdfsFileSystem, or
> # develop their own client using the WebHDFS REST API.
> For #1, it is very difficult to manage and unnecessarily complicated for 
> other applications since Hadoop is not a lightweight library.  For #2, it is 
> not easy to deal with security and handle transient errors.
> Therefore, we propose adding a lightweight WebHDFS client as a separate 
> library which does not depend on Common and HDFS.  The client can be packaged 
> as a standalone jar.  Other applications simply add the jar to their 
> classpath for using it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10809) hadoop-azure: page blob support

2014-10-01 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14155934#comment-14155934
 ] 

Chris Nauroth commented on HADOOP-10809:


Hi, [~ehans].  There has been some trouble with patches containing Windows line 
endings since the recent migration from Subversion to Git.  I'll investigate 
for you.

> hadoop-azure: page blob support
> ---
>
> Key: HADOOP-10809
> URL: https://issues.apache.org/jira/browse/HADOOP-10809
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Reporter: Mike Liddell
>Assignee: Eric Hanson
> Attachments: HADOOP-10809.02.patch, HADOOP-10809.03.patch, 
> HADOOP-10809.04.patch, HADOOP-10809.05.patch, HADOOP-10809.06.patch, 
> HADOOP-10809.07.patch, HADOOP-10809.1.patch
>
>
> Azure Blob Storage provides two flavors: block-blobs and page-blobs.  
> Block-blobs are the general purpose kind that support convenient APIs and are 
> the basis for the Azure Filesystem for Hadoop (see HADOOP-9629).
> Page-blobs use the same namespace as block-blobs but provide a different 
> low-level feature set.  Most importantly, page-blobs can cope with an 
> effectively infinite number of small accesses whereas block-blobs can only 
> tolerate 50K appends before relatively manual rewriting of the data is 
> necessary.  A simple analogy is that page-blobs are like a regular disk and 
> the basic API is like a low-level device driver.
> See http://msdn.microsoft.com/en-us/library/azure/ee691964.aspx for some 
> introductory material.
> The primary driving scenario for page-blob support is for HBase transaction 
> log files which require an access pattern of many small writes.  Additional 
> scenarios can also be supported.
> Configuration:
> The Hadoop Filesystem abstraction needs a mechanism so that file-create can 
> determine whether to create a block- or page-blob.  To permit scenarios where 
> application code doesn't know about the details of Azure storage, we would 
> like the configuration to be Aspect-style, i.e. configured by the Administrator 
> and transparent to the application. The current solution is to use hadoop 
> configuration to declare a list of page-blob folders -- Azure Filesystem for 
> Hadoop will create files in these folders using page-blob flavor.  The 
> configuration key is "fs.azure.page.blob.dir", and description can be found 
> in AzureNativeFileSystemStore.java.
> Code changes:
> - refactor of basic Azure Filesystem code to use a general BlobWrapper and 
> specialized BlockBlobWrapper vs PageBlobWrapper
> - introduction of PageBlob support (read, write, etc)
> - miscellaneous changes such as umask handling, implementation of 
> createNonRecursive(), flush/hflush/hsync.
> - new unit tests.
> Credit for the primary patch: Dexter Bradshaw, Mostafa Elhemali, Eric Hanson, 
> Mike Liddell.
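
For concreteness, a minimal sketch of the configuration described above; the 
folder paths are hypothetical examples, and a comma-separated list is assumed:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class PageBlobConfigExample {
  public static void main(String[] args) {
    // Files created under the listed (hypothetical) folders would use the
    // page-blob flavor, per the description above; other files stay block blobs.
    Configuration conf = new Configuration();
    conf.set("fs.azure.page.blob.dir", "/hbase/WALs,/hbase/oldWALs");
    System.out.println(conf.get("fs.azure.page.blob.dir"));
  }
}
{code}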



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10809) hadoop-azure: page blob support

2014-10-01 Thread Eric Hanson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14155932#comment-14155932
 ] 

Eric Hanson commented on HADOOP-10809:
--

[~cnauroth] Do you have any idea what's going wrong here and how I can fix it 
to make the patch apply? My enlistment was cloned with the command "git clone 
git://git.apache.org/hadoop-common.git" and I created the patch from inside the 
hadoop-common folder that was created when cloning.

> hadoop-azure: page blob support
> ---
>
> Key: HADOOP-10809
> URL: https://issues.apache.org/jira/browse/HADOOP-10809
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Reporter: Mike Liddell
>Assignee: Eric Hanson
> Attachments: HADOOP-10809.02.patch, HADOOP-10809.03.patch, 
> HADOOP-10809.04.patch, HADOOP-10809.05.patch, HADOOP-10809.06.patch, 
> HADOOP-10809.07.patch, HADOOP-10809.1.patch
>
>
> Azure Blob Storage provides two flavors: block-blobs and page-blobs.  
> Block-blobs are the general purpose kind that support convenient APIs and are 
> the basis for the Azure Filesystem for Hadoop (see HADOOP-9629).
> Page-blobs use the same namespace as block-blobs but provide a different 
> low-level feature set.  Most importantly, page-blobs can cope with an 
> effectively infinite number of small accesses whereas block-blobs can only 
> tolerate 50K appends before relatively manual rewriting of the data is 
> necessary.  A simple analogy is that page-blobs are like a regular disk and 
> the basic API is like a low-level device driver.
> See http://msdn.microsoft.com/en-us/library/azure/ee691964.aspx for some 
> introductory material.
> The primary driving scenario for page-blob support is for HBase transaction 
> log files which require an access pattern of many small writes.  Additional 
> scenarios can also be supported.
> Configuration:
> The Hadoop Filesystem abstraction needs a mechanism so that file-create can 
> determine whether to create a block- or page-blob.  To permit scenarios where 
> application code doesn't know about the details of Azure storage, we would 
> like the configuration to be Aspect-style, i.e. configured by the Administrator 
> and transparent to the application. The current solution is to use hadoop 
> configuration to declare a list of page-blob folders -- Azure Filesystem for 
> Hadoop will create files in these folders using page-blob flavor.  The 
> configuration key is "fs.azure.page.blob.dir", and description can be found 
> in AzureNativeFileSystemStore.java.
> Code changes:
> - refactor of basic Azure Filesystem code to use a general BlobWrapper and 
> specialized BlockBlobWrapper vs PageBlobWrapper
> - introduction of PageBlob support (read, write, etc)
> - miscellaneous changes such as umask handling, implementation of 
> createNonRecursive(), flush/hflush/hsync.
> - new unit tests.
> Credit for the primary patch: Dexter Bradshaw, Mostafa Elhemali, Eric Hanson, 
> Mike Liddell.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10809) hadoop-azure: page blob support

2014-10-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14155924#comment-14155924
 ] 

Hadoop QA commented on HADOOP-10809:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12672463/HADOOP-10809.07.patch
  against trunk revision 9e40de6.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4849//console

This message is automatically generated.

> hadoop-azure: page blob support
> ---
>
> Key: HADOOP-10809
> URL: https://issues.apache.org/jira/browse/HADOOP-10809
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Reporter: Mike Liddell
>Assignee: Eric Hanson
> Attachments: HADOOP-10809.02.patch, HADOOP-10809.03.patch, 
> HADOOP-10809.04.patch, HADOOP-10809.05.patch, HADOOP-10809.06.patch, 
> HADOOP-10809.07.patch, HADOOP-10809.1.patch
>
>
> Azure Blob Storage provides two flavors: block-blobs and page-blobs.  
> Block-blobs are the general purpose kind that support convenient APIs and are 
> the basis for the Azure Filesystem for Hadoop (see HADOOP-9629).
> Page-blobs use the same namespace as block-blobs but provide a different 
> low-level feature set.  Most importantly, page-blobs can cope with an 
> effectively infinite number of small accesses whereas block-blobs can only 
> tolerate 50K appends before relatively manual rewriting of the data is 
> necessary.  A simple analogy is that page-blobs are like a regular disk and 
> the basic API is like a low-level device driver.
> See http://msdn.microsoft.com/en-us/library/azure/ee691964.aspx for some 
> introductory material.
> The primary driving scenario for page-blob support is for HBase transaction 
> log files which require an access pattern of many small writes.  Additional 
> scenarios can also be supported.
> Configuration:
> The Hadoop Filesystem abstraction needs a mechanism so that file-create can 
> determine whether to create a block- or page-blob.  To permit scenarios where 
> application code doesn't know about the details of Azure storage, we would 
> like the configuration to be Aspect-style, i.e. configured by the Administrator 
> and transparent to the application. The current solution is to use hadoop 
> configuration to declare a list of page-blob folders -- Azure Filesystem for 
> Hadoop will create files in these folders using page-blob flavor.  The 
> configuration key is "fs.azure.page.blob.dir", and description can be found 
> in AzureNativeFileSystemStore.java.
> Code changes:
> - refactor of basic Azure Filesystem code to use a general BlobWrapper and 
> specialized BlockBlobWrapper vs PageBlobWrapper
> - introduction of PageBlob support (read, write, etc)
> - miscellaneous changes such as umask handling, implementation of 
> createNonRecursive(), flush/hflush/hsync.
> - new unit tests.
> Credit for the primary patch: Dexter Bradshaw, Mostafa Elhemali, Eric Hanson, 
> Mike Liddell.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10809) hadoop-azure: page blob support

2014-10-01 Thread Eric Hanson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Hanson updated HADOOP-10809:
-
Attachment: HADOOP-10809.07.patch

I am stumped as to why patch 06 did not apply. It applies fine on trunk for me 
using patch -p1.

I created this new patch like so:

git diff --no-prefix trunk HEAD > C:\temp\HADOOP-10809.07.patch

So it should apply with -p0, since --no-prefix omits the a/ and b/ path 
prefixes that -p1 would strip. But that shouldn't matter. Anyway, I'm trying 
again.

> hadoop-azure: page blob support
> ---
>
> Key: HADOOP-10809
> URL: https://issues.apache.org/jira/browse/HADOOP-10809
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Reporter: Mike Liddell
>Assignee: Eric Hanson
> Attachments: HADOOP-10809.02.patch, HADOOP-10809.03.patch, 
> HADOOP-10809.04.patch, HADOOP-10809.05.patch, HADOOP-10809.06.patch, 
> HADOOP-10809.07.patch, HADOOP-10809.1.patch
>
>
> Azure Blob Storage provides two flavors: block-blobs and page-blobs.  
> Block-blobs are the general purpose kind that support convenient APIs and are 
> the basis for the Azure Filesystem for Hadoop (see HADOOP-9629).
> Page-blobs use the same namespace as block-blobs but provide a different 
> low-level feature set.  Most importantly, page-blobs can cope with an 
> effectively infinite number of small accesses whereas block-blobs can only 
> tolerate 50K appends before relatively manual rewriting of the data is 
> necessary.  A simple analogy is that page-blobs are like a regular disk and 
> the basic API is like a low-level device driver.
> See http://msdn.microsoft.com/en-us/library/azure/ee691964.aspx for some 
> introductory material.
> The primary driving scenario for page-blob support is for HBase transaction 
> log files which require an access pattern of many small writes.  Additional 
> scenarios can also be supported.
> Configuration:
> The Hadoop Filesystem abstraction needs a mechanism so that file-create can 
> determine whether to create a block- or page-blob.  To permit scenarios where 
> application code doesn't know about the details of Azure storage, we would 
> like the configuration to be Aspect-style, i.e. configured by the Administrator 
> and transparent to the application. The current solution is to use hadoop 
> configuration to declare a list of page-blob folders -- Azure Filesystem for 
> Hadoop will create files in these folders using page-blob flavor.  The 
> configuration key is "fs.azure.page.blob.dir", and description can be found 
> in AzureNativeFileSystemStore.java.
> Code changes:
> - refactor of basic Azure Filesystem code to use a general BlobWrapper and 
> specialized BlockBlobWrapper vs PageBlobWrapper
> - introduction of PageBlob support (read, write, etc)
> - miscellaneous changes such as umask handling, implementation of 
> createNonRecursive(), flush/hflush/hsync.
> - new unit tests.
> Credit for the primary patch: Dexter Bradshaw, Mostafa Elhemali, Eric Hanson, 
> Mike Liddell.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11158) KerberosAuthenticationHandler should have reasonable default for

2014-10-01 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-11158:

Status: Open  (was: Patch Available)

> KerberosAuthenticationHandler should have reasonable default for 
> -
>
> Key: HADOOP-11158
> URL: https://issues.apache.org/jira/browse/HADOOP-11158
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.1
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HADOOP-11158.patch
>
>
> The {{KerberosAuthenticationHandler}} currently defines no default value for 
> the "{{.kerberos.name.rules}}" config key, which means that users who use 
> this class and don't set this setting will get an NPE. We should set the 
> default for this "DEFAULT" like it is for the other Kerberos name mapping 
> properties in Hadoop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11158) KerberosAuthenticationHandler should have reasonable default for

2014-10-01 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14155891#comment-14155891
 ] 

Aaron T. Myers commented on HADOOP-11158:
-

[~tucu00] is on vacation but he told me offline that this patch won't quite 
work as-is, so canceling it for now. Hopefully he can comment with some details 
as to what we should do when he returns.

> KerberosAuthenticationHandler should have reasonable default for 
> -
>
> Key: HADOOP-11158
> URL: https://issues.apache.org/jira/browse/HADOOP-11158
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.1
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HADOOP-11158.patch
>
>
> The {{KerberosAuthenticationHandler}} currently defines no default value for 
> the "{{.kerberos.name.rules}}" config key, which means that users who use 
> this class and don't set this setting will get an NPE. We should set the 
> default for this "DEFAULT" like it is for the other Kerberos name mapping 
> properties in Hadoop.
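
For context, a minimal sketch of the kind of defaulting being proposed; this 
is not the actual patch:

{code:java}
import java.util.Properties;
import org.apache.hadoop.security.authentication.util.KerberosName;

public class NameRulesDefaultSketch {
  public static void main(String[] args) {
    // Sketch only: fall back to "DEFAULT" when the "kerberos.name.rules"
    // property is unset, instead of leaving the rules null and risking an
    // NPE later on.
    Properties config = new Properties();  // the handler's init properties
    String rules = config.getProperty("kerberos.name.rules", "DEFAULT");
    KerberosName.setRules(rules);
  }
}
{code}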



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11084) jenkins patchprocess links are broken

2014-10-01 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14155811#comment-14155811
 ] 

Tsuyoshi OZAWA commented on HADOOP-11084:
-

Hi, the findbugs links are sometimes broken on some JIRAs (e.g. YARN-2312, 
HADOOP-11032). In my local environment, I confirmed that I cannot find the 
findbugs warnings with the patches applied. Do you have any idea about this 
problem?

> jenkins patchprocess links are broken
> -
>
> Key: HADOOP-11084
> URL: https://issues.apache.org/jira/browse/HADOOP-11084
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Reporter: Colin Patrick McCabe
>Assignee: Arpit Agarwal
> Fix For: 3.0.0
>
> Attachments: HADOOP-11084.001.patch, HADOOP-11084.002.patch, 
> HADOOP-11084.003.patch
>
>
> jenkins patchprocess links of the form 
> {{https://builds.apache.org/job/PreCommit-HADOOP-Build///artifact/trunk/patchprocess/diffJavadocWarnings.txt}}
>  and so forth are dead links.  We should fix them to reflect the new source 
> layout after git.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11151) failed to create (put, copyFromLocal, cp, etc.) file in encryption zone after one day running

2014-10-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14155800#comment-14155800
 ] 

Hadoop QA commented on HADOOP-11151:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12672444/HADOOP-11151.3.patch
  against trunk revision 52bbe0f.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms:

  org.apache.hadoop.crypto.key.kms.server.TestKMS

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4848//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4848//artifact/patchprocess/patchReleaseAuditProblems.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4848//console

This message is automatically generated.

> failed to create (put, copyFromLocal, cp, etc.) file in encryption zone after 
> one day running
> -
>
> Key: HADOOP-11151
> URL: https://issues.apache.org/jira/browse/HADOOP-11151
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: zhubin
>Assignee: Arun Suresh
> Attachments: HADOOP-11151.1.patch, HADOOP-11151.2.patch, 
> HADOOP-11151.3.patch
>
>
> Enabled CFS and the KMS service in the cluster; initially it worked to 
> put/copy files into the encryption zone. But after a while (perhaps one day), 
> it fails to put/copy files into the encryption zone with the error
> java.util.concurrent.ExecutionException: java.io.IOException: HTTP status 
> [403], message [Forbidden]
> The kms.log shows the following:
> AbstractDelegationTokenSecretManager - Updating the current master key for 
> generating delegation tokens
> 2014-09-29 13:18:46,599 WARN  AuthenticationFilter - AuthenticationToken 
> ignored: org.apache.hadoop.security.authentication.util.SignerException: 
> Invalid signature
> 2014-09-29 13:18:46,599 WARN  AuthenticationFilter - Authentication 
> exception: Anonymous requests are disallowed
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Anonymous requests are disallowed
> at 
> org.apache.hadoop.security.authentication.server.PseudoAuthenticationHandler.authenticate(PseudoAuthenticationHandler.java:184)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:331)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:507)
> at 
> org.apache.hadoop.crypto.key.kms.server.KMSAuthenticationFilter.doFilter(KMSAuthenticationFilter.java:129)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
> at 
> org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:861)
> at 
> org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:606)
> at 
> org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
> at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11151) failed to create (put, copyFromLocal, cp, etc.) file in encryption zone after one day running

2014-10-01 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HADOOP-11151:
-
Attachment: HADOOP-11151.3.patch

Uploading an updated patch addressing the feedback suggestions.

Thanks for the review [~andrew.wang]. Agreed, we don't really need to do more 
than 1 retry, but I thought it safer to keep it configurable. I've changed the 
default to 1.

> failed to create (put, copyFromLocal, cp, etc.) file in encryption zone after 
> one day running
> -
>
> Key: HADOOP-11151
> URL: https://issues.apache.org/jira/browse/HADOOP-11151
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: zhubin
>Assignee: Arun Suresh
> Attachments: HADOOP-11151.1.patch, HADOOP-11151.2.patch, 
> HADOOP-11151.3.patch
>
>
> Enabled CFS and the KMS service in the cluster; initially it worked to 
> put/copy files into the encryption zone. But after a while (perhaps one day), 
> it fails to put/copy files into the encryption zone with the error
> java.util.concurrent.ExecutionException: java.io.IOException: HTTP status 
> [403], message [Forbidden]
> The kms.log shows the following:
> AbstractDelegationTokenSecretManager - Updating the current master key for 
> generating delegation tokens
> 2014-09-29 13:18:46,599 WARN  AuthenticationFilter - AuthenticationToken 
> ignored: org.apache.hadoop.security.authentication.util.SignerException: 
> Invalid signature
> 2014-09-29 13:18:46,599 WARN  AuthenticationFilter - Authentication 
> exception: Anonymous requests are disallowed
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Anonymous requests are disallowed
> at 
> org.apache.hadoop.security.authentication.server.PseudoAuthenticationHandler.authenticate(PseudoAuthenticationHandler.java:184)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:331)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:507)
> at 
> org.apache.hadoop.crypto.key.kms.server.KMSAuthenticationFilter.doFilter(KMSAuthenticationFilter.java:129)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
> at 
> org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:861)
> at 
> org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:606)
> at 
> org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
> at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10809) hadoop-azure: page blob support

2014-10-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14155651#comment-14155651
 ] 

Hadoop QA commented on HADOOP-10809:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12672427/HADOOP-10809.06.patch
  against trunk revision dd1b8f2.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4847//console

This message is automatically generated.

> hadoop-azure: page blob support
> ---
>
> Key: HADOOP-10809
> URL: https://issues.apache.org/jira/browse/HADOOP-10809
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Reporter: Mike Liddell
>Assignee: Eric Hanson
> Attachments: HADOOP-10809.02.patch, HADOOP-10809.03.patch, 
> HADOOP-10809.04.patch, HADOOP-10809.05.patch, HADOOP-10809.06.patch, 
> HADOOP-10809.1.patch
>
>
> Azure Blob Storage provides two flavors: block-blobs and page-blobs.  
> Block-blobs are the general purpose kind that support convenient APIs and are 
> the basis for the Azure Filesystem for Hadoop (see HADOOP-9629).
> Page-blobs use the same namespace as block-blobs but provide a different 
> low-level feature set.  Most importantly, page-blobs can cope with an 
> effectively infinite number of small accesses whereas block-blobs can only 
> tolerate 50K appends before relatively manual rewriting of the data is 
> necessary.  A simple analogy is that page-blobs are like a regular disk and 
> the basic API is like a low-level device driver.
> See http://msdn.microsoft.com/en-us/library/azure/ee691964.aspx for some 
> introductory material.
> The primary driving scenario for page-blob support is for HBase transaction 
> log files which require an access pattern of many small writes.  Additional 
> scenarios can also be supported.
> Configuration:
> The Hadoop Filesystem abstraction needs a mechanism so that file-create can 
> determine whether to create a block- or page-blob.  To permit scenarios where 
> application code doesn't know about the details of Azure storage, we would 
> like the configuration to be Aspect-style, i.e. configured by the Administrator 
> and transparent to the application. The current solution is to use hadoop 
> configuration to declare a list of page-blob folders -- Azure Filesystem for 
> Hadoop will create files in these folders using page-blob flavor.  The 
> configuration key is "fs.azure.page.blob.dir", and description can be found 
> in AzureNativeFileSystemStore.java.
> Code changes:
> - refactor of basic Azure Filesystem code to use a general BlobWrapper and 
> specialized BlockBlobWrapper vs PageBlobWrapper
> - introduction of PageBlob support (read, write, etc)
> - miscellaneous changes such as umask handling, implementation of 
> createNonRecursive(), flush/hflush/hsync.
> - new unit tests.
> Credit for the primary patch: Dexter Bradshaw, Mostafa Elhemali, Eric Hanson, 
> Mike Liddell.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11032) Replace use of Guava Stopwatch with Apache StopWatch

2014-10-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14155648#comment-14155648
 ] 

Hadoop QA commented on HADOOP-11032:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12672398/HADOOP-11032.3.patch
  against trunk revision 875aa79.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core:

  org.apache.hadoop.hdfs.server.namenode.TestAuditLogs

{color:red}-1 contrib tests{color}.  The patch failed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4846//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4846//artifact/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4846//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4846//console

This message is automatically generated.

> Replace use of Guava Stopwatch with Apache StopWatch
> 
>
> Key: HADOOP-11032
> URL: https://issues.apache.org/jira/browse/HADOOP-11032
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Gary Steelman
>Assignee: Tsuyoshi OZAWA
> Attachments: HADOOP-11032.1.patch, HADOOP-11032.2.patch, 
> HADOOP-11032.3.patch, HADOOP-11032.3.patch, HADOOP-11032.3.patch, 
> HADOOP-11032.3.patch, HADOOP-11032.3.patch
>
>
> This patch reduces Hadoop's dependency on an old version of guava. 
> Stopwatch.elapsedMillis() isn't part of guava past v16 and the tools I'm 
> working on use v17. 
> To remedy this and also reduce Hadoop's reliance on old versions of guava, we 
> can use the Apache StopWatch (org.apache.commons.lang.time.StopWatch) which 
> provides nearly equivalent functionality. apache.commons.lang is already a 
> dependency for Hadoop so this will not introduce new dependencies. 
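
As a reference point, a minimal before/after sketch of the swap described 
above (the commented lines show the pre-v16 Guava API named in the 
description):

{code:java}
import org.apache.commons.lang.time.StopWatch;

public class StopWatchExample {
  public static void main(String[] args) throws InterruptedException {
    // Old (Guava, pre-v16): Stopwatch sw = new Stopwatch().start();
    //                       ... sw.elapsedMillis();
    StopWatch sw = new StopWatch();
    sw.start();
    Thread.sleep(50);
    sw.stop();
    System.out.println("elapsed ms: " + sw.getTime());  // millisecond total
  }
}
{code}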



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10809) hadoop-azure: page blob support

2014-10-01 Thread Eric Hanson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Hanson updated HADOOP-10809:
-
Status: Patch Available  (was: Open)

Patch has Mike's changes plus the page blob and atomic file rename support I 
added over the summer.

> hadoop-azure: page blob support
> ---
>
> Key: HADOOP-10809
> URL: https://issues.apache.org/jira/browse/HADOOP-10809
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Reporter: Mike Liddell
>Assignee: Eric Hanson
> Attachments: HADOOP-10809.02.patch, HADOOP-10809.03.patch, 
> HADOOP-10809.04.patch, HADOOP-10809.05.patch, HADOOP-10809.06.patch, 
> HADOOP-10809.1.patch
>
>
> Azure Blob Storage provides two flavors: block-blobs and page-blobs.  
> Block-blobs are the general purpose kind that support convenient APIs and are 
> the basis for the Azure Filesystem for Hadoop (see HADOOP-9629).
> Page-blobs use the same namespace as block-blobs but provide a different 
> low-level feature set.  Most importantly, page-blobs can cope with an 
> effectively infinite number of small accesses whereas block-blobs can only 
> tolerate 50K appends before relatively manual rewriting of the data is 
> necessary.  A simple analogy is that page-blobs are like a regular disk and 
> the basic API is like a low-level device driver.
> See http://msdn.microsoft.com/en-us/library/azure/ee691964.aspx for some 
> introductory material.
> The primary driving scenario for page-blob support is for HBase transaction 
> log files which require an access pattern of many small writes.  Additional 
> scenarios can also be supported.
> Configuration:
> The Hadoop Filesystem abstraction needs a mechanism so that file-create can 
> determine whether to create a block- or page-blob.  To permit scenarios where 
> application code doesn't know about the details of Azure storage, we would 
> like the configuration to be Aspect-style, i.e. configured by the Administrator 
> and transparent to the application. The current solution is to use hadoop 
> configuration to declare a list of page-blob folders -- Azure Filesystem for 
> Hadoop will create files in these folders using page-blob flavor.  The 
> configuration key is "fs.azure.page.blob.dir", and description can be found 
> in AzureNativeFileSystemStore.java.
> Code changes:
> - refactor of basic Azure Filesystem code to use a general BlobWrapper and 
> specialized BlockBlobWrapper vs PageBlobWrapper
> - introduction of PageBlob support (read, write, etc)
> - miscellaneous changes such as umask handling, implementation of 
> createNonRecursive(), flush/hflush/hsync.
> - new unit tests.
> Credit for the primary patch: Dexter Bradshaw, Mostafa Elhemali, Eric Hanson, 
> Mike Liddell.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10809) hadoop-azure: page blob support

2014-10-01 Thread Eric Hanson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Hanson updated HADOOP-10809:
-
Attachment: HADOOP-10809.06.patch

This patch allows all the Azure file system unit tests to pass. 

> hadoop-azure: page blob support
> ---
>
> Key: HADOOP-10809
> URL: https://issues.apache.org/jira/browse/HADOOP-10809
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Reporter: Mike Liddell
>Assignee: Eric Hanson
> Attachments: HADOOP-10809.02.patch, HADOOP-10809.03.patch, 
> HADOOP-10809.04.patch, HADOOP-10809.05.patch, HADOOP-10809.06.patch, 
> HADOOP-10809.1.patch
>
>
> Azure Blob Storage provides two flavors: block-blobs and page-blobs.  
> Block-blobs are the general purpose kind that support convenient APIs and are 
> the basis for the Azure Filesystem for Hadoop (see HADOOP-9629).
> Page-blobs use the same namespace as block-blobs but provide a different 
> low-level feature set.  Most importantly, page-blobs can cope with an 
> effectively infinite number of small accesses whereas block-blobs can only 
> tolerate 50K appends before relatively manual rewriting of the data is 
> necessary.  A simple analogy is that page-blobs are like a regular disk and 
> the basic API is like a low-level device driver.
> See http://msdn.microsoft.com/en-us/library/azure/ee691964.aspx for some 
> introductory material.
> The primary driving scenario for page-blob support is for HBase transaction 
> log files which require an access pattern of many small writes.  Additional 
> scenarios can also be supported.
> Configuration:
> The Hadoop Filesystem abstraction needs a mechanism so that file-create can 
> determine whether to create a block- or page-blob.  To permit scenarios where 
> application code doesn't know about the details of Azure storage, we would 
> like the configuration to be Aspect-style, i.e. configured by the Administrator 
> and transparent to the application. The current solution is to use hadoop 
> configuration to declare a list of page-blob folders -- Azure Filesystem for 
> Hadoop will create files in these folders using page-blob flavor.  The 
> configuration key is "fs.azure.page.blob.dir", and description can be found 
> in AzureNativeFileSystemStore.java.
> Code changes:
> - refactor of basic Azure Filesystem code to use a general BlobWrapper and 
> specialized BlockBlobWrapper vs PageBlobWrapper
> - introduction of PageBlob support (read, write, etc)
> - miscellaneous changes such as umask handling, implementation of 
> createNonRecursive(), flush/hflush/hsync.
> - new unit tests.
> Credit for the primary patch: Dexter Bradshaw, Mostafa Elhemali, Eric Hanson, 
> Mike Liddell.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11032) Replace use of Guava Stopwatch with Apache StopWatch

2014-10-01 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HADOOP-11032:

Attachment: HADOOP-11032.3.patch

Ran the tests and findbugs locally, but I cannot reproduce the failures. 
Attaching the same patch again.

> Replace use of Guava Stopwatch with Apache StopWatch
> 
>
> Key: HADOOP-11032
> URL: https://issues.apache.org/jira/browse/HADOOP-11032
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Gary Steelman
>Assignee: Tsuyoshi OZAWA
> Attachments: HADOOP-11032.1.patch, HADOOP-11032.2.patch, 
> HADOOP-11032.3.patch, HADOOP-11032.3.patch, HADOOP-11032.3.patch, 
> HADOOP-11032.3.patch, HADOOP-11032.3.patch
>
>
> This patch reduces Hadoop's dependency on an old version of guava. 
> Stopwatch.elapsedMillis() isn't part of guava past v16 and the tools I'm 
> working on use v17. 
> To remedy this and also reduce Hadoop's reliance on old versions of guava, we 
> can use the Apache StopWatch (org.apache.commons.lang.time.StopWatch) which 
> provides nearly equivalent functionality. apache.commons.lang is already a 
> dependency for Hadoop so this will not introduce new dependencies. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10388) Pure native hadoop client

2014-10-01 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14155436#comment-14155436
 ] 

Colin Patrick McCabe commented on HADOOP-10388:
---

Thanks, [~thanhdo].  But there are some equally important people here: thank 
[~wangzw] for his contributions, and [~abec] and all the other people who have 
reviewed things!

> Pure native hadoop client
> -
>
> Key: HADOOP-10388
> URL: https://issues.apache.org/jira/browse/HADOOP-10388
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: HADOOP-10388
>Reporter: Binglin Chang
>Assignee: Colin Patrick McCabe
> Attachments: 2014-06-13_HADOOP-10388_design.pdf
>
>
> A pure native hadoop client has the following use cases/advantages:
> 1.  writing YARN applications in C++
> 2.  direct access to HDFS, without extra proxy overhead, compared to the 
> web/nfs interface.
> 3.  wrapping the native library to support more languages, e.g. Python
> 4.  lightweight, small footprint compared to the several hundred MB of JDK 
> and Hadoop libraries with various dependencies.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11151) failed to create (put, copyFromLocal, cp, etc.) file in encryption zone after one day running

2014-10-01 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14155262#comment-14155262
 ] 

Andrew Wang commented on HADOOP-11151:
--

Cool patch, the logic makes sense. A couple of small comments though:

* Could we avoid doing getConf().getInt on each call? The Configuration hash 
lookup isn't free.
* Is there a reason why doing more than one retry would fix the issue? It seems 
that if we get the same error even after one retry, then something else is 
wrong.
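
For illustration, a minimal sketch of the first suggestion, hoisting the 
lookup out of the per-call path; the config key name is a hypothetical 
stand-in, not the patch's:

{code:java}
import org.apache.hadoop.conf.Configuration;

class RetryConfigSketch {
  // Cached once at construction instead of getConf().getInt() per call.
  private final int authRetry;

  RetryConfigSketch(Configuration conf) {
    this.authRetry = conf.getInt("hypothetical.auth.retry-count", 1);
  }

  int getAuthRetry() {
    return authRetry;
  }
}
{code}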

> failed to create (put, copyFromLocal, cp, etc.) file in encryption zone after 
> one day running
> -
>
> Key: HADOOP-11151
> URL: https://issues.apache.org/jira/browse/HADOOP-11151
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: zhubin
>Assignee: Arun Suresh
> Attachments: HADOOP-11151.1.patch, HADOOP-11151.2.patch
>
>
> Enabled CFS and the KMS service in the cluster; initially it worked to 
> put/copy files into the encryption zone. But after a while (perhaps one day), 
> it fails to put/copy files into the encryption zone with the error
> java.util.concurrent.ExecutionException: java.io.IOException: HTTP status 
> [403], message [Forbidden]
> The kms.log shows the following:
> AbstractDelegationTokenSecretManager - Updating the current master key for 
> generating delegation tokens
> 2014-09-29 13:18:46,599 WARN  AuthenticationFilter - AuthenticationToken 
> ignored: org.apache.hadoop.security.authentication.util.SignerException: 
> Invalid signature
> 2014-09-29 13:18:46,599 WARN  AuthenticationFilter - Authentication 
> exception: Anonymous requests are disallowed
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Anonymous requests are disallowed
> at 
> org.apache.hadoop.security.authentication.server.PseudoAuthenticationHandler.authenticate(PseudoAuthenticationHandler.java:184)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:331)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:507)
> at 
> org.apache.hadoop.crypto.key.kms.server.KMSAuthenticationFilter.doFilter(KMSAuthenticationFilter.java:129)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
> at 
> org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:861)
> at 
> org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:606)
> at 
> org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
> at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10150) Hadoop cryptographic file system

2014-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14155023#comment-14155023
 ] 

Hudson commented on HADOOP-10150:
-

FAILURE: Integrated in Hadoop-trunk-Commit #6163 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6163/])
Fix up CHANGES.txt for HDFS-6134, HADOOP-10150 and related JIRAs following 
merge to branch-2 (arp: rev 2ca93d1fbf0fdcd6b4b5a151261052ac106ac9e1)
* hadoop-mapreduce-project/CHANGES.txt
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Hadoop cryptographic file system
> 
>
> Key: HADOOP-10150
> URL: https://issues.apache.org/jira/browse/HADOOP-10150
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Yi Liu
>Assignee: Yi Liu
>  Labels: rhino
> Fix For: 2.6.0
>
> Attachments: CryptographicFileSystem.patch, HADOOP cryptographic file 
> system-V2.docx, HADOOP cryptographic file system.pdf, 
> HDFSDataAtRestEncryptionAlternatives.pdf, 
> HDFSDataatRestEncryptionAttackVectors.pdf, 
> HDFSDataatRestEncryptionProposal.pdf, cfs.patch, extended information based 
> on INode feature.patch
>
>
> There is an increasing need for securing data when Hadoop customers use 
> various upper layer applications, such as Map-Reduce, Hive, Pig, HBase and so 
> on.
> HADOOP CFS (HADOOP Cryptographic File System) is used to secure data, based 
> on HADOOP “FilterFileSystem” decorating DFS or other file systems, and 
> transparent to upper layer applications. It’s configurable, scalable and fast.
> High level requirements:
> 1. Transparent to, and requiring no modification of, upper layer 
> applications.
> 2. “Seek” and “PositionedReadable” are supported for the CFS input stream if 
> the wrapped file system supports them.
> 3. Very high performance for encryption and decryption; they will not 
> become a bottleneck.
> 4. Can decorate HDFS and all other file systems in Hadoop, and will not 
> modify the existing structure of the file system, such as the namenode and 
> datanode structure if the wrapped file system is HDFS.
> 5. Admins can configure encryption policies, such as which directories will 
> be encrypted.
> 6. A robust key management framework.
> 7. Support for pread and append operations if the wrapped file system 
> supports them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8815) RandomDatum overrides equals(Object) but no hashCode()

2014-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14155012#comment-14155012
 ] 

Hudson commented on HADOOP-8815:


FAILURE: Integrated in Hadoop-trunk-Commit #6163 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6163/])
Fixing CHANGES.txt, moving HADOOP-8815 to 2.6.0 release (arp: rev 
bc6ce2cb34a638851d3530ca31979db30a8a50bd)
* hadoop-common-project/hadoop-common/CHANGES.txt


> RandomDatum overrides equals(Object) but no hashCode()
> --
>
> Key: HADOOP-8815
> URL: https://issues.apache.org/jira/browse/HADOOP-8815
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Brandon Li
>Assignee: Brandon Li
>Priority: Minor
> Fix For: 2.6.0
>
> Attachments: HADOOP-8815.patch, HADOOP-8815.patch
>
>
> Overriding equals(Object) but not hashCode() violates the general contract 
> of Object.hashCode, which can have unexpected repercussions when this class 
> is used with hash-based collections (see the sketch below).
> This test class is used in multiple places, so it may be worth fixing.
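> As a minimal illustration of the broken contract (Datum stands in for 
> RandomDatum; its single byte[] field is a simplifying assumption):
> {code}
> import java.util.Arrays;
> import java.util.HashSet;
>
> class Datum {
>   final byte[] data;
>   Datum(byte[] data) { this.data = data; }
>
>   @Override
>   public boolean equals(Object o) {
>     return o instanceof Datum && Arrays.equals(data, ((Datum) o).data);
>   }
>   // Missing hashCode(): equal objects land in different hash buckets.
>   // The fix is one line:
>   // @Override public int hashCode() { return Arrays.hashCode(data); }
> }
>
> public class Demo {
>   public static void main(String[] args) {
>     HashSet<Datum> set = new HashSet<>();
>     set.add(new Datum(new byte[]{1, 2}));
>     // Typically prints false: the set cannot find an element equal to
>     // one it already holds, because lookup uses the identity hash code.
>     System.out.println(set.contains(new Datum(new byte[]{1, 2})));
>   }
> }
> {code}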



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10880) Move HTTP delegation tokens out of URL querystring to a header

2014-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14155009#comment-14155009
 ] 

Hudson commented on HADOOP-10880:
-

FAILURE: Integrated in Hadoop-trunk-Commit #6163 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6163/])
HADOOP-10880. Move HTTP delegation tokens out of URL querystring to a header. 
(tucu) (arp: rev 6bf16d115637c7761123e3b92186daa675c4769c)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/web/TestWebDelegationToken.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/token/delegation/web/TestDelegationTokenAuthenticationHandlerWithMocks.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticatedURL.java


> Move HTTP delegation tokens out of URL querystring to a header
> --
>
> Key: HADOOP-10880
> URL: https://issues.apache.org/jira/browse/HADOOP-10880
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
>Priority: Blocker
> Fix For: 2.6.0
>
> Attachments: HADOOP-10880.patch, HADOOP-10880.patch, 
> HADOOP-10880.patch, HADOOP-10880.patch, HADOOP-10880.patch
>
>
> Following up on a discussion in HADOOP-10799.
> Because URLs are often logged, delegation tokens may end up in log files 
> while they are still valid. 
> We should move the tokens to a header (see the sketch below).
> We should still support tokens in the querystring for backwards compatibility.
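> A sketch of the client-side difference (the header name used here is an 
> illustrative assumption, not necessarily the one the patch adopts):
> {code}
> import java.net.HttpURLConnection;
> import java.net.URL;
>
> public class TokenPlacement {
>   public static HttpURLConnection connect(String base, String token)
>       throws Exception {
>     // Before: the token rides in the querystring, so it is copied into
>     // access logs along with the rest of the URL:
>     //   new URL(base + "?op=GETFILESTATUS&delegation=" + token);
>     // After: the token travels in a request header, which servers
>     // normally do not log.
>     HttpURLConnection conn =
>         (HttpURLConnection) new URL(base + "?op=GETFILESTATUS").openConnection();
>     conn.setRequestProperty("X-Delegation-Token", token); // assumed name
>     return conn;
>   }
> }
> {code}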



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11145) TestFairCallQueue fails

2014-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154906#comment-14154906
 ] 

Hudson commented on HADOOP-11145:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1913 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1913/])
HADOOP-11145. TestFairCallQueue fails. Contributed by Akira AJISAKA. (cnauroth: 
rev b9158697a4f2d345b681a9b6ed982dae558338bc)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestFairCallQueue.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> TestFairCallQueue fails
> ---
>
> Key: HADOOP-11145
> URL: https://issues.apache.org/jira/browse/HADOOP-11145
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Fix For: 2.6.0
>
> Attachments: HADOOP-11145.2.patch, HADOOP-11145.patch, 
> HADOOP-11145.patch, org.apache.hadoop.ipc.TestFairCallQueue-output.txt
>
>
> TestFairCallQueue#testPutBlocksWhenAllFull fails on trunk and branch-2.
> {code}
> Running org.apache.hadoop.ipc.TestFairCallQueue
> Tests run: 22, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.174 sec 
> <<< FAILURE! - in org.apache.hadoop.ipc.TestFairCallQueue
> testPutBlocksWhenAllFull(org.apache.hadoop.ipc.TestFairCallQueue)  Time 
> elapsed: 0.239 sec  <<< FAILURE!
> junit.framework.AssertionFailedError: expected:<10> but was:<0>
>   at junit.framework.Assert.fail(Assert.java:57)
>   at junit.framework.Assert.failNotEquals(Assert.java:329)
>   at junit.framework.Assert.assertEquals(Assert.java:78)
>   at junit.framework.Assert.assertEquals(Assert.java:234)
>   at junit.framework.Assert.assertEquals(Assert.java:241)
>   at junit.framework.TestCase.assertEquals(TestCase.java:409)
>   at 
> org.apache.hadoop.ipc.TestFairCallQueue.assertCanPut(TestFairCallQueue.java:337)
>   at 
> org.apache.hadoop.ipc.TestFairCallQueue.testPutBlocksWhenAllFull(TestFairCallQueue.java:353)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11156) DelegateToFileSystem should implement getFsStatus(final Path f).

2014-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154905#comment-14154905
 ] 

Hudson commented on HADOOP-11156:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1913 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1913/])
HADOOP-11156. DelegateToFileSystem should implement getFsStatus(final Path f). 
Contributed by Zhihai Xu. (wang: rev d7075ada5d3019a8c520d34bfddb0cd73a449343)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java


> DelegateToFileSystem should implement getFsStatus(final Path f).
> 
>
> Key: HADOOP-11156
> URL: https://issues.apache.org/jira/browse/HADOOP-11156
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.7.0
>
> Attachments: HADOOP-11156.000.patch
>
>
> DelegateToFileSystem only implemented getFsStatus() and did not implement 
> getFsStatus(final Path f). So a call to getFsStatus(final Path f) falls 
> through to AbstractFileSystem.getFsStatus(final Path f), which in turn calls 
> DelegateToFileSystem.getFsStatus(). It should implement getFsStatus(final 
> Path f) to call fsImpl.getStatus(f) instead of calling fsImpl.getStatus() 
> from getFsStatus().
> Also, based on the following Javadoc for FileContext.getFsStatus:
> {code} 
> /**
>* Returns a status object describing the use and capacity of the
>* file system denoted by the Path argument p.
>* If the file system has multiple partitions, the
>* use and capacity of the partition pointed to by the specified
>* path is reflected.
>* 
>* @param f Path for which status should be obtained. null means the
>* root partition of the default file system. 
>*
>* @return a FsStatus object
>*
>* @throws AccessControlException If access is denied
>* @throws FileNotFoundException If f does not exist
>* @throws UnsupportedFileSystemException If file system for f 
> is
>*   not supported
>* @throws IOException If an I/O error occurred
>* 
>* Exceptions applicable to file systems accessed over RPC:
>* @throws RpcClientException If an exception occurred in the RPC client
>* @throws RpcServerException If an exception occurred in the RPC server
>* @throws UnexpectedServerException If server implementation throws 
>*   undeclared exception to RPC server
>*/
>   public FsStatus getFsStatus(final Path f) throws AccessControlException,
>   FileNotFoundException, UnsupportedFileSystemException, IOException {
> if (f == null) {
>   return defaultFS.getFsStatus();
> }
> final Path absF = fixRelativePart(f);
> return new FSLinkResolver<FsStatus>() {
>   @Override
>   public FsStatus next(final AbstractFileSystem fs, final Path p) 
> throws IOException, UnresolvedLinkException {
> return fs.getFsStatus(p);
>   }
> }.resolve(this, absF);
>   }
> {code}
> we should differentiate getFsStatus(final Path f) from getFsStatus() in 
> DelegateToFileSystem (a sketch of the override follows).
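> One plausible shape of the fix, delegating the path-specific call straight 
> to the wrapped file system (a sketch under the assumptions stated above, not 
> necessarily the committed patch):
> {code}
> // In DelegateToFileSystem:
> @Override
> public FsStatus getFsStatus(final Path f) throws IOException {
>   // Report the partition containing f, not just the default partition.
>   return fsImpl.getStatus(f);
> }
> {code}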



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11154) Update BUILDING.txt to state that CMake 3.0 or newer is required on Mac.

2014-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154904#comment-14154904
 ] 

Hudson commented on HADOOP-11154:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1913 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1913/])
HADOOP-11154. Update BUILDING.txt to state that CMake 3.0 or newer is required 
on Mac. Contributed by Chris Nauroth. (cnauroth: rev 
8dc4e9408f4cd9a50cd58aee574f3b03c3a33b31)
* hadoop-common-project/hadoop-common/CHANGES.txt
* BUILDING.txt


> Update BUILDING.txt to state that CMake 3.0 or newer is required on Mac.
> 
>
> Key: HADOOP-11154
> URL: https://issues.apache.org/jira/browse/HADOOP-11154
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, native
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Trivial
> Fix For: 2.6.0
>
> Attachments: HADOOP-11154.1.patch
>
>
> The native code can be built on Mac now, but CMake 3.0 or newer is required.  
> This differs from our minimum stated version of 2.6 in BUILDING.txt.  I'd 
> like to update BUILDING.txt to state that 3.0 or newer is required if 
> building on Mac.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11117) UGI HadoopLoginModule doesn't catch & wrap all kerberos-related exceptions

2014-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154911#comment-14154911
 ] 

Hudson commented on HADOOP-11117:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1913 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1913/])
HADOOP-11117 UGI HadoopLoginModule doesn't catch & wrap all kerberos-related 
exceptions (stevel) (stevel: rev a469833639c7a5ef525a108a1ac70213881e627d)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/User.java


> UGI HadoopLoginModule doesn't catch & wrap all kerberos-related exceptions
> --
>
> Key: HADOOP-11117
> URL: https://issues.apache.org/jira/browse/HADOOP-11117
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-11117-001.patch, HADOOP-11117-002.patch
>
>
> If something fails during Kerberos login, 
> {{UserGroupInformation.loginUserFromKeytabAndReturnUGI()}} should fail with 
> useful information. But not all exceptions from the inner code are caught and 
> converted to LoginException. Those exceptions that aren't wrapped have their 
> text and stack trace lost somewhere in the javax code, leaving only the text 
> "login failed" and a stack trace of no value whatsoever (see the sketch below).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11113) Namenode not able to reconnect to KMS after KMS restart

2014-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154908#comment-14154908
 ] 

Hudson commented on HADOOP-11113:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1913 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1913/])
HADOOP-11113. Namenode not able to reconnect to KMS after KMS restart. (Arun 
Suresh via wang) (wang: rev a4c9b80a7c2b30404840f39f2f46646479914345)
* 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/MiniKMS.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java


> Namenode not able to reconnect to KMS after KMS restart
> ---
>
> Key: HADOOP-11113
> URL: https://issues.apache.org/jira/browse/HADOOP-11113
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Fix For: 2.6.0
>
> Attachments: HADOOP-11113.1.patch, HADOOP-11113.2.patch, 
> HADOOP-11113.3.patch
>
>
> If the KMS is restarted without the Namenode also being restarted, the NN 
> will not be able to reconnect to the KMS.
> The KMS auth cookie appears to go stale without being flushed, so the 
> KMSClient in the NN cannot reconnect to the new KMS (see the sketch below).
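> A hedged sketch of the kind of client-side recovery this implies: on an 
> authentication failure, flush the cached cookie and retry once with fresh 
> credentials (CookieClient and its members are illustrative, not the 
> KMSClientProvider API):
> {code}
> import java.io.IOException;
>
> class CookieClient {
>   private String authCookie;                  // cached KMS auth cookie
>
>   interface Call<T> { T run(String cookie) throws IOException; }
>
>   <T> T withRetry(Call<T> call) throws IOException {
>     try {
>       return call.run(authCookie);
>     } catch (IOException e) {                 // e.g. HTTP 403 Forbidden
>       authCookie = null;                      // flush the stale cookie
>       authCookie = authenticate();            // re-authenticate
>       return call.run(authCookie);            // single retry, then give up
>     }
>   }
>
>   private String authenticate() { return "fresh-cookie"; } // placeholder
> }
> {code}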



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11145) TestFairCallQueue fails

2014-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154838#comment-14154838
 ] 

Hudson commented on HADOOP-11145:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1888 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1888/])
HADOOP-11145. TestFairCallQueue fails. Contributed by Akira AJISAKA. (cnauroth: 
rev b9158697a4f2d345b681a9b6ed982dae558338bc)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestFairCallQueue.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> TestFairCallQueue fails
> ---
>
> Key: HADOOP-11145
> URL: https://issues.apache.org/jira/browse/HADOOP-11145
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Fix For: 2.6.0
>
> Attachments: HADOOP-11145.2.patch, HADOOP-11145.patch, 
> HADOOP-11145.patch, org.apache.hadoop.ipc.TestFairCallQueue-output.txt
>
>
> TestFairCallQueue#testPutBlocksWhenAllFull fails on trunk and branch-2.
> {code}
> Running org.apache.hadoop.ipc.TestFairCallQueue
> Tests run: 22, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.174 sec 
> <<< FAILURE! - in org.apache.hadoop.ipc.TestFairCallQueue
> testPutBlocksWhenAllFull(org.apache.hadoop.ipc.TestFairCallQueue)  Time 
> elapsed: 0.239 sec  <<< FAILURE!
> junit.framework.AssertionFailedError: expected:<10> but was:<0>
>   at junit.framework.Assert.fail(Assert.java:57)
>   at junit.framework.Assert.failNotEquals(Assert.java:329)
>   at junit.framework.Assert.assertEquals(Assert.java:78)
>   at junit.framework.Assert.assertEquals(Assert.java:234)
>   at junit.framework.Assert.assertEquals(Assert.java:241)
>   at junit.framework.TestCase.assertEquals(TestCase.java:409)
>   at 
> org.apache.hadoop.ipc.TestFairCallQueue.assertCanPut(TestFairCallQueue.java:337)
>   at 
> org.apache.hadoop.ipc.TestFairCallQueue.testPutBlocksWhenAllFull(TestFairCallQueue.java:353)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11113) Namenode not able to reconnect to KMS after KMS restart

2014-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154840#comment-14154840
 ] 

Hudson commented on HADOOP-11113:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1888 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1888/])
HADOOP-11113. Namenode not able to reconnect to KMS after KMS restart. (Arun 
Suresh via wang) (wang: rev a4c9b80a7c2b30404840f39f2f46646479914345)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
* 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/MiniKMS.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java


> Namenode not able to reconnect to KMS after KMS restart
> ---
>
> Key: HADOOP-11113
> URL: https://issues.apache.org/jira/browse/HADOOP-11113
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Fix For: 2.6.0
>
> Attachments: HADOOP-11113.1.patch, HADOOP-11113.2.patch, 
> HADOOP-11113.3.patch
>
>
> If the KMS is restarted without the Namenode also being restarted, the NN 
> will not be able to reconnect to the KMS.
> The KMS auth cookie appears to go stale without being flushed, so the 
> KMSClient in the NN cannot reconnect to the new KMS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11154) Update BUILDING.txt to state that CMake 3.0 or newer is required on Mac.

2014-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154835#comment-14154835
 ] 

Hudson commented on HADOOP-11154:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1888 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1888/])
HADOOP-11154. Update BUILDING.txt to state that CMake 3.0 or newer is required 
on Mac. Contributed by Chris Nauroth. (cnauroth: rev 
8dc4e9408f4cd9a50cd58aee574f3b03c3a33b31)
* hadoop-common-project/hadoop-common/CHANGES.txt
* BUILDING.txt


> Update BUILDING.txt to state that CMake 3.0 or newer is required on Mac.
> 
>
> Key: HADOOP-11154
> URL: https://issues.apache.org/jira/browse/HADOOP-11154
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, native
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Trivial
> Fix For: 2.6.0
>
> Attachments: HADOOP-11154.1.patch
>
>
> The native code can be built on Mac now, but CMake 3.0 or newer is required.  
> This differs from our minimum stated version of 2.6 in BUILDING.txt.  I'd 
> like to update BUILDING.txt to state that 3.0 or newer is required if 
> building on Mac.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11117) UGI HadoopLoginModule doesn't catch & wrap all kerberos-related exceptions

2014-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154843#comment-14154843
 ] 

Hudson commented on HADOOP-11117:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1888 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1888/])
HADOOP-11117 UGI HadoopLoginModule doesn't catch & wrap all kerberos-related 
exceptions (stevel) (stevel: rev a469833639c7a5ef525a108a1ac70213881e627d)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/User.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java


> UGI HadoopLoginModule doesn't catch & wrap all kerberos-related exceptions
> --
>
> Key: HADOOP-11117
> URL: https://issues.apache.org/jira/browse/HADOOP-11117
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-11117-001.patch, HADOOP-11117-002.patch
>
>
> If something fails during Kerberos login, 
> {{UserGroupInformation.loginUserFromKeytabAndReturnUGI()}} should fail with 
> useful information. But not all exceptions from the inner code are caught and 
> converted to LoginException. Those exceptions that aren't wrapped have their 
> text and stack trace lost somewhere in the javax code, leaving only the text 
> "login failed" and a stack trace of no value whatsoever.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11156) DelegateToFileSystem should implement getFsStatus(final Path f).

2014-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154837#comment-14154837
 ] 

Hudson commented on HADOOP-11156:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1888 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1888/])
HADOOP-11156. DelegateToFileSystem should implement getFsStatus(final Path f). 
Contributed by Zhihai Xu. (wang: rev d7075ada5d3019a8c520d34bfddb0cd73a449343)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> DelegateToFileSystem should implement getFsStatus(final Path f).
> 
>
> Key: HADOOP-11156
> URL: https://issues.apache.org/jira/browse/HADOOP-11156
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.7.0
>
> Attachments: HADOOP-11156.000.patch
>
>
> DelegateToFileSystem only implemented getFsStatus() and did not implement 
> getFsStatus(final Path f). So a call to getFsStatus(final Path f) falls 
> through to AbstractFileSystem.getFsStatus(final Path f), which in turn calls 
> DelegateToFileSystem.getFsStatus(). It should implement getFsStatus(final 
> Path f) to call fsImpl.getStatus(f) instead of calling fsImpl.getStatus() 
> from getFsStatus().
> Also, based on the following Javadoc for FileContext.getFsStatus:
> {code} 
> /**
>* Returns a status object describing the use and capacity of the
>* file system denoted by the Path argument p.
>* If the file system has multiple partitions, the
>* use and capacity of the partition pointed to by the specified
>* path is reflected.
>* 
>* @param f Path for which status should be obtained. null means the
>* root partition of the default file system. 
>*
>* @return a FsStatus object
>*
>* @throws AccessControlException If access is denied
>* @throws FileNotFoundException If f does not exist
>* @throws UnsupportedFileSystemException If file system for f 
> is
>*   not supported
>* @throws IOException If an I/O error occurred
>* 
>* Exceptions applicable to file systems accessed over RPC:
>* @throws RpcClientException If an exception occurred in the RPC client
>* @throws RpcServerException If an exception occurred in the RPC server
>* @throws UnexpectedServerException If server implementation throws 
>*   undeclared exception to RPC server
>*/
>   public FsStatus getFsStatus(final Path f) throws AccessControlException,
>   FileNotFoundException, UnsupportedFileSystemException, IOException {
> if (f == null) {
>   return defaultFS.getFsStatus();
> }
> final Path absF = fixRelativePart(f);
> return new FSLinkResolver<FsStatus>() {
>   @Override
>   public FsStatus next(final AbstractFileSystem fs, final Path p) 
> throws IOException, UnresolvedLinkException {
> return fs.getFsStatus(p);
>   }
> }.resolve(this, absF);
>   }
> {code}
> we should differentiate getFsStatus(final Path f) from getFsStatus() in 
> DelegateToFileSystem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11151) failed to create (put, copyFromLocal, cp, etc.) file in encryption zone after one day running

2014-10-01 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154723#comment-14154723
 ] 

Arun Suresh commented on HADOOP-11151:
--

The test failures are unrelated.

> failed to create (put, copyFromLocal, cp, etc.) file in encryption zone after 
> one day running
> -
>
> Key: HADOOP-11151
> URL: https://issues.apache.org/jira/browse/HADOOP-11151
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: zhubin
>Assignee: Arun Suresh
> Attachments: HADOOP-11151.1.patch, HADOOP-11151.2.patch
>
>
> With CFS and the KMS service enabled in the cluster, putting/copying files 
> into an encryption zone initially works. But after a while (possibly one 
> day), putting/copying a file into the encryption zone fails with the error
> java.util.concurrent.ExecutionException: java.io.IOException: HTTP status 
> [403], message [Forbidden]
> The kms.log shows the following (see the signed-cookie sketch after the trace):
> AbstractDelegationTokenSecretManager - Updating the current master key for 
> generating delegation tokens
> 2014-09-29 13:18:46,599 WARN  AuthenticationFilter - AuthenticationToken 
> ignored: org.apache.hadoop.security.authentication.util.SignerException: 
> Invalid signature
> 2014-09-29 13:18:46,599 WARN  AuthenticationFilter - Authentication 
> exception: Anonymous requests are disallowed
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Anonymous requests are disallowed
> at 
> org.apache.hadoop.security.authentication.server.PseudoAuthenticationHandler.authenticate(PseudoAuthenticationHandler.java:184)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:331)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:507)
> at 
> org.apache.hadoop.crypto.key.kms.server.KMSAuthenticationFilter.doFilter(KMSAuthenticationFilter.java:129)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
> at 
> org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:861)
> at 
> org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:606)
> at 
> org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
> at java.lang.Thread.run(Thread.java:745)
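> For context on the "Invalid signature" warning above, a self-contained 
> sketch of how an HMAC-signed auth cookie goes stale once the server's 
> signing secret rolls over (the mechanics are analogous to, but not copied 
> from, the AuthenticationFilter's signer):
> {code}
> import javax.crypto.Mac;
> import javax.crypto.spec.SecretKeySpec;
> import java.nio.charset.StandardCharsets;
> import java.util.Base64;
>
> public class SignedCookie {
>   static String sign(String value, byte[] secret) throws Exception {
>     Mac mac = Mac.getInstance("HmacSHA256");
>     mac.init(new SecretKeySpec(secret, "HmacSHA256"));
>     byte[] sig = mac.doFinal(value.getBytes(StandardCharsets.UTF_8));
>     return value + "&s=" + Base64.getEncoder().encodeToString(sig);
>   }
>
>   public static void main(String[] args) throws Exception {
>     byte[] oldSecret = "secret-1".getBytes(StandardCharsets.UTF_8);
>     byte[] newSecret = "secret-2".getBytes(StandardCharsets.UTF_8);
>     String cookie = sign("u=namenode", oldSecret);
>     // After the secret rolls over, the cached cookie no longer verifies,
>     // so the request is treated as anonymous and rejected with 403.
>     System.out.println(cookie.equals(sign("u=namenode", newSecret))); // false
>   }
> }
> {code}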



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11151) failed to create (put, copyFromLocal, cp, etc.) file in encryption zone after one day running

2014-10-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154702#comment-14154702
 ] 

Hadoop QA commented on HADOOP-11151:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12672302/HADOOP-11151.2.patch
  against trunk revision 17d1202.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms:

  org.apache.hadoop.crypto.random.TestOsSecureRandom
  org.apache.hadoop.ha.TestZKFailoverControllerStress

  The test build failed in 
hadoop-common-project/hadoop-kms 

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4845//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4845//console

This message is automatically generated.

> failed to create (put, copyFromLocal, cp, etc.) file in encryption zone after 
> one day running
> -
>
> Key: HADOOP-11151
> URL: https://issues.apache.org/jira/browse/HADOOP-11151
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: zhubin
>Assignee: Arun Suresh
> Attachments: HADOOP-11151.1.patch, HADOOP-11151.2.patch
>
>
> With CFS and the KMS service enabled in the cluster, putting/copying files 
> into an encryption zone initially works. But after a while (possibly one 
> day), putting/copying a file into the encryption zone fails with the error
> java.util.concurrent.ExecutionException: java.io.IOException: HTTP status 
> [403], message [Forbidden]
> The kms.log shows the following:
> AbstractDelegationTokenSecretManager - Updating the current master key for 
> generating delegation tokens
> 2014-09-29 13:18:46,599 WARN  AuthenticationFilter - AuthenticationToken 
> ignored: org.apache.hadoop.security.authentication.util.SignerException: 
> Invalid signature
> 2014-09-29 13:18:46,599 WARN  AuthenticationFilter - Authentication 
> exception: Anonymous requests are disallowed
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Anonymous requests are disallowed
> at 
> org.apache.hadoop.security.authentication.server.PseudoAuthenticationHandler.authenticate(PseudoAuthenticationHandler.java:184)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:331)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:507)
> at 
> org.apache.hadoop.crypto.key.kms.server.KMSAuthenticationFilter.doFilter(KMSAuthenticationFilter.java:129)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
> at 
> org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:861)
> at 
> org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:606)
> at 
> org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
> at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Commented] (HADOOP-11154) Update BUILDING.txt to state that CMake 3.0 or newer is required on Mac.

2014-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154681#comment-14154681
 ] 

Hudson commented on HADOOP-11154:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #697 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/697/])
HADOOP-11154. Update BUILDING.txt to state that CMake 3.0 or newer is required 
on Mac. Contributed by Chris Nauroth. (cnauroth: rev 
8dc4e9408f4cd9a50cd58aee574f3b03c3a33b31)
* hadoop-common-project/hadoop-common/CHANGES.txt
* BUILDING.txt


> Update BUILDING.txt to state that CMake 3.0 or newer is required on Mac.
> 
>
> Key: HADOOP-11154
> URL: https://issues.apache.org/jira/browse/HADOOP-11154
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, native
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Trivial
> Fix For: 2.6.0
>
> Attachments: HADOOP-11154.1.patch
>
>
> The native code can be built on Mac now, but CMake 3.0 or newer is required.  
> This differs from our minimum stated version of 2.6 in BUILDING.txt.  I'd 
> like to update BUILDING.txt to state that 3.0 or newer is required if 
> building on Mac.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11117) UGI HadoopLoginModule doesn't catch & wrap all kerberos-related exceptions

2014-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154688#comment-14154688
 ] 

Hudson commented on HADOOP-11117:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #697 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/697/])
HADOOP-11117 UGI HadoopLoginModule doesn't catch & wrap all kerberos-related 
exceptions (stevel) (stevel: rev a469833639c7a5ef525a108a1ac70213881e627d)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/User.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java


> UGI HadoopLoginModule doesn't catch & wrap all kerberos-related exceptions
> --
>
> Key: HADOOP-11117
> URL: https://issues.apache.org/jira/browse/HADOOP-11117
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-11117-001.patch, HADOOP-11117-002.patch
>
>
> If something fails during Kerberos login, 
> {{UserGroupInformation.loginUserFromKeytabAndReturnUGI()}} should fail with 
> useful information. But not all exceptions from the inner code are caught and 
> converted to LoginException. Those exceptions that aren't wrapped have their 
> text and stack trace lost somewhere in the javax code, leaving only the text 
> "login failed" and a stack trace of no value whatsoever.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11113) Namenode not able to reconnect to KMS after KMS restart

2014-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154685#comment-14154685
 ] 

Hudson commented on HADOOP-11113:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #697 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/697/])
HADOOP-11113. Namenode not able to reconnect to KMS after KMS restart. (Arun 
Suresh via wang) (wang: rev a4c9b80a7c2b30404840f39f2f46646479914345)
* 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java
* 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/MiniKMS.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Namenode not able to reconnect to KMS after KMS restart
> ---
>
> Key: HADOOP-11113
> URL: https://issues.apache.org/jira/browse/HADOOP-11113
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Fix For: 2.6.0
>
> Attachments: HADOOP-11113.1.patch, HADOOP-11113.2.patch, 
> HADOOP-11113.3.patch
>
>
> If the KMS is restarted without the Namenode also being restarted, the NN 
> will not be able to reconnect to the KMS.
> The KMS auth cookie appears to go stale without being flushed, so the 
> KMSClient in the NN cannot reconnect to the new KMS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11156) DelegateToFileSystem should implement getFsStatus(final Path f).

2014-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154682#comment-14154682
 ] 

Hudson commented on HADOOP-11156:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #697 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/697/])
HADOOP-11156. DelegateToFileSystem should implement getFsStatus(final Path f). 
Contributed by Zhihai Xu. (wang: rev d7075ada5d3019a8c520d34bfddb0cd73a449343)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> DelegateToFileSystem should implement getFsStatus(final Path f).
> 
>
> Key: HADOOP-11156
> URL: https://issues.apache.org/jira/browse/HADOOP-11156
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.7.0
>
> Attachments: HADOOP-11156.000.patch
>
>
> DelegateToFileSystem only implemented getFsStatus() and did not implement 
> getFsStatus(final Path f). So a call to getFsStatus(final Path f) falls 
> through to AbstractFileSystem.getFsStatus(final Path f), which in turn calls 
> DelegateToFileSystem.getFsStatus(). It should implement getFsStatus(final 
> Path f) to call fsImpl.getStatus(f) instead of calling fsImpl.getStatus() 
> from getFsStatus().
> Also, based on the following Javadoc for FileContext.getFsStatus:
> {code} 
> /**
>* Returns a status object describing the use and capacity of the
>* file system denoted by the Path argument p.
>* If the file system has multiple partitions, the
>* use and capacity of the partition pointed to by the specified
>* path is reflected.
>* 
>* @param f Path for which status should be obtained. null means the
>* root partition of the default file system. 
>*
>* @return a FsStatus object
>*
>* @throws AccessControlException If access is denied
>* @throws FileNotFoundException If f does not exist
>* @throws UnsupportedFileSystemException If file system for f 
> is
>*   not supported
>* @throws IOException If an I/O error occurred
>* 
>* Exceptions applicable to file systems accessed over RPC:
>* @throws RpcClientException If an exception occurred in the RPC client
>* @throws RpcServerException If an exception occurred in the RPC server
>* @throws UnexpectedServerException If server implementation throws 
>*   undeclared exception to RPC server
>*/
>   public FsStatus getFsStatus(final Path f) throws AccessControlException,
>   FileNotFoundException, UnsupportedFileSystemException, IOException {
> if (f == null) {
>   return defaultFS.getFsStatus();
> }
> final Path absF = fixRelativePart(f);
> return new FSLinkResolver<FsStatus>() {
>   @Override
>   public FsStatus next(final AbstractFileSystem fs, final Path p) 
> throws IOException, UnresolvedLinkException {
> return fs.getFsStatus(p);
>   }
> }.resolve(this, absF);
>   }
> {code}
> we should differentiate getFsStatus(final Path f) from getFsStatus() in 
> DelegateToFileSystem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11145) TestFairCallQueue fails

2014-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154683#comment-14154683
 ] 

Hudson commented on HADOOP-11145:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #697 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/697/])
HADOOP-11145. TestFairCallQueue fails. Contributed by Akira AJISAKA. (cnauroth: 
rev b9158697a4f2d345b681a9b6ed982dae558338bc)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestFairCallQueue.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> TestFairCallQueue fails
> ---
>
> Key: HADOOP-11145
> URL: https://issues.apache.org/jira/browse/HADOOP-11145
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Fix For: 2.6.0
>
> Attachments: HADOOP-11145.2.patch, HADOOP-11145.patch, 
> HADOOP-11145.patch, org.apache.hadoop.ipc.TestFairCallQueue-output.txt
>
>
> TestFairCallQueue#testPutBlocksWhenAllFull fails on trunk and branch-2.
> {code}
> Running org.apache.hadoop.ipc.TestFairCallQueue
> Tests run: 22, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.174 sec 
> <<< FAILURE! - in org.apache.hadoop.ipc.TestFairCallQueue
> testPutBlocksWhenAllFull(org.apache.hadoop.ipc.TestFairCallQueue)  Time 
> elapsed: 0.239 sec  <<< FAILURE!
> junit.framework.AssertionFailedError: expected:<10> but was:<0>
>   at junit.framework.Assert.fail(Assert.java:57)
>   at junit.framework.Assert.failNotEquals(Assert.java:329)
>   at junit.framework.Assert.assertEquals(Assert.java:78)
>   at junit.framework.Assert.assertEquals(Assert.java:234)
>   at junit.framework.Assert.assertEquals(Assert.java:241)
>   at junit.framework.TestCase.assertEquals(TestCase.java:409)
>   at 
> org.apache.hadoop.ipc.TestFairCallQueue.assertCanPut(TestFairCallQueue.java:337)
>   at 
> org.apache.hadoop.ipc.TestFairCallQueue.testPutBlocksWhenAllFull(TestFairCallQueue.java:353)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11151) failed to create (put, copyFromLocal, cp, etc.) file in encryption zone after one day running

2014-10-01 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HADOOP-11151:
-
Attachment: HADOOP-11151.2.patch

Fixing the test case.

> failed to create (put, copyFromLocal, cp, etc.) file in encryption zone after 
> one day running
> -
>
> Key: HADOOP-11151
> URL: https://issues.apache.org/jira/browse/HADOOP-11151
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: zhubin
>Assignee: Arun Suresh
> Attachments: HADOOP-11151.1.patch, HADOOP-11151.2.patch
>
>
> With CFS and the KMS service enabled in the cluster, putting/copying files 
> into an encryption zone initially works. But after a while (possibly one 
> day), putting/copying a file into the encryption zone fails with the error
> java.util.concurrent.ExecutionException: java.io.IOException: HTTP status 
> [403], message [Forbidden]
> The kms.log shows the following:
> AbstractDelegationTokenSecretManager - Updating the current master key for 
> generating delegation tokens
> 2014-09-29 13:18:46,599 WARN  AuthenticationFilter - AuthenticationToken 
> ignored: org.apache.hadoop.security.authentication.util.SignerException: 
> Invalid signature
> 2014-09-29 13:18:46,599 WARN  AuthenticationFilter - Authentication 
> exception: Anonymous requests are disallowed
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Anonymous requests are disallowed
> at 
> org.apache.hadoop.security.authentication.server.PseudoAuthenticationHandler.authenticate(PseudoAuthenticationHandler.java:184)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:331)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:507)
> at 
> org.apache.hadoop.crypto.key.kms.server.KMSAuthenticationFilter.doFilter(KMSAuthenticationFilter.java:129)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> at 
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> at 
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
> at 
> org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:861)
> at 
> org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:606)
> at 
> org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
> at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)