[jira] [Commented] (HADOOP-14020) Optimize dirListingUnion

2017-01-25 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839362#comment-15839362
 ] 

Aaron Fabbri commented on HADOOP-14020:
---

Thanks for doing this [~mackrorysd].  Looks really good. Simple logic but huge 
improvement in behavior.

Also impressed at your effort writing a test case for this.  I would have been 
tempted to confirm behavior with log messages then call it good. 

Minor optimizations suggested below...

{code}
+  public boolean put(FileStatus childFileStatus) {
 Preconditions.checkNotNull(childFileStatus,
 "childFileStatus must be non-null");
 Path childPath = childStatusToPathKey(childFileStatus);
+PathMetadata oldValue = listMap.get(childPath);
+PathMetadata newValue = new PathMetadata(childFileStatus);
+if (oldValue != null && oldValue.equals(newValue)) {
+  return false;
+}
 listMap.put(childPath, new PathMetadata(childFileStatus));
{code}
How about listMap.put(childPath, newValue) here.  Save a little garbage.

{code}
   public static FileStatus[] dirListingUnion(MetadataStore ms, Path path,
-  List backingStatuses, DirListingMetadata dirMeta)
-  throws IOException {
+List backingStatuses, DirListingMetadata dirMeta,
+  Configuration conf) throws IOException {
{code}
Looks like you only use conf to check the value of 
Constants.METADATASTORE_AUTHORITATIVE.

Can we just pass in a boolean isMetadataStoreAuthoritative instead of doing a 
conf.get() each time?  More efficient (and explicit).



> Optimize dirListingUnion
> 
>
> Key: HADOOP-14020
> URL: https://issues.apache.org/jira/browse/HADOOP-14020
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14020-HADOOP-13345.001.patch, 
> HADOOP-14020-HADOOP-13345.002.patch
>
>
> There's a TODO in dirListingUnion:
> {quote}// TODO optimize for when allowAuthoritative = false{quote}
> There will be cases when we can intelligently avoid a round trip: if S3A 
> results are a subset or the metadatastore results (including them being equal 
> or empty) then writing back will do nothing (although perhaps that should set 
> the authoritative flag if it isn't set already).
> There may also be cases where users want to just skip that altogether. It's 
> wasted work if authoritative mode is disabled, so perhaps we want to trigger 
> a skip if that's false, or perhaps it should be a separate property. First 
> one makes for simpler config, second is more flexible...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13936) S3Guard: DynamoDB can go out of sync with S3AFileSystem::delete operation

2017-01-25 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839356#comment-15839356
 ] 

Aaron Fabbri commented on HADOOP-13936:
---

This is part of a general class of problems.  In general, since we do not have 
transactions around updating the two sources of truth, things can get out of 
sync when failures happen.

Transactions are probably not worth the cost here.  The biggest issue IMO is 
the fact that we're running in the FS client.  You really want a long-lived 
daemon with durable storage to implement transactions.  Another significant 
issue is the runtime cost in terms of more round trips.

The current approach is to detect problems when they occur and give folks tools 
to repair them.  Simply wiping the MetadataStore is one easy way to fix 
problems.  Another more automatic, but still simple approach, that I plan on 
implementing in the future, is to set configurable TTL on entries in the 
MetadataStore.  We will contribute a patch soon which adds a "expire" or "trim" 
command to the CLI, which will delete any entries older than X hours.  We could 
also add a config knob to the client which ignores entries past a certain TTL.  
  That doesn't immediately fix inconsistencies, but does cause them to 
automatically heal after some time.

> S3Guard: DynamoDB can go out of sync with S3AFileSystem::delete operation
> -
>
> Key: HADOOP-13936
> URL: https://issues.apache.org/jira/browse/HADOOP-13936
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Priority: Minor
>
> As a part of {{S3AFileSystem.delete}} operation {{innerDelete}} is invoked, 
> which deletes keys from S3 in batches (default is 1000). But DynamoDB is 
> updated only at the end of this operation. This can cause issues when 
> deleting large number of keys. 
> E.g, it is possible to get exception after deleting 1000 keys and in such 
> cases dynamoDB would not be updated. This can cause DynamoDB to go out of 
> sync. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14025) Enhance AccessControlList to reject empty ugi#getGroups

2017-01-25 Thread Xiao Chen (JIRA)
Xiao Chen created HADOOP-14025:
--

 Summary: Enhance AccessControlList to reject empty ugi#getGroups
 Key: HADOOP-14025
 URL: https://issues.apache.org/jira/browse/HADOOP-14025
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.6.0
Reporter: Xiao Chen
Assignee: Xiao Chen


It would be good to reconsider empty groups when checking user access.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14003) Make additional KMS tomcat settings configurable

2017-01-25 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839245#comment-15839245
 ] 

Xiao Chen commented on HADOOP-14003:


Good catch Andrew. Patch 3 LGTM.

jenkins -1 is due to YARN-5641, pinged that jira. +1 pending a new jenkins once 
that's fixed.



> Make additional KMS tomcat settings configurable
> 
>
> Key: HADOOP-14003
> URL: https://issues.apache.org/jira/browse/HADOOP-14003
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hadoop-14003.001.patch, hadoop-14003-branch-2.001.patch, 
> HADOOP-14003.branch-2.002.patch, HADOOP-14003.branch-2.003.patch
>
>
> Doing some Tomcat performance tuning on a loaded cluster, we found that 
> {{acceptCount}}, {{acceptorThreadCount}}, and {{protocol}} can be useful. 
> Let's make these configurable in the kms startup script.
> Since the KMS is Jetty in 3.x, this is targeted at just branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14024) KMS JMX endpoint throws ClassNotFoundException

2017-01-25 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839063#comment-15839063
 ] 

Xiao Chen commented on HADOOP-14024:


bq. This looks the same as HADOOP-13872.
Not exactly, because that jira allegedly only affecting 3.0, and with 
HADOOP-13597 that's be taken care of.
Here the problem is on branch-2, so needs investigation.

> KMS JMX endpoint throws ClassNotFoundException
> --
>
> Key: HADOOP-14024
> URL: https://issues.apache.org/jira/browse/HADOOP-14024
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Andrew Wang
>Priority: Critical
>
> Throws like this:
> {noformat}
> root cause java.lang.ClassNotFoundException: 
> org.mortbay.jetty.servlet.Context
>   
> org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1698)
>   
> org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1544)
>   org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:174)
>   javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
>   javax.servlet.http.HttpServlet.service(HttpServlet.java:723)
>   
> org.apache.hadoop.crypto.key.kms.server.KMSMDCFilter.doFilter(KMSMDCFilter.java:84)
>   
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:636)
>   
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:304)
>   
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:588)
>   
> org.apache.hadoop.crypto.key.kms.server.KMSAuthenticationFilter.doFilter(KMSAuthenticationFilter.java:129)
> {noformat}
> I tried out branch-2.6 and it seems to be okay, so something changed between 
> 2.6.x and 2.8.x/branch-2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14018) hadoop-client jars are missing hadoop's root LICENSE and NOTICE files

2017-01-25 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14018:
---
Summary: hadoop-client jars are missing hadoop's root LICENSE and NOTICE 
files  (was: hadoop-client-api-3.0.0-alpha2.jar misses LICENSE file)

> hadoop-client jars are missing hadoop's root LICENSE and NOTICE files
> -
>
> Key: HADOOP-14018
> URL: https://issues.apache.org/jira/browse/HADOOP-14018
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: Elek, Marton
>Priority: Minor
> Attachments: HADOOP-14018.001.patch
>
>
> In the 3.0.0-alpha2-RC0, hadoop-client-api-3.0.0-alpha2.jar misses LICENSE 
> file, but it does have {{META-INF/NOTICE}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2017-01-25 Thread Steve Moist (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839021#comment-15839021
 ] 

Steve Moist commented on HADOOP-13075:
--

Updated pull request with [~steve_l]'s comments.

> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---
>
> Key: HADOOP-13075
> URL: https://issues.apache.org/jira/browse/HADOOP-13075
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Andrew Olson
>Assignee: Federico Czerwinski
>
> S3 provides 3 types of server-side encryption [1],
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of which the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support in aws-java-sdk already available it should be fairly 
> straightforward [6],[7] to support the other two types of SSE with some 
> additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html
> [6] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java
> [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2017-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15839014#comment-15839014
 ] 

ASF GitHub Bot commented on HADOOP-13075:
-

Github user orngejaket commented on the issue:

https://github.com/apache/hadoop/pull/183
  
Accidently included others commits due to trying to resolve unpushable 
commit.  Update to the pull request is in 
e7303f9eff316d3d4329de9011cfd4348ac97a23.  Will correct this tomorrow and 
resubmit cleaner patch.


> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---
>
> Key: HADOOP-13075
> URL: https://issues.apache.org/jira/browse/HADOOP-13075
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Andrew Olson
>Assignee: Federico Czerwinski
>
> S3 provides 3 types of server-side encryption [1],
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of which the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support in aws-java-sdk already available it should be fairly 
> straightforward [6],[7] to support the other two types of SSE with some 
> additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html
> [6] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java
> [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11794) distcp can copy blocks in parallel

2017-01-25 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838988#comment-15838988
 ] 

Aaron T. Myers commented on HADOOP-11794:
-

Latest patch looks pretty good to me. Just a few small comments from me:

# "randomdize" -> "randomize": {{// When splitLargeFile is enabled, we don't 
randomdize the copylist}}
# In two places you have basically "if (LOG.isDebugEnabled) { LOG.warn(...); }" 
You should do {{LOG.debug(...)}} in these places, and perhaps also make these 
debug messages a little more helpful instead of just "add1", which would 
require someone to read the source code to understand.
# I think this log message is a little misleading:
{code}
+  CHUNK_SIZE("",
+  new Option("chunksize", true, "Size of chunk in number of blocks when " +
+  "splitting large files into chunks to copy in parallel")),
{code}
Assuming I'm reading the code correctly, the way a file is determined to be 
"large" in this context is just if it has more blocks than the configured chunk 
size. This log message also seems to imply that there might be some other 
configuration option to enable/disable splitting large files at all. I think 
better text would be something like "If set to a positive value, files with 
more blocks than this value will be split at their block boundaries during 
transfer, and reassembled on the destination cluster. By default, files will be 
transmitted in their entirety without splitting."
# Rather than suppressing the checkstyle warnings, recommend implementing the 
builder pattern for the {{CopyListingFileStatus}} constructors. That should 
make things quite a bit clearer.
# There are a handful of lines that are changed that I think are just 
whitespace, but not a big deal.

> distcp can copy blocks in parallel
> --
>
> Key: HADOOP-11794
> URL: https://issues.apache.org/jira/browse/HADOOP-11794
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 0.21.0
>Reporter: dhruba borthakur
>Assignee: Yongjun Zhang
> Attachments: HADOOP-11794.001.patch, HADOOP-11794.002.patch, 
> MAPREDUCE-2257.patch
>
>
> The minimum unit of work for a distcp task is a file. We have files that are 
> greater than 1 TB with a block size of  1 GB. If we use distcp to copy these 
> files, the tasks either take a long long long time or finally fails. A better 
> way for distcp would be to copy all the source blocks in parallel, and then 
> stich the blocks back to files at the destination via the HDFS Concat API 
> (HDFS-222)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13374) Add the L verification script

2017-01-25 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838965#comment-15838965
 ] 

John Zhuge commented on HADOOP-13374:
-

Patch 04 looks great. Thanks [~aw]!

Here is the output for 3.0.0-alpha2-RC0:
{noformat}
$ bash dev-support/bin/verify-license-files
ERROR: hadoop-client-api-3.0.0-alpha2.jar: Missing a LICENSE file
ERROR: hadoop-client-api-3.0.0-alpha2.jar: No valid NOTICE found

WARNING: hadoop-client-minicluster-3.0.0-alpha2.jar: Found 5 LICENSE files (0 
were valid)
ERROR: hadoop-client-minicluster-3.0.0-alpha2.jar: No valid LICENSE found
WARNING: hadoop-client-minicluster-3.0.0-alpha2.jar: Found 3 NOTICE files (0 
were valid)
ERROR: hadoop-client-minicluster-3.0.0-alpha2.jar: No valid NOTICE found

ERROR: hadoop-client-runtime-3.0.0-alpha2.jar: No valid LICENSE found
ERROR: hadoop-client-runtime-3.0.0-alpha2.jar: No valid NOTICE found
{noformat}

As for [~xiaochen]'s suggestion on full path, how about this format:
{noformat}
$ bash dev-support/bin/verify-license-files
hadoop-3.0.0-alpha2/share/hadoop/client/hadoop-client-api-3.0.0-alpha2.jar
ERROR: Missing a LICENSE file
ERROR: No valid NOTICE found

hadoop-3.0.0-alpha2/share/hadoop/client/hadoop-client-minicluster-3.0.0-alpha2.jar
WARNING: Found 5 LICENSE files (0 were valid)
ERROR: No valid LICENSE found
WARNING: Found 3 NOTICE files (0 were valid)
ERROR: No valid NOTICE found

hadoop-3.0.0-alpha2/share/hadoop/client/hadoop-client-runtime-3.0.0-alpha2.jar
ERROR: No valid LICENSE found
ERROR: No valid NOTICE found
{noformat}

> Add the L verification script
> ---
>
> Key: HADOOP-13374
> URL: https://issues.apache.org/jira/browse/HADOOP-13374
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiao Chen
>Assignee: Allen Wittenauer
> Attachments: HADOOP-13374.01.patch, HADOOP-13374.02.patch, 
> HADOOP-13374.03.patch, HADOOP-13374.04.patch
>
>
> This is the script that's used for L change verification during 
> HADOOP-12893. We should commit this as [~ozawa] 
> [suggested|https://issues.apache.org/jira/browse/HADOOP-13298?focusedCommentId=15374498=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15374498].
> I was 
> [initially|https://issues.apache.org/jira/browse/HADOOP-12893?focusedCommentId=15283040=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15283040]
>  verifying from an on-fly shell command, and [~andrew.wang] contributed the 
> script later in [a comment|
> https://issues.apache.org/jira/browse/HADOOP-12893?focusedCommentId=15303281=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15303281],
>  so most credit should go to him. :)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14024) KMS JMX endpoint throws ClassNotFoundException

2017-01-25 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838954#comment-15838954
 ] 

John Zhuge commented on HADOOP-14024:
-

This looks the same as HADOOP-13872.

> KMS JMX endpoint throws ClassNotFoundException
> --
>
> Key: HADOOP-14024
> URL: https://issues.apache.org/jira/browse/HADOOP-14024
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Andrew Wang
>Priority: Critical
>
> Throws like this:
> {noformat}
> root cause java.lang.ClassNotFoundException: 
> org.mortbay.jetty.servlet.Context
>   
> org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1698)
>   
> org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1544)
>   org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:174)
>   javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
>   javax.servlet.http.HttpServlet.service(HttpServlet.java:723)
>   
> org.apache.hadoop.crypto.key.kms.server.KMSMDCFilter.doFilter(KMSMDCFilter.java:84)
>   
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:636)
>   
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:304)
>   
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:588)
>   
> org.apache.hadoop.crypto.key.kms.server.KMSAuthenticationFilter.doFilter(KMSAuthenticationFilter.java:129)
> {noformat}
> I tried out branch-2.6 and it seems to be okay, so something changed between 
> 2.6.x and 2.8.x/branch-2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14003) Make additional KMS tomcat settings configurable

2017-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838952#comment-15838952
 ] 

Hadoop QA commented on HADOOP-14003:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  4m 
30s{color} | {color:red} root in branch-2 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
50s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  3m 
32s{color} | {color:red} root in branch-2 failed with JDK v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
41s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  3m 
47s{color} | {color:red} root in the patch failed with JDK v1.7.0_121. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  3m 47s{color} 
| {color:red} root in the patch failed with JDK v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 8s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
9s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
52s{color} | {color:green} hadoop-kms in the patch passed with JDK v1.7.0_121. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 34m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HADOOP-14003 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12849383/HADOOP-14003.branch-2.003.patch
 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  compile  
javac  javadoc  mvninstall  xml  |
| uname | Linux 2df145db0105 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / b799ea7 |
| Default Java | 1.7.0_121 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_121 

[jira] [Commented] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-25 Thread Greg Senia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838925#comment-15838925
 ] 

Greg Senia commented on HADOOP-13988:
-

[~xyao] and [~lmccay] thanks for all the help with this issue. I appreciate you 
guys digging in and helping get the right fix built.

> KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser
> 
>
> Key: HADOOP-13988
> URL: https://issues.apache.org/jira/browse/HADOOP-13988
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 2.8.0, 2.7.3
> Environment: HDP 2.5.3.0 
> WebHDFSUser --> Knox --> HA NameNodes(WebHDFS) --> DataNodes
>Reporter: Greg Senia
>Assignee: Xiaoyu Yao
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-13988.01.patch, HADOOP-13988.02.patch, 
> HADOOP-13988.patch, HADOOP-13988.patch
>
>
> After upgrading to HDP 2.5.3.0 noticed that all of the KMSClientProvider 
> issues have not been resolved. We put a test build together and applied 
> HADOOP-13558 and HADOOP-13749 these two fixes did still not solve the issue 
> with requests coming from WebHDFS through to Knox to a TDE zone.
> So we added some debug to our build and determined effectively what is 
> happening here is a double proxy situation which does not seem to work. So we 
> propose the following fix in getActualUgi Method:
> {noformat}
>  }
>  // Use current user by default
>  UserGroupInformation actualUgi = currentUgi;
>  if (currentUgi.getRealUser() != null) {
>// Use real user for proxy user
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using RealUser for proxyUser);
>   }
>actualUgi = currentUgi.getRealUser();
>if (getDoAsUser() != null) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("doAsUser exists");
>   LOG.debug("currentUGI realUser shortName: {}", 
> currentUgi.getRealUser().getShortUserName());
>   LOG.debug("processUGI loginUser shortName: {}", 
> UserGroupInformation.getLoginUser().getShortUserName());
>   }
> if (currentUgi.getRealUser().getShortUserName() != 
> UserGroupInformation.getLoginUser().getShortUserName()) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("currentUGI.realUser does not match 
> UGI.processUser);
> }
> actualUgi = UserGroupInformation.getLoginUser();
> if (LOG.isDebugEnabled()) {
>   LOG.debug("LoginUser for Proxy: {}", 
> actualUgi.getLoginUser());
> }
> }
>}
>   
>  } else if (!currentUgiContainsKmsDt() &&
>  !currentUgi.hasKerberosCredentials()) {
>// Use login user for user that does not have either
>// Kerberos credential or KMS delegation token for KMS operations
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using loginUser no KMS Delegation Token no Kerberos 
> Credentials");
>   }
>actualUgi = currentUgi.getLoginUser();
>  }
>  return actualUgi;
>}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13989) Remove erroneous source jar option from hadoop-client shade configuration

2017-01-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838924#comment-15838924
 ] 

Hudson commented on HADOOP-13989:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11176 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11176/])
HADOOP-13989. Remove erroneous source jar option from hadoop-client (wang: rev 
cd59b9ccab51376310484a6e3d9179bb52fccae1)
* (edit) hadoop-client-modules/hadoop-client-minicluster/pom.xml
* (edit) hadoop-client-modules/hadoop-client-runtime/pom.xml


> Remove erroneous source jar option from hadoop-client shade configuration
> -
>
> Key: HADOOP-13989
> URL: https://issues.apache.org/jira/browse/HADOOP-13989
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Joe Pallas
>Assignee: Joe Pallas
>Priority: Minor
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-13989.001.patch, HADOOP-13989.002.patch, 
> HADOOP-13989.003.patch
>
>
> The pom files for hadoop-client-minicluster and hadoop-client-runtime have a 
> typo in the configuration of the shade module.  They say 
> {{}} instead of {{}}.  (This was noticed 
> by IntelliJ, but not by maven.)
> Shade plugin doc is at 
> [http://maven.apache.org/plugins/maven-shade-plugin/shade-mojo.html#createSourcesJar].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final

2017-01-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838914#comment-15838914
 ] 

Ted Yu commented on HADOOP-13866:
-

There has not been discussion in hbase community on the exact release of hadoop 
on which 2.0 release is based on.

I can build branch-2 (after this goes in) and run tests from hbase master 
branch on that build.

> Upgrade netty-all to 4.1.1.Final
> 
>
> Key: HADOOP-13866
> URL: https://issues.apache.org/jira/browse/HADOOP-13866
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch, 
> HADOOP-13866.v3.patch, HADOOP-13866.v4.patch, HADOOP-13866.v6.patch
>
>
> netty-all 4.1.1.Final is stable release which we should upgrade to.
> See bottom of HADOOP-12927 for related discussion.
> This issue was discovered since hbase 2.0 uses 4.1.1.Final of netty.
> When launching mapreduce job from hbase, /grid/0/hadoop/yarn/local/  
> usercache/hbase/appcache/application_1479850535804_0008/container_e01_1479850535804_0008_01_05/mr-framework/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar
>  (from hdfs) is ahead of 4.1.1.Final jar (from hbase) on the classpath.
> Resulting in the following exception:
> {code}
> 2016-12-01 20:17:26,678 WARN [Default-IPC-NioEventLoopGroup-1-1] 
> io.netty.util.concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete()
> java.lang.NoSuchMethodError: 
> io.netty.buffer.ByteBuf.retainedDuplicate()Lio/netty/buffer/ByteBuf;
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:272)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:262)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
> at 
> io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final

2017-01-25 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HADOOP-13866:

Target Version/s: 2.9.0, 3.0.0-beta1  (was: 3.0.0-beta1)

> Upgrade netty-all to 4.1.1.Final
> 
>
> Key: HADOOP-13866
> URL: https://issues.apache.org/jira/browse/HADOOP-13866
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch, 
> HADOOP-13866.v3.patch, HADOOP-13866.v4.patch, HADOOP-13866.v6.patch
>
>
> netty-all 4.1.1.Final is stable release which we should upgrade to.
> See bottom of HADOOP-12927 for related discussion.
> This issue was discovered since hbase 2.0 uses 4.1.1.Final of netty.
> When launching mapreduce job from hbase, /grid/0/hadoop/yarn/local/  
> usercache/hbase/appcache/application_1479850535804_0008/container_e01_1479850535804_0008_01_05/mr-framework/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar
>  (from hdfs) is ahead of 4.1.1.Final jar (from hbase) on the classpath.
> Resulting in the following exception:
> {code}
> 2016-12-01 20:17:26,678 WARN [Default-IPC-NioEventLoopGroup-1-1] 
> io.netty.util.concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete()
> java.lang.NoSuchMethodError: 
> io.netty.buffer.ByteBuf.retainedDuplicate()Lio/netty/buffer/ByteBuf;
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:272)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:262)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
> at 
> io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final

2017-01-25 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838909#comment-15838909
 ] 

Xiao Chen edited comment on HADOOP-13866 at 1/26/17 12:43 AM:
--

Thanks, so this jira is also targeting to 2.x? Could you update the target 
version field accordingly? branch-2 maps to 2.9.

What is the minimum hadoop version hbase 2.0 supports?


was (Author: xiaochen):
Thanks, so this jira is also targeting to 2.x? Could you update the target 
version field accordingly? branch-2 maps to 2.9.

What is the minimum version hbase 2.0 supports?

> Upgrade netty-all to 4.1.1.Final
> 
>
> Key: HADOOP-13866
> URL: https://issues.apache.org/jira/browse/HADOOP-13866
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch, 
> HADOOP-13866.v3.patch, HADOOP-13866.v4.patch, HADOOP-13866.v6.patch
>
>
> netty-all 4.1.1.Final is stable release which we should upgrade to.
> See bottom of HADOOP-12927 for related discussion.
> This issue was discovered since hbase 2.0 uses 4.1.1.Final of netty.
> When launching mapreduce job from hbase, /grid/0/hadoop/yarn/local/  
> usercache/hbase/appcache/application_1479850535804_0008/container_e01_1479850535804_0008_01_05/mr-framework/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar
>  (from hdfs) is ahead of 4.1.1.Final jar (from hbase) on the classpath.
> Resulting in the following exception:
> {code}
> 2016-12-01 20:17:26,678 WARN [Default-IPC-NioEventLoopGroup-1-1] 
> io.netty.util.concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete()
> java.lang.NoSuchMethodError: 
> io.netty.buffer.ByteBuf.retainedDuplicate()Lio/netty/buffer/ByteBuf;
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:272)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:262)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
> at 
> io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final

2017-01-25 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838909#comment-15838909
 ] 

Xiao Chen commented on HADOOP-13866:


Thanks, so this jira is also targeting to 2.x? Could you update the target 
version field accordingly? branch-2 maps to 2.9.

What is the minimum version hbase 2.0 supports?

> Upgrade netty-all to 4.1.1.Final
> 
>
> Key: HADOOP-13866
> URL: https://issues.apache.org/jira/browse/HADOOP-13866
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch, 
> HADOOP-13866.v3.patch, HADOOP-13866.v4.patch, HADOOP-13866.v6.patch
>
>
> netty-all 4.1.1.Final is stable release which we should upgrade to.
> See bottom of HADOOP-12927 for related discussion.
> This issue was discovered since hbase 2.0 uses 4.1.1.Final of netty.
> When launching mapreduce job from hbase, /grid/0/hadoop/yarn/local/  
> usercache/hbase/appcache/application_1479850535804_0008/container_e01_1479850535804_0008_01_05/mr-framework/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar
>  (from hdfs) is ahead of 4.1.1.Final jar (from hbase) on the classpath.
> Resulting in the following exception:
> {code}
> 2016-12-01 20:17:26,678 WARN [Default-IPC-NioEventLoopGroup-1-1] 
> io.netty.util.concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete()
> java.lang.NoSuchMethodError: 
> io.netty.buffer.ByteBuf.retainedDuplicate()Lio/netty/buffer/ByteBuf;
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:272)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:262)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
> at 
> io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final

2017-01-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838897#comment-15838897
 ] 

Ted Yu commented on HADOOP-13866:
-

Reverting from branch-2 would make it close to trunk.

Since this JIRA blocks hbase 2.0 release, I hope this can go to some release of 
hadoop 2.x

> Upgrade netty-all to 4.1.1.Final
> 
>
> Key: HADOOP-13866
> URL: https://issues.apache.org/jira/browse/HADOOP-13866
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch, 
> HADOOP-13866.v3.patch, HADOOP-13866.v4.patch, HADOOP-13866.v6.patch
>
>
> netty-all 4.1.1.Final is stable release which we should upgrade to.
> See bottom of HADOOP-12927 for related discussion.
> This issue was discovered since hbase 2.0 uses 4.1.1.Final of netty.
> When launching mapreduce job from hbase, /grid/0/hadoop/yarn/local/  
> usercache/hbase/appcache/application_1479850535804_0008/container_e01_1479850535804_0008_01_05/mr-framework/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar
>  (from hdfs) is ahead of 4.1.1.Final jar (from hbase) on the classpath.
> Resulting in the following exception:
> {code}
> 2016-12-01 20:17:26,678 WARN [Default-IPC-NioEventLoopGroup-1-1] 
> io.netty.util.concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete()
> java.lang.NoSuchMethodError: 
> io.netty.buffer.ByteBuf.retainedDuplicate()Lio/netty/buffer/ByteBuf;
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:272)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:262)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
> at 
> io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final

2017-01-25 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838879#comment-15838879
 ] 

Xiao Chen commented on HADOOP-13866:


Thanks all for the comment.

So to summarize: let's revert HDFS-8377 from trunk, to ease this jira. Plan to 
revert from branch-2 to make it closer to trunk.
branch-2.8 will be left as-is.

[~tedyu], this jira is only targeting for 3.0 right? So we don't need to worry 
about bringing this to branch-2 (specifically branch-2.8)

> Upgrade netty-all to 4.1.1.Final
> 
>
> Key: HADOOP-13866
> URL: https://issues.apache.org/jira/browse/HADOOP-13866
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch, 
> HADOOP-13866.v3.patch, HADOOP-13866.v4.patch, HADOOP-13866.v6.patch
>
>
> netty-all 4.1.1.Final is stable release which we should upgrade to.
> See bottom of HADOOP-12927 for related discussion.
> This issue was discovered since hbase 2.0 uses 4.1.1.Final of netty.
> When launching mapreduce job from hbase, /grid/0/hadoop/yarn/local/  
> usercache/hbase/appcache/application_1479850535804_0008/container_e01_1479850535804_0008_01_05/mr-framework/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar
>  (from hdfs) is ahead of 4.1.1.Final jar (from hbase) on the classpath.
> Resulting in the following exception:
> {code}
> 2016-12-01 20:17:26,678 WARN [Default-IPC-NioEventLoopGroup-1-1] 
> io.netty.util.concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete()
> java.lang.NoSuchMethodError: 
> io.netty.buffer.ByteBuf.retainedDuplicate()Lio/netty/buffer/ByteBuf;
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:272)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:262)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
> at 
> io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final

2017-01-25 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838875#comment-15838875
 ] 

Duo Zhang commented on HADOOP-13866:


+1 on reverting.

> Upgrade netty-all to 4.1.1.Final
> 
>
> Key: HADOOP-13866
> URL: https://issues.apache.org/jira/browse/HADOOP-13866
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch, 
> HADOOP-13866.v3.patch, HADOOP-13866.v4.patch, HADOOP-13866.v6.patch
>
>
> netty-all 4.1.1.Final is stable release which we should upgrade to.
> See bottom of HADOOP-12927 for related discussion.
> This issue was discovered since hbase 2.0 uses 4.1.1.Final of netty.
> When launching mapreduce job from hbase, /grid/0/hadoop/yarn/local/  
> usercache/hbase/appcache/application_1479850535804_0008/container_e01_1479850535804_0008_01_05/mr-framework/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar
>  (from hdfs) is ahead of 4.1.1.Final jar (from hbase) on the classpath.
> Resulting in the following exception:
> {code}
> 2016-12-01 20:17:26,678 WARN [Default-IPC-NioEventLoopGroup-1-1] 
> io.netty.util.concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete()
> java.lang.NoSuchMethodError: 
> io.netty.buffer.ByteBuf.retainedDuplicate()Lio/netty/buffer/ByteBuf;
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:272)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:262)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
> at 
> io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final

2017-01-25 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838866#comment-15838866
 ] 

Junping Du commented on HADOOP-13866:
-

It is too late for 2.8.0 which is in RC stage and get frozen to any 
commits/reverts.

> Upgrade netty-all to 4.1.1.Final
> 
>
> Key: HADOOP-13866
> URL: https://issues.apache.org/jira/browse/HADOOP-13866
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch, 
> HADOOP-13866.v3.patch, HADOOP-13866.v4.patch, HADOOP-13866.v6.patch
>
>
> netty-all 4.1.1.Final is stable release which we should upgrade to.
> See bottom of HADOOP-12927 for related discussion.
> This issue was discovered since hbase 2.0 uses 4.1.1.Final of netty.
> When launching mapreduce job from hbase, /grid/0/hadoop/yarn/local/  
> usercache/hbase/appcache/application_1479850535804_0008/container_e01_1479850535804_0008_01_05/mr-framework/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar
>  (from hdfs) is ahead of 4.1.1.Final jar (from hbase) on the classpath.
> Resulting in the following exception:
> {code}
> 2016-12-01 20:17:26,678 WARN [Default-IPC-NioEventLoopGroup-1-1] 
> io.netty.util.concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete()
> java.lang.NoSuchMethodError: 
> io.netty.buffer.ByteBuf.retainedDuplicate()Lio/netty/buffer/ByteBuf;
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:272)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:262)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
> at 
> io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13989) Remove erroneous source jar option from hadoop-client shade configuration

2017-01-25 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13989:
-
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha3
   Status: Resolved  (was: Patch Available)

Committed to trunk, thanks Joe for the patch, Sean for reviewing!

> Remove erroneous source jar option from hadoop-client shade configuration
> -
>
> Key: HADOOP-13989
> URL: https://issues.apache.org/jira/browse/HADOOP-13989
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha2
>Reporter: Joe Pallas
>Assignee: Joe Pallas
>Priority: Minor
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-13989.001.patch, HADOOP-13989.002.patch, 
> HADOOP-13989.003.patch
>
>
> The pom files for hadoop-client-minicluster and hadoop-client-runtime have a 
> typo in the configuration of the shade module.  They say 
> {{}} instead of {{}}.  (This was noticed 
> by IntelliJ, but not by maven.)
> Shade plugin doc is at 
> [http://maven.apache.org/plugins/maven-shade-plugin/shade-mojo.html#createSourcesJar].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final

2017-01-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838827#comment-15838827
 ] 

Ted Yu commented on HADOOP-13866:
-

[~Apache9]:
See the above discussion.

Your opinion is appreciated.

> Upgrade netty-all to 4.1.1.Final
> 
>
> Key: HADOOP-13866
> URL: https://issues.apache.org/jira/browse/HADOOP-13866
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch, 
> HADOOP-13866.v3.patch, HADOOP-13866.v4.patch, HADOOP-13866.v6.patch
>
>
> netty-all 4.1.1.Final is stable release which we should upgrade to.
> See bottom of HADOOP-12927 for related discussion.
> This issue was discovered since hbase 2.0 uses 4.1.1.Final of netty.
> When launching mapreduce job from hbase, /grid/0/hadoop/yarn/local/  
> usercache/hbase/appcache/application_1479850535804_0008/container_e01_1479850535804_0008_01_05/mr-framework/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar
>  (from hdfs) is ahead of 4.1.1.Final jar (from hbase) on the classpath.
> Resulting in the following exception:
> {code}
> 2016-12-01 20:17:26,678 WARN [Default-IPC-NioEventLoopGroup-1-1] 
> io.netty.util.concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete()
> java.lang.NoSuchMethodError: 
> io.netty.buffer.ByteBuf.retainedDuplicate()Lio/netty/buffer/ByteBuf;
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:272)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:262)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
> at 
> io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14024) KMS JMX endpoint throws ClassNotFoundException

2017-01-25 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-14024:
-
Priority: Critical  (was: Major)

> KMS JMX endpoint throws ClassNotFoundException
> --
>
> Key: HADOOP-14024
> URL: https://issues.apache.org/jira/browse/HADOOP-14024
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Andrew Wang
>Priority: Critical
>
> Throws like this:
> {noformat}
> root cause java.lang.ClassNotFoundException: 
> org.mortbay.jetty.servlet.Context
>   
> org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1698)
>   
> org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1544)
>   org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:174)
>   javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
>   javax.servlet.http.HttpServlet.service(HttpServlet.java:723)
>   
> org.apache.hadoop.crypto.key.kms.server.KMSMDCFilter.doFilter(KMSMDCFilter.java:84)
>   
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:636)
>   
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:304)
>   
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:588)
>   
> org.apache.hadoop.crypto.key.kms.server.KMSAuthenticationFilter.doFilter(KMSAuthenticationFilter.java:129)
> {noformat}
> I tried out branch-2.6 and it seems to be okay, so something changed between 
> 2.6.x and 2.8.x/branch-2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final

2017-01-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838810#comment-15838810
 ] 

Ted Yu edited comment on HADOOP-13866 at 1/25/17 11:35 PM:
---

2.8.0 hasn't been released yet.

For 3.0, [~andrew.wang]: do you think taking out HDFS-8377 is an option ?

Edit:
I saw Andrew's comment after clicking Save button.


was (Author: yuzhih...@gmail.com):
2.8.0 hasn't been released yet.

For 3.0, [~andrew.wang]: do you think taking out HDFS-8377 is an option ?

Thanks

> Upgrade netty-all to 4.1.1.Final
> 
>
> Key: HADOOP-13866
> URL: https://issues.apache.org/jira/browse/HADOOP-13866
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch, 
> HADOOP-13866.v3.patch, HADOOP-13866.v4.patch, HADOOP-13866.v6.patch
>
>
> netty-all 4.1.1.Final is stable release which we should upgrade to.
> See bottom of HADOOP-12927 for related discussion.
> This issue was discovered since hbase 2.0 uses 4.1.1.Final of netty.
> When launching mapreduce job from hbase, /grid/0/hadoop/yarn/local/  
> usercache/hbase/appcache/application_1479850535804_0008/container_e01_1479850535804_0008_01_05/mr-framework/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar
>  (from hdfs) is ahead of 4.1.1.Final jar (from hbase) on the classpath.
> Resulting in the following exception:
> {code}
> 2016-12-01 20:17:26,678 WARN [Default-IPC-NioEventLoopGroup-1-1] 
> io.netty.util.concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete()
> java.lang.NoSuchMethodError: 
> io.netty.buffer.ByteBuf.retainedDuplicate()Lio/netty/buffer/ByteBuf;
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:272)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:262)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
> at 
> io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final

2017-01-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838810#comment-15838810
 ] 

Ted Yu commented on HADOOP-13866:
-

2.8.0 hasn't been released yet.

For 3.0, [~andrew.wang]: do you think taking out HDFS-8377 is an option ?

Thanks

> Upgrade netty-all to 4.1.1.Final
> 
>
> Key: HADOOP-13866
> URL: https://issues.apache.org/jira/browse/HADOOP-13866
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch, 
> HADOOP-13866.v3.patch, HADOOP-13866.v4.patch, HADOOP-13866.v6.patch
>
>
> netty-all 4.1.1.Final is stable release which we should upgrade to.
> See bottom of HADOOP-12927 for related discussion.
> This issue was discovered since hbase 2.0 uses 4.1.1.Final of netty.
> When launching mapreduce job from hbase, /grid/0/hadoop/yarn/local/  
> usercache/hbase/appcache/application_1479850535804_0008/container_e01_1479850535804_0008_01_05/mr-framework/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar
>  (from hdfs) is ahead of 4.1.1.Final jar (from hbase) on the classpath.
> Resulting in the following exception:
> {code}
> 2016-12-01 20:17:26,678 WARN [Default-IPC-NioEventLoopGroup-1-1] 
> io.netty.util.concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete()
> java.lang.NoSuchMethodError: 
> io.netty.buffer.ByteBuf.retainedDuplicate()Lio/netty/buffer/ByteBuf;
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:272)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:262)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
> at 
> io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1

2017-01-25 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13514:
---
Hadoop Flags:   (was: Reviewed)

> Upgrade maven surefire plugin to 2.19.1
> ---
>
> Key: HADOOP-13514
> URL: https://issues.apache.org/jira/browse/HADOOP-13514
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Attachments: HADOOP-13514.002.patch, HADOOP-13514-addendum.01.patch, 
> surefire-2.19.patch
>
>
> A lot of people working on Hadoop don't want to run all the tests when they 
> develop; only the bits they're working on. Surefire 2.19 introduced more 
> useful test filters which let us run a subset of the tests that brings the 
> build time down from 'come back tomorrow' to 'grab a coffee'.
> For instance, if I only care about the S3 adaptor, I might run:
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
> \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
> org.apache.hadoop.fs.s3a.*\"
> {code}
> We can work around this by specifying the surefire version on the command 
> line but it would be better, imo, to just update the default surefire used.
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
> \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
> org.apache.hadoop.fs.s3a.*\" -Dmaven-surefire-plugin.version=2.19.1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1

2017-01-25 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13514:
---
Target Version/s: 2.9.0, 3.0.0-alpha3
   Fix Version/s: (was: 3.0.0-alpha3)
  (was: 2.8.0)

> Upgrade maven surefire plugin to 2.19.1
> ---
>
> Key: HADOOP-13514
> URL: https://issues.apache.org/jira/browse/HADOOP-13514
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Attachments: HADOOP-13514.002.patch, HADOOP-13514-addendum.01.patch, 
> surefire-2.19.patch
>
>
> A lot of people working on Hadoop don't want to run all the tests when they 
> develop; only the bits they're working on. Surefire 2.19 introduced more 
> useful test filters which let us run a subset of the tests that brings the 
> build time down from 'come back tomorrow' to 'grab a coffee'.
> For instance, if I only care about the S3 adaptor, I might run:
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
> \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
> org.apache.hadoop.fs.s3a.*\"
> {code}
> We can work around this by specifying the surefire version on the command 
> line but it would be better, imo, to just update the default surefire used.
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
> \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
> org.apache.hadoop.fs.s3a.*\" -Dmaven-surefire-plugin.version=2.19.1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final

2017-01-25 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838809#comment-15838809
 ] 

Andrew Wang commented on HADOOP-13866:
--

bq. It looks like it's already rolled out to 2.8.0 and 3.0.0-alpha1, so I'm not 
sure whether we can still revert it now...

I think it's fine to revert if this is causing us problems. That branch isn't 
actively being worked on AFAIK, and that JIRA doesn't have any user-visible 
changes so reverting is still compatible.

> Upgrade netty-all to 4.1.1.Final
> 
>
> Key: HADOOP-13866
> URL: https://issues.apache.org/jira/browse/HADOOP-13866
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch, 
> HADOOP-13866.v3.patch, HADOOP-13866.v4.patch, HADOOP-13866.v6.patch
>
>
> netty-all 4.1.1.Final is stable release which we should upgrade to.
> See bottom of HADOOP-12927 for related discussion.
> This issue was discovered since hbase 2.0 uses 4.1.1.Final of netty.
> When launching mapreduce job from hbase, /grid/0/hadoop/yarn/local/  
> usercache/hbase/appcache/application_1479850535804_0008/container_e01_1479850535804_0008_01_05/mr-framework/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar
>  (from hdfs) is ahead of 4.1.1.Final jar (from hbase) on the classpath.
> Resulting in the following exception:
> {code}
> 2016-12-01 20:17:26,678 WARN [Default-IPC-NioEventLoopGroup-1-1] 
> io.netty.util.concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete()
> java.lang.NoSuchMethodError: 
> io.netty.buffer.ByteBuf.retainedDuplicate()Lio/netty/buffer/ByteBuf;
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:272)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:262)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
> at 
> io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14024) KMS JMX endpoint throws ClassNotFoundException

2017-01-25 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838806#comment-15838806
 ] 

Andrew Wang commented on HADOOP-14024:
--

Works in branch-2.7 also, which narrows the diff.

> KMS JMX endpoint throws ClassNotFoundException
> --
>
> Key: HADOOP-14024
> URL: https://issues.apache.org/jira/browse/HADOOP-14024
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Andrew Wang
>
> Throws like this:
> {noformat}
> root cause java.lang.ClassNotFoundException: 
> org.mortbay.jetty.servlet.Context
>   
> org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1698)
>   
> org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1544)
>   org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:174)
>   javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
>   javax.servlet.http.HttpServlet.service(HttpServlet.java:723)
>   
> org.apache.hadoop.crypto.key.kms.server.KMSMDCFilter.doFilter(KMSMDCFilter.java:84)
>   
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:636)
>   
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:304)
>   
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:588)
>   
> org.apache.hadoop.crypto.key.kms.server.KMSAuthenticationFilter.doFilter(KMSAuthenticationFilter.java:129)
> {noformat}
> I tried out branch-2.6 and it seems to be okay, so something changed between 
> 2.6.x and 2.8.x/branch-2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14024) KMS JMX endpoint throws ClassNotFoundException

2017-01-25 Thread Andrew Wang (JIRA)
Andrew Wang created HADOOP-14024:


 Summary: KMS JMX endpoint throws ClassNotFoundException
 Key: HADOOP-14024
 URL: https://issues.apache.org/jira/browse/HADOOP-14024
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms
Affects Versions: 2.8.0
Reporter: Andrew Wang


Throws like this:

{noformat}
root cause java.lang.ClassNotFoundException: 
org.mortbay.jetty.servlet.Context

org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1698)

org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1544)
org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:174)
javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
javax.servlet.http.HttpServlet.service(HttpServlet.java:723)

org.apache.hadoop.crypto.key.kms.server.KMSMDCFilter.doFilter(KMSMDCFilter.java:84)

org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:636)

org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:304)

org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:588)

org.apache.hadoop.crypto.key.kms.server.KMSAuthenticationFilter.doFilter(KMSAuthenticationFilter.java:129)
{noformat}

I tried out branch-2.6 and it seems to be okay, so something changed between 
2.6.x and 2.8.x/branch-2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final

2017-01-25 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838786#comment-15838786
 ] 

Haibo Chen commented on HADOOP-13866:
-

See [~xiaochen]'s comment above.

> Upgrade netty-all to 4.1.1.Final
> 
>
> Key: HADOOP-13866
> URL: https://issues.apache.org/jira/browse/HADOOP-13866
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch, 
> HADOOP-13866.v3.patch, HADOOP-13866.v4.patch, HADOOP-13866.v6.patch
>
>
> netty-all 4.1.1.Final is stable release which we should upgrade to.
> See bottom of HADOOP-12927 for related discussion.
> This issue was discovered since hbase 2.0 uses 4.1.1.Final of netty.
> When launching mapreduce job from hbase, /grid/0/hadoop/yarn/local/  
> usercache/hbase/appcache/application_1479850535804_0008/container_e01_1479850535804_0008_01_05/mr-framework/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar
>  (from hdfs) is ahead of 4.1.1.Final jar (from hbase) on the classpath.
> Resulting in the following exception:
> {code}
> 2016-12-01 20:17:26,678 WARN [Default-IPC-NioEventLoopGroup-1-1] 
> io.netty.util.concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete()
> java.lang.NoSuchMethodError: 
> io.netty.buffer.ByteBuf.retainedDuplicate()Lio/netty/buffer/ByteBuf;
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:272)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:262)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
> at 
> io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14003) Make additional KMS tomcat settings configurable

2017-01-25 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-14003:
-
Attachment: HADOOP-14003.branch-2.003.patch

Here's one more rev. I realized that non-1 values of acceptorThreadCount 
requires the NIO connector, so set the default to 1 and document this.

> Make additional KMS tomcat settings configurable
> 
>
> Key: HADOOP-14003
> URL: https://issues.apache.org/jira/browse/HADOOP-14003
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hadoop-14003.001.patch, hadoop-14003-branch-2.001.patch, 
> HADOOP-14003.branch-2.002.patch, HADOOP-14003.branch-2.003.patch
>
>
> Doing some Tomcat performance tuning on a loaded cluster, we found that 
> {{acceptCount}}, {{acceptorThreadCount}}, and {{protocol}} can be useful. 
> Let's make these configurable in the kms startup script.
> Since the KMS is Jetty in 3.x, this is targeted at just branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final

2017-01-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838777#comment-15838777
 ] 

Ted Yu commented on HADOOP-13866:
-

Where is the discussion about reverting HDFS-8377 ?

I looked at the tail of HDFS-8377 but there was no discussion there.

> Upgrade netty-all to 4.1.1.Final
> 
>
> Key: HADOOP-13866
> URL: https://issues.apache.org/jira/browse/HADOOP-13866
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch, 
> HADOOP-13866.v3.patch, HADOOP-13866.v4.patch, HADOOP-13866.v6.patch
>
>
> netty-all 4.1.1.Final is stable release which we should upgrade to.
> See bottom of HADOOP-12927 for related discussion.
> This issue was discovered since hbase 2.0 uses 4.1.1.Final of netty.
> When launching mapreduce job from hbase, /grid/0/hadoop/yarn/local/  
> usercache/hbase/appcache/application_1479850535804_0008/container_e01_1479850535804_0008_01_05/mr-framework/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar
>  (from hdfs) is ahead of 4.1.1.Final jar (from hbase) on the classpath.
> Resulting in the following exception:
> {code}
> 2016-12-01 20:17:26,678 WARN [Default-IPC-NioEventLoopGroup-1-1] 
> io.netty.util.concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete()
> java.lang.NoSuchMethodError: 
> io.netty.buffer.ByteBuf.retainedDuplicate()Lio/netty/buffer/ByteBuf;
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:272)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:262)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
> at 
> io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final

2017-01-25 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838772#comment-15838772
 ] 

Haibo Chen commented on HADOOP-13866:
-

bq. If I can construct Http2ConnectionHandler using above code, 
DtpHttp2FrameListener can be plugged in which should make 
TestDatanodeHttpXFrame pass.
Yes, this is what I was also thinking of. But let's wait for a while to see if 
HDFS-8377 can be reverted. It will make the change much easier.

> Upgrade netty-all to 4.1.1.Final
> 
>
> Key: HADOOP-13866
> URL: https://issues.apache.org/jira/browse/HADOOP-13866
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch, 
> HADOOP-13866.v3.patch, HADOOP-13866.v4.patch, HADOOP-13866.v6.patch
>
>
> netty-all 4.1.1.Final is stable release which we should upgrade to.
> See bottom of HADOOP-12927 for related discussion.
> This issue was discovered since hbase 2.0 uses 4.1.1.Final of netty.
> When launching mapreduce job from hbase, /grid/0/hadoop/yarn/local/  
> usercache/hbase/appcache/application_1479850535804_0008/container_e01_1479850535804_0008_01_05/mr-framework/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar
>  (from hdfs) is ahead of 4.1.1.Final jar (from hbase) on the classpath.
> Resulting in the following exception:
> {code}
> 2016-12-01 20:17:26,678 WARN [Default-IPC-NioEventLoopGroup-1-1] 
> io.netty.util.concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete()
> java.lang.NoSuchMethodError: 
> io.netty.buffer.ByteBuf.retainedDuplicate()Lio/netty/buffer/ByteBuf;
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:272)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:262)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
> at 
> io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14018) hadoop-client-api-3.0.0-alpha2.jar misses LICENSE file

2017-01-25 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838764#comment-15838764
 ] 

Xiao Chen commented on HADOOP-14018:


{noformat}
$ ./dev-support/bin/verify-license-files 
ERROR: hadoop-client-api-3.0.0-alpha3-SNAPSHOT.jar: No valid NOTICE found
WARNING: hadoop-client-minicluster-3.0.0-alpha3-SNAPSHOT.jar: Found 6 LICENSE 
files (1 were valid)
WARNING: hadoop-client-minicluster-3.0.0-alpha3-SNAPSHOT.jar: Found 3 NOTICE 
files (0 were valid)
ERROR: hadoop-client-minicluster-3.0.0-alpha3-SNAPSHOT.jar: No valid NOTICE 
found
WARNING: hadoop-client-runtime-3.0.0-alpha3-SNAPSHOT.jar: Found 2 LICENSE files 
(1 were valid)
ERROR: hadoop-client-runtime-3.0.0-alpha3-SNAPSHOT.jar: No valid NOTICE found
{noformat}

Thanks [~elek] for the patch and [~jzhuge] for reporting.
Appreciate the completeness to cover all hadoop-client modules although title 
only says hadoop-client-api.

Allen had an enhanced tool in HADOOP-13374 to check for L, above is the 
result after applying the fix here. Could you take a look? Thanks.

> hadoop-client-api-3.0.0-alpha2.jar misses LICENSE file
> --
>
> Key: HADOOP-14018
> URL: https://issues.apache.org/jira/browse/HADOOP-14018
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: Elek, Marton
>Priority: Minor
> Attachments: HADOOP-14018.001.patch
>
>
> In the 3.0.0-alpha2-RC0, hadoop-client-api-3.0.0-alpha2.jar misses LICENSE 
> file, but it does have {{META-INF/NOTICE}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838697#comment-15838697
 ] 

Hudson commented on HADOOP-13988:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11174 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11174/])
HADOOP-13988. KMSClientProvider does not work with WebHDFS and Apache (xyao: 
rev a46933e8ce4c1715c11e3e3283bf0e8c2b53b837)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java


> KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser
> 
>
> Key: HADOOP-13988
> URL: https://issues.apache.org/jira/browse/HADOOP-13988
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 2.8.0, 2.7.3
> Environment: HDP 2.5.3.0 
> WebHDFSUser --> Knox --> HA NameNodes(WebHDFS) --> DataNodes
>Reporter: Greg Senia
>Assignee: Xiaoyu Yao
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-13988.01.patch, HADOOP-13988.02.patch, 
> HADOOP-13988.patch, HADOOP-13988.patch
>
>
> After upgrading to HDP 2.5.3.0 noticed that all of the KMSClientProvider 
> issues have not been resolved. We put a test build together and applied 
> HADOOP-13558 and HADOOP-13749 these two fixes did still not solve the issue 
> with requests coming from WebHDFS through to Knox to a TDE zone.
> So we added some debug to our build and determined effectively what is 
> happening here is a double proxy situation which does not seem to work. So we 
> propose the following fix in getActualUgi Method:
> {noformat}
>  }
>  // Use current user by default
>  UserGroupInformation actualUgi = currentUgi;
>  if (currentUgi.getRealUser() != null) {
>// Use real user for proxy user
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using RealUser for proxyUser);
>   }
>actualUgi = currentUgi.getRealUser();
>if (getDoAsUser() != null) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("doAsUser exists");
>   LOG.debug("currentUGI realUser shortName: {}", 
> currentUgi.getRealUser().getShortUserName());
>   LOG.debug("processUGI loginUser shortName: {}", 
> UserGroupInformation.getLoginUser().getShortUserName());
>   }
> if (currentUgi.getRealUser().getShortUserName() != 
> UserGroupInformation.getLoginUser().getShortUserName()) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("currentUGI.realUser does not match 
> UGI.processUser);
> }
> actualUgi = UserGroupInformation.getLoginUser();
> if (LOG.isDebugEnabled()) {
>   LOG.debug("LoginUser for Proxy: {}", 
> actualUgi.getLoginUser());
> }
> }
>}
>   
>  } else if (!currentUgiContainsKmsDt() &&
>  !currentUgi.hasKerberosCredentials()) {
>// Use login user for user that does not have either
>// Kerberos credential or KMS delegation token for KMS operations
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using loginUser no KMS Delegation Token no Kerberos 
> Credentials");
>   }
>actualUgi = currentUgi.getLoginUser();
>  }
>  return actualUgi;
>}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-25 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-13988:

Fix Version/s: 3.0.0-alpha3

> KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser
> 
>
> Key: HADOOP-13988
> URL: https://issues.apache.org/jira/browse/HADOOP-13988
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 2.8.0, 2.7.3
> Environment: HDP 2.5.3.0 
> WebHDFSUser --> Knox --> HA NameNodes(WebHDFS) --> DataNodes
>Reporter: Greg Senia
>Assignee: Xiaoyu Yao
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-13988.01.patch, HADOOP-13988.02.patch, 
> HADOOP-13988.patch, HADOOP-13988.patch
>
>
> After upgrading to HDP 2.5.3.0 noticed that all of the KMSClientProvider 
> issues have not been resolved. We put a test build together and applied 
> HADOOP-13558 and HADOOP-13749 these two fixes did still not solve the issue 
> with requests coming from WebHDFS through to Knox to a TDE zone.
> So we added some debug to our build and determined effectively what is 
> happening here is a double proxy situation which does not seem to work. So we 
> propose the following fix in getActualUgi Method:
> {noformat}
>  }
>  // Use current user by default
>  UserGroupInformation actualUgi = currentUgi;
>  if (currentUgi.getRealUser() != null) {
>// Use real user for proxy user
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using RealUser for proxyUser);
>   }
>actualUgi = currentUgi.getRealUser();
>if (getDoAsUser() != null) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("doAsUser exists");
>   LOG.debug("currentUGI realUser shortName: {}", 
> currentUgi.getRealUser().getShortUserName());
>   LOG.debug("processUGI loginUser shortName: {}", 
> UserGroupInformation.getLoginUser().getShortUserName());
>   }
> if (currentUgi.getRealUser().getShortUserName() != 
> UserGroupInformation.getLoginUser().getShortUserName()) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("currentUGI.realUser does not match 
> UGI.processUser);
> }
> actualUgi = UserGroupInformation.getLoginUser();
> if (LOG.isDebugEnabled()) {
>   LOG.debug("LoginUser for Proxy: {}", 
> actualUgi.getLoginUser());
> }
> }
>}
>   
>  } else if (!currentUgiContainsKmsDt() &&
>  !currentUgi.hasKerberosCredentials()) {
>// Use login user for user that does not have either
>// Kerberos credential or KMS delegation token for KMS operations
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using loginUser no KMS Delegation Token no Kerberos 
> Credentials");
>   }
>actualUgi = currentUgi.getLoginUser();
>  }
>  return actualUgi;
>}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final

2017-01-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838686#comment-15838686
 ] 

Ted Yu commented on HADOOP-13866:
-

I ran TestPipelinesFailover and TestNameNodeMetadataConsistency with patch v6 
which passed.

> Upgrade netty-all to 4.1.1.Final
> 
>
> Key: HADOOP-13866
> URL: https://issues.apache.org/jira/browse/HADOOP-13866
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch, 
> HADOOP-13866.v3.patch, HADOOP-13866.v4.patch, HADOOP-13866.v6.patch
>
>
> netty-all 4.1.1.Final is stable release which we should upgrade to.
> See bottom of HADOOP-12927 for related discussion.
> This issue was discovered since hbase 2.0 uses 4.1.1.Final of netty.
> When launching mapreduce job from hbase, /grid/0/hadoop/yarn/local/  
> usercache/hbase/appcache/application_1479850535804_0008/container_e01_1479850535804_0008_01_05/mr-framework/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar
>  (from hdfs) is ahead of 4.1.1.Final jar (from hbase) on the classpath.
> Resulting in the following exception:
> {code}
> 2016-12-01 20:17:26,678 WARN [Default-IPC-NioEventLoopGroup-1-1] 
> io.netty.util.concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete()
> java.lang.NoSuchMethodError: 
> io.netty.buffer.ByteBuf.retainedDuplicate()Lio/netty/buffer/ByteBuf;
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:272)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:262)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
> at 
> io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14023) S3Guard: DynamoDBMetadataStore logs nonsense region

2017-01-25 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14023:
---
Attachment: HADOOP-14023-HADOOP-13345.001.patch

This has the benefit of actually logging the correct region with all documented 
endpoints, but it still relies on the structure of the endpoint URL, which I 
really don't like. Since the AWS SDK does not appear to provide a way to look 
up a region given a client, I modified the factory to allow you to look up a 
region based on the configuration (since the configuration is the only other 
option to create a client when not specifying the region yourself).

> S3Guard: DynamoDBMetadataStore logs nonsense region
> ---
>
> Key: HADOOP-14023
> URL: https://issues.apache.org/jira/browse/HADOOP-14023
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: HADOOP-13345
>Reporter: Sean Mackrory
>Priority: Minor
> Attachments: HADOOP-14023-HADOOP-13345.001.patch
>
>
> When you specify a DynamoDB endpoint instead of it using the same one as the 
> S3 bucket, it thinks the region is "dynamodb":
> {code}
> > bin/hadoop s3a init -m dynamodb://region-test
> 2017-01-25 11:41:28,006 INFO s3guard.S3GuardTool: create metadata store: 
> dynamodb://region-test scheme: dynamodb
> 2017-01-25 11:41:28,116 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2017-01-25 11:41:29,421 INFO s3guard.DynamoDBMetadataStore: Creating 
> non-existent DynamoDB table region-test in region dynamodb
> 2017-01-25 11:41:34,637 INFO s3guard.S3GuardTool: Metadata store 
> DynamoDBMetadataStore{region=dynamodb, tableName=region-test} is initialized.
> {code}
> My guess is it's because of DynamoDBMetadataStore.initialize will call 
> getEndpointPrefix, and all the published endpoints for DynamoDB actually 
> start with dynamodb.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13433) Race in UGI.reloginFromKeytab

2017-01-25 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13433:
-
Fix Version/s: 3.0.0-alpha3

Please remember to set a 3.0.0-X version when committing to trunk, thanks!

> Race in UGI.reloginFromKeytab
> -
>
> Key: HADOOP-13433
> URL: https://issues.apache.org/jira/browse/HADOOP-13433
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-13433-branch-2.patch, HADOOP-13433.patch, 
> HADOOP-13433-v1.patch, HADOOP-13433-v2.patch, HADOOP-13433-v4.patch, 
> HADOOP-13433-v5.patch, HADOOP-13433-v6.patch, HBASE-13433-testcase-v3.patch
>
>
> This is a problem that has troubled us for several years. For our HBase 
> cluster, sometimes the RS will be stuck due to
> {noformat}
> 2016-06-20,03:44:12,936 INFO org.apache.hadoop.ipc.SecureClient: Exception 
> encountered while connecting to the server :
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: The ticket 
> isn't for us (35) - BAD TGS SERVER NAME)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:194)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:140)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupSaslConnection(SecureClient.java:187)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.access$700(SecureClient.java:95)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:325)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:322)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1781)
> at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
> at org.apache.hadoop.hbase.security.User.call(User.java:607)
> at org.apache.hadoop.hbase.security.User.access$700(User.java:51)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:461)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupIOstreams(SecureClient.java:321)
> at 
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1164)
> at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1004)
> at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Invoker.invoke(SecureRpcEngine.java:107)
> at $Proxy24.replicateLogEntries(Unknown Source)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:962)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.runLoop(ReplicationSource.java:466)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:515)
> Caused by: GSSException: No valid credentials provided (Mechanism level: The 
> ticket isn't for us (35) - BAD TGS SERVER NAME)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:663)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:180)
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:175)
> ... 23 more
> Caused by: KrbException: The ticket isn't for us (35) - BAD TGS SERVER NAME
> at sun.security.krb5.KrbTgsRep.(KrbTgsRep.java:64)
> at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:185)
> at 
> sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:294)
> at 
> sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:106)
> at 
> sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:557)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:594)
> ... 26 more
> Caused by: KrbException: Identifier doesn't match expected value (906)
> at sun.security.krb5.internal.KDCRep.init(KDCRep.java:133)
> at sun.security.krb5.internal.TGSRep.init(TGSRep.java:58)
>  

[jira] [Commented] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final

2017-01-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838663#comment-15838663
 ] 

Ted Yu commented on HADOOP-13866:
-

In netty's Http2FrameCodec ctor :
{code}
Http2ConnectionDecoder decoder = new 
DefaultHttp2ConnectionDecoder(connection, encoder, reader);
decoder.frameListener(new FrameListener());
http2Handler = new InternalHttp2ConnectionHandler(decoder, encoder, 
initialSettings);
{code}
If I can construct Http2ConnectionHandler using above code, 
DtpHttp2FrameListener can be plugged in which should make 
TestDatanodeHttpXFrame pass.

> Upgrade netty-all to 4.1.1.Final
> 
>
> Key: HADOOP-13866
> URL: https://issues.apache.org/jira/browse/HADOOP-13866
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch, 
> HADOOP-13866.v3.patch, HADOOP-13866.v4.patch, HADOOP-13866.v6.patch
>
>
> netty-all 4.1.1.Final is stable release which we should upgrade to.
> See bottom of HADOOP-12927 for related discussion.
> This issue was discovered since hbase 2.0 uses 4.1.1.Final of netty.
> When launching mapreduce job from hbase, /grid/0/hadoop/yarn/local/  
> usercache/hbase/appcache/application_1479850535804_0008/container_e01_1479850535804_0008_01_05/mr-framework/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar
>  (from hdfs) is ahead of 4.1.1.Final jar (from hbase) on the classpath.
> Resulting in the following exception:
> {code}
> 2016-12-01 20:17:26,678 WARN [Default-IPC-NioEventLoopGroup-1-1] 
> io.netty.util.concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete()
> java.lang.NoSuchMethodError: 
> io.netty.buffer.ByteBuf.retainedDuplicate()Lio/netty/buffer/ByteBuf;
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:272)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:262)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
> at 
> io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2017-01-25 Thread Steve Moist (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838654#comment-15838654
 ] 

Steve Moist commented on HADOOP-13075:
--

[~fabbri] One of Frederico's original changes was removing that.  It was 
commented on saying not to do so (in branch-2). It would be a simpler change 
but I feel overloading that wouldn't be as clear.  I'm not sure what customers 
are using that field, the only use of it is in ITestS3AEncryption.  Since this 
is 3.0 of Hadoop, is it possible to remove old fields?  I'm more supportie of 
removing that field.

> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---
>
> Key: HADOOP-13075
> URL: https://issues.apache.org/jira/browse/HADOOP-13075
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Andrew Olson
>Assignee: Federico Czerwinski
>
> S3 provides 3 types of server-side encryption [1],
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of which the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support in aws-java-sdk already available it should be fairly 
> straightforward [6],[7] to support the other two types of SSE with some 
> additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html
> [6] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java
> [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13433) Race in UGI.reloginFromKeytab

2017-01-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838656#comment-15838656
 ] 

Hudson commented on HADOOP-13433:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11173 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11173/])
HADOOP-13433 Race in UGI.reloginFromKeytab. Contributed by Duo Zhang. (stevel: 
rev 7fc3e68a876132563aa2321519fc6941e37b2cae)
* (add) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestFixKerberosTicketOrder.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
* (add) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestRaceWhenRelogin.java
* (edit) hadoop-common-project/hadoop-common/pom.xml


> Race in UGI.reloginFromKeytab
> -
>
> Key: HADOOP-13433
> URL: https://issues.apache.org/jira/browse/HADOOP-13433
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.9.0
>
> Attachments: HADOOP-13433-branch-2.patch, HADOOP-13433.patch, 
> HADOOP-13433-v1.patch, HADOOP-13433-v2.patch, HADOOP-13433-v4.patch, 
> HADOOP-13433-v5.patch, HADOOP-13433-v6.patch, HBASE-13433-testcase-v3.patch
>
>
> This is a problem that has troubled us for several years. For our HBase 
> cluster, sometimes the RS will be stuck due to
> {noformat}
> 2016-06-20,03:44:12,936 INFO org.apache.hadoop.ipc.SecureClient: Exception 
> encountered while connecting to the server :
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: The ticket 
> isn't for us (35) - BAD TGS SERVER NAME)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:194)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:140)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupSaslConnection(SecureClient.java:187)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.access$700(SecureClient.java:95)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:325)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:322)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1781)
> at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
> at org.apache.hadoop.hbase.security.User.call(User.java:607)
> at org.apache.hadoop.hbase.security.User.access$700(User.java:51)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:461)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupIOstreams(SecureClient.java:321)
> at 
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1164)
> at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1004)
> at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Invoker.invoke(SecureRpcEngine.java:107)
> at $Proxy24.replicateLogEntries(Unknown Source)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:962)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.runLoop(ReplicationSource.java:466)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:515)
> Caused by: GSSException: No valid credentials provided (Mechanism level: The 
> ticket isn't for us (35) - BAD TGS SERVER NAME)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:663)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:180)
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:175)
> ... 23 more
> Caused by: KrbException: The ticket isn't for us (35) - BAD TGS SERVER NAME
> at sun.security.krb5.KrbTgsRep.(KrbTgsRep.java:64)
> at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:185)
> at 
> 

[jira] [Commented] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2017-01-25 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838640#comment-15838640
 ] 

Aaron Fabbri commented on HADOOP-13075:
---

Thanks for doing this [~moist].  One quick question, would it be simpler to not 
add the new enum + deprecation, and instead just add new constants here?
{code}
+   * Use the S3AEncryptionMethods instead when configuring
 +   * which Server Side Encryption to use.
 */
 +  @Deprecated
public static final String SERVER_SIDE_ENCRYPTION_AES256 =
"AES256";
{code}

> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---
>
> Key: HADOOP-13075
> URL: https://issues.apache.org/jira/browse/HADOOP-13075
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Andrew Olson
>Assignee: Federico Czerwinski
>
> S3 provides 3 types of server-side encryption [1],
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of which the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support in aws-java-sdk already available it should be fairly 
> straightforward [6],[7] to support the other two types of SSE with some 
> additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html
> [6] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java
> [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-25 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-13988:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

Thanks [~gss2002] for the contribution and all for the discussion/reviews. I've 
commit the fix to trunk and branch-2.

> KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser
> 
>
> Key: HADOOP-13988
> URL: https://issues.apache.org/jira/browse/HADOOP-13988
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 2.8.0, 2.7.3
> Environment: HDP 2.5.3.0 
> WebHDFSUser --> Knox --> HA NameNodes(WebHDFS) --> DataNodes
>Reporter: Greg Senia
>Assignee: Xiaoyu Yao
> Fix For: 2.9.0
>
> Attachments: HADOOP-13988.01.patch, HADOOP-13988.02.patch, 
> HADOOP-13988.patch, HADOOP-13988.patch
>
>
> After upgrading to HDP 2.5.3.0 noticed that all of the KMSClientProvider 
> issues have not been resolved. We put a test build together and applied 
> HADOOP-13558 and HADOOP-13749 these two fixes did still not solve the issue 
> with requests coming from WebHDFS through to Knox to a TDE zone.
> So we added some debug to our build and determined effectively what is 
> happening here is a double proxy situation which does not seem to work. So we 
> propose the following fix in getActualUgi Method:
> {noformat}
>  }
>  // Use current user by default
>  UserGroupInformation actualUgi = currentUgi;
>  if (currentUgi.getRealUser() != null) {
>// Use real user for proxy user
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using RealUser for proxyUser);
>   }
>actualUgi = currentUgi.getRealUser();
>if (getDoAsUser() != null) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("doAsUser exists");
>   LOG.debug("currentUGI realUser shortName: {}", 
> currentUgi.getRealUser().getShortUserName());
>   LOG.debug("processUGI loginUser shortName: {}", 
> UserGroupInformation.getLoginUser().getShortUserName());
>   }
> if (currentUgi.getRealUser().getShortUserName() != 
> UserGroupInformation.getLoginUser().getShortUserName()) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("currentUGI.realUser does not match 
> UGI.processUser);
> }
> actualUgi = UserGroupInformation.getLoginUser();
> if (LOG.isDebugEnabled()) {
>   LOG.debug("LoginUser for Proxy: {}", 
> actualUgi.getLoginUser());
> }
> }
>}
>   
>  } else if (!currentUgiContainsKmsDt() &&
>  !currentUgi.hasKerberosCredentials()) {
>// Use login user for user that does not have either
>// Kerberos credential or KMS delegation token for KMS operations
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using loginUser no KMS Delegation Token no Kerberos 
> Credentials");
>   }
>actualUgi = currentUgi.getLoginUser();
>  }
>  return actualUgi;
>}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13433) Race in UGI.reloginFromKeytab

2017-01-25 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13433:

   Resolution: Fixed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

Committed to  branch-2 & trunk.

thanks!

> Race in UGI.reloginFromKeytab
> -
>
> Key: HADOOP-13433
> URL: https://issues.apache.org/jira/browse/HADOOP-13433
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.9.0
>
> Attachments: HADOOP-13433-branch-2.patch, HADOOP-13433.patch, 
> HADOOP-13433-v1.patch, HADOOP-13433-v2.patch, HADOOP-13433-v4.patch, 
> HADOOP-13433-v5.patch, HADOOP-13433-v6.patch, HBASE-13433-testcase-v3.patch
>
>
> This is a problem that has troubled us for several years. For our HBase 
> cluster, sometimes the RS will be stuck due to
> {noformat}
> 2016-06-20,03:44:12,936 INFO org.apache.hadoop.ipc.SecureClient: Exception 
> encountered while connecting to the server :
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: The ticket 
> isn't for us (35) - BAD TGS SERVER NAME)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:194)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:140)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupSaslConnection(SecureClient.java:187)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.access$700(SecureClient.java:95)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:325)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:322)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1781)
> at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
> at org.apache.hadoop.hbase.security.User.call(User.java:607)
> at org.apache.hadoop.hbase.security.User.access$700(User.java:51)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:461)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupIOstreams(SecureClient.java:321)
> at 
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1164)
> at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1004)
> at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Invoker.invoke(SecureRpcEngine.java:107)
> at $Proxy24.replicateLogEntries(Unknown Source)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:962)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.runLoop(ReplicationSource.java:466)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:515)
> Caused by: GSSException: No valid credentials provided (Mechanism level: The 
> ticket isn't for us (35) - BAD TGS SERVER NAME)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:663)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:180)
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:175)
> ... 23 more
> Caused by: KrbException: The ticket isn't for us (35) - BAD TGS SERVER NAME
> at sun.security.krb5.KrbTgsRep.(KrbTgsRep.java:64)
> at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:185)
> at 
> sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:294)
> at 
> sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:106)
> at 
> sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:557)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:594)
> ... 26 more
> Caused by: KrbException: Identifier doesn't match expected value (906)
> at sun.security.krb5.internal.KDCRep.init(KDCRep.java:133)
> at 

[jira] [Commented] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-25 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838614#comment-15838614
 ] 

Larry McCay commented on HADOOP-13988:
--

Okay - I understand now.
Even though the Knox usecase doesn't present a KMS delegation token as part of 
the request, other uses of KMSClientProvider will.
Usecases such as Yarn acquiring the KMS-DT to provide for use with a MR job 
need to be accommodated.

Here is my +1.

Thanks, [~xyao]!

> KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser
> 
>
> Key: HADOOP-13988
> URL: https://issues.apache.org/jira/browse/HADOOP-13988
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 2.8.0, 2.7.3
> Environment: HDP 2.5.3.0 
> WebHDFSUser --> Knox --> HA NameNodes(WebHDFS) --> DataNodes
>Reporter: Greg Senia
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13988.01.patch, HADOOP-13988.02.patch, 
> HADOOP-13988.patch, HADOOP-13988.patch
>
>
> After upgrading to HDP 2.5.3.0 noticed that all of the KMSClientProvider 
> issues have not been resolved. We put a test build together and applied 
> HADOOP-13558 and HADOOP-13749 these two fixes did still not solve the issue 
> with requests coming from WebHDFS through to Knox to a TDE zone.
> So we added some debug to our build and determined effectively what is 
> happening here is a double proxy situation which does not seem to work. So we 
> propose the following fix in getActualUgi Method:
> {noformat}
>  }
>  // Use current user by default
>  UserGroupInformation actualUgi = currentUgi;
>  if (currentUgi.getRealUser() != null) {
>// Use real user for proxy user
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using RealUser for proxyUser);
>   }
>actualUgi = currentUgi.getRealUser();
>if (getDoAsUser() != null) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("doAsUser exists");
>   LOG.debug("currentUGI realUser shortName: {}", 
> currentUgi.getRealUser().getShortUserName());
>   LOG.debug("processUGI loginUser shortName: {}", 
> UserGroupInformation.getLoginUser().getShortUserName());
>   }
> if (currentUgi.getRealUser().getShortUserName() != 
> UserGroupInformation.getLoginUser().getShortUserName()) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("currentUGI.realUser does not match 
> UGI.processUser);
> }
> actualUgi = UserGroupInformation.getLoginUser();
> if (LOG.isDebugEnabled()) {
>   LOG.debug("LoginUser for Proxy: {}", 
> actualUgi.getLoginUser());
> }
> }
>}
>   
>  } else if (!currentUgiContainsKmsDt() &&
>  !currentUgi.hasKerberosCredentials()) {
>// Use login user for user that does not have either
>// Kerberos credential or KMS delegation token for KMS operations
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using loginUser no KMS Delegation Token no Kerberos 
> Credentials");
>   }
>actualUgi = currentUgi.getLoginUser();
>  }
>  return actualUgi;
>}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13433) Race in UGI.reloginFromKeytab

2017-01-25 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13433:

Fix Version/s: (was: 3.0.0-alpha3)
   (was: 2.6.6)
   (was: 2.7.4)
   (was: 2.8.0)

> Race in UGI.reloginFromKeytab
> -
>
> Key: HADOOP-13433
> URL: https://issues.apache.org/jira/browse/HADOOP-13433
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HADOOP-13433-branch-2.patch, HADOOP-13433.patch, 
> HADOOP-13433-v1.patch, HADOOP-13433-v2.patch, HADOOP-13433-v4.patch, 
> HADOOP-13433-v5.patch, HADOOP-13433-v6.patch, HBASE-13433-testcase-v3.patch
>
>
> This is a problem that has troubled us for several years. For our HBase 
> cluster, sometimes the RS will be stuck due to
> {noformat}
> 2016-06-20,03:44:12,936 INFO org.apache.hadoop.ipc.SecureClient: Exception 
> encountered while connecting to the server :
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: The ticket 
> isn't for us (35) - BAD TGS SERVER NAME)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:194)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:140)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupSaslConnection(SecureClient.java:187)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.access$700(SecureClient.java:95)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:325)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:322)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1781)
> at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
> at org.apache.hadoop.hbase.security.User.call(User.java:607)
> at org.apache.hadoop.hbase.security.User.access$700(User.java:51)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:461)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupIOstreams(SecureClient.java:321)
> at 
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1164)
> at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1004)
> at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Invoker.invoke(SecureRpcEngine.java:107)
> at $Proxy24.replicateLogEntries(Unknown Source)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:962)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.runLoop(ReplicationSource.java:466)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:515)
> Caused by: GSSException: No valid credentials provided (Mechanism level: The 
> ticket isn't for us (35) - BAD TGS SERVER NAME)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:663)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:180)
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:175)
> ... 23 more
> Caused by: KrbException: The ticket isn't for us (35) - BAD TGS SERVER NAME
> at sun.security.krb5.KrbTgsRep.(KrbTgsRep.java:64)
> at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:185)
> at 
> sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:294)
> at 
> sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:106)
> at 
> sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:557)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:594)
> ... 26 more
> Caused by: KrbException: Identifier doesn't match expected value (906)
> at sun.security.krb5.internal.KDCRep.init(KDCRep.java:133)
> at sun.security.krb5.internal.TGSRep.init(TGSRep.java:58)
>  

[jira] [Updated] (HADOOP-13433) Race in UGI.reloginFromKeytab

2017-01-25 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13433:

Target Version/s: 2.9.0

> Race in UGI.reloginFromKeytab
> -
>
> Key: HADOOP-13433
> URL: https://issues.apache.org/jira/browse/HADOOP-13433
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HADOOP-13433-branch-2.patch, HADOOP-13433.patch, 
> HADOOP-13433-v1.patch, HADOOP-13433-v2.patch, HADOOP-13433-v4.patch, 
> HADOOP-13433-v5.patch, HADOOP-13433-v6.patch, HBASE-13433-testcase-v3.patch
>
>
> This is a problem that has troubled us for several years. For our HBase 
> cluster, sometimes the RS will be stuck due to
> {noformat}
> 2016-06-20,03:44:12,936 INFO org.apache.hadoop.ipc.SecureClient: Exception 
> encountered while connecting to the server :
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: The ticket 
> isn't for us (35) - BAD TGS SERVER NAME)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:194)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:140)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupSaslConnection(SecureClient.java:187)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.access$700(SecureClient.java:95)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:325)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:322)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1781)
> at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
> at org.apache.hadoop.hbase.security.User.call(User.java:607)
> at org.apache.hadoop.hbase.security.User.access$700(User.java:51)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:461)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupIOstreams(SecureClient.java:321)
> at 
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1164)
> at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1004)
> at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Invoker.invoke(SecureRpcEngine.java:107)
> at $Proxy24.replicateLogEntries(Unknown Source)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:962)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.runLoop(ReplicationSource.java:466)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:515)
> Caused by: GSSException: No valid credentials provided (Mechanism level: The 
> ticket isn't for us (35) - BAD TGS SERVER NAME)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:663)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:180)
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:175)
> ... 23 more
> Caused by: KrbException: The ticket isn't for us (35) - BAD TGS SERVER NAME
> at sun.security.krb5.KrbTgsRep.(KrbTgsRep.java:64)
> at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:185)
> at 
> sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:294)
> at 
> sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:106)
> at 
> sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:557)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:594)
> ... 26 more
> Caused by: KrbException: Identifier doesn't match expected value (906)
> at sun.security.krb5.internal.KDCRep.init(KDCRep.java:133)
> at sun.security.krb5.internal.TGSRep.init(TGSRep.java:58)
> at sun.security.krb5.internal.TGSRep.(TGSRep.java:53)
> at sun.security.krb5.KrbTgsRep.(KrbTgsRep.java:46)

[jira] [Commented] (HADOOP-13433) Race in UGI.reloginFromKeytab

2017-01-25 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838605#comment-15838605
 ] 

Steve Loughran commented on HADOOP-13433:
-

+1 for the branch-2 patch. As discussed, we can't backport the tests to branch-2

> Race in UGI.reloginFromKeytab
> -
>
> Key: HADOOP-13433
> URL: https://issues.apache.org/jira/browse/HADOOP-13433
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.8.0, 2.7.4, 2.6.6, 3.0.0-alpha3
>
> Attachments: HADOOP-13433-branch-2.patch, HADOOP-13433.patch, 
> HADOOP-13433-v1.patch, HADOOP-13433-v2.patch, HADOOP-13433-v4.patch, 
> HADOOP-13433-v5.patch, HADOOP-13433-v6.patch, HBASE-13433-testcase-v3.patch
>
>
> This is a problem that has troubled us for several years. For our HBase 
> cluster, sometimes the RS will be stuck due to
> {noformat}
> 2016-06-20,03:44:12,936 INFO org.apache.hadoop.ipc.SecureClient: Exception 
> encountered while connecting to the server :
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: The ticket 
> isn't for us (35) - BAD TGS SERVER NAME)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:194)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:140)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupSaslConnection(SecureClient.java:187)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.access$700(SecureClient.java:95)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:325)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:322)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1781)
> at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
> at org.apache.hadoop.hbase.security.User.call(User.java:607)
> at org.apache.hadoop.hbase.security.User.access$700(User.java:51)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:461)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupIOstreams(SecureClient.java:321)
> at 
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1164)
> at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1004)
> at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Invoker.invoke(SecureRpcEngine.java:107)
> at $Proxy24.replicateLogEntries(Unknown Source)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:962)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.runLoop(ReplicationSource.java:466)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:515)
> Caused by: GSSException: No valid credentials provided (Mechanism level: The 
> ticket isn't for us (35) - BAD TGS SERVER NAME)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:663)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:180)
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:175)
> ... 23 more
> Caused by: KrbException: The ticket isn't for us (35) - BAD TGS SERVER NAME
> at sun.security.krb5.KrbTgsRep.(KrbTgsRep.java:64)
> at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:185)
> at 
> sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:294)
> at 
> sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:106)
> at 
> sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:557)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:594)
> ... 26 more
> Caused by: KrbException: Identifier doesn't match expected value (906)
> at sun.security.krb5.internal.KDCRep.init(KDCRep.java:133)
> at 

[jira] [Updated] (HADOOP-14023) S3Guard: DynamoDBMetadataStore logs nonsense region

2017-01-25 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14023:

Affects Version/s: HADOOP-13345
 Target Version/s: HADOOP-13345
 Priority: Minor  (was: Major)

> S3Guard: DynamoDBMetadataStore logs nonsense region
> ---
>
> Key: HADOOP-14023
> URL: https://issues.apache.org/jira/browse/HADOOP-14023
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: HADOOP-13345
>Reporter: Sean Mackrory
>Priority: Minor
>
> When you specify a DynamoDB endpoint instead of it using the same one as the 
> S3 bucket, it thinks the region is "dynamodb":
> {code}
> > bin/hadoop s3a init -m dynamodb://region-test
> 2017-01-25 11:41:28,006 INFO s3guard.S3GuardTool: create metadata store: 
> dynamodb://region-test scheme: dynamodb
> 2017-01-25 11:41:28,116 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2017-01-25 11:41:29,421 INFO s3guard.DynamoDBMetadataStore: Creating 
> non-existent DynamoDB table region-test in region dynamodb
> 2017-01-25 11:41:34,637 INFO s3guard.S3GuardTool: Metadata store 
> DynamoDBMetadataStore{region=dynamodb, tableName=region-test} is initialized.
> {code}
> My guess is it's because of DynamoDBMetadataStore.initialize will call 
> getEndpointPrefix, and all the published endpoints for DynamoDB actually 
> start with dynamodb.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13768) AliyunOSS: handle the failure in the batch delete operation `deleteDirs`.

2017-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838568#comment-15838568
 ] 

Hadoop QA commented on HADOOP-13768:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
12s{color} | {color:green} hadoop-aliyun in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13768 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836376/HADOOP-13768.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux cdda7e331c4b 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / abedb8a |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11510/testReport/ |
| modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11510/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> AliyunOSS: handle the failure in the batch delete operation `deleteDirs`.
> -
>
> Key: HADOOP-13768
> URL: https://issues.apache.org/jira/browse/HADOOP-13768
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-13768.001.patch, HADOOP-13768.002.patch
>
>
> Note in Aliyun OSS SDK, DeleteObjectsRequest has 1000 objects 

[jira] [Commented] (HADOOP-13618) IllegalArgumentException when accessing Swift object with name containing space character

2017-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838569#comment-15838569
 ] 

Hadoop QA commented on HADOOP-13618:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
13s{color} | {color:green} hadoop-openstack in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13618 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12833270/HADOOP-13618.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 60ed06934293 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / abedb8a |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11509/testReport/ |
| modules | C: hadoop-tools/hadoop-openstack U: hadoop-tools/hadoop-openstack |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11509/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> IllegalArgumentException when accessing Swift object with name containing 
> space character
> -
>
> Key: HADOOP-13618
> URL: https://issues.apache.org/jira/browse/HADOOP-13618
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/swift
>Affects Versions: 2.6.0
> Environment: Linux EL6
>Reporter: Steve Yang
>Assignee: Yulei Li
> Fix For: 3.0.0-alpha3
>
> Attachments: avro_test.zip, HADOOP-13618.patch
>
>
> We are using 

[jira] [Commented] (HADOOP-13805) UGI.getCurrentUser() fails if user does not have a keytab associated

2017-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838547#comment-15838547
 ] 

Hadoop QA commented on HADOOP-13805:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 220 unchanged - 4 fixed = 220 total (was 224) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
38s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13805 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12849351/HADOOP-13805.009.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f228227cb6f8 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 
20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b782bf2 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11508/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11508/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> UGI.getCurrentUser() fails if user does not have a keytab associated
> 
>
> Key: HADOOP-13805
> URL: https://issues.apache.org/jira/browse/HADOOP-13805
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0, 2.9.0, 3.0.0-alpha2
>Reporter: Alejandro Abdelnur
>Assignee: Xiao Chen
> Attachments: HADOOP-13805.006.patch, 

[jira] [Updated] (HADOOP-13514) Upgrade maven surefire plugin to 2.19.1

2017-01-25 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13514:
-
Fix Version/s: (was: 3.0.0-alpha2)
   3.0.0-alpha3

> Upgrade maven surefire plugin to 2.19.1
> ---
>
> Key: HADOOP-13514
> URL: https://issues.apache.org/jira/browse/HADOOP-13514
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha3
>
> Attachments: HADOOP-13514.002.patch, HADOOP-13514-addendum.01.patch, 
> surefire-2.19.patch
>
>
> A lot of people working on Hadoop don't want to run all the tests when they 
> develop; only the bits they're working on. Surefire 2.19 introduced more 
> useful test filters which let us run a subset of the tests that brings the 
> build time down from 'come back tomorrow' to 'grab a coffee'.
> For instance, if I only care about the S3 adaptor, I might run:
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
> \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
> org.apache.hadoop.fs.s3a.*\"
> {code}
> We can work around this by specifying the surefire version on the command 
> line but it would be better, imo, to just update the default surefire used.
> {code}
> mvn test -Dmaven.javadoc.skip=true -Pdist,native -Djava.awt.headless=true 
> \"-Dtest=org.apache.hadoop.fs.*, org.apache.hadoop.hdfs.*, 
> org.apache.hadoop.fs.s3a.*\" -Dmaven-surefire-plugin.version=2.19.1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13767) Aliyun Connection broken when idle then 1 minutes or build than 3 hours

2017-01-25 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13767:
-
Fix Version/s: (was: 3.0.0-alpha2)
   3.0.0-alpha3

> Aliyun Connection broken when idle then 1 minutes or build than 3 hours
> ---
>
> Key: HADOOP-13767
> URL: https://issues.apache.org/jira/browse/HADOOP-13767
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha3
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13618) IllegalArgumentException when accessing Swift object with name containing space character

2017-01-25 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13618:
-
Fix Version/s: (was: 3.0.0-alpha2)
   3.0.0-alpha3

> IllegalArgumentException when accessing Swift object with name containing 
> space character
> -
>
> Key: HADOOP-13618
> URL: https://issues.apache.org/jira/browse/HADOOP-13618
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/swift
>Affects Versions: 2.6.0
> Environment: Linux EL6
>Reporter: Steve Yang
>Assignee: Yulei Li
> Fix For: 3.0.0-alpha3
>
> Attachments: avro_test.zip, HADOOP-13618.patch
>
>
> We are using Spark and hadoop-openstack-2.6.0.jar 
> (compile('org.apache.hadoop:hadoop-openstack:2.6.0')) to access Oracle 
> Storage Service which is Swift-based:
> DataFrame df = 
> hiveCtx.read().format("com.databricks.spark.csv").option(...).load(objectName);
> When accessing a Swift URL like "swift://Linda.oracleswift/non-matching 
> records.csv" where the object name "non-matching records.csv" contains a 
> space character, the following exception is thrown:
> 2016-08-23 15:56:03 DEBUG SwiftNativeFileSystem:126 - SwiftFileSystem 
> initialized
> java.lang.IllegalArgumentException: Illegal character in path at index 13: 
> /non-matching records.csv
> at java.net.URI.create(URI.java:859)
> at 
> org.apache.hadoop.fs.swift.util.SwiftObjectPath.(SwiftObjectPath.java:59)
> at 
> org.apache.hadoop.fs.swift.util.SwiftObjectPath.fromPath(SwiftObjectPath.java:183)
> at 
> org.apache.hadoop.fs.swift.util.SwiftObjectPath.fromPath(SwiftObjectPath.java:145)
> at 
> org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystemStore.toObjectPath(SwiftNativeFileSystemStore.java:434)
> at 
> org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystemStore.getObjectMetadata(SwiftNativeFileSystemStore.java:211)
> at 
> org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystemStore.getObjectMetadata(SwiftNativeFileSystemStore.java:181)
> at 
> org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem.getFileStatus(SwiftNativeFileSystem.java:173)
> at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:64)
> at org.apache.hadoop.fs.Globber.doGlob(Globber.java:272)
> at org.apache.hadoop.fs.Globber.glob(Globber.java:151)
> at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1653)
> at 
> org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:259)
> ...
> Apparently it is complaining about the space character. However, checking the 
> debug messages earlier before this error is raised we can see:
> 2016-08-23 15:56:03 DEBUG SwiftNativeFileSystem:122 - Initializing 
> SwiftNativeFileSystem against URI 
> swift://Linda.oracleswift/non-matching%20records.csv and working dir 
> swift://Linda.oracleswift/user/syang
> 2016-08-23 15:56:03 DEBUG RestClientBindings:141 - Filesystem 
> swift://Linda.oracleswift/non-matching%20records.csv is using configuration 
> keys fs.swift.service.oracleswift
> ...
> The space character has already been encoded into "%20" and so it seems the 
> Swift URL enters into SwiftNativeFileSystem is properly encoded.
> Because of this error any Swift object with file name contains space 
> character (and may be slash '/' character as well?) cannot be accessed.
> As an additional data point, if we first encode the object name("non-matching 
> records.csv"=>"non-matching%20records.csv") before giving it to OpenStack 
> Swift API, a different error is raised. This time somehow the path separator 
> '/' after the container name 'Linda' got encoded by 
> SwiftNativeFileSystemStore:
> 2016-08-23 10:56:41 DEBUG SwiftRestClient:1731 - Status code = 400
> 2016-08-23 10:56:41 DEBUG SwiftRestClient:1445 - Method HEAD on 
> https://storage.oraclecorp.com/v1/Storage-dfisher/Linda%2Fnon-matching%20records.csv
>  failed, status code: 400, status line: HTTP/1.1 400 Bad Request
> BadRequest: Bad request against 
> https://storage.oraclecorp.com/v1/Storage-dfisher/Linda%2Fnon-matching%20records.csv
>  HEAD 
> https://storage.oraclecorp.com/v1/Storage-dfisher/Linda%2Fnon-matching%20records.csv
>  => 400
> at 
> org.apache.hadoop.fs.swift.http.SwiftRestClient.buildException(SwiftRestClient.java:1456)
> at 
> org.apache.hadoop.fs.swift.http.SwiftRestClient.perform(SwiftRestClient.java:1403)
> at 
> org.apache.hadoop.fs.swift.http.SwiftRestClient.headRequest(SwiftRestClient.java:1016)
> at 
> org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystemStore.stat(SwiftNativeFileSystemStore.java:257)
> at 
> org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystemStore.getObjectMetadata(SwiftNativeFileSystemStore.java:212)
> at 
> 

[jira] [Updated] (HADOOP-13768) AliyunOSS: handle the failure in the batch delete operation `deleteDirs`.

2017-01-25 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13768:
-
Fix Version/s: (was: 3.0.0-alpha2)
   3.0.0-alpha3

> AliyunOSS: handle the failure in the batch delete operation `deleteDirs`.
> -
>
> Key: HADOOP-13768
> URL: https://issues.apache.org/jira/browse/HADOOP-13768
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-13768.001.patch, HADOOP-13768.002.patch
>
>
> Note in Aliyun OSS SDK, DeleteObjectsRequest has 1000 objects limit. This 
> needs to improve {{deleteDirs}} operation to make it pass when more objects 
> than the limit to delete.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13433) Race in UGI.reloginFromKeytab

2017-01-25 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13433:
-
Fix Version/s: (was: 3.0.0-alpha2)
   3.0.0-alpha3

> Race in UGI.reloginFromKeytab
> -
>
> Key: HADOOP-13433
> URL: https://issues.apache.org/jira/browse/HADOOP-13433
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.8.0, 2.7.4, 2.6.6, 3.0.0-alpha3
>
> Attachments: HADOOP-13433-branch-2.patch, HADOOP-13433.patch, 
> HADOOP-13433-v1.patch, HADOOP-13433-v2.patch, HADOOP-13433-v4.patch, 
> HADOOP-13433-v5.patch, HADOOP-13433-v6.patch, HBASE-13433-testcase-v3.patch
>
>
> This is a problem that has troubled us for several years. For our HBase 
> cluster, sometimes the RS will be stuck due to
> {noformat}
> 2016-06-20,03:44:12,936 INFO org.apache.hadoop.ipc.SecureClient: Exception 
> encountered while connecting to the server :
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: The ticket 
> isn't for us (35) - BAD TGS SERVER NAME)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:194)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:140)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupSaslConnection(SecureClient.java:187)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.access$700(SecureClient.java:95)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:325)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:322)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1781)
> at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
> at org.apache.hadoop.hbase.security.User.call(User.java:607)
> at org.apache.hadoop.hbase.security.User.access$700(User.java:51)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:461)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupIOstreams(SecureClient.java:321)
> at 
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1164)
> at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1004)
> at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Invoker.invoke(SecureRpcEngine.java:107)
> at $Proxy24.replicateLogEntries(Unknown Source)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:962)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.runLoop(ReplicationSource.java:466)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:515)
> Caused by: GSSException: No valid credentials provided (Mechanism level: The 
> ticket isn't for us (35) - BAD TGS SERVER NAME)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:663)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:180)
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:175)
> ... 23 more
> Caused by: KrbException: The ticket isn't for us (35) - BAD TGS SERVER NAME
> at sun.security.krb5.KrbTgsRep.(KrbTgsRep.java:64)
> at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:185)
> at 
> sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:294)
> at 
> sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:106)
> at 
> sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:557)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:594)
> ... 26 more
> Caused by: KrbException: Identifier doesn't match expected value (906)
> at sun.security.krb5.internal.KDCRep.init(KDCRep.java:133)
> at sun.security.krb5.internal.TGSRep.init(TGSRep.java:58)
> at 

[jira] [Updated] (HADOOP-13769) AliyunOSS: improve performance on close

2017-01-25 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13769:
-
Fix Version/s: (was: 3.0.0-alpha2)
   3.0.0-alpha3

> AliyunOSS: improve performance on close
> ---
>
> Key: HADOOP-13769
> URL: https://issues.apache.org/jira/browse/HADOOP-13769
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-13769.001.patch
>
>
> AliyunOSS object inputstream.close() will read the remaining bytes of the OSS 
> object, potentially transferring a lot of bytes from OSS that are discarded.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13929) ADLS connector should not check in contract-test-options.xml

2017-01-25 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13929:
-
Fix Version/s: (was: 3.0.0-alpha2)
   3.0.0-alpha3

> ADLS connector should not check in contract-test-options.xml
> 
>
> Key: HADOOP-13929
> URL: https://issues.apache.org/jira/browse/HADOOP-13929
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl, test
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-13929.001.patch, HADOOP-13929.002.patch, 
> HADOOP-13929.003.patch, HADOOP-13929.004.patch, HADOOP-13929.005.patch, 
> HADOOP-13929.006.patch, HADOOP-13929.007.patch, HADOOP-13929.008.patch, 
> HADOOP-13929.009.patch
>
>
> Should not check in the file {{contract-test-options.xml}}. Make sure the 
> file is excluded by {{.gitignore}}. Make sure ADLS {{index.md}} provides a 
> complete example of this file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13985) s3guard: add a version marker to every table

2017-01-25 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838473#comment-15838473
 ] 

Aaron Fabbri commented on HADOOP-13985:
---

Thanks for doing this.  I agree this is a must-have.  Looks good overall, so 
+1.  Comments below

{code}
@@ -132,6 +133,13 @@ public boolean isValidName(String src) {
 CONSTRUCTOR_CACHE.put(theClass, meth);
   }
   result = meth.newInstance(uri, conf);
+} catch (InvocationTargetException e) {
+  Throwable cause = e.getCause();
+  if (cause instanceof RuntimeException) {
{code}

If you end up rolling another patch, should we add a comment here, e.g. "// 
preserve exception cause text"?

Overall, looks good to me.  I love the Java 8 stuff, but it is going to cause 
me a little pain backporting.

I'm baffled on getItem() creating a table.  The SDK docs don't seem to mention 
that behavior.


> s3guard: add a version marker to every table
> 
>
> Key: HADOOP-13985
> URL: https://issues.apache.org/jira/browse/HADOOP-13985
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13985-HADOOP-13345-001.patch, 
> HADOOP-13985-HADOOP-13345-002.patch
>
>
> This is something else we need before any preview: a way to identify a table 
> version, so that if future versions change the table structure:
> * older clients can recognise that it's a newer format, and fail
> * the future version can identify that it's an older format, and fail until 
> some fsck-upgrade operation has taken place
> I think something like a row on a path which is impossible in a real 
> filesystem, such as "../VERSION" would allow a version marker to go in; the 
> length field could be abused for the version number.
> This field would be something that'd be checked in init(), so be the simple 
> test for table existence we need for faster init



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14023) S3Guard: DynamoDBMetadataStore logs nonsense region

2017-01-25 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838463#comment-15838463
 ] 

Sean Mackrory commented on HADOOP-14023:


So my guess would appear to be right. Do note that it appears if you go through 
the code path in which the incorrect region gets picked up, it's only ever used 
in log and exception messages. So not the end of the world. But I'm not 
immediately seeing a more correct way to get the region you have connected to...

> S3Guard: DynamoDBMetadataStore logs nonsense region
> ---
>
> Key: HADOOP-14023
> URL: https://issues.apache.org/jira/browse/HADOOP-14023
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>
> When you specify a DynamoDB endpoint instead of it using the same one as the 
> S3 bucket, it thinks the region is "dynamodb":
> {code}
> > bin/hadoop s3a init -m dynamodb://region-test
> 2017-01-25 11:41:28,006 INFO s3guard.S3GuardTool: create metadata store: 
> dynamodb://region-test scheme: dynamodb
> 2017-01-25 11:41:28,116 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2017-01-25 11:41:29,421 INFO s3guard.DynamoDBMetadataStore: Creating 
> non-existent DynamoDB table region-test in region dynamodb
> 2017-01-25 11:41:34,637 INFO s3guard.S3GuardTool: Metadata store 
> DynamoDBMetadataStore{region=dynamodb, tableName=region-test} is initialized.
> {code}
> My guess is it's because of DynamoDBMetadataStore.initialize will call 
> getEndpointPrefix, and all the published endpoints for DynamoDB actually 
> start with dynamodb.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14020) Optimize dirListingUnion

2017-01-25 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838464#comment-15838464
 ] 

Sean Mackrory commented on HADOOP-14020:


mvninstall failed because of what I was describing above - glad to see it 
wasn't just me. Rebasing the branch should do it...

> Optimize dirListingUnion
> 
>
> Key: HADOOP-14020
> URL: https://issues.apache.org/jira/browse/HADOOP-14020
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14020-HADOOP-13345.001.patch, 
> HADOOP-14020-HADOOP-13345.002.patch
>
>
> There's a TODO in dirListingUnion:
> {quote}// TODO optimize for when allowAuthoritative = false{quote}
> There will be cases when we can intelligently avoid a round trip: if S3A 
> results are a subset or the metadatastore results (including them being equal 
> or empty) then writing back will do nothing (although perhaps that should set 
> the authoritative flag if it isn't set already).
> There may also be cases where users want to just skip that altogether. It's 
> wasted work if authoritative mode is disabled, so perhaps we want to trigger 
> a skip if that's false, or perhaps it should be a separate property. First 
> one makes for simpler config, second is more flexible...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14020) Optimize dirListingUnion

2017-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838454#comment-15838454
 ] 

Hadoop QA commented on HADOOP-14020:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  6m 
16s{color} | {color:red} root in HADOOP-13345 failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
44s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14020 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12849349/HADOOP-14020-HADOOP-13345.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4d19d4ded85a 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / 31cee35 |
| Default Java | 1.8.0_121 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11507/artifact/patchprocess/branch-mvninstall-root.txt
 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11507/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11507/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Optimize dirListingUnion
> 
>
> Key: HADOOP-14020
> URL: https://issues.apache.org/jira/browse/HADOOP-14020
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14020-HADOOP-13345.001.patch, 
> HADOOP-14020-HADOOP-13345.002.patch
>
>
> There's a 

[jira] [Updated] (HADOOP-13805) UGI.getCurrentUser() fails if user does not have a keytab associated

2017-01-25 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HADOOP-13805:
---
Attachment: HADOOP-13805.009.patch

Thanks [~tucu00] for the review again! Good comments, and here is rev 9 to 
address them!

> UGI.getCurrentUser() fails if user does not have a keytab associated
> 
>
> Key: HADOOP-13805
> URL: https://issues.apache.org/jira/browse/HADOOP-13805
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0, 2.9.0, 3.0.0-alpha2
>Reporter: Alejandro Abdelnur
>Assignee: Xiao Chen
> Attachments: HADOOP-13805.006.patch, HADOOP-13805.007.patch, 
> HADOOP-13805.008.patch, HADOOP-13805.009.patch, HADOOP-13805.01.patch, 
> HADOOP-13805.02.patch, HADOOP-13805.03.patch, HADOOP-13805.04.patch, 
> HADOOP-13805.05.patch
>
>
> HADOOP-13558 intention was to avoid UGI from trying to renew the TGT when the 
> UGI is created from an existing Subject as in that case the keytab is not 
> 'own' by UGI but by the creator of the Subject.
> In HADOOP-13558 we introduced a new private UGI constructor 
> {{UserGroupInformation(Subject subject, final boolean externalKeyTab)}} and 
> we use with TRUE only when doing a {{UGI.loginUserFromSubject()}}.
> The problem is, when we call {{UGI.getCurrentUser()}}, and UGI was created 
> via a Subject (via the {{UGI.loginUserFromSubject()}} method), we call {{new 
> UserGroupInformation(subject)}} which will delegate to 
> {{UserGroupInformation(Subject subject, final boolean externalKeyTab)}}  and 
> that will use externalKeyTab == *FALSE*. 
> Then the UGI returned by {{UGI.getCurrentUser()}} will attempt to login using 
> a non-existing keytab if the TGT expired.
> This problem is experienced in {{KMSClientProvider}} when used by the HDFS 
> filesystem client accessing an an encryption zone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final

2017-01-25 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13866:
-
Affects Version/s: 3.0.0-alpha1

> Upgrade netty-all to 4.1.1.Final
> 
>
> Key: HADOOP-13866
> URL: https://issues.apache.org/jira/browse/HADOOP-13866
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch, 
> HADOOP-13866.v3.patch, HADOOP-13866.v4.patch, HADOOP-13866.v6.patch
>
>
> netty-all 4.1.1.Final is stable release which we should upgrade to.
> See bottom of HADOOP-12927 for related discussion.
> This issue was discovered since hbase 2.0 uses 4.1.1.Final of netty.
> When launching mapreduce job from hbase, /grid/0/hadoop/yarn/local/  
> usercache/hbase/appcache/application_1479850535804_0008/container_e01_1479850535804_0008_01_05/mr-framework/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar
>  (from hdfs) is ahead of 4.1.1.Final jar (from hbase) on the classpath.
> Resulting in the following exception:
> {code}
> 2016-12-01 20:17:26,678 WARN [Default-IPC-NioEventLoopGroup-1-1] 
> io.netty.util.concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete()
> java.lang.NoSuchMethodError: 
> io.netty.buffer.ByteBuf.retainedDuplicate()Lio/netty/buffer/ByteBuf;
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:272)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:262)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
> at 
> io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final

2017-01-25 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-13866:
-
Target Version/s: 3.0.0-beta1
Priority: Blocker  (was: Major)

Marking as blocker for beta1, since it's required for HBase 2.

> Upgrade netty-all to 4.1.1.Final
> 
>
> Key: HADOOP-13866
> URL: https://issues.apache.org/jira/browse/HADOOP-13866
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch, 
> HADOOP-13866.v3.patch, HADOOP-13866.v4.patch, HADOOP-13866.v6.patch
>
>
> netty-all 4.1.1.Final is stable release which we should upgrade to.
> See bottom of HADOOP-12927 for related discussion.
> This issue was discovered since hbase 2.0 uses 4.1.1.Final of netty.
> When launching mapreduce job from hbase, /grid/0/hadoop/yarn/local/  
> usercache/hbase/appcache/application_1479850535804_0008/container_e01_1479850535804_0008_01_05/mr-framework/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar
>  (from hdfs) is ahead of 4.1.1.Final jar (from hbase) on the classpath.
> Resulting in the following exception:
> {code}
> 2016-12-01 20:17:26,678 WARN [Default-IPC-NioEventLoopGroup-1-1] 
> io.netty.util.concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete()
> java.lang.NoSuchMethodError: 
> io.netty.buffer.ByteBuf.retainedDuplicate()Lio/netty/buffer/ByteBuf;
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:272)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:262)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
> at 
> io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14020) Optimize dirListingUnion

2017-01-25 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14020:
---
Status: Patch Available  (was: Open)

> Optimize dirListingUnion
> 
>
> Key: HADOOP-14020
> URL: https://issues.apache.org/jira/browse/HADOOP-14020
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14020-HADOOP-13345.001.patch, 
> HADOOP-14020-HADOOP-13345.002.patch
>
>
> There's a TODO in dirListingUnion:
> {quote}// TODO optimize for when allowAuthoritative = false{quote}
> There will be cases when we can intelligently avoid a round trip: if S3A 
> results are a subset or the metadatastore results (including them being equal 
> or empty) then writing back will do nothing (although perhaps that should set 
> the authoritative flag if it isn't set already).
> There may also be cases where users want to just skip that altogether. It's 
> wasted work if authoritative mode is disabled, so perhaps we want to trigger 
> a skip if that's false, or perhaps it should be a separate property. First 
> one makes for simpler config, second is more flexible...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14020) Optimize dirListingUnion

2017-01-25 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14020:
---
Attachment: HADOOP-14020-HADOOP-13345.002.patch

Attaching a patch that just uses authoritative mode to enable / disable the new 
logic, and also fixing some javadoc errors. 

Not sure what was up with my failure to build all of a sudden yesterday. 
hadoop-kms was not building the classes or tests artifacts, and the failure to 
find them got blamed on the DynamoDB repo. Rebasing my local branch on the 
latest trunk did the trick...

> Optimize dirListingUnion
> 
>
> Key: HADOOP-14020
> URL: https://issues.apache.org/jira/browse/HADOOP-14020
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14020-HADOOP-13345.001.patch, 
> HADOOP-14020-HADOOP-13345.002.patch
>
>
> There's a TODO in dirListingUnion:
> {quote}// TODO optimize for when allowAuthoritative = false{quote}
> There will be cases when we can intelligently avoid a round trip: if S3A 
> results are a subset or the metadatastore results (including them being equal 
> or empty) then writing back will do nothing (although perhaps that should set 
> the authoritative flag if it isn't set already).
> There may also be cases where users want to just skip that altogether. It's 
> wasted work if authoritative mode is disabled, so perhaps we want to trigger 
> a skip if that's false, or perhaps it should be a separate property. First 
> one makes for simpler config, second is more flexible...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2017-01-25 Thread Steve Moist (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838352#comment-15838352
 ] 

Steve Moist commented on HADOOP-13075:
--

Sounds good, I'll make changes around the Enum and convert the if statements to 
a switch.

> L 912 : What if the user has selected SSE_C and the server key is blank? 
> That's downgraded to a no-op, now, yes? I'd rather that was treated
as a failure: if you ask for server-side encryption and don't supply a key, FS 
init must fail.

I'll move that check into initialize.  Is there a preference on what type of 
exception to be thrown?

>Blocker: The server-side encryption key must be supported; use 
>S3AUtils.getPassword for this.

By this do you mean convert  instances of   
request.setSSECustomerKey(new SSECustomerKey(serverSideEncryptionKey));
with
request.setSSECustomerKey(new SSECustomerKey(S3AUtils.getPassword(getConf(), 
Constants.SERVER_SIDE_ENCRYPTION_KEY));



> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---
>
> Key: HADOOP-13075
> URL: https://issues.apache.org/jira/browse/HADOOP-13075
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Andrew Olson
>Assignee: Federico Czerwinski
>
> S3 provides 3 types of server-side encryption [1],
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of which the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support in aws-java-sdk already available it should be fairly 
> straightforward [6],[7] to support the other two types of SSE with some 
> additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html
> [6] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java
> [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14023) S3Guard: DynamoDBMetadataStore logs nonsense region

2017-01-25 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14023:
---
Summary: S3Guard: DynamoDBMetadataStore logs nonsense region  (was: 
S3Guard: )

> S3Guard: DynamoDBMetadataStore logs nonsense region
> ---
>
> Key: HADOOP-14023
> URL: https://issues.apache.org/jira/browse/HADOOP-14023
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>
> When you specify a DynamoDB endpoint instead of it using the same one as the 
> S3 bucket, it thinks the region is "dynamodb":
> {code}
> > bin/hadoop s3a init -m dynamodb://region-test
> 2017-01-25 11:41:28,006 INFO s3guard.S3GuardTool: create metadata store: 
> dynamodb://region-test scheme: dynamodb
> 2017-01-25 11:41:28,116 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2017-01-25 11:41:29,421 INFO s3guard.DynamoDBMetadataStore: Creating 
> non-existent DynamoDB table region-test in region dynamodb
> 2017-01-25 11:41:34,637 INFO s3guard.S3GuardTool: Metadata store 
> DynamoDBMetadataStore{region=dynamodb, tableName=region-test} is initialized.
> {code}
> My guess is it's because of DynamoDBMetadataStore.initialize will call 
> getEndpointPrefix, and all the published endpoints for DynamoDB actually 
> start with dynamodb.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14023) S3Guard:

2017-01-25 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14023:
---
Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-13345

> S3Guard: 
> -
>
> Key: HADOOP-14023
> URL: https://issues.apache.org/jira/browse/HADOOP-14023
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sean Mackrory
>
> When you specify a DynamoDB endpoint instead of it using the same one as the 
> S3 bucket, it thinks the region is "dynamodb":
> {code}
> > bin/hadoop s3a init -m dynamodb://region-test
> 2017-01-25 11:41:28,006 INFO s3guard.S3GuardTool: create metadata store: 
> dynamodb://region-test scheme: dynamodb
> 2017-01-25 11:41:28,116 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2017-01-25 11:41:29,421 INFO s3guard.DynamoDBMetadataStore: Creating 
> non-existent DynamoDB table region-test in region dynamodb
> 2017-01-25 11:41:34,637 INFO s3guard.S3GuardTool: Metadata store 
> DynamoDBMetadataStore{region=dynamodb, tableName=region-test} is initialized.
> {code}
> My guess is it's because of DynamoDBMetadataStore.initialize will call 
> getEndpointPrefix, and all the published endpoints for DynamoDB actually 
> start with dynamodb.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14023) S3Guard:

2017-01-25 Thread Sean Mackrory (JIRA)
Sean Mackrory created HADOOP-14023:
--

 Summary: S3Guard: 
 Key: HADOOP-14023
 URL: https://issues.apache.org/jira/browse/HADOOP-14023
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Sean Mackrory


When you specify a DynamoDB endpoint instead of it using the same one as the S3 
bucket, it thinks the region is "dynamodb":

{code}
> bin/hadoop s3a init -m dynamodb://region-test
2017-01-25 11:41:28,006 INFO s3guard.S3GuardTool: create metadata store: 
dynamodb://region-test scheme: dynamodb
2017-01-25 11:41:28,116 WARN util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
2017-01-25 11:41:29,421 INFO s3guard.DynamoDBMetadataStore: Creating 
non-existent DynamoDB table region-test in region dynamodb
2017-01-25 11:41:34,637 INFO s3guard.S3GuardTool: Metadata store 
DynamoDBMetadataStore{region=dynamodb, tableName=region-test} is initialized.
{code}

My guess is it's because of DynamoDBMetadataStore.initialize will call 
getEndpointPrefix, and all the published endpoints for DynamoDB actually start 
with dynamodb.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14017) Integrate ADLS ACL

2017-01-25 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838339#comment-15838339
 ] 

John Zhuge commented on HADOOP-14017:
-

Thanks [~vishwajeet.dusane] for the info.

> Integrate ADLS ACL
> --
>
> Key: HADOOP-14017
> URL: https://issues.apache.org/jira/browse/HADOOP-14017
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha3
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> Track the effort to integrate ADLS ACL which models after HDFS ACL. Both are 
> based on POSIX ACL.
> Of course this will go too far without AuthN integration of some sort.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2017-01-25 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838317#comment-15838317
 ] 

Steve Loughran commented on HADOOP-13075:
-

Steve, thanks for taking this up; we'll give you and Frederico joint credit 
once this is in.

# Start with trunk only not branch-2. Or work in branch-2 only and plan to 
patch forwards...trying to keep 2 branches in sync only adds complications. 
This month s3guard branch is the active one, so keep an eye out for any changes 
there that would break things —this patch, being simpler, is likelier to get in 
first.

# We do have a strict "declare the s3 endpoint you tested against" policy for 
all the object stores; that's something you'll be expected to declare on every 
patch. Do mention if you saw a transient failure here, always good to keep an 
eye on those too. And do run {{ -Dparallel-tests  -DtestsThreadCount=8}} for 
execution speed.

Overall design seems OK, though it'll need some revisions, In particular, I 
don't like all the 
f(if(S3AEncryptionMethods.SSE_KMS.getMethod().equals(serverSideEncryptionAlgorithm))
 checks everywhere
{{S3AEncryptionMethods}} is an enum; add one for "None" and then you can do 
case statements around them; conversion of the config option to an enum value 
is a method which can be tucked into the enum itself.

Blocker: The server-side encryption key must be supported; use 
{{S3AUtils.getPassword}} for this.


h3. {{S3AFileSystem}}

L 912 : What if the user has selected SSE_C and the server key is blank? That's 
downgraded to a no-op, now, yes? I'd rather that was treated
as a failure: if you ask for server-side encryption and don't supply a key, FS 
init must fail.

h3. {{S3AEncryptionMethods}}

L 104-119. seems like a spurious IDE-initiated reformat. not needed, as it only 
complicates merging branches.



> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---
>
> Key: HADOOP-13075
> URL: https://issues.apache.org/jira/browse/HADOOP-13075
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Andrew Olson
>Assignee: Federico Czerwinski
>
> S3 provides 3 types of server-side encryption [1],
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of which the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support in aws-java-sdk already available it should be fairly 
> straightforward [6],[7] to support the other two types of SSE with some 
> additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html
> [6] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java
> [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13927) ADLS TestAdlContractRootDirLive.testRmNonEmptyRootDirNonRecursive failed

2017-01-25 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838196#comment-15838196
 ] 

Vishwajeet Dusane commented on HADOOP-13927:


[~jzhuge] and [~twu] for the analysis. I will confirm once the fix is reflected 
in the backend.

> ADLS TestAdlContractRootDirLive.testRmNonEmptyRootDirNonRecursive failed
> 
>
> Key: HADOOP-13927
> URL: https://issues.apache.org/jira/browse/HADOOP-13927
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl, test
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Critical
>
> {noformat}
> Tests run: 9, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 18.095 sec 
> <<< FAILURE! - in org.apache.hadoop.fs.adl.live.TestAdlContractRootDirLive
> testRmNonEmptyRootDirNonRecursive(org.apache.hadoop.fs.adl.live.TestAdlContractRootDirLive)
>   Time elapsed: 1.085 sec  <<< FAILURE!
> java.lang.AssertionError: non recursive delete should have raised an 
> exception, but completed with exit code false
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testRmNonEmptyRootDirNonRecursive(AbstractContractRootDirectoryTest.java:132)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-13927) ADLS TestAdlContractRootDirLive.testRmNonEmptyRootDirNonRecursive failed

2017-01-25 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane reassigned HADOOP-13927:
--

Assignee: Vishwajeet Dusane  (was: John Zhuge)

> ADLS TestAdlContractRootDirLive.testRmNonEmptyRootDirNonRecursive failed
> 
>
> Key: HADOOP-13927
> URL: https://issues.apache.org/jira/browse/HADOOP-13927
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl, test
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: Vishwajeet Dusane
>Priority: Critical
>
> {noformat}
> Tests run: 9, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 18.095 sec 
> <<< FAILURE! - in org.apache.hadoop.fs.adl.live.TestAdlContractRootDirLive
> testRmNonEmptyRootDirNonRecursive(org.apache.hadoop.fs.adl.live.TestAdlContractRootDirLive)
>   Time elapsed: 1.085 sec  <<< FAILURE!
> java.lang.AssertionError: non recursive delete should have raised an 
> exception, but completed with exit code false
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest.testRmNonEmptyRootDirNonRecursive(AbstractContractRootDirectoryTest.java:132)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-13979) Live unit tests leak files in home dir on ADLS store

2017-01-25 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane reassigned HADOOP-13979:
--

Assignee: Vishwajeet Dusane

> Live unit tests leak files in home dir on ADLS store
> 
>
> Key: HADOOP-13979
> URL: https://issues.apache.org/jira/browse/HADOOP-13979
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: Vishwajeet Dusane
>Priority: Minor
>
> Live unit tests left 61 files in user home dir on ADLS store {{jzadls}}:
> {noformat}
> /user
> /user/jzhuge
> /user/jzhuge/06b74549-c9d5-41b3-9f32-660e3284200d
> /user/jzhuge/0b71b60d-7501-40b2-a86c-c1ed2542997f
> /user/jzhuge/1311d721-8a31-4eda-9d5b-be4fc47ce62a
> ...
> {noformat}
> However, failed to reproduce on store {{jzhugeadls}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13979) Live unit tests leak files in home dir on ADLS store

2017-01-25 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838194#comment-15838194
 ] 

Vishwajeet Dusane commented on HADOOP-13979:


Thanks [~jzhuge] and [~ste...@apache.org], i will debug in to this issue.

> Live unit tests leak files in home dir on ADLS store
> 
>
> Key: HADOOP-13979
> URL: https://issues.apache.org/jira/browse/HADOOP-13979
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Priority: Minor
>
> Live unit tests left 61 files in user home dir on ADLS store {{jzadls}}:
> {noformat}
> /user
> /user/jzhuge
> /user/jzhuge/06b74549-c9d5-41b3-9f32-660e3284200d
> /user/jzhuge/0b71b60d-7501-40b2-a86c-c1ed2542997f
> /user/jzhuge/1311d721-8a31-4eda-9d5b-be4fc47ce62a
> ...
> {noformat}
> However, failed to reproduce on store {{jzhugeadls}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13982) Print better error when accessing a store without permission

2017-01-25 Thread Vishwajeet Dusane (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishwajeet Dusane updated HADOOP-13982:
---
Assignee: Atul Sikaria

> Print better error when accessing a store without permission
> 
>
> Key: HADOOP-13982
> URL: https://issues.apache.org/jira/browse/HADOOP-13982
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: Atul Sikaria
>  Labels: supportability
>
> The error message when accessing a store without permission is not user 
> friendly:
> {noformat}
> $ hdfs dfs -ls adl://STORE.azuredatalakestore.net/
> ls: Operation GETFILESTATUS failed with HTTP403 : null
> {noformat}
> Store {{STORE}} exists but Hadoop is configured with an SPI that does not 
> have access to the store.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14017) Integrate ADLS ACL

2017-01-25 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838171#comment-15838171
 ] 

Vishwajeet Dusane commented on HADOOP-14017:


Thanks [~jzhuge], This is a known [limitation 
|https://hadoop.apache.org/docs/current/hadoop-azure-datalake/index.html] and 
provokes user to configure ACL from [Azure portal|https://ms.portal.azure.com] 
till the functionality for {{setfacl}} and {{getfacl}} is available.
{quote}
User and group information returned as ListStatus and GetFileStatus is in form 
of GUID associated in Azure Active Directory.
{quote}

However we are already working (Under testing) on supporting user friendly name 
as a configurable option instead of GUID associated with Azure Active Directory.

> Integrate ADLS ACL
> --
>
> Key: HADOOP-14017
> URL: https://issues.apache.org/jira/browse/HADOOP-14017
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha3
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> Track the effort to integrate ADLS ACL which models after HDFS ACL. Both are 
> based on POSIX ACL.
> Of course this will go too far without AuthN integration of some sort.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13988) KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser

2017-01-25 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838151#comment-15838151
 ] 

Xiaoyu Yao commented on HADOOP-13988:
-

Thanks [~lmccay] for the detail of knox use case. Knox end user access webhdfs 
using proxy user from a token user - knox (with hdfs-dt). 

bq. Knox doesn't use UGI at all.
On the DN side, the webhdfs create UGI based on the deserialized cookie, which 
is the currentUGI. However, it does not have either Kerberos credential or KMS 
delegation token. To access KMS for encrypted files, the right UGI would be the 
DN's loginUser (with local kerberos credential), which fits the logic below in 
the latest patch.  

{code}
 if (!containsKmsDt(actualUgi) && !actualUgi.hasKerberosCredentials()) {
...
 actualUgi = UserGroupInformation.getLoginUser();
{code} 

> KMSClientProvider does not work with WebHDFS and Apache Knox w/ProxyUser
> 
>
> Key: HADOOP-13988
> URL: https://issues.apache.org/jira/browse/HADOOP-13988
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, kms
>Affects Versions: 2.8.0, 2.7.3
> Environment: HDP 2.5.3.0 
> WebHDFSUser --> Knox --> HA NameNodes(WebHDFS) --> DataNodes
>Reporter: Greg Senia
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-13988.01.patch, HADOOP-13988.02.patch, 
> HADOOP-13988.patch, HADOOP-13988.patch
>
>
> After upgrading to HDP 2.5.3.0 noticed that all of the KMSClientProvider 
> issues have not been resolved. We put a test build together and applied 
> HADOOP-13558 and HADOOP-13749 these two fixes did still not solve the issue 
> with requests coming from WebHDFS through to Knox to a TDE zone.
> So we added some debug to our build and determined effectively what is 
> happening here is a double proxy situation which does not seem to work. So we 
> propose the following fix in getActualUgi Method:
> {noformat}
>  }
>  // Use current user by default
>  UserGroupInformation actualUgi = currentUgi;
>  if (currentUgi.getRealUser() != null) {
>// Use real user for proxy user
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using RealUser for proxyUser);
>   }
>actualUgi = currentUgi.getRealUser();
>if (getDoAsUser() != null) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("doAsUser exists");
>   LOG.debug("currentUGI realUser shortName: {}", 
> currentUgi.getRealUser().getShortUserName());
>   LOG.debug("processUGI loginUser shortName: {}", 
> UserGroupInformation.getLoginUser().getShortUserName());
>   }
> if (currentUgi.getRealUser().getShortUserName() != 
> UserGroupInformation.getLoginUser().getShortUserName()) {
> if (LOG.isDebugEnabled()) {
>   LOG.debug("currentUGI.realUser does not match 
> UGI.processUser);
> }
> actualUgi = UserGroupInformation.getLoginUser();
> if (LOG.isDebugEnabled()) {
>   LOG.debug("LoginUser for Proxy: {}", 
> actualUgi.getLoginUser());
> }
> }
>}
>   
>  } else if (!currentUgiContainsKmsDt() &&
>  !currentUgi.hasKerberosCredentials()) {
>// Use login user for user that does not have either
>// Kerberos credential or KMS delegation token for KMS operations
>if (LOG.isDebugEnabled()) {
>  LOG.debug("using loginUser no KMS Delegation Token no Kerberos 
> Credentials");
>   }
>actualUgi = currentUgi.getLoginUser();
>  }
>  return actualUgi;
>}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13929) ADLS connector should not check in contract-test-options.xml

2017-01-25 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838127#comment-15838127
 ] 

Vishwajeet Dusane commented on HADOOP-13929:


Thanks you [~jzhuge] for trimming to patch 9. It is much easier to review. I 
ran adl live test with Patch 9 and all test passed. +1 on the overall Patch 9.

Regarding semantics to follow across file system,

- As per 
[comments|https://issues.apache.org/jira/browse/HADOOP-13257?focusedCommentId=15325102=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15325102],
 to use {{test.fs.adl.name}} similar to {{test.fs.s3n.name}} and 
{{test.fs.s3a.name}}.

Would leave it to [~cnauroth] and [~ste...@apache.org] to comment on the 
guideline.

> ADLS connector should not check in contract-test-options.xml
> 
>
> Key: HADOOP-13929
> URL: https://issues.apache.org/jira/browse/HADOOP-13929
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl, test
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
> Fix For: 3.0.0-alpha2
>
> Attachments: HADOOP-13929.001.patch, HADOOP-13929.002.patch, 
> HADOOP-13929.003.patch, HADOOP-13929.004.patch, HADOOP-13929.005.patch, 
> HADOOP-13929.006.patch, HADOOP-13929.007.patch, HADOOP-13929.008.patch, 
> HADOOP-13929.009.patch
>
>
> Should not check in the file {{contract-test-options.xml}}. Make sure the 
> file is excluded by {{.gitignore}}. Make sure ADLS {{index.md}} provides a 
> complete example of this file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2017-01-25 Thread Steve Moist (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838104#comment-15838104
 ] 

Steve Moist commented on HADOOP-13075:
--

Hi, I've taken Federico's changes and updated them to be consistent with 
trunk's changes and submitted a pull request to merge into trunk.  I will be 
working on a separate one for branch-2

> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---
>
> Key: HADOOP-13075
> URL: https://issues.apache.org/jira/browse/HADOOP-13075
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Andrew Olson
>Assignee: Federico Czerwinski
>
> S3 provides 3 types of server-side encryption [1],
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of which the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support in aws-java-sdk already available it should be fairly 
> straightforward [6],[7] to support the other two types of SSE with some 
> additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html
> [6] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java
> [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2017-01-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838100#comment-15838100
 ] 

ASF GitHub Bot commented on HADOOP-13075:
-

GitHub user orngejaket opened a pull request:

https://github.com/apache/hadoop/pull/183

HADOOP-13075 adding sse-c and sse-kms

Updating fedecz changes to work with changes in trunk.  Refactored methods 
to be clearer as to purpose and added more integration tests.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/orngejaket/hadoop HADOOP-13075-trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/183.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #183


commit eba581f46ab779505862755e263f4c5b68adf0a1
Author: Stephen Moist 
Date:   2017-01-25T16:40:12Z

HADOOP-13075 importing existing changes to support SSEKMS and SSEC.  Adding 
integration tests, documentation and required constants.




> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---
>
> Key: HADOOP-13075
> URL: https://issues.apache.org/jira/browse/HADOOP-13075
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Andrew Olson
>Assignee: Federico Czerwinski
>
> S3 provides 3 types of server-side encryption [1],
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of which the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support in aws-java-sdk already available it should be fairly 
> straightforward [6],[7] to support the other two types of SSE with some 
> additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html
> [6] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java
> [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13887) Support for client-side encryption in S3A file system

2017-01-25 Thread Swaranga Sarma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15838094#comment-15838094
 ] 

Swaranga Sarma commented on HADOOP-13887:
-

Ping

> Support for client-side encryption in S3A file system
> -
>
> Key: HADOOP-13887
> URL: https://issues.apache.org/jira/browse/HADOOP-13887
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Jeeyoung Kim
>Priority: Minor
>
> Expose the client-side encryption option documented in Amazon S3 
> documentation  - 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html
> Currently this is not exposed in Hadoop but it is exposed as an option in AWS 
> Java SDK, which Hadoop currently includes. It should be trivial to propagate 
> this as a parameter passed to the S3client used in S3AFileSystem.java



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13433) Race in UGI.reloginFromKeytab

2017-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15837827#comment-15837827
 ] 

Hadoop QA commented on HADOOP-13433:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
35s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
35s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
40s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
42s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
28s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
28s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 94 unchanged - 2 fixed = 95 total (was 96) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
29s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_121 Failed junit tests | hadoop.ipc.TestRPCWaitForProxy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HADOOP-13433 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12849290/HADOOP-13433-branch-2.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 85ea8d7b3fa9 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |

[jira] [Commented] (HADOOP-13433) Race in UGI.reloginFromKeytab

2017-01-25 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15837739#comment-15837739
 ] 

Duo Zhang commented on HADOOP-13433:


[~steve_l] Thanks!

Yes it is only the tests. I've just upload the patch for branch-2 which only 
contains fix for UGI.

> Race in UGI.reloginFromKeytab
> -
>
> Key: HADOOP-13433
> URL: https://issues.apache.org/jira/browse/HADOOP-13433
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha2, 2.6.6
>
> Attachments: HADOOP-13433-branch-2.patch, HADOOP-13433.patch, 
> HADOOP-13433-v1.patch, HADOOP-13433-v2.patch, HADOOP-13433-v4.patch, 
> HADOOP-13433-v5.patch, HADOOP-13433-v6.patch, HBASE-13433-testcase-v3.patch
>
>
> This is a problem that has troubled us for several years. For our HBase 
> cluster, sometimes the RS will be stuck due to
> {noformat}
> 2016-06-20,03:44:12,936 INFO org.apache.hadoop.ipc.SecureClient: Exception 
> encountered while connecting to the server :
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: The ticket 
> isn't for us (35) - BAD TGS SERVER NAME)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:194)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:140)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupSaslConnection(SecureClient.java:187)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.access$700(SecureClient.java:95)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:325)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:322)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1781)
> at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
> at org.apache.hadoop.hbase.security.User.call(User.java:607)
> at org.apache.hadoop.hbase.security.User.access$700(User.java:51)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:461)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupIOstreams(SecureClient.java:321)
> at 
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1164)
> at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1004)
> at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Invoker.invoke(SecureRpcEngine.java:107)
> at $Proxy24.replicateLogEntries(Unknown Source)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:962)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.runLoop(ReplicationSource.java:466)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:515)
> Caused by: GSSException: No valid credentials provided (Mechanism level: The 
> ticket isn't for us (35) - BAD TGS SERVER NAME)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:663)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:180)
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:175)
> ... 23 more
> Caused by: KrbException: The ticket isn't for us (35) - BAD TGS SERVER NAME
> at sun.security.krb5.KrbTgsRep.(KrbTgsRep.java:64)
> at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:185)
> at 
> sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:294)
> at 
> sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:106)
> at 
> sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:557)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:594)
> ... 26 more
> Caused by: KrbException: Identifier doesn't match expected value (906)
> at sun.security.krb5.internal.KDCRep.init(KDCRep.java:133)
> 

[jira] [Updated] (HADOOP-13433) Race in UGI.reloginFromKeytab

2017-01-25 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HADOOP-13433:
---
Attachment: HADOOP-13433-branch-2.patch

> Race in UGI.reloginFromKeytab
> -
>
> Key: HADOOP-13433
> URL: https://issues.apache.org/jira/browse/HADOOP-13433
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha2, 2.6.6
>
> Attachments: HADOOP-13433-branch-2.patch, HADOOP-13433.patch, 
> HADOOP-13433-v1.patch, HADOOP-13433-v2.patch, HADOOP-13433-v4.patch, 
> HADOOP-13433-v5.patch, HADOOP-13433-v6.patch, HBASE-13433-testcase-v3.patch
>
>
> This is a problem that has troubled us for several years. For our HBase 
> cluster, sometimes the RS will be stuck due to
> {noformat}
> 2016-06-20,03:44:12,936 INFO org.apache.hadoop.ipc.SecureClient: Exception 
> encountered while connecting to the server :
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: The ticket 
> isn't for us (35) - BAD TGS SERVER NAME)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:194)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:140)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupSaslConnection(SecureClient.java:187)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.access$700(SecureClient.java:95)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:325)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:322)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1781)
> at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
> at org.apache.hadoop.hbase.security.User.call(User.java:607)
> at org.apache.hadoop.hbase.security.User.access$700(User.java:51)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:461)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupIOstreams(SecureClient.java:321)
> at 
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1164)
> at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1004)
> at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Invoker.invoke(SecureRpcEngine.java:107)
> at $Proxy24.replicateLogEntries(Unknown Source)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:962)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.runLoop(ReplicationSource.java:466)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:515)
> Caused by: GSSException: No valid credentials provided (Mechanism level: The 
> ticket isn't for us (35) - BAD TGS SERVER NAME)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:663)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:180)
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:175)
> ... 23 more
> Caused by: KrbException: The ticket isn't for us (35) - BAD TGS SERVER NAME
> at sun.security.krb5.KrbTgsRep.(KrbTgsRep.java:64)
> at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:185)
> at 
> sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:294)
> at 
> sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:106)
> at 
> sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:557)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:594)
> ... 26 more
> Caused by: KrbException: Identifier doesn't match expected value (906)
> at sun.security.krb5.internal.KDCRep.init(KDCRep.java:133)
> at sun.security.krb5.internal.TGSRep.init(TGSRep.java:58)
> at 

[jira] [Updated] (HADOOP-13433) Race in UGI.reloginFromKeytab

2017-01-25 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HADOOP-13433:
---
Attachment: HADOOP-13433-v6.patch

> Race in UGI.reloginFromKeytab
> -
>
> Key: HADOOP-13433
> URL: https://issues.apache.org/jira/browse/HADOOP-13433
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha2, 2.6.6
>
> Attachments: HADOOP-13433.patch, HADOOP-13433-v1.patch, 
> HADOOP-13433-v2.patch, HADOOP-13433-v4.patch, HADOOP-13433-v5.patch, 
> HADOOP-13433-v6.patch, HBASE-13433-testcase-v3.patch
>
>
> This is a problem that has troubled us for several years. For our HBase 
> cluster, sometimes the RS will be stuck due to
> {noformat}
> 2016-06-20,03:44:12,936 INFO org.apache.hadoop.ipc.SecureClient: Exception 
> encountered while connecting to the server :
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: The ticket 
> isn't for us (35) - BAD TGS SERVER NAME)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:194)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:140)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupSaslConnection(SecureClient.java:187)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.access$700(SecureClient.java:95)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:325)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:322)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1781)
> at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
> at org.apache.hadoop.hbase.security.User.call(User.java:607)
> at org.apache.hadoop.hbase.security.User.access$700(User.java:51)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:461)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupIOstreams(SecureClient.java:321)
> at 
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1164)
> at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1004)
> at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Invoker.invoke(SecureRpcEngine.java:107)
> at $Proxy24.replicateLogEntries(Unknown Source)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:962)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.runLoop(ReplicationSource.java:466)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:515)
> Caused by: GSSException: No valid credentials provided (Mechanism level: The 
> ticket isn't for us (35) - BAD TGS SERVER NAME)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:663)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:180)
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:175)
> ... 23 more
> Caused by: KrbException: The ticket isn't for us (35) - BAD TGS SERVER NAME
> at sun.security.krb5.KrbTgsRep.(KrbTgsRep.java:64)
> at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:185)
> at 
> sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:294)
> at 
> sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:106)
> at 
> sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:557)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:594)
> ... 26 more
> Caused by: KrbException: Identifier doesn't match expected value (906)
> at sun.security.krb5.internal.KDCRep.init(KDCRep.java:133)
> at sun.security.krb5.internal.TGSRep.init(TGSRep.java:58)
> at sun.security.krb5.internal.TGSRep.(TGSRep.java:53)
> at 

[jira] [Commented] (HADOOP-13805) UGI.getCurrentUser() fails if user does not have a keytab associated

2017-01-25 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15837592#comment-15837592
 ] 

Alejandro Abdelnur commented on HADOOP-13805:
-

patch8 looks good. 

A few things regarding the new {{enableRenewThreadCreationForTest}}, can we 
make the methods package private (the testcases using them are in the same 
package, and having them package private will avoid an app setting them by 
mistake). Can we also log a WARN message in the 
{{spawnAutoRenewalThreadForUserCreds()}} if in test mode?

> UGI.getCurrentUser() fails if user does not have a keytab associated
> 
>
> Key: HADOOP-13805
> URL: https://issues.apache.org/jira/browse/HADOOP-13805
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0, 2.9.0, 3.0.0-alpha2
>Reporter: Alejandro Abdelnur
>Assignee: Xiao Chen
> Attachments: HADOOP-13805.006.patch, HADOOP-13805.007.patch, 
> HADOOP-13805.008.patch, HADOOP-13805.01.patch, HADOOP-13805.02.patch, 
> HADOOP-13805.03.patch, HADOOP-13805.04.patch, HADOOP-13805.05.patch
>
>
> HADOOP-13558 intention was to avoid UGI from trying to renew the TGT when the 
> UGI is created from an existing Subject as in that case the keytab is not 
> 'own' by UGI but by the creator of the Subject.
> In HADOOP-13558 we introduced a new private UGI constructor 
> {{UserGroupInformation(Subject subject, final boolean externalKeyTab)}} and 
> we use with TRUE only when doing a {{UGI.loginUserFromSubject()}}.
> The problem is, when we call {{UGI.getCurrentUser()}}, and UGI was created 
> via a Subject (via the {{UGI.loginUserFromSubject()}} method), we call {{new 
> UserGroupInformation(subject)}} which will delegate to 
> {{UserGroupInformation(Subject subject, final boolean externalKeyTab)}}  and 
> that will use externalKeyTab == *FALSE*. 
> Then the UGI returned by {{UGI.getCurrentUser()}} will attempt to login using 
> a non-existing keytab if the TGT expired.
> This problem is experienced in {{KMSClientProvider}} when used by the HDFS 
> filesystem client accessing an an encryption zone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13876) S3Guard: better support for multi-bucket access

2017-01-25 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15837550#comment-15837550
 ] 

Steve Loughran commented on HADOOP-13876:
-

LGTM

+1

This'll break any existing tables; they will all need to be destroyed and 
rebuilt

on that topic, can we get HADOOP-13985 reviewed? Its trying to do the version 
tracking so that  after any format changes, we can detect and react (fail, or 
in future upgrade)



> S3Guard: better support for multi-bucket access
> ---
>
> Key: HADOOP-13876
> URL: https://issues.apache.org/jira/browse/HADOOP-13876
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13876-HADOOP-13345.000.patch, 
> HADOOP-13876-HADOOP-13345.001.patch
>
>
> HADOOP-13449 adds support for DynamoDBMetadataStore.
> The code currently supports two options for choosing DynamoDB table names:
> 1. Use name of each s3 bucket and auto-create a DynamoDB table for each.
> 2. Configure a table name in the {{fs.s3a.s3guard.ddb.table}} parameter.
> However, if a user sets {{fs.s3a.s3guard.ddb.table}} and accesses multiple 
> buckets, DynamoDBMetadataStore does not properly differentiate between paths 
> belonging to different buckets.  For example, it would treat 
> s3a://bucket-a/path1 as the same as s3a://bucket-b/path1.
> Goals for this JIRA:
> - Allow for a "one DynamoDB table per cluster" configuration.  If a user 
> accesess multiple buckets with that single table, it should work correctly.  
> - Explain which credentials are used for DynamoDB.  Currently each 
> S3AFileSystem has its own DynamoDBMetadataStore, which uses the credentials 
> from the S3A fs.   We at least need to document this behavior.
> - Document any other limitations etc. in the s3guard.md site doc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13876) S3Guard: better support for multi-bucket access

2017-01-25 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15837512#comment-15837512
 ] 

Steve Loughran commented on HADOOP-13876:
-

I'll look at this —thank you for putting in all the work

one thing: HADOOP-13985 needs a path in the table which is guaranteed never to 
appear in an FS, to use as a version marker.


I'd gone for "..\VERSION", as a relative path would never get in. It will still 
hold here, won't it?

> S3Guard: better support for multi-bucket access
> ---
>
> Key: HADOOP-13876
> URL: https://issues.apache.org/jira/browse/HADOOP-13876
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13876-HADOOP-13345.000.patch, 
> HADOOP-13876-HADOOP-13345.001.patch
>
>
> HADOOP-13449 adds support for DynamoDBMetadataStore.
> The code currently supports two options for choosing DynamoDB table names:
> 1. Use name of each s3 bucket and auto-create a DynamoDB table for each.
> 2. Configure a table name in the {{fs.s3a.s3guard.ddb.table}} parameter.
> However, if a user sets {{fs.s3a.s3guard.ddb.table}} and accesses multiple 
> buckets, DynamoDBMetadataStore does not properly differentiate between paths 
> belonging to different buckets.  For example, it would treat 
> s3a://bucket-a/path1 as the same as s3a://bucket-b/path1.
> Goals for this JIRA:
> - Allow for a "one DynamoDB table per cluster" configuration.  If a user 
> accesess multiple buckets with that single table, it should work correctly.  
> - Explain which credentials are used for DynamoDB.  Currently each 
> S3AFileSystem has its own DynamoDBMetadataStore, which uses the credentials 
> from the S3A fs.   We at least need to document this behavior.
> - Document any other limitations etc. in the s3guard.md site doc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13433) Race in UGI.reloginFromKeytab

2017-01-25 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15837507#comment-15837507
 ] 

Steve Loughran commented on HADOOP-13433:
-

It's only the tests which can't go into branch-2, correct? If so, don't worry 
about it: put the fix in the production code, and rely on trunk to host the 
tests

> Race in UGI.reloginFromKeytab
> -
>
> Key: HADOOP-13433
> URL: https://issues.apache.org/jira/browse/HADOOP-13433
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha2, 2.6.6
>
> Attachments: HADOOP-13433.patch, HADOOP-13433-v1.patch, 
> HADOOP-13433-v2.patch, HADOOP-13433-v4.patch, HADOOP-13433-v5.patch, 
> HBASE-13433-testcase-v3.patch
>
>
> This is a problem that has troubled us for several years. For our HBase 
> cluster, sometimes the RS will be stuck due to
> {noformat}
> 2016-06-20,03:44:12,936 INFO org.apache.hadoop.ipc.SecureClient: Exception 
> encountered while connecting to the server :
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: The ticket 
> isn't for us (35) - BAD TGS SERVER NAME)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:194)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:140)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupSaslConnection(SecureClient.java:187)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.access$700(SecureClient.java:95)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:325)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:322)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1781)
> at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
> at org.apache.hadoop.hbase.security.User.call(User.java:607)
> at org.apache.hadoop.hbase.security.User.access$700(User.java:51)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:461)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupIOstreams(SecureClient.java:321)
> at 
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1164)
> at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1004)
> at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Invoker.invoke(SecureRpcEngine.java:107)
> at $Proxy24.replicateLogEntries(Unknown Source)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:962)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.runLoop(ReplicationSource.java:466)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:515)
> Caused by: GSSException: No valid credentials provided (Mechanism level: The 
> ticket isn't for us (35) - BAD TGS SERVER NAME)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:663)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:180)
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:175)
> ... 23 more
> Caused by: KrbException: The ticket isn't for us (35) - BAD TGS SERVER NAME
> at sun.security.krb5.KrbTgsRep.(KrbTgsRep.java:64)
> at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:185)
> at 
> sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:294)
> at 
> sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:106)
> at 
> sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:557)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:594)
> ... 26 more
> Caused by: KrbException: Identifier doesn't match expected value (906)
> at sun.security.krb5.internal.KDCRep.init(KDCRep.java:133)
> 

[jira] [Updated] (HADOOP-13876) S3Guard: better support for multi-bucket access

2017-01-25 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-13876:
--
Attachment: HADOOP-13876-HADOOP-13345.001.patch

Attaching v001 patch which fixes DynamoDBMetadataStore to allow a single 
DynamoDB table (fs.s3a.s3guard.ddb.table) to be used when accessing multiple 
buckets.

- All paths given to DynamoDBMetadataStore must be absolute, and contain a host
  (bucket) component.  S3AFileSystem already does this, but some DDB tests had
  to be fixed.

- In the DynamoDB table, the parent key now includes the bucket name as the
  first component of the path.

- Remove assumptions in DynamoDBMetadataStore about only being used for one
  bucket (e.g. delete uses of s3uri member)

- Fix bug where use of the new initialize(Configuration) method would cause
  errors due to missing s3uri member.  May also fix an issue around accidential
  discard of return value from removeSchemeAndAuthority(Path).

- Update S3Guard site docs to include information on setting the DDB endpoint.

- Add new test case MetadataStoreTestBase#testMultiBucketPaths().  Also remove
  some dead code I ran into there.

Testing: Ran all s3a/s3guard integration and scale tests against Us West 2 
(oregon).
{quote}
Tests in error:
  ITestS3AAWSCredentialsProvider.testAnonymousProvider:132 » AWSServiceIO 
initTa...
{quote}
AmazonDynamoDBException: Request is missing Authentication Token
{quote}
  ITestS3ACredentialsInURL.testInstantiateFromURL:86 » InterruptedIO initTable: 
...
{quote}
No AWS Credentials provided
{quote}
 
ITestS3AFileSystemContract>FileSystemContractBaseTest.testRenameToDirWithSamePrefixAllowed:669->FileSystemContractBaseTest.rename:525
 » AWSServiceIO
{quote}
AmazonDynamoDBException: Provided list of item keys contains duplicates
{quote}
Tests run: 361, Failures: 0, Errors: 3, Skipped: 70
{quote}

Related: We need some followup work on DDB region and endpoint.  It seems like 
we should mirror what the AWS SDK does here:  give you an easy option to set a 
region, and also expose ability to set endpoint, but document that as for 
advanced users (the usual way to set up a DDB client being just to specify the 
region).  Also I think [~mackrorysd] mentioned his DDB tables were getting set 
up in the wrong region?

> S3Guard: better support for multi-bucket access
> ---
>
> Key: HADOOP-13876
> URL: https://issues.apache.org/jira/browse/HADOOP-13876
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
> Attachments: HADOOP-13876-HADOOP-13345.000.patch, 
> HADOOP-13876-HADOOP-13345.001.patch
>
>
> HADOOP-13449 adds support for DynamoDBMetadataStore.
> The code currently supports two options for choosing DynamoDB table names:
> 1. Use name of each s3 bucket and auto-create a DynamoDB table for each.
> 2. Configure a table name in the {{fs.s3a.s3guard.ddb.table}} parameter.
> However, if a user sets {{fs.s3a.s3guard.ddb.table}} and accesses multiple 
> buckets, DynamoDBMetadataStore does not properly differentiate between paths 
> belonging to different buckets.  For example, it would treat 
> s3a://bucket-a/path1 as the same as s3a://bucket-b/path1.
> Goals for this JIRA:
> - Allow for a "one DynamoDB table per cluster" configuration.  If a user 
> accesess multiple buckets with that single table, it should work correctly.  
> - Explain which credentials are used for DynamoDB.  Currently each 
> S3AFileSystem has its own DynamoDBMetadataStore, which uses the credentials 
> from the S3A fs.   We at least need to document this behavior.
> - Document any other limitations etc. in the s3guard.md site doc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org