[jira] [Created] (HADOOP-17142) Fix outdated properties of journal node when performing rollback

2020-07-20 Thread Deegue (Jira)
Deegue created HADOOP-17142:
---

 Summary: Fix outdated properties of journal node when performing rollback
 Key: HADOOP-17142
 URL: https://issues.apache.org/jira/browse/HADOOP-17142
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Deegue


When rolling back an HDFS cluster, the properties held in JNStorage are not 
refreshed after the storage directory changes, which leads to exceptions when 
starting the NameNode.

The exception looks like:
{code:java}
2020-07-09 19:04:12,810 FATAL [IPC Server handler 105 on 8022] 
org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: 
recoverUnfinalizedSegments failed for required journal 
(JournalAndStream(mgr=QJM to [10.0.118.217:8485, 10.0.117.208:8485, 
10.0.118.179:8485], stream=null))
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions 
to achieve quorum size 2/3. 3 exceptions thrown:
10.0.118.217:8485: Incompatible namespaceID for journal Storage Directory 
/mnt/vdc-11176G-0/dfs/jn/nameservicetest1: NameNode has nsId 647617129 but 
storage has nsId 0
at 
org.apache.hadoop.hdfs.qjournal.server.JNStorage.checkConsistentNamespace(JNStorage.java:236)
at 
org.apache.hadoop.hdfs.qjournal.server.Journal.newEpoch(Journal.java:300)
at 
org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.newEpoch(JournalNodeRpcServer.java:136)
at 
org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.newEpoch(QJournalProtocolServerSideTranslatorPB.java:133)
at 
org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25417)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2278)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2274)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2274)
{code}
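
For context, the "Incompatible namespaceID" error above comes from the 
namespace consistency check a JournalNode runs when the NameNode starts a new 
epoch. A simplified, hypothetical sketch of that check (condensed from the 
stack trace, not copied from the Hadoop source) shows why a stale nsId of 0 
left behind in JNStorage after the rollback is fatal:
{code:java}
// Simplified, hypothetical sketch of the consistency check implied by the
// stack trace above (not the actual Hadoop source). "storedNsId" stands in
// for the namespace ID cached in JNStorage: if the rollback swaps the storage
// directory without refreshing the cached properties, it stays 0, the check
// throws, and the NameNode cannot reach a quorum of JournalNodes.
static void checkConsistentNamespace(int nameNodeNsId, int storedNsId,
    String storageDir) throws java.io.IOException {
  if (nameNodeNsId != storedNsId) {
    throw new java.io.IOException("Incompatible namespaceID for journal "
        + storageDir + ": NameNode has nsId " + nameNodeNsId
        + " but storage has nsId " + storedNsId);
  }
}
{code}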






[jira] [Created] (HADOOP-17141) Add Capability To Get Text Length

2020-07-20 Thread David Mollitor (Jira)
David Mollitor created HADOOP-17141:
---

 Summary: Add Capability To Get Text Length
 Key: HADOOP-17141
 URL: https://issues.apache.org/jira/browse/HADOOP-17141
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: David Mollitor
Assignee: David Mollitor









[jira] [Resolved] (HADOOP-17136) ITestS3ADirectoryPerformance.testListOperations failing

2020-07-20 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17136.
-
Resolution: Fixed

> ITestS3ADirectoryPerformance.testListOperations failing
> ---
>
> Key: HADOOP-17136
> URL: https://issues.apache.org/jira/browse/HADOOP-17136
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.4.0
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Minor
> Fix For: 3.4.0
>
>
> Because of HADOOP-17022, the test now fails:
> [INFO] Running org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance
> [ERROR] Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 670.029 s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance
> [ERROR] 
> testListOperations(org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance)
>   Time elapsed: 44.089 s  <<< FAILURE!
> java.lang.AssertionError: object_list_requests starting=166 current=167 
> diff=1 expected:<2> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance.testListOperations(ITestS3ADirectoryPerformance.java:117)






[jira] [Created] (HADOOP-17140) KMSClientProvider Sends HTTP GET with null "Content-Type" Header

2020-07-20 Thread Axton Grams (Jira)
Axton Grams created HADOOP-17140:


 Summary: KMSClientProvider Sends HTTP GET with null "Content-Type" 
Header
 Key: HADOOP-17140
 URL: https://issues.apache.org/jira/browse/HADOOP-17140
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms
Affects Versions: 2.7.3
Reporter: Axton Grams


Hive Server uses 'org.apache.hadoop.crypto.key.kms.KMSClientProvider' when 
interacting with HDFS TDE zones, which triggers a call to the KMS server. If 
the request method is a GET, the HTTP Content-Type header is sent with a null 
value.

When using Ranger KMS, the embedded Tomcat server returns an HTTP 400 error 
with the following error message:
{quote}HTTP Status 400 - Bad Content-Type header value: ''
 The request sent by the client was syntactically incorrect.
{quote}
This only occurs with HTTP GET method calls. 

This is a captured HTTP request:

 
{code:java}
GET /kms/v1/key/xxx/_metadata?doAs=yyy&doAs=yyy HTTP/1.1
Cookie: 
hadoop.auth="u=hive&p=hive/domain@domain.com&t=kerberos-dt&e=123789456&s=xxx="
Content-Type:
Cache-Control: no-cache
Pragma: no-cache
User-Agent: Java/1.8.0_241
Host: kms.domain.com:9292
Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
Connection: keep-alive{code}
 

Note the empty 'Content-Type' header.

And the corresponding response:

 
{code:java}
HTTP/1.1 400 Bad Request
Server: Apache-Coyote/1.1
Content-Type: text/html;charset=utf-8
Content-Language: en
Content-Length: 1034
Date: Thu, 16 Jul 2020 04:23:18 GMT
Connection: close{code}
 

This is the stack trace from the Hive server:

 
{code:java}
Caused by: java.io.IOException: HTTP status [400], message [Bad Request]
at 
org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:169)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:608)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:597)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:566)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.getMetadata(KMSClientProvider.java:861)
at 
org.apache.hadoop.hive.shims.Hadoop23Shims$HdfsEncryptionShim.compareKeyStrength(Hadoop23Shims.java:1506)
at 
org.apache.hadoop.hive.shims.Hadoop23Shims$HdfsEncryptionShim.comparePathKeyStrength(Hadoop23Shims.java:1442)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.comparePathKeyStrength(SemanticAnalyzer.java:1990)
... 38 more{code}
 

This looks to occur in 
[https://github.com/hortonworks/hadoop-release/blob/HDP-2.6.5.165-3-tag/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java#L591-L599]
{code:java}
  if (authRetryCount > 0) {
String contentType = conn.getRequestProperty(CONTENT_TYPE);
String requestMethod = conn.getRequestMethod();
URL url = conn.getURL();
conn = createConnection(url, requestMethod);
conn.setRequestProperty(CONTENT_TYPE, contentType);
return call(conn, jsonOutput, expectedResponse, klass,
authRetryCount - 1);
  }{code}
 I think that when the request method is a GET, the Content-Type header is not 
defined. Then, in line 592:
{code:java}
 String contentType = conn.getRequestProperty(CONTENT_TYPE);
{code}
The code attempts to retrieve the CONTENT_TYPE Request Property, which returns 
null.

Then in line 596:
{code:java}
conn.setRequestProperty(CONTENT_TYPE, contentType);
{code}
The null content type is used to construct the HTTP call to the KMS server.

A null Content-Type header is not allowed and is treated as malformed by the 
receiving KMS server.
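
A minimal standalone sketch of the failure mode (hypothetical host, port and 
key path, not taken from the original report) would look like this; it simply 
propagates the null the same way the retry path does:
{code:java}
// Hypothetical reproduction sketch, not part of the Hadoop code base.
// A fresh GET connection has no Content-Type request property, so
// getRequestProperty returns null; setting the header to that null value is
// how the empty Content-Type header shown in the capture above gets sent,
// and the embedded Tomcat in Ranger KMS rejects it with HTTP 400.
import java.net.HttpURLConnection;
import java.net.URL;

public class NullContentTypeRepro {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://kms.example.com:9292/kms/v1/key/example/_metadata");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("GET");

    String contentType = conn.getRequestProperty("Content-Type"); // null for a plain GET
    conn.setRequestProperty("Content-Type", contentType);         // propagates the null

    System.out.println("HTTP " + conn.getResponseCode());         // 400 per the report
  }
}
{code}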

I propose that this code be updated to inspect the value returned by 
conn.getRequestProperty(CONTENT_TYPE) and avoid using a null value when 
constructing the new KMS connection.

Proposed pseudo-patch:
{code:java}
--- 
a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
+++ 
b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
@@ -593,7 +593,9 @@ public HttpURLConnection run() throws Exception {
 String requestMethod = conn.getRequestMethod();
 URL url = conn.getURL();
 conn = createConnection(url, requestMethod);
-conn.setRequestProperty(CONTENT_TYPE, contentType);
+if (contentType != null) {
+  conn.setRequestProperty(CONTENT_TYPE, contentType);
+}
 return call(conn, jsonOutput, expectedResponse, klass,
 authRetryCount - 1);
   }{code}
This should not impact any other use of this class and should only address 
cases where a null is returned for Content-Type.




Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-07-20 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/753/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint jshint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

findbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   Useless object stored in variable removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:[line 664] 
   
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
 makes inefficient use of keySet iterator instead of entrySet iterator At 
NodeStatusUpdaterImpl.java:keySet iterator instead of entrySet iterator At 
NodeStatusUpdaterImpl.java:[line 741] 
   
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
 makes inefficient use of keySet iterator instead of entrySet iterator At 
ContainerLocalizer.java:keySet iterator instead of entrySet iterator At 
ContainerLocalizer.java:[line 359] 
   
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics
 is a mutable collection which should be package protected At 
ContainerMetrics.java:which should be package protected At 
ContainerMetrics.java:[line 134] 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 
   
org.apache.hadoop.yarn.state.StateMachineFactory.generateStateGraph(String) 
makes inefficient use of keySet iterator instead of entrySet iterator At 
StateMachineFactory.java:keySet iterator instead of entrySet iterator At 
StateMachineFactory.java:[line 505] 
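
The repeated "inefficient use of keySet iterator instead of entrySet iterator" 
warnings above all flag the same pattern. A generic illustration of the 
pattern and its fix (not the flagged Hadoop code):
{code:java}
// Generic illustration of the findbugs warning, not the flagged Hadoop code.
// Iterating keySet() and calling get() per key does a second lookup for every
// entry; iterating entrySet() avoids it.
import java.util.HashMap;
import java.util.Map;

public class KeySetVsEntrySet {
  public static void main(String[] args) {
    Map<String, Integer> counts = new HashMap<>();
    counts.put("a", 1);
    counts.put("b", 2);

    // Flagged pattern: key iteration plus an extra get() lookup per key.
    for (String key : counts.keySet()) {
      System.out.println(key + "=" + counts.get(key));
    }

    // Preferred pattern: iterate the entries directly.
    for (Map.Entry<String, Integer> e : counts.entrySet()) {
      System.out.println(e.getKey() + "=" + e.getValue());
    }
  }
}
{code}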

findbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
   
org.apache.hadoop.yarn.state.StateMachineFactory.generateStateGraph(String) 
makes inefficient use of keySet iterator instead of entrySet iterator At 
StateMachineFactory.java:keySet iterator instead of entrySet iterator At 
StateMachineFactory.java:[line 505] 

findbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server 
   Useless object stored in variable removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:[line 664] 
   
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
 makes inefficient use of keySet iterator instead of entrySet iterator At 
NodeStatusUpdaterImpl.java:keySet iterator instead of entrySet iterator At 
NodeStatusUpdaterImpl.java:[line 741] 
   
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
 makes inefficient use of keySet iterator instead of entrySet iterator At 
ContainerLocalizer.java:keySet iterator instead of entrySet iterator At 
ContainerLocalizer.java:[line 359] 
   
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics
 is a mutable collection which should be package protected At 
ContainerMetrics.java:which should be package protected At 
ContainerMetrics.java:[line 134] 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.