[jira] [Created] (HADOOP-16660) ABFS: Make RetryCount in ExponentialRetryPolicy Configurable

2019-10-17 Thread Sneha Vijayarajan (Jira)
Sneha Vijayarajan created HADOOP-16660:
--

 Summary: ABFS: Make RetryCount in ExponentialRetryPolicy 
Configurable
 Key: HADOOP-16660
 URL: https://issues.apache.org/jira/browse/HADOOP-16660
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Reporter: Sneha Vijayarajan
Assignee: Sneha Vijayarajan
 Fix For: 3.3.0


ExponentialRetryPolicy retries failed HTTP requests up to 30 times, which is the 
default retry count hard-coded in the driver. 

If the client application has its own retry handling, the process can appear to 
hang from the client's point of view because of the high number of retries. This 
Jira aims to provide a configuration control for the retry count.
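
As an illustration, the retry count could be read from configuration roughly as 
follows. The configuration key, default constant and class wiring below are 
assumptions for the sketch, not taken from the actual patch:

{code:java}
// Sketch only: key name and wiring are illustrative assumptions.
public class ConfigurableRetryPolicyExample {
  /** Hypothetical configuration key controlling the maximum retry count. */
  public static final String AZURE_MAX_IO_RETRIES = "fs.azure.io.retry.max.retries";
  /** Previous hard-coded behaviour: 30 attempts. */
  public static final int DEFAULT_MAX_RETRY_ATTEMPTS = 30;

  private final int maxRetryCount;

  public ConfigurableRetryPolicyExample(org.apache.hadoop.conf.Configuration conf) {
    // Fall back to the old hard-coded default when the key is not set.
    this.maxRetryCount = conf.getInt(AZURE_MAX_IO_RETRIES, DEFAULT_MAX_RETRY_ATTEMPTS);
  }

  /** @return true if the failed request should be retried again. */
  public boolean shouldRetry(int retryAttempt) {
    return retryAttempt < maxRetryCount;
  }
}
{code}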



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-8159) NetworkTopology: getLeaf should check for invalid topologies

2019-10-17 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-8159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954214#comment-16954214
 ] 

Lisheng Sun edited comment on HADOOP-8159 at 10/18/19 2:47 AM:
---

hi [~cmccabe] [~weichiu] [~elgoiri] [~ayushtkn]

The NetworkTopology#add method:

{code:java}
/** Add a leaf node
 * Update node counter & rack counter if necessary
 * @param node node to be added; can be null
 * @exception IllegalArgumentException if add a node to a leave
 *            or node to be added is not a leaf
 */
public void add(Node node) {
  if (node == null) return;
  int newDepth = NodeBase.locationToDepth(node.getNetworkLocation()) + 1;
  netlock.writeLock().lock();
  try {
    if (node instanceof InnerNode) {
      throw new IllegalArgumentException(
          "Not allow to add an inner node: " + NodeBase.getPath(node));
    }
    // ... (some lines elided in this excerpt; the local variable rack is
    // defined in the omitted code)
    if (clusterMap.add(node)) {
      LOG.info("Adding a new node: " + NodeBase.getPath(node));
      if (rack == null) {
        incrementRacks();
      }
      if (!(node instanceof InnerNode)) {
        if (depthOfAllLeaves == -1) {
          depthOfAllLeaves = node.getLevel();
        }
      }
    }
    LOG.debug("NetworkTopology became:\n{}", this);
  } finally {
    netlock.writeLock().unlock();
  }
}
{code}

The check

{code:java}
if (!(node instanceof InnerNode)) {
  if (depthOfAllLeaves == -1) {
    depthOfAllLeaves = node.getLevel();
  }
}
{code}

is redundant, because the method has already thrown for any inner node earlier:

{code:java}
if (node instanceof InnerNode) {
  throw new IllegalArgumentException(
      "Not allow to add an inner node: " + NodeBase.getPath(node));
}
{code}

so by the time the clusterMap.add(node) branch runs, node can never be an 
InnerNode. I therefore think the if (!(node instanceof InnerNode)) guard should 
be removed. Please correct me if I am wrong. Thank you.
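
For clarity, the branch with the redundant guard removed would look roughly like 
this (a sketch of the proposed change, not a committed patch):

{code:java}
if (clusterMap.add(node)) {
  LOG.info("Adding a new node: " + NodeBase.getPath(node));
  if (rack == null) {
    incrementRacks();
  }
  // node can never be an InnerNode here: the method already threw for that case.
  if (depthOfAllLeaves == -1) {
    depthOfAllLeaves = node.getLevel();
  }
}
{code}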

 


was (Author: leosun08):
hi [~cmccabe] [~weichiu] [~elgoiri] [~ayushtkn]

The NetworkTopology#add method:

{code:java}
/** Add a leaf node
 * Update node counter & rack counter if necessary
 * @param node node to be added; can be null
 * @exception IllegalArgumentException if add a node to a leave
 *            or node to be added is not a leaf
 */
public void add(Node node) {
  if (node == null) return;
  int newDepth = NodeBase.locationToDepth(node.getNetworkLocation()) + 1;
  netlock.writeLock().lock();
  try {
    if (node instanceof InnerNode) {
      throw new IllegalArgumentException(
          "Not allow to add an inner node: " + NodeBase.getPath(node));
    }
    // ... (some lines elided in this excerpt; the local variable rack is
    // defined in the omitted code)
    if (clusterMap.add(node)) {
      LOG.info("Adding a new node: " + NodeBase.getPath(node));
      if (rack == null) {
        incrementRacks();
      }
      if (!(node instanceof InnerNode)) {
        if (depthOfAllLeaves == -1) {
          depthOfAllLeaves = node.getLevel();
        }
      }
    }
    LOG.debug("NetworkTopology became:\n{}", this);
  } finally {
    netlock.writeLock().unlock();
  }
}
{code}

The check if (!(node instanceof InnerNode)) inside the clusterMap.add(node) 
branch is redundant, because the method has already thrown for any inner node 
earlier:

{code:java}
if (node instanceof InnerNode) {
  throw new IllegalArgumentException(
      "Not allow to add an inner node: " + NodeBase.getPath(node));
}
{code}

So I think the if (!(node instanceof InnerNode)) guard should be removed. Please 
correct me if I am wrong. Thank you.

 

> NetworkTopology: getLeaf should check for invalid topologies
> 
>
> Key: HADOOP-8159
> URL: https://issues.apache.org/jira/browse/HADOOP-8159
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Colin McCabe
>Assignee: Colin McCabe
>Priority: Major
> Fix For: 1.1.0, 2.0.0-alpha
>
> Attachments: HADOOP-8159-b1.005.patch, HADOOP-8159-b1.007.patch, 
> HADOOP-8159.005.patch, HADOOP-8159.006.patch, HADOOP-8159.007.patch, 
> HADOOP-8159.008.patch, HADOOP-8159.009.patch
>
>
> Currently, in NetworkTopology, getLeaf doesn't do too much validation on the 
> InnerNode object itself. This results in us getting ClassCastException 
> sometimes when the network topology is invalid. We should have a less 
> confusing exception message for this case.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Commented] (HADOOP-8159) NetworkTopology: getLeaf should check for invalid topologies

2019-10-17 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-8159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954214#comment-16954214
 ] 

Lisheng Sun commented on HADOOP-8159:
-

hi [~cmccabe] [~weichiu] [~elgoiri] [~ayushtkn]

The NetworkTopology#add method:

{code:java}
/** Add a leaf node
 * Update node counter & rack counter if necessary
 * @param node node to be added; can be null
 * @exception IllegalArgumentException if add a node to a leave
 *            or node to be added is not a leaf
 */
public void add(Node node) {
  if (node == null) return;
  int newDepth = NodeBase.locationToDepth(node.getNetworkLocation()) + 1;
  netlock.writeLock().lock();
  try {
    if (node instanceof InnerNode) {
      throw new IllegalArgumentException(
          "Not allow to add an inner node: " + NodeBase.getPath(node));
    }
    // ... (some lines elided in this excerpt; the local variable rack is
    // defined in the omitted code)
    if (clusterMap.add(node)) {
      LOG.info("Adding a new node: " + NodeBase.getPath(node));
      if (rack == null) {
        incrementRacks();
      }
      if (!(node instanceof InnerNode)) {
        if (depthOfAllLeaves == -1) {
          depthOfAllLeaves = node.getLevel();
        }
      }
    }
    LOG.debug("NetworkTopology became:\n{}", this);
  } finally {
    netlock.writeLock().unlock();
  }
}
{code}

The check if (!(node instanceof InnerNode)) inside the clusterMap.add(node) 
branch is redundant, because the method has already thrown for any inner node 
earlier:

{code:java}
if (node instanceof InnerNode) {
  throw new IllegalArgumentException(
      "Not allow to add an inner node: " + NodeBase.getPath(node));
}
{code}

So I think the if (!(node instanceof InnerNode)) guard should be removed. Please 
correct me if I am wrong. Thank you.

 

> NetworkTopology: getLeaf should check for invalid topologies
> 
>
> Key: HADOOP-8159
> URL: https://issues.apache.org/jira/browse/HADOOP-8159
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Colin McCabe
>Assignee: Colin McCabe
>Priority: Major
> Fix For: 1.1.0, 2.0.0-alpha
>
> Attachments: HADOOP-8159-b1.005.patch, HADOOP-8159-b1.007.patch, 
> HADOOP-8159.005.patch, HADOOP-8159.006.patch, HADOOP-8159.007.patch, 
> HADOOP-8159.008.patch, HADOOP-8159.009.patch
>
>
> Currently, in NetworkTopology, getLeaf doesn't do too much validation on the 
> InnerNode object itself. This results in us getting ClassCastException 
> sometimes when the network topology is invalid. We should have a less 
> confusing exception message for this case.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hddong commented on issue #1614: HADOOP-16615. Add password check for credential provider

2019-10-17 Thread GitBox
hddong commented on issue #1614: HADOOP-16615. Add password check for 
credential provider
URL: https://github.com/apache/hadoop/pull/1614#issuecomment-543434742
 
 
   retest this, please


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16598) Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate protobuf classes" to all active branches

2019-10-17 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954173#comment-16954173
 ] 

Michael Stack commented on HADOOP-16598:


I tried the 3.2 patch by applying it, cleaning the world, moving protoc out of 
my PATH and then building. It got this far before it started looking for protoc:

{code}
[INFO] --- hadoop-maven-plugins:3.2.2-SNAPSHOT:protoc (compile-protoc) @ 
hadoop-yarn-api ---
[WARNING] [protoc, --version] failed: java.io.IOException: Cannot run program 
"protoc": error=2, No such file or directory
[ERROR] stdout
...
{code}

Does hadoop-yarn-api need the pom fix?

> Backport "HADOOP-16558 [COMMON+HDFS] use protobuf-maven-plugin to generate 
> protobuf classes" to all active branches
> ---
>
> Key: HADOOP-16598
> URL: https://issues.apache.org/jira/browse/HADOOP-16598
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HADOOP-16598-branch-2-v1.patch, 
> HADOOP-16598-branch-2-v1.patch, HADOOP-16598-branch-2-v2.patch, 
> HADOOP-16598-branch-2.9-v1.patch, HADOOP-16598-branch-2.9-v1.patch, 
> HADOOP-16598-branch-2.9.patch, HADOOP-16598-branch-2.patch, 
> HADOOP-16598-branch-3.1.patch, HADOOP-16598-branch-3.2.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16600) StagingTestBase uses methods not available in Mockito 1.8.5 in branch-3.1

2019-10-17 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HADOOP-16600:
---
Hadoop Flags: Reviewed
  Resolution: Fixed
  Status: Resolved  (was: Patch Available)

Pushed to branch-3.1.

Resolving. Thanks for patch [~zhangduo] and to the reviewers.

> StagingTestBase uses methods not available in Mockito 1.8.5 in branch-3.1
> -
>
> Key: HADOOP-16600
> URL: https://issues.apache.org/jira/browse/HADOOP-16600
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.1.1, 3.1.2
>Reporter: Lisheng Sun
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.1.4
>
> Attachments: HADOOP-16600-branch-3.1-v1.patch, 
> HADOOP-16600.branch-3.1.v1.patch
>
>
> For details, see HADOOP-15398.
> Problem: hadoop trunk compilation is failing
> Root Cause:
> compilation error is coming from 
> org.apache.hadoop.fs.s3a.commit.staging.StagingTestBase. Compilation error is 
> "The method getArgumentAt(int, Class) is undefined for the 
> type InvocationOnMock".
> StagingTestBase uses the getArgumentAt(int, Class) method, 
> which is not available in mockito-all 1.8.5. The getArgumentAt(int, 
> Class) method is available only from version 2.0.0-beta, 
> as in the following code:
> {code:java}
> InitiateMultipartUploadRequest req = invocation.getArgumentAt(
> 0, InitiateMultipartUploadRequest.class);
> {code}
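
As an aside, a Mockito 1.8.5-compatible way to fetch the same argument (a sketch, 
not necessarily the fix that was applied) is to use InvocationOnMock#getArguments() 
with an explicit cast:

{code:java}
// getArguments() exists in Mockito 1.x, so this avoids getArgumentAt(int, Class).
InitiateMultipartUploadRequest req =
    (InitiateMultipartUploadRequest) invocation.getArguments()[0];
{code}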



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sidseth commented on a change in pull request #1661: HADOOP-16484. S3A to warn or fail if S3Guard is disabled

2019-10-17 Thread GitBox
sidseth commented on a change in pull request #1661: HADOOP-16484. S3A to warn 
or fail if S3Guard is disabled
URL: https://github.com/apache/hadoop/pull/1661#discussion_r336260424
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
 ##
 @@ -639,6 +639,14 @@ private Constants() {
   public static final String S3GUARD_METASTORE_DYNAMO
   = "org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore";
 
+  /**
+   * The warn level if S3Guard is disabled.
+   */
+  public static final String S3GUARD_DISABLED_WARN_LEVEL
+  = "org.apache.hadoop.fs.s3a.s3guard.disabled_warn_level";
+  public static final String DEFAULT_S3GUARD_DISABLED_WARN_LEVEL =
 
 Review comment:
   Default to SILENT? Let deployments configure this however they want to.
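   
   For illustration, the level-based handling under discussion could look roughly 
   like the following; the level names and surrounding parameters are assumptions 
   for the sketch, not the patch code:
   ```java
   // Sketch only: constant names reference the diff above; everything else is assumed.
   static void maybeWarnS3GuardDisabled(Configuration conf, String bucket, Logger log)
       throws PathIOException {
     String level = conf.getTrimmed(S3GUARD_DISABLED_WARN_LEVEL,
         DEFAULT_S3GUARD_DISABLED_WARN_LEVEL);
     switch (level.toUpperCase(Locale.ROOT)) {
     case "FAIL":
       throw new PathIOException(bucket, "S3Guard is disabled on this bucket");
     case "WARN":
       log.warn("S3Guard is disabled on bucket {}", bucket);
       break;
     case "SILENT":
     default:
       // say nothing; deployments opt in to warnings explicitly
       break;
     }
   }
   ```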


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sidseth commented on a change in pull request #1661: HADOOP-16484. S3A to warn or fail if S3Guard is disabled

2019-10-17 Thread GitBox
sidseth commented on a change in pull request #1661: HADOOP-16484. S3A to warn 
or fail if S3Guard is disabled
URL: https://github.com/apache/hadoop/pull/1661#discussion_r336260320
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
 ##
 @@ -1553,6 +1553,18 @@
   
 
 
+
 
 Review comment:
   Entry in core-default.xml is avoidable. This can be documented via javadoc. 
(Every new entry in core-default adds to the Configuration object memory 
overhead)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x

2019-10-17 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16152:
-
Release Note: 
Upgraded Jetty to 9.4.20.v20190813.
Downstream applications that still depend on Jetty 9.3.x may break.
Target Version/s: 3.3.0, 3.1.4, 3.2.2  (was: 3.3.0)

> Upgrade Eclipse Jetty version to 9.4.x
> --
>
> Key: HADOOP-16152
> URL: https://issues.apache.org/jira/browse/HADOOP-16152
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Yuming Wang
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-16152.002.patch, HADOOP-16152.002.patch, 
> HADOOP-16152.003.patch, HADOOP-16152.004.patch, HADOOP-16152.005.patch, 
> HADOOP-16152.006.patch, HADOOP-16152.v1.patch
>
>
> Some big data projects have already upgraded Jetty to 9.4.x, which causes some 
> compatibility issues.
> Spark: 
> [https://github.com/apache/spark/blob/02a0cdea13a5eebd27649a60d981de35156ba52c/pom.xml#L146]
> Calcite: 
> [https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87]
> Hive: HIVE-21211



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16466) Clean up the Assert usage in tests

2019-10-17 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16953942#comment-16953942
 ] 

Ayush Saxena commented on HADOOP-16466:
---

Thanx [~fengnanli] for the patch.
Overall Looks Good.
I think

{code:java}
121 assertTrue(null != dtSecretManager.retrievePassword(identifier));
362 assertTrue(null != dtSecretManager.retrievePassword(identifier));

{code}

These can be changed to {{assertNotNull()}}. Also, since this patch touches only one 
test class, the issue description could be updated to name that class.
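
For example (a sketch of the suggested change):

{code:java}
import static org.junit.Assert.assertNotNull;

// before: assertTrue(null != dtSecretManager.retrievePassword(identifier));
assertNotNull(dtSecretManager.retrievePassword(identifier));
{code}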

> Clean up the Assert usage in tests
> --
>
> Key: HADOOP-16466
> URL: https://issues.apache.org/jira/browse/HADOOP-16466
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Attachments: HADOOP-16466.001.patch
>
>
> This tickets started with https://issues.apache.org/jira/browse/HDFS-14449 
> and we would like to clean up all of the Assert usage in tests to make the 
> repo cleaner. This mainly is to make use static imports for the Assert 
> functions and use function call without the *Assert.* explicitly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #1586: HDDS-2240. Command line tool for OM HA.

2019-10-17 Thread GitBox
anuengineer commented on a change in pull request #1586: HDDS-2240. Command 
line tool for OM HA.
URL: https://github.com/apache/hadoop/pull/1586#discussion_r336122720
 
 

 ##
 File path: hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
 ##
 @@ -1097,11 +1097,34 @@ message UpdateGetS3SecretRequest {
 required string awsSecret = 2;
 }
 
+message OMServiceId {
+required string serviceID = 1;
+}
+
+/**
+  This proto is used to define the OM node Id and its ratis server state.
+*/
+message RoleInfo {
+required string omNodeID = 1;
+required string ratisServerRole = 2;
+}
+
+/**
+  This is used to get the Server States of OMs.
+*/
+message ServiceState {
+repeated RoleInfo roleInfos = 1;
+}
+
 /**
  The OM service that takes care of Ozone namespace.
 */
 service OzoneManagerService {
 // A client-to-OM RPC to send client requests to OM Ratis server
 rpc submitRequest(OMRequest)
   returns(OMResponse);
+
+// A client-to-OM RPC to get ratis server states of OMs
+rpc getServiceState(OMServiceId)
+  returns(ServiceState);
 }
 
 Review comment:
   Sorry, I did not see this comment. How can a client communicate with the OM? Does 
it not need to send the request to the leader? 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16652) Backport HADOOP-16587 - "Make AAD endpoint configurable on all Auth flows" to branch-2

2019-10-17 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16652:
---
Fix Version/s: (was: 2.10.0)
   2.11.0

> Backport HADOOP-16587 - "Make AAD endpoint configurable on all Auth flows" to 
> branch-2
> --
>
> Key: HADOOP-16652
> URL: https://issues.apache.org/jira/browse/HADOOP-16652
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
> Fix For: 2.11.0
>
>
> Make AAD endpoint configurable on all Auth flows



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16652) Backport HADOOP-16587 - "Make AAD endpoint configurable on all Auth flows" to branch-2

2019-10-17 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16953900#comment-16953900
 ] 

Jonathan Hung commented on HADOOP-16652:


branch-2 is currently 2.11.0 since we cut branch-2.10. Changing fix version.

> Backport HADOOP-16587 - "Make AAD endpoint configurable on all Auth flows" to 
> branch-2
> --
>
> Key: HADOOP-16652
> URL: https://issues.apache.org/jira/browse/HADOOP-16652
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
> Fix For: 2.10.0
>
>
> Make AAD endpoint configurable on all Auth flows



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16652) Backport HADOOP-16587 - "Make AAD endpoint configurable on all Auth flows" to branch-2

2019-10-17 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H updated HADOOP-16652:
--
Fix Version/s: (was: 2.0)
   2.10.0

> Backport HADOOP-16587 - "Make AAD endpoint configurable on all Auth flows" to 
> branch-2
> --
>
> Key: HADOOP-16652
> URL: https://issues.apache.org/jira/browse/HADOOP-16652
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
> Fix For: 2.10.0
>
>
> Make AAD endpoint configurable on all Auth flows



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16652) Backport HADOOP-16587 - "Make AAD endpoint configurable on all Auth flows" to branch-2

2019-10-17 Thread Bilahari T H (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16953894#comment-16953894
 ] 

Bilahari T H commented on HADOOP-16652:
---

Done

> Backport HADOOP-16587 - "Make AAD endpoint configurable on all Auth flows" to 
> branch-2
> --
>
> Key: HADOOP-16652
> URL: https://issues.apache.org/jira/browse/HADOOP-16652
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
> Fix For: 2.10.0
>
>
> Make AAD endpoint configurable on all Auth flows



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1661: HADOOP-16484. S3A to warn or fail if S3Guard is disabled

2019-10-17 Thread GitBox
hadoop-yetus commented on issue #1661: HADOOP-16484. S3A to warn or fail if 
S3Guard is disabled
URL: https://github.com/apache/hadoop/pull/1661#issuecomment-543251581
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 2103 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 71 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1089 | trunk passed |
   | +1 | compile | 1019 | trunk passed |
   | +1 | checkstyle | 161 | trunk passed |
   | +1 | mvnsite | 138 | trunk passed |
   | +1 | shadedclient | 1096 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 134 | trunk passed |
   | 0 | spotbugs | 72 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 196 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for patch |
   | +1 | mvninstall | 83 | the patch passed |
   | +1 | compile | 970 | the patch passed |
   | +1 | javac | 970 | the patch passed |
   | -0 | checkstyle | 159 | root: The patch generated 1 new + 14 unchanged - 0 
fixed = 15 total (was 14) |
   | +1 | mvnsite | 135 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 743 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 133 | the patch passed |
   | +1 | findbugs | 208 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 561 | hadoop-common in the patch failed. |
   | +1 | unit | 98 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 56 | The patch does not generate ASF License warnings. |
   | | | 9207 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.conf.TestCommonConfigurationFields |
   |   | hadoop.fs.shell.TestCopyFromLocal |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1661/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1661 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux a99c1307be78 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3990ffa |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1661/2/artifact/out/diff-checkstyle-root.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1661/2/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1661/2/testReport/ |
   | Max. process+thread count | 1606 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1661/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1661: HADOOP-16484. S3A to warn or fail if S3Guard is disabled

2019-10-17 Thread GitBox
hadoop-yetus commented on issue #1661: HADOOP-16484. S3A to warn or fail if 
S3Guard is disabled
URL: https://github.com/apache/hadoop/pull/1661#issuecomment-543206209
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 1800 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 72 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1091 | trunk passed |
   | +1 | compile | 1026 | trunk passed |
   | +1 | checkstyle | 161 | trunk passed |
   | +1 | mvnsite | 137 | trunk passed |
   | +1 | shadedclient | 1113 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 134 | trunk passed |
   | 0 | spotbugs | 70 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 195 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 83 | the patch passed |
   | +1 | compile | 965 | the patch passed |
   | +1 | javac | 965 | the patch passed |
   | -0 | checkstyle | 159 | root: The patch generated 1 new + 14 unchanged - 0 
fixed = 15 total (was 14) |
   | +1 | mvnsite | 136 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 766 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 134 | the patch passed |
   | +1 | findbugs | 212 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 521 | hadoop-common in the patch failed. |
   | +1 | unit | 95 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 57 | The patch does not generate ASF License warnings. |
   | | | 8908 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.conf.TestCommonConfigurationFields |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1661/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1661 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux 69b074cd44a9 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3990ffa |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1661/1/artifact/out/diff-checkstyle-root.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1661/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1661/1/testReport/ |
   | Max. process+thread count | 1440 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1661/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg commented on a change in pull request #1655: HADOOP-16629: support copyFile in s3afilesystem

2019-10-17 Thread GitBox
bgaborg commented on a change in pull request #1655: HADOOP-16629: support 
copyFile in s3afilesystem
URL: https://github.com/apache/hadoop/pull/1655#discussion_r336041063
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/CopyOperation.java
 ##
 @@ -0,0 +1,139 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.impl;
+
+import org.apache.hadoop.fs.FileAlreadyExistsException;
 
 Review comment:
   Please use the following import order:
   ```
   java.*
   javax.*
   --
   everything but org.apache
   --
   org.apache.*
   --
   static stuff, in order
   ```
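   
   As a concrete illustration of that ordering (the specific classes below are 
   arbitrary examples):
   ```java
   // Illustrative only: what matters is the grouping and order of the blocks.
   import java.io.IOException;
   import java.util.List;

   import javax.annotation.Nullable;

   import com.amazonaws.services.s3.model.CopyObjectRequest;

   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.Path;

   import static org.apache.hadoop.fs.s3a.Constants.S3GUARD_METASTORE_DYNAMO;
   ```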


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg commented on a change in pull request #1655: HADOOP-16629: support copyFile in s3afilesystem

2019-10-17 Thread GitBox
bgaborg commented on a change in pull request #1655: HADOOP-16629: support 
copyFile in s3afilesystem
URL: https://github.com/apache/hadoop/pull/1655#discussion_r336040359
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractCopy.java
 ##
 @@ -0,0 +1,91 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.contract.s3a;
+
+import org.apache.hadoop.conf.Configuration;
 
 Review comment:
   Please use the following import order:
   ```
   java.*
   javax.*
   --
   everything but org.apache
   --
   org.apache.*
   --
   static stuff, in order
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg commented on issue #1591: HADOOP-16629: support copyFile in s3afilesystem

2019-10-17 Thread GitBox
bgaborg commented on issue #1591: HADOOP-16629: support copyFile in 
s3afilesystem
URL: https://github.com/apache/hadoop/pull/1591#issuecomment-543201386
 
 
   can we close this PR?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg opened a new pull request #1661: HADOOP-16484. S3A to warn or fail if S3Guard is disabled

2019-10-17 Thread GitBox
bgaborg opened a new pull request #1661: HADOOP-16484. S3A to warn or fail if 
S3Guard is disabled
URL: https://github.com/apache/hadoop/pull/1661
 
 
   Change-Id: I368ec8d0395dd15272a8ed718823992d472028f3
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16579) Upgrade to Apache Curator 4.2.0 and ZooKeeper 3.5.6 in Hadoop

2019-10-17 Thread Mate Szalay-Beko (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mate Szalay-Beko updated HADOOP-16579:
--
Description: 
*Update:* the original idea was to only update Curator but keep the old 
ZooKeeper version in Hadoop. However, we encountered some run-time 
backward-incompatibility during unit tests with Curator 4.2.0 and ZooKeeper 
3.5.5. We haven't really investigated these issues deeply, but upgraded to 
ZooKeeper 3.5.5 (and later to 3.5.6). We had to do some minor fixes in the unit 
tests (and also had to change some deprecated Curator API calls), but [the 
latest PR|https://github.com/apache/hadoop/pull/1656] seems to be stable.

ZooKeeper 3.5.6 just got released during our work. (I think the official 
announcement will get out maybe tomorrow, but it is already available in maven 
central or on the [Apache ZooKeeper ftp 
site|https://www-eu.apache.org/dist/zookeeper/]). It is considered to be a 
stable version, contains some minor fixes and improvements, plus some CVE 
fixes. See the [release 
notes|https://github.com/apache/zookeeper/blob/branch-3.5.6/zookeeper-docs/src/main/resources/markdown/releasenotes.md].

 

Currently in Hadoop we are using [ZooKeeper version 
3.4.13|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L90].
 ZooKeeper 3.5.5 is the latest stable Apache ZooKeeper release. It contains 
many new features (including SSL related improvements which can be very 
important for production use; see [the release 
notes|https://zookeeper.apache.org/doc/r3.5.5/releasenotes.html]).

Apache Curator is a high-level ZooKeeper client library that makes it easier 
to use the low-level ZooKeeper API. Currently [in Hadoop we are using Curator 
2.13.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L91]
 and [in Ozone we use Curator 
2.12.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/pom.ozone.xml#L146].

Curator 2.x supports only the ZooKeeper 3.4.x releases, while Curator 3.x 
is compatible only with the new ZooKeeper 3.5.x releases. Fortunately, the 
latest Curator 4.x versions are compatible with both ZooKeeper 3.4.x and 3.5.x. 
(see [the relevant Curator 
page|https://curator.apache.org/zk-compatibility.html]). Many Apache projects 
have already migrated to Curator 4 (like HBase, Phoenix, Druid, etc.), other 
components are doing it right now (e.g. Hive).

*The aims of this task are* to:
 - change Curator version in Hadoop to the latest stable 4.x version (currently 
4.2.0)
 - also make sure we don't have multiple ZooKeeper versions in the classpath to 
avoid runtime problems (it is 
[recommended|https://curator.apache.org/zk-compatibility.html] to exclude the 
ZooKeeper that comes with Curator, so that there will be only a single 
ZooKeeper version used at runtime in Hadoop)

In this ticket we still don't want to change the default ZooKeeper version in 
Hadoop, we only want to make it possible for the community to be able to build 
/ use Hadoop with the new ZooKeeper (e.g. if they need to secure the ZooKeeper 
communication with SSL, which is only supported in the new ZooKeeper version). 
Upgrading to Curator 4.x should keep Hadoop compatible with both 
ZooKeeper 3.4 and 3.5.

  was:
*Update:* the original idea was to only update Curator but keep the old 
ZooKeeper version in Hadoop. However, we encountered some run-time 
backward-incompatibility during unit tests with Curator 4.2.0 and ZooKeeper 
3.5.5. We haven't really investigated these issues, but upgraded to ZooKeeper 
3.5.5 (and later to 3.5.6). We had to do some minor fixes in the unit tests 
(and also had to change some deprecated Curator API calls), but the latest PR 
seems to be stable.

ZooKeeper 3.5.6 just got released during our work. (I think the official 
announcement will get out maybe tomorrow, but it is already available in maven 
central or on the [apache zookeeper ftp 
site|[https://www-eu.apache.org/dist/zookeeper/]]). It is considered to be a 
stable version, contains some minor fixes and improvements, plus some CVE 
fixes. See the release notes: 
[https://github.com/apache/zookeeper/blob/branch-3.5.6/zookeeper-docs/src/main/resources/markdown/releasenotes.md]

 

Currently in Hadoop we are using [ZooKeeper version 
3.4.13|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L90].
 ZooKeeper 3.5.5 is the latest stable Apache ZooKeeper release. It contains 
many new features (including SSL related improvements which can be very 
important for production use; see [the release 
notes|https://zookeeper.apache.org/doc/r3.5.5/releasenotes.html]).

Apache Curator is a high level ZooKeeper client library, that makes it easier 
to use the low level ZooKeeper API. Currently [in Hadoop we are using Curator 

[jira] [Updated] (HADOOP-16579) Upgrade to Apache Curator 4.2.0 and ZooKeeper 3.5.6 in Hadoop

2019-10-17 Thread Mate Szalay-Beko (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mate Szalay-Beko updated HADOOP-16579:
--
Description: 
*Update:* the original idea was to only update Curator but keep the old 
ZooKeeper version in Hadoop. However, we encountered some run-time 
backward-incompatibility during unit tests with Curator 4.2.0 and ZooKeeper 
3.5.5. We haven't really investigated these issues, but upgraded to ZooKeeper 
3.5.5 (and later to 3.5.6). We had to do some minor fixes in the unit tests 
(and also had to change some deprecated Curator API calls), but the latest PR 
seems to be stable.

ZooKeeper 3.5.6 just got released during our work. (I think the official 
announcement will get out maybe tomorrow, but it is already available in maven 
central or on the [apache zookeeper ftp 
site|[https://www-eu.apache.org/dist/zookeeper/]]). It is considered to be a 
stable version, contains some minor fixes and improvements, plus some CVE 
fixes. See the release notes: 
[https://github.com/apache/zookeeper/blob/branch-3.5.6/zookeeper-docs/src/main/resources/markdown/releasenotes.md]

 

Currently in Hadoop we are using [ZooKeeper version 
3.4.13|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L90].
 ZooKeeper 3.5.5 is the latest stable Apache ZooKeeper release. It contains 
many new features (including SSL related improvements which can be very 
important for production use; see [the release 
notes|https://zookeeper.apache.org/doc/r3.5.5/releasenotes.html]).

Apache Curator is a high level ZooKeeper client library, that makes it easier 
to use the low level ZooKeeper API. Currently [in Hadoop we are using Curator 
2.13.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L91]
 and [in Ozone we use Curator 
2.12.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/pom.ozone.xml#L146].

Curator 2.x is supporting only the ZooKeeper 3.4.x releases, while Curator 3.x 
is compatible only with the new ZooKeeper 3.5.x releases. Fortunately, the 
latest Curator 4.x versions are compatible with both ZooKeeper 3.4.x and 3.5.x. 
(see [the relevant Curator 
page|https://curator.apache.org/zk-compatibility.html]). Many Apache projects 
have already migrated to Curator 4 (like HBase, Phoenix, Druid, etc.), other 
components are doing it right now (e.g. Hive).

*The aims of this task are* to:
 - change Curator version in Hadoop to the latest stable 4.x version (currently 
4.2.0)
 - also make sure we don't have multiple ZooKeeper versions in the classpath to 
avoid runtime problems (it is 
[recommended|https://curator.apache.org/zk-compatibility.html] to exclude the 
ZooKeeper which come with Curator, so that there will be only a single 
ZooKeeper version used runtime in Hadoop)

In this ticket we still don't want to change the default ZooKeeper version in 
Hadoop, we only want to make it possible for the community to be able to build 
/ use Hadoop with the new ZooKeeper (e.g. if they need to secure the ZooKeeper 
communication with SSL, what is only supported in the new ZooKeeper version). 
Upgrading to Curator 4.x should keep Hadoop to be compatible with both 
ZooKeeper 3.4 and 3.5.

  was:
*Update:* the original idea was to only update Curator but keep the old 
ZooKeeper version in Hadoop. However, we encountered some run-time 
backward-incompatibility during unit tests with Curator 4.2.0 and ZooKeeper 
3.5.5. We haven't really investigated these issues, but upgraded to ZooKeeper 
3.5.5 (and later to 3.5.6). We had to do some minor fixes in the unit tests 
(and also had to change some deprecated Curator API calls), but the latest PR 
seems to be stable.

ZooKeeper 3.5.6 just got released. (I think the official announcement will get 
out maybe tomorrow, but it is already available in maven central or on the 
apache zookeeper ftp site). It is considered to be a stable version, contains 
some minor fixes and improvements, plus some CVE fixes. See the release note: 
[https://github.com/apache/zookeeper/blob/branch-3.5.6/zookeeper-docs/src/main/resources/markdown/releasenotes.md]

 

Currently in Hadoop we are using [ZooKeeper version 
3.4.13|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L90].
 ZooKeeper 3.5.5 is the latest stable Apache ZooKeeper release. It contains 
many new features (including SSL related improvements which can be very 
important for production use; see [the release 
notes|https://zookeeper.apache.org/doc/r3.5.5/releasenotes.html]).

Apache Curator is a high level ZooKeeper client library, that makes it easier 
to use the low level ZooKeeper API. Currently [in Hadoop we are using Curator 
2.13.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L91]
 and [in Ozone 

[jira] [Updated] (HADOOP-16579) Upgrade to Apache Curator 4.2.0 and ZooKeeper 3.5.6 in Hadoop

2019-10-17 Thread Mate Szalay-Beko (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mate Szalay-Beko updated HADOOP-16579:
--
Description: 
*Update:* the original idea was to only update Curator but keep the old 
ZooKeeper version in Hadoop. However, we encountered some run-time 
backward-incompatibility during unit tests with Curator 4.2.0 and ZooKeeper 
3.5.5. We haven't really investigated these issues, but upgraded to ZooKeeper 
3.5.5 (and later to 3.5.6). We had to do some minor fixes in the unit tests 
(and also had to change some deprecated Curator API calls), but the latest PR 
seems to be stable.

ZooKeeper 3.5.6 just got released. (I think the official announcement will get 
out maybe tomorrow, but it is already available in maven central or on the 
apache zookeeper ftp site). It is considered to be a stable version, contains 
some minor fixes and improvements, plus some CVE fixes. See the release note: 
[https://github.com/apache/zookeeper/blob/branch-3.5.6/zookeeper-docs/src/main/resources/markdown/releasenotes.md]

 

Currently in Hadoop we are using [ZooKeeper version 
3.4.13|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L90].
 ZooKeeper 3.5.5 is the latest stable Apache ZooKeeper release. It contains 
many new features (including SSL related improvements which can be very 
important for production use; see [the release 
notes|https://zookeeper.apache.org/doc/r3.5.5/releasenotes.html]).

Apache Curator is a high level ZooKeeper client library, that makes it easier 
to use the low level ZooKeeper API. Currently [in Hadoop we are using Curator 
2.13.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L91]
 and [in Ozone we use Curator 
2.12.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/pom.ozone.xml#L146].

Curator 2.x is supporting only the ZooKeeper 3.4.x releases, while Curator 3.x 
is compatible only with the new ZooKeeper 3.5.x releases. Fortunately, the 
latest Curator 4.x versions are compatible with both ZooKeeper 3.4.x and 3.5.x. 
(see [the relevant Curator 
page|https://curator.apache.org/zk-compatibility.html]). Many Apache projects 
have already migrated to Curator 4 (like HBase, Phoenix, Druid, etc.), other 
components are doing it right now (e.g. Hive).

*The aims of this task are* to:
 - change Curator version in Hadoop to the latest stable 4.x version (currently 
4.2.0)
 - also make sure we don't have multiple ZooKeeper versions in the classpath to 
avoid runtime problems (it is 
[recommended|https://curator.apache.org/zk-compatibility.html] to exclude the 
ZooKeeper which come with Curator, so that there will be only a single 
ZooKeeper version used runtime in Hadoop)

In this ticket we still don't want to change the default ZooKeeper version in 
Hadoop, we only want to make it possible for the community to be able to build 
/ use Hadoop with the new ZooKeeper (e.g. if they need to secure the ZooKeeper 
communication with SSL, what is only supported in the new ZooKeeper version). 
Upgrading to Curator 4.x should keep Hadoop to be compatible with both 
ZooKeeper 3.4 and 3.5.

  was:
Currently in Hadoop we are using [ZooKeeper version 
3.4.13|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L90].
 ZooKeeper 3.5.5 is the latest stable Apache ZooKeeper release. It contains 
many new features (including SSL related improvements which can be very 
important for production use; see [the release 
notes|https://zookeeper.apache.org/doc/r3.5.5/releasenotes.html]).

Apache Curator is a high level ZooKeeper client library, that makes it easier 
to use the low level ZooKeeper API. Currently [in Hadoop we are using Curator 
2.13.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L91]
 and [in Ozone we use Curator 
2.12.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/pom.ozone.xml#L146].

Curator 2.x is supporting only the ZooKeeper 3.4.x releases, while Curator 3.x 
is compatible only with the new ZooKeeper 3.5.x releases. Fortunately, the 
latest Curator 4.x versions are compatible with both ZooKeeper 3.4.x and 3.5.x. 
(see [the relevant Curator 
page|https://curator.apache.org/zk-compatibility.html]). Many Apache projects 
have already migrated to Curator 4 (like HBase, Phoenix, Druid, etc.), other 
components are doing it right now (e.g. Hive).

*The aims of this task are* to:
 - change Curator version in Hadoop to the latest stable 4.x version (currently 
4.2.0)
 - also make sure we don't have multiple ZooKeeper versions in the classpath to 
avoid runtime problems (it is 
[recommended|https://curator.apache.org/zk-compatibility.html] to exclude the 
ZooKeeper which come with Curator, so that there will be only a single 

[jira] [Updated] (HADOOP-16579) Upgrade to Apache Curator 4.2.0 and ZooKeeper 3.5.6 in Hadoop

2019-10-17 Thread Mate Szalay-Beko (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mate Szalay-Beko updated HADOOP-16579:
--
Summary: Upgrade to Apache Curator 4.2.0 and ZooKeeper 3.5.6 in Hadoop  
(was: Upgrade to Apache Curator 4.2.0 in Hadoop)

> Upgrade to Apache Curator 4.2.0 and ZooKeeper 3.5.6 in Hadoop
> -
>
> Key: HADOOP-16579
> URL: https://issues.apache.org/jira/browse/HADOOP-16579
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Mate Szalay-Beko
>Assignee: Norbert Kalmár
>Priority: Major
>
> Currently in Hadoop we are using [ZooKeeper version 
> 3.4.13|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L90].
>  ZooKeeper 3.5.5 is the latest stable Apache ZooKeeper release. It contains 
> many new features (including SSL related improvements which can be very 
> important for production use; see [the release 
> notes|https://zookeeper.apache.org/doc/r3.5.5/releasenotes.html]).
> Apache Curator is a high-level ZooKeeper client library that makes it easier 
> to use the low-level ZooKeeper API. Currently [in Hadoop we are using Curator 
> 2.13.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L91]
>  and [in Ozone we use Curator 
> 2.12.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/pom.ozone.xml#L146].
> Curator 2.x supports only the ZooKeeper 3.4.x releases, while Curator 
> 3.x is compatible only with the new ZooKeeper 3.5.x releases. Fortunately, 
> the latest Curator 4.x versions are compatible with both ZooKeeper 3.4.x and 
> 3.5.x. (see [the relevant Curator 
> page|https://curator.apache.org/zk-compatibility.html]). Many Apache projects 
> have already migrated to Curator 4 (like HBase, Phoenix, Druid, etc.), other 
> components are doing it right now (e.g. Hive).
> *The aims of this task are* to:
>  - change Curator version in Hadoop to the latest stable 4.x version 
> (currently 4.2.0)
>  - also make sure we don't have multiple ZooKeeper versions in the classpath 
> to avoid runtime problems (it is 
> [recommended|https://curator.apache.org/zk-compatibility.html] to exclude the 
> ZooKeeper that comes with Curator, so that there will be only a single 
> ZooKeeper version used at runtime in Hadoop)
> In this ticket we still don't want to change the default ZooKeeper version in 
> Hadoop, we only want to make it possible for the community to be able to 
> build / use Hadoop with the new ZooKeeper (e.g. if they need to secure the 
> ZooKeeper communication with SSL, which is only supported in the new ZooKeeper 
> version). Upgrading to Curator 4.x should keep Hadoop compatible with 
> both ZooKeeper 3.4 and 3.5.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] symat commented on issue #1656: HADOOP-16579. Upgrade to Curator 4.2.0 and ZooKeeper 3.5.5

2019-10-17 Thread GitBox
symat commented on issue #1656: HADOOP-16579. Upgrade to Curator 4.2.0 and 
ZooKeeper 3.5.5
URL: https://github.com/apache/hadoop/pull/1656#issuecomment-543138047
 
 
   Looks like the build succeeded (the only Findbugs issue was introduced by an 
independent recent commit of YARN-9773)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16424) S3Guard fsck: Check internal consistency of the MetadataStore

2019-10-17 Thread Gabor Bota (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16953649#comment-16953649
 ] 

Gabor Bota commented on HADOOP-16424:
-

Our code should not be creating orphan entries. If we have an orphan entry, then 
it's a bug in the production code.
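
For illustration, a minimal sketch of what the orphan-entry check amounts to, 
assuming a hypothetical in-memory view of the store (path -> directory flag); 
the real S3Guard MetadataStore API and the fsck implementation differ:
{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.fs.Path;

// Sketch only: "entries" stands in for whatever view the check builds of the
// MetadataStore. An entry is an orphan when its parent path has no entry.
public class OrphanCheckSketch {
  public static List<Path> findOrphans(Map<Path, Boolean> entries) {
    List<Path> orphans = new ArrayList<>();
    for (Path path : entries.keySet()) {
      Path parent = path.getParent();   // null for the root path
      if (parent != null && !entries.containsKey(parent)) {
        orphans.add(path);
      }
    }
    return orphans;
  }
}
{code}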


> S3Guard fsck: Check internal consistency of the MetadataStore
> -
>
> Key: HADOOP-16424
> URL: https://issues.apache.org/jira/browse/HADOOP-16424
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> The internal consistency should be checked, e.g. for orphaned entries, which 
> can cause trouble at runtime and in testing.






[jira] [Comment Edited] (HADOOP-16424) S3Guard fsck: Check internal consistency of the MetadataStore

2019-10-17 Thread Gabor Bota (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898870#comment-16898870
 ] 

Gabor Bota edited comment on HADOOP-16424 at 10/17/19 11:35 AM:


Tasks to do here: 
* find orphan entries (entries without a parent)
* find if a file's parent is not a directory (so the parent is a file)
* warn: no lastUpdated field
* entries where the parent is a tombstone



was (Author: gabor.bota):
Tasks to do here: 
* find orphan entries (entries without a parent)
* find if a file's parent is not a directory (so the parent is a file)
* warn: no lastUpdated field

> S3Guard fsck: Check internal consistency of the MetadataStore
> -
>
> Key: HADOOP-16424
> URL: https://issues.apache.org/jira/browse/HADOOP-16424
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> The internal consistency should be checked, e.g. for orphaned entries, which 
> can cause trouble at runtime and in testing.






[GitHub] [hadoop] hadoop-yetus commented on issue #1656: HADOOP-16579. Upgrade to Curator 4.2.0 and ZooKeeper 3.5.5

2019-10-17 Thread GitBox
hadoop-yetus commented on issue #1656: HADOOP-16579. Upgrade to Curator 4.2.0 
and ZooKeeper 3.5.5
URL: https://github.com/apache/hadoop/pull/1656#issuecomment-543130842
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 44 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 75 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1100 | trunk passed |
   | +1 | compile | 1077 | trunk passed |
   | +1 | checkstyle | 170 | trunk passed |
   | +1 | mvnsite | 319 | trunk passed |
   | +1 | shadedclient | 1283 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 275 | trunk passed |
   | 0 | spotbugs | 114 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | 0 | findbugs | 35 | branch/hadoop-project no findbugs output file 
(findbugsXml.xml) |
   | -1 | findbugs | 112 | 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in trunk has 1 extant findbugs warnings. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 34 | Maven dependency ordering for patch |
   | +1 | mvninstall | 198 | the patch passed |
   | +1 | compile | 1021 | the patch passed |
   | +1 | javac | 1021 | the patch passed |
   | -0 | checkstyle | 173 | root: The patch generated 26 new + 553 unchanged - 
6 fixed = 579 total (was 559) |
   | +1 | mvnsite | 313 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 761 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 287 | the patch passed |
   | 0 | findbugs | 32 | hadoop-project has no data from findbugs |
   ||| _ Other Tests _ |
   | +1 | unit | 30 | hadoop-project in the patch passed. |
   | +1 | unit | 201 | hadoop-auth in the patch passed. |
   | +1 | unit | 575 | hadoop-common in the patch passed. |
   | +1 | unit | 82 | hadoop-registry in the patch passed. |
   | +1 | unit | 168 | hadoop-yarn-server-common in the patch passed. |
   | +1 | unit | 5025 | hadoop-yarn-server-resourcemanager in the patch passed. 
|
   | +1 | asflicense | 53 | The patch does not generate ASF License warnings. |
   | | | 14072 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1656/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1656 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 07449dc4efb8 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3990ffa |
   | Default Java | 1.8.0_222 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1656/4/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-warnings.html
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1656/4/artifact/out/diff-checkstyle-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1656/4/testReport/ |
   | Max. process+thread count | 1395 (vs. ulimit of 5500) |
   | modules | C: hadoop-project hadoop-common-project/hadoop-auth 
hadoop-common-project/hadoop-common hadoop-common-project/hadoop-registry 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1656/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16652) Backport HADOOP-16587 - "Make AAD endpoint configurable on all Auth flows" to branch-2

2019-10-17 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16953631#comment-16953631
 ] 

Steve Loughran commented on HADOOP-16652:
-

not 2.0, clearly. 2.10?

> Backport HADOOP-16587 - "Make AAD endpoint configurable on all Auth flows" to 
> branch-2
> --
>
> Key: HADOOP-16652
> URL: https://issues.apache.org/jira/browse/HADOOP-16652
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
> Fix For: 2.0
>
>
> Make AAD endpoint configurable on all Auth flows






[jira] [Commented] (HADOOP-16601) Add support for hardware crc32 of nativetask checksums on aarch64 arch

2019-10-17 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16953630#comment-16953630
 ] 

Steve Loughran commented on HADOOP-16601:
-

* It's best if you file a GitHub PR with this JIRA ID at the start of the title.
* Look at the Yetus complaints about style; swap tabs for spaces.

We really love unit tests, and new patches that improve that coverage.

* You MUST run the native tests on arm64 and give us the results; Yetus won't 
be testing your code.
* If you can think of new tests to put into test_bulk_crc32.c, that'd be 
good too. Are there ways to break things? 

> Add support for hardware crc32 of nativetask checksums on aarch64 arch
> --
>
> Key: HADOOP-16601
> URL: https://issues.apache.org/jira/browse/HADOOP-16601
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.1
>Reporter: MacChen01
>Priority: Major
>  Labels: performance
> Fix For: site
>
> Attachments: HADOOP-16601.patch
>
>
> Add support for aarch64 CRC instructions in the nativetask module to optimize 
> CRC32 and CRC32C.
> Using the benchmark tool nttest, the improvement is quite substantial:  
> *CRC32 Zlib polynomial 0x04C11DB7*
> |KeyValueType-IO|Before(MB/s)|After(MB/s)|Improvement|
> |TextType-Write|425.98|602.92|+42%|
> |TextType-Read|796.06|1716.59|+116%|
> |BytesType-Write|474.25|686.84|+45%|
> |BytesType-Read|844.96|1955.03|+131%|
> |UnknownType-Write|434.84|608.81|+40%|
> |UnknownType-Read|805.76|1733.82|+115%|
>  
>   
>  *CRC32C  Castagnoli polynomial 0x1EDC6F41*
>  
> |KeyValueType-IO|Before(MB/s)|After(MB/s)|Improvement|
> |TextType-Write|423.39|606.55|+43%|
> |TextType-Read|799.20|1783.28|+123%|
> |BytesType-Write|473.95|696.47|+47%|
> |BytesType-Read|846.30|2018.06|+138%|
> |UnknownType-Write|434.07|612.31|+41%|
> |UnknownType-Read|807.16|1783.95|+121%|
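
For reference, an illustrative Java-side snippet computing the two checksum 
kinds the tables above compare (CRC32, zlib polynomial 0x04C11DB7, and CRC32C, 
Castagnoli polynomial 0x1EDC6F41); the patch itself changes the native 
nativetask code to use aarch64 hardware CRC instructions, which this sketch 
does not show:
{code:java}
import java.util.zip.CRC32;
import java.util.zip.Checksum;

import org.apache.hadoop.util.PureJavaCrc32C;

public class CrcKindsExample {
  public static void main(String[] args) {
    byte[] data = "hello nativetask".getBytes();

    Checksum crc32 = new CRC32();            // zlib polynomial 0x04C11DB7
    crc32.update(data, 0, data.length);

    Checksum crc32c = new PureJavaCrc32C();  // Castagnoli polynomial 0x1EDC6F41
    crc32c.update(data, 0, data.length);

    System.out.printf("CRC32=%x CRC32C=%x%n",
        crc32.getValue(), crc32c.getValue());
  }
}
{code}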






[jira] [Commented] (HADOOP-16658) S3A connector does not support including the token renewer in the token identifier

2019-10-17 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16953530#comment-16953530
 ] 

Steve Loughran commented on HADOOP-16658:
-

Phil, create a GitHub PR and link it to this JIRA by putting the JIRA ID in the 
title. Thanks.

> S3A connector does not support including the token renewer in the token 
> identifier
> --
>
> Key: HADOOP-16658
> URL: https://issues.apache.org/jira/browse/HADOOP-16658
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: hadoop-aws
>Affects Versions: 3.3.0
>Reporter: Philip Zampino
>Priority: Major
>
> To support management of delegation token expirations by way of the Yarn 
> TokenRenewer facility, delegation token identifiers MUST include a valid 
> renewer or the associated TokenRenewer implementation will be ignored.
> Currently, the renewer isn't propagated to the bindings for token creation, 
> which means the tokens can't ever have the renewer set on them.
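
As a hedged sketch (class and token-kind names below are made up, not the S3A 
connector's actual classes), this is the shape of an identifier that carries 
the renewer, which the renewal machinery later validates against:
{code:java}
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier;

// Hypothetical identifier: the point is only that the renewer passed in at
// token-creation time is stored in the identifier, so a TokenRenewer
// registered for this kind can later renew or cancel the token.
public class ExampleS3ATokenIdentifier extends AbstractDelegationTokenIdentifier {
  public static final Text KIND = new Text("EXAMPLE_S3A_DELEGATION_TOKEN");

  public ExampleS3ATokenIdentifier() {
    super();
  }

  public ExampleS3ATokenIdentifier(Text owner, Text renewer, Text realUser) {
    super(owner, renewer, realUser);  // renewer must be non-empty to be usable
  }

  @Override
  public Text getKind() {
    return KIND;
  }
}
{code}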






[jira] [Updated] (HADOOP-16657) Move remaining log4j APIs over to slf4j in hadoop-common.

2019-10-17 Thread Abhishek Modi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi updated HADOOP-16657:
---
Description: There are some remaining classes where log4j's APIs are still 
being used. Created this Jira to move them to log4j2.
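
For reference, a typical change of this kind looks like the following (class 
name is hypothetical; the actual classes to be migrated are not listed here):
{code:java}
// Before: direct use of the log4j API
//   import org.apache.log4j.Logger;
//   private static final Logger LOG = Logger.getLogger(ExampleService.class);
//   LOG.info("Processed " + count + " records");

// After: slf4j API with parameterized logging
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ExampleService {
  private static final Logger LOG = LoggerFactory.getLogger(ExampleService.class);

  void process(int count) {
    // placeholders avoid string concatenation when the log level is disabled
    LOG.info("Processed {} records", count);
  }
}
{code}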

> Move remaining log4j APIs over to slf4j in hadoop-common.
> -
>
> Key: HADOOP-16657
> URL: https://issues.apache.org/jira/browse/HADOOP-16657
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Minni Mittal
>Priority: Major
>
> There are some remaining classes where log4j's APIs are still being used. 
> Created this Jira to move them to log4j2.






[jira] [Updated] (HADOOP-16657) Move remaining log4j APIs over to slf4j in hadoop-common.

2019-10-17 Thread Abhishek Modi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi updated HADOOP-16657:
---
Summary: Move remaining log4j APIs over to slf4j in hadoop-common.  (was: 
Move log4j APIs over to slf4j in hadoop-common - Part-2)

> Move remaining log4j APIs over to slf4j in hadoop-common.
> -
>
> Key: HADOOP-16657
> URL: https://issues.apache.org/jira/browse/HADOOP-16657
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Minni Mittal
>Priority: Major
>







[GitHub] [hadoop] symat commented on issue #1656: HADOOP-16579. Upgrade to Curator 4.2.0 and ZooKeeper 3.5.5

2019-10-17 Thread GitBox
symat commented on issue #1656: HADOOP-16579. Upgrade to Curator 4.2.0 and 
ZooKeeper 3.5.5
URL: https://github.com/apache/hadoop/pull/1656#issuecomment-543050016
 
 
   ZooKeeper 3.5.6 just got released, so I gave it a try. If something goes 
wrong, I will revert. 
   
   I don't think the official announcement has been made, but the new ZooKeeper 
is already available on Maven Central. It contains some minor fixes and 
improvements, plus some CVE fixes. See the release note: 
https://github.com/apache/zookeeper/blob/branch-3.5.6/zookeeper-docs/src/main/resources/markdown/releasenotes.md

