[jira] [Work logged] (HDFS-15940) Some tests in TestBlockRecovery are consistently failing

2021-04-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15940?focusedWorklogId=576523&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-576523
 ]

ASF GitHub Bot logged work on HDFS-15940:
-

Author: ASF GitHub Bot
Created on: 04/Apr/21 05:20
Start Date: 04/Apr/21 05:20
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on a change in pull request #2844:
URL: https://github.com/apache/hadoop/pull/2844#discussion_r606748973



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery2.java
##
@@ -0,0 +1,464 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.datanode;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FileUtil;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.ha.HAServiceProtocol;
+import org.apache.hadoop.hdfs.AppendTestUtil;
+import org.apache.hadoop.hdfs.DFSClient;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.DFSTestUtil;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.HdfsConfiguration;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.StripedFileTestUtil;
+import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
+import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;
+import org.apache.hadoop.hdfs.protocol.LocatedBlock;
+import org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB;
+import org.apache.hadoop.hdfs.server.namenode.FSNamesystem;
+import org.apache.hadoop.hdfs.server.protocol.BlockRecoveryCommand;
+import org.apache.hadoop.hdfs.server.protocol.DatanodeCommand;
+import org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration;
+import org.apache.hadoop.hdfs.server.protocol.HeartbeatResponse;
+import org.apache.hadoop.hdfs.server.protocol.NNHAStatusHeartbeat;
+import org.apache.hadoop.hdfs.server.protocol.NamenodeProtocols;
+import org.apache.hadoop.hdfs.server.protocol.NamespaceInfo;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.hadoop.util.AutoCloseableLock;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TestName;
+import org.mockito.Mockito;
+import org.mockito.invocation.InvocationOnMock;
+import org.mockito.stubbing.Answer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.slf4j.event.Level;
+
+import java.io.File;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.net.InetSocketAddress;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Random;
+import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeoutException;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicReference;
+
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BLOCK_SIZE_KEY;

Review comment:
   Ahh, my bad. Will fix it.

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery1.java
##
@@ -122,9 +108,9 @@
 /**
  * This tests if sync all replicas in block recovery works correctly.
  */
-public class TestBlockRecovery {
+public class TestBlockRecovery1 {

Review comment:
   Sure, sounds good. Will update.

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery1.java
##
@@ -336,191 +322,194 @@ public void testFinalizedReplicas () throws IOException {
 if(LOG.isDebugEnabled()) {
   LOG.debug("Running " + GenericTestUtils.getMethodName());
 }
-ReplicaRecoveryInfo replica1 = new ReplicaRecoveryInfo(BLOCK_ID, 
-REPLICA_LEN1, GEN_STAMP-1, ReplicaState.FINALIZED);
-ReplicaRecoveryInfo replica2 = new ReplicaRecoveryInfo(BLOCK_ID, 
-

[jira] [Work logged] (HDFS-15930) Fix some @param errors in DirectoryScanner.

2021-04-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15930?focusedWorklogId=576517&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-576517
 ]

ASF GitHub Bot logged work on HDFS-15930:
-

Author: ASF GitHub Bot
Created on: 04/Apr/21 03:30
Start Date: 04/Apr/21 03:30
Worklog Time Spent: 10m 
  Work Description: qizhu-lucas commented on pull request #2829:
URL: https://github.com/apache/hadoop/pull/2829#issuecomment-812964703


   Thanks @ayushtkn for the commit and review.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 576517)
Time Spent: 1.5h  (was: 1h 20m)

> Fix some @param errors in DirectoryScanner.
> ---
>
> Key: HDFS-15930
> URL: https://issues.apache.org/jira/browse/HDFS-15930
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Qi Zhu
>Assignee: Qi Zhu
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15933) distcp geo hdfs site is not working

2021-04-03 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena resolved HDFS-15933.
-
Resolution: Not A Bug

> distcp geo hdfs site is not working
> ---
>
> Key: HDFS-15933
> URL: https://issues.apache.org/jira/browse/HDFS-15933
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 3.1.4
> Environment: Linux Red Hat - RHEL 8.0
>Reporter: Suresh
>Priority: Blocker
> Attachments: GEO_Cluster.jpg
>
>
> I am facing some issues deploying HDFS on a Docker Swarm network across geo 
> sites.
> The “distcp” command, which runs in an HDFS datanode container, is not able 
> to pull data from one site to another site (which is set up to run on another 
> Docker Swarm network). It expects the Docker Swarm HDFS datanodes to be in a 
> host network configuration rather than an overlay (with IP forwarding) or 
> bridge network.
> We want to know what the shipment of a Docker-based HDFS deployment would look 
> like, and how the distcp command would get invoked across Docker instances 
> sitting at different geos.
>  






[jira] [Commented] (HDFS-15923) RBF: Authentication failed when rename accross sub clusters

2021-04-03 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17314367#comment-17314367
 ] 

Ayush Saxena commented on HDFS-15923:
-

We should fix the permission issue ASAP; it would be a security issue if the 
code gets released.
[~zhengzhuobinzzb], do you plan to chase that? If not, [~LiJinglun], any bandwidth 
to get that sorted?

> RBF:  Authentication failed when rename accross sub clusters
> 
>
> Key: HDFS-15923
> URL: https://issues.apache.org/jira/browse/HDFS-15923
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: zhuobin zheng
>Priority: Major
>  Labels: RBF, pull-request-available, rename
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Renaming across subclusters with RBF in a Kerberos environment will encounter 
> the following two errors:
>  # Saving the object to the journal.
>  # The precheck that tries to get the src file status.
> So, we need to use the proxy UGI's doAs to create the DistcpProcedure and 
> TrashProcedure and submit the job.
> In the patch I use the proxy UGI's doAs for the above methods. It worked.
> But there is another strange thing that this patch does not solve:
> the Router uses its own UGI to submit the Distcp job, not the user UGI or 
> proxy UGI. This may grant the distcp job excessive permissions.
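> The doAs pattern described above can be sketched with the JDK's own JAAS
> API, which Hadoop's UserGroupInformation.doAs builds on. This is a minimal
> standalone sketch; the subject, method name, and action here are stand-ins
> for illustration, not the actual router or patch code:

```java
import java.security.PrivilegedAction;
import javax.security.auth.Subject;

public class ProxyDoAsSketch {
    // Run an action under a given subject's identity, analogous to wrapping
    // procedure creation and job submission in proxyUgi.doAs(...).
    static String runAs(Subject subject, PrivilegedAction<String> action) {
        return Subject.doAs(subject, action);
    }

    public static void main(String[] args) {
        Subject proxyUser = new Subject();  // stand-in for the proxy UGI's subject
        // The work done here would be the DistcpProcedure/TrashProcedure
        // creation and job submission in the real patch.
        String result = runAs(proxyUser, () -> "submitted-as-proxy-user");
        System.out.println(result);
    }
}
```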
> First: Save Object to journal.
> {code:java}
> // code placeholder
> 2021-03-23 14:01:16,233 WARN org.apache.hadoop.ipc.Client: Exception 
> encountered while connecting to the server 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
> at 
> org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:408)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:622)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:413)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:822)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:818)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:818)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:413)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1636)
> at org.apache.hadoop.ipc.Client.call(Client.java:1452)
> at org.apache.hadoop.ipc.Client.call(Client.java:1405)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
> at com.sun.proxy.$Proxy11.create(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:376)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy12.create(Unknown Source)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:277)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1240)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1219)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1201)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1139)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:533)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:530)
> at 
> 

[jira] [Work logged] (HDFS-15934) Make DirectoryScanner reconcile blocks batch size and interval between batch configurable.

2021-04-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15934?focusedWorklogId=576499&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-576499
 ]

ASF GitHub Bot logged work on HDFS-15934:
-

Author: ASF GitHub Bot
Created on: 03/Apr/21 21:50
Start Date: 03/Apr/21 21:50
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on a change in pull request #2833:
URL: https://github.com/apache/hadoop/pull/2833#discussion_r606714508



##
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
##
@@ -873,6 +873,22 @@
   
 
 
+<property>
+  <name>dfs.datanode.reconcile.blocks.batch.size</name>
+  <value>1000</value>
+  <description>Run reconcile to checkAndUpdate with batch,

Review comment:
   Can you recheck the descriptions for both of the configs? The first line in 
both is the same and doesn't make sense to me.
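   For illustration only, a possible clearer wording for the batch-size
property (hypothetical; the actual text is up to the patch author) could be:

```
<property>
  <name>dfs.datanode.reconcile.blocks.batch.size</name>
  <value>1000</value>
  <description>
    Maximum number of differences the DirectoryScanner reconciles against
    the in-memory replica map in one batch while holding the dataset lock,
    so that the lock is not held for too long at a time.
  </description>
</property>
```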

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
##
@@ -316,6 +317,18 @@ public DirectoryScanner(FsDatasetSpi dataset, 
Configuration conf) {
 
 masterThread =
 new ScheduledThreadPoolExecutor(1, new Daemon.DaemonFactory());
+
+reconcileBlocksBatchSize =
+conf.getInt(DFSConfigKeys.
+DFS_DATANODE_RECONCILE_BLOCKS_BATCH_SIZE,
+DFSConfigKeys.
+DFS_DATANODE_RECONCILE_BLOCKS_BATCH_SIZE_DEFAULT);
+
+reconcileBlocksBatchInterval =
+conf.getInt(DFSConfigKeys.
+DFS_DATANODE_RECONCILE_BLOCKS_BATCH_INTERVAL,
+DFSConfigKeys.
+DFS_DATANODE_RECONCILE_BLOCKS_BATCH_INTERVAL_DEFAULT);

Review comment:
   Add validation for these configs: if ``reconcileBlocksBatchSize`` or 
``reconcileBlocksBatchInterval`` is less than one, use the default, and add a warn 
log message if these values are incorrect, something like:
   Invalid value configured for <config name>, should be greater than 0. Using 
default.
   
   At the end, add an info log for the values being used.
   

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
##
@@ -836,6 +836,10 @@
   public static final int DFS_DATANODE_DIRECTORYSCAN_INTERVAL_DEFAULT = 21600;
   public static final String  DFS_DATANODE_DIRECTORYSCAN_THREADS_KEY = "dfs.datanode.directoryscan.threads";
   public static final int DFS_DATANODE_DIRECTORYSCAN_THREADS_DEFAULT = 1;
+  public static final String  DFS_DATANODE_RECONCILE_BLOCKS_BATCH_SIZE = "dfs.datanode.reconcile.blocks.batch.size";
+  public static final int DFS_DATANODE_RECONCILE_BLOCKS_BATCH_SIZE_DEFAULT = 1000;
+  public static final String  DFS_DATANODE_RECONCILE_BLOCKS_BATCH_INTERVAL = "dfs.datanode.reconcile.blocks.batch.interval";
+  public static final int DFS_DATANODE_RECONCILE_BLOCKS_BATCH_INTERVAL_DEFAULT = 2000;

Review comment:
   Can we add support for time units?
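   Hadoop's `Configuration.getTimeDuration(name, defaultValue, TimeUnit)`
already parses suffixed values such as `2s` or `2000ms`. As a rough
standalone illustration of what suffix support means (a simplified sketch,
not Hadoop's actual parser, which handles more units):

```java
import java.util.concurrent.TimeUnit;

public class TimeDurationSketch {
    // Simplified suffix-aware duration parsing: ms/s/m suffixes are
    // converted to milliseconds; bare numbers are assumed to already
    // be milliseconds.
    static long parseMillis(String value) {
        String v = value.trim();
        if (v.endsWith("ms")) {
            return Long.parseLong(v.substring(0, v.length() - 2));
        } else if (v.endsWith("s")) {
            return TimeUnit.SECONDS.toMillis(Long.parseLong(v.substring(0, v.length() - 1)));
        } else if (v.endsWith("m")) {
            return TimeUnit.MINUTES.toMillis(Long.parseLong(v.substring(0, v.length() - 1)));
        }
        return Long.parseLong(v);  // no suffix: assume milliseconds
    }

    public static void main(String[] args) {
        System.out.println(parseMillis("2000"));  // plain milliseconds
        System.out.println(parseMillis("2s"));    // seconds suffix
    }
}
```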






Issue Time Tracking
---

Worklog Id: (was: 576499)
Time Spent: 50m  (was: 40m)

> Make DirectoryScanner reconcile blocks batch size and interval between batch 
> configurable.
> --
>
> Key: HDFS-15934
> URL: https://issues.apache.org/jira/browse/HDFS-15934
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Qi Zhu
>Assignee: Qi Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> HDFS-14476 made this a batch to avoid holding the lock for too long, but 
> different clusters have different demands, so we should make the batch size 
> and batch interval configurable.






[jira] [Work logged] (HDFS-15930) Fix some @param errors in DirectoryScanner.

2021-04-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15930?focusedWorklogId=576498&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-576498
 ]

ASF GitHub Bot logged work on HDFS-15930:
-

Author: ASF GitHub Bot
Created on: 03/Apr/21 21:42
Start Date: 03/Apr/21 21:42
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on pull request #2829:
URL: https://github.com/apache/hadoop/pull/2829#issuecomment-812929534


   Thanx @qizhu-lucas for the contribution, @Hexiaoqiao for the review.
   
   @qizhu-lucas, I missed your message, sorry.




Issue Time Tracking
---

Worklog Id: (was: 576498)
Time Spent: 1h 20m  (was: 1h 10m)

> Fix some @param errors in DirectoryScanner.
> ---
>
> Key: HDFS-15930
> URL: https://issues.apache.org/jira/browse/HDFS-15930
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Qi Zhu
>Assignee: Qi Zhu
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>







[jira] [Updated] (HDFS-15930) Fix some @param errors in DirectoryScanner.

2021-04-03 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-15930:

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanx [~zhuqi] for the contribution.

> Fix some @param errors in DirectoryScanner.
> ---
>
> Key: HDFS-15930
> URL: https://issues.apache.org/jira/browse/HDFS-15930
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Qi Zhu
>Assignee: Qi Zhu
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>







[jira] [Work logged] (HDFS-15930) Fix some @param errors in DirectoryScanner.

2021-04-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15930?focusedWorklogId=576497&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-576497
 ]

ASF GitHub Bot logged work on HDFS-15930:
-

Author: ASF GitHub Bot
Created on: 03/Apr/21 21:41
Start Date: 03/Apr/21 21:41
Worklog Time Spent: 10m 
  Work Description: ayushtkn merged pull request #2829:
URL: https://github.com/apache/hadoop/pull/2829


   




Issue Time Tracking
---

Worklog Id: (was: 576497)
Time Spent: 1h 10m  (was: 1h)

> Fix some @param errors in DirectoryScanner.
> ---
>
> Key: HDFS-15930
> URL: https://issues.apache.org/jira/browse/HDFS-15930
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Qi Zhu
>Assignee: Qi Zhu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>







[jira] [Work logged] (HDFS-15940) Some tests in TestBlockRecovery are consistently failing

2021-04-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15940?focusedWorklogId=576494&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-576494
 ]

ASF GitHub Bot logged work on HDFS-15940:
-

Author: ASF GitHub Bot
Created on: 03/Apr/21 21:37
Start Date: 03/Apr/21 21:37
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on a change in pull request #2844:
URL: https://github.com/apache/hadoop/pull/2844#discussion_r606712644



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery1.java
##
@@ -336,191 +322,194 @@ public void testFinalizedReplicas () throws IOException {
 if(LOG.isDebugEnabled()) {
   LOG.debug("Running " + GenericTestUtils.getMethodName());
 }
-ReplicaRecoveryInfo replica1 = new ReplicaRecoveryInfo(BLOCK_ID, 
-REPLICA_LEN1, GEN_STAMP-1, ReplicaState.FINALIZED);
-ReplicaRecoveryInfo replica2 = new ReplicaRecoveryInfo(BLOCK_ID, 
-REPLICA_LEN1, GEN_STAMP-2, ReplicaState.FINALIZED);
+ReplicaRecoveryInfo replica1 = new ReplicaRecoveryInfo(BLOCK_ID,
+REPLICA_LEN1, GEN_STAMP - 1, ReplicaState.FINALIZED);
+ReplicaRecoveryInfo replica2 = new ReplicaRecoveryInfo(BLOCK_ID,
+REPLICA_LEN1, GEN_STAMP - 2, ReplicaState.FINALIZED);

Review comment:
   Can we chunk out the formatting changes in this file? We can restrict 
ourselves to only the intended changes; usually we don't mix checkstyle 
fixes with other code changes.

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery1.java
##
@@ -122,9 +108,9 @@
 /**
  * This tests if sync all replicas in block recovery works correctly.
  */
-public class TestBlockRecovery {
+public class TestBlockRecovery1 {

Review comment:
   Let's not bother with the name; let it be as is.

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery2.java
##
@@ -0,0 +1,464 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.datanode;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FileUtil;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.ha.HAServiceProtocol;
+import org.apache.hadoop.hdfs.AppendTestUtil;
+import org.apache.hadoop.hdfs.DFSClient;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.DFSTestUtil;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.HdfsConfiguration;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.StripedFileTestUtil;
+import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
+import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;
+import org.apache.hadoop.hdfs.protocol.LocatedBlock;
+import org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB;
+import org.apache.hadoop.hdfs.server.namenode.FSNamesystem;
+import org.apache.hadoop.hdfs.server.protocol.BlockRecoveryCommand;
+import org.apache.hadoop.hdfs.server.protocol.DatanodeCommand;
+import org.apache.hadoop.hdfs.server.protocol.DatanodeRegistration;
+import org.apache.hadoop.hdfs.server.protocol.HeartbeatResponse;
+import org.apache.hadoop.hdfs.server.protocol.NNHAStatusHeartbeat;
+import org.apache.hadoop.hdfs.server.protocol.NamenodeProtocols;
+import org.apache.hadoop.hdfs.server.protocol.NamespaceInfo;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.hadoop.util.AutoCloseableLock;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TestName;
+import org.mockito.Mockito;
+import org.mockito.invocation.InvocationOnMock;
+import org.mockito.stubbing.Answer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.slf4j.event.Level;
+
+import java.io.File;

[jira] [Work logged] (HDFS-15920) Solve the problem that the value of SafeModeMonitor#RECHECK_INTERVAL can be configured

2021-04-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15920?focusedWorklogId=576491&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-576491
 ]

ASF GitHub Bot logged work on HDFS-15920:
-

Author: ASF GitHub Bot
Created on: 03/Apr/21 21:15
Start Date: 03/Apr/21 21:15
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on a change in pull request #2831:
URL: https://github.com/apache/hadoop/pull/2831#discussion_r606710952



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerSafeMode.java
##
@@ -640,7 +644,18 @@ private void reportStatus(String msg, boolean rightNow) {
*/
   private class SafeModeMonitor implements Runnable {
 /** Interval in msec for checking safe mode. */
-private static final long RECHECK_INTERVAL = 1000;
+private long recheckInterval;
+
+public SafeModeMonitor(Configuration conf) {
+  recheckInterval = conf.getLong(
+  DFS_NAMENODE_SAFEMODE_RECHECK_INTERVAL_KEY,
+  DFS_NAMENODE_SAFEMODE_RECHECK_INTERVAL_DEFAULT);
+  if (recheckInterval < 1) {
+LOG.warn("The current value of recheckInterval is {}, " +
+"this variable should be a positive number.", recheckInterval);

Review comment:
   Can you change the message to ```"Invalid value for " + 
DFS_NAMENODE_SAFEMODE_RECHECK_INTERVAL_KEY + ". Should be greater than 0, but 
is {}", recheckInterval``` and correct the syntax accordingly?
   
   

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManagerSafeMode.java
##
@@ -640,7 +644,18 @@ private void reportStatus(String msg, boolean rightNow) {
*/
   private class SafeModeMonitor implements Runnable {
 /** Interval in msec for checking safe mode. */
-private static final long RECHECK_INTERVAL = 1000;
+private long recheckInterval;
+
+public SafeModeMonitor(Configuration conf) {
+  recheckInterval = conf.getLong(
+  DFS_NAMENODE_SAFEMODE_RECHECK_INTERVAL_KEY,
+  DFS_NAMENODE_SAFEMODE_RECHECK_INTERVAL_DEFAULT);
+  if (recheckInterval < 1) {
+LOG.warn("The current value of recheckInterval is {}, " +
+"this variable should be a positive number.", recheckInterval);
+recheckInterval = DFS_NAMENODE_SAFEMODE_RECHECK_INTERVAL_DEFAULT;
+  }

Review comment:
   Add a `LOG.info("Using {} as SafeModeMonitor Interval", recheckInterval)`

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManagerSafeMode.java
##
@@ -230,6 +232,50 @@ public void testCheckSafeMode8() throws Exception {
 assertEquals(BMSafeModeStatus.OFF, getSafeModeStatus());
   }
 
+  @Test(timeout = 2)
+  public void testCheckSafeMode9() throws Exception {
+Configuration conf = new HdfsConfiguration();
+try {
+  conf.setLong(DFS_NAMENODE_SAFEMODE_RECHECK_INTERVAL_KEY, 3000);
+  bm = spy(new BlockManager(fsn, false, conf));
+  doReturn(true).when(bm).isGenStampInFuture(any(Block.class));
+  dn = spy(bm.getDatanodeManager());
+  Whitebox.setInternalState(bm, "datanodeManager", dn);
+  // the datanode threshold is always met
+  when(dn.getNumLiveDataNodes()).thenReturn(DATANODE_NUM);
+  bmSafeMode = new BlockManagerSafeMode(bm, fsn, false, conf);
+  bmSafeMode.activate(BLOCK_TOTAL);
+  Whitebox.setInternalState(bmSafeMode, "extension", Integer.MAX_VALUE);
+  setSafeModeStatus(BMSafeModeStatus.PENDING_THRESHOLD);
+  setBlockSafe(BLOCK_THRESHOLD);
+  bmSafeMode.checkSafeMode();
+
+  assertTrue(bmSafeMode.isInSafeMode());
+  assertEquals(BMSafeModeStatus.EXTENSION, getSafeModeStatus());
+
+  GenericTestUtils.waitFor(new Supplier<Boolean>() {
+@Override
+public Boolean get() {
+  Whitebox.setInternalState(bmSafeMode, "extension", 0);
+  return getSafeModeStatus() != BMSafeModeStatus.EXTENSION;
+}
+  }, EXTENSION / 10, EXTENSION * 10);
+
+  assertFalse(bmSafeMode.isInSafeMode());
+  assertEquals(BMSafeModeStatus.OFF, getSafeModeStatus());
+} finally {
+  conf.setLong(DFS_NAMENODE_SAFEMODE_RECHECK_INTERVAL_KEY,
+  DFS_NAMENODE_SAFEMODE_RECHECK_INTERVAL_DEFAULT);
+  bm = spy(new BlockManager(fsn, false, conf));
+  doReturn(true).when(bm).isGenStampInFuture(any(Block.class));
+  dn = spy(bm.getDatanodeManager());
+  Whitebox.setInternalState(bm, "datanodeManager", dn);
+  // the datanode threshold is always met
+  when(dn.getNumLiveDataNodes()).thenReturn(DATANODE_NUM);
+  bmSafeMode = new BlockManagerSafeMode(bm, fsn, false, conf);
+}

Review comment:
   The tests have too many warnings due to deprecation. See if you can get 
rid of them; if not, just do an assert 

[jira] [Work logged] (HDFS-15951) Remove unused parameters in NameNodeProxiesClient

2021-04-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15951?focusedWorklogId=576430&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-576430
 ]

ASF GitHub Bot logged work on HDFS-15951:
-

Author: ASF GitHub Bot
Created on: 03/Apr/21 13:56
Start Date: 03/Apr/21 13:56
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2859:
URL: https://github.com/apache/hadoop/pull/2859#issuecomment-812868839


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  13m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 44s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 58s |  |  trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 51s |  |  trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 58s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 28s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 45s |  |  branch has no errors when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 48s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 49s |  |  the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 24s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 21s |  |  patch has no errors when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 21s |  |  hadoop-hdfs-client in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not generate ASF License warnings.  |
   |  |   |  93m 24s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2859/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2859 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 9fea1b902a73 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 9c998ad7344e7a9aca44f24fb18e69451d40 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2859/1/testReport/ |
   | Max. process+thread count | 675 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2859/1/console |

[jira] [Work logged] (HDFS-15947) Replace deprecated protobuf APIs

2021-04-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15947?focusedWorklogId=576425=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-576425
 ]

ASF GitHub Bot logged work on HDFS-15947:
-

Author: ASF GitHub Bot
Created on: 03/Apr/21 13:08
Start Date: 03/Apr/21 13:08
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2856:
URL: https://github.com/apache/hadoop/pull/2856#issuecomment-812863295






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 576425)
Time Spent: 40m  (was: 0.5h)

> Replace deprecated protobuf APIs
> 
>
> Key: HDFS-15947
> URL: https://issues.apache.org/jira/browse/HDFS-15947
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Some protobuf APIs are soon going to get deprecated and must be replaced with 
> newer ones. One of the warnings reported due to this issue is as follows -
> {code}
> [ 48%] Building CXX object 
> main/native/libhdfspp/tests/CMakeFiles/rpc_engine_test.dir/rpc_engine_test.cc.o
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/rpc_engine_test.cc:
>  In function ‘std::pair std::__cxx11::basic_string > RpcResponse(const 
> hadoop::common::RpcResponseHeaderProto&, const string&, const 
> boost::system::error_code&)’:
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/rpc_engine_test.cc:92:56:
>  warning: ‘int google::protobuf::MessageLite::ByteSize() const’ is 
> deprecated: Please use ByteSizeLong() instead [-Wdeprecated-declarations]
>92 |   pbio::CodedOutputStream::VarintSize32(h.ByteSize()) +
>   |^
> In file included from 
> /usr/local/include/google/protobuf/generated_enum_util.h:36,
>  from /usr/local/include/google/protobuf/map.h:49,
>  from 
> /usr/local/include/google/protobuf/generated_message_table_driven.h:34,
>  from 
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/tests/test.pb.h:26,
>  from 
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/rpc_engine_test.cc:22:
> /usr/local/include/google/protobuf/message_lite.h:408:7: note: declared here
>   408 |   int ByteSize() const { return internal::ToIntSize(ByteSizeLong()); }
>   |   ^~~~
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15951) Remove unused parameters in NameNodeProxiesClient

2021-04-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15951?focusedWorklogId=576422=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-576422
 ]

ASF GitHub Bot logged work on HDFS-15951:
-

Author: ASF GitHub Bot
Created on: 03/Apr/21 12:22
Start Date: 03/Apr/21 12:22
Worklog Time Spent: 10m 
  Work Description: tomscut opened a new pull request #2859:
URL: https://github.com/apache/hadoop/pull/2859


   JIRA: [HDFS-15951](https://issues.apache.org/jira/browse/HDFS-15951)
   
   Remove unused parameters in org.apache.hadoop.hdfs.NameNodeProxiesClient.




Issue Time Tracking
---

Worklog Id: (was: 576422)
Remaining Estimate: 0h
Time Spent: 10m

> Remove unused parameters in NameNodeProxiesClient
> -
>
> Key: HDFS-15951
> URL: https://issues.apache.org/jira/browse/HDFS-15951
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Remove unused parameters in org.apache.hadoop.hdfs.NameNodeProxiesClient.






[jira] [Updated] (HDFS-15951) Remove unused parameters in NameNodeProxiesClient

2021-04-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-15951:
--
Labels: pull-request-available  (was: )

> Remove unused parameters in NameNodeProxiesClient
> -
>
> Key: HDFS-15951
> URL: https://issues.apache.org/jira/browse/HDFS-15951
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Remove unused parameters in org.apache.hadoop.hdfs.NameNodeProxiesClient.






[jira] [Created] (HDFS-15951) Remove unused parameters in NameNodeProxiesClient

2021-04-03 Thread tomscut (Jira)
tomscut created HDFS-15951:
--

 Summary: Remove unused parameters in NameNodeProxiesClient
 Key: HDFS-15951
 URL: https://issues.apache.org/jira/browse/HDFS-15951
 Project: Hadoop HDFS
  Issue Type: Wish
Reporter: tomscut
Assignee: tomscut


Remove unused parameters in org.apache.hadoop.hdfs.NameNodeProxiesClient.






[jira] [Work logged] (HDFS-15950) Remove unused hdfs.proto import

2021-04-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15950?focusedWorklogId=576407=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-576407
 ]

ASF GitHub Bot logged work on HDFS-15950:
-

Author: ASF GitHub Bot
Created on: 03/Apr/21 09:52
Start Date: 03/Apr/21 09:52
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2858:
URL: https://github.com/apache/hadoop/pull/2858#issuecomment-812842463


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  20m 56s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 51s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  0s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 49s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  mvnsite  |   0m 56s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  53m 45s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 48s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  cc  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  cc  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m  4s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 15s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  98m 23s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2858/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2858 |
   | Optional Tests | dupname asflicense compile cc mvnsite javac unit buflint 
bufcompat codespell |
   | uname | Linux 5bd2823b76f8 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d2ccebe2fed75dcbff3974d173b70015f9b54311 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2858/1/testReport/ |
   | Max. process+thread count | 514 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2858/1/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Issue Time Tracking
---

Worklog Id: (was: 576407)
Time Spent: 0.5h  (was: 20m)

> Remove unused hdfs.proto import
> ---
>
>

[jira] [Work logged] (HDFS-15949) Fix integer overflow

2021-04-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15949?focusedWorklogId=576403=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-576403
 ]

ASF GitHub Bot logged work on HDFS-15949:
-

Author: ASF GitHub Bot
Created on: 03/Apr/21 08:18
Start Date: 03/Apr/21 08:18
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2857:
URL: https://github.com/apache/hadoop/pull/2857#issuecomment-812833238


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  13m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 51s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   3m 14s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  52m 39s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  cc  |   2m 32s |  |  
hadoop-hdfs-project_hadoop-hdfs-native-client-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04
 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 0 new + 8 unchanged 
- 3 fixed = 8 total (was 11)  |
   | +1 :green_heart: |  golang  |   2m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  cc  |   2m 38s |  |  
hadoop-hdfs-project_hadoop-hdfs-native-client-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08
 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 0 new 
+ 8 unchanged - 3 fixed = 8 total (was 11)  |
   | +1 :green_heart: |  golang  |   2m 38s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 38s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  13m  8s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  31m 55s |  |  hadoop-hdfs-native-client in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 119m 39s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2857/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2857 |
   | Optional Tests | dupname asflicense compile cc mvnsite javac unit 
codespell golang |
   | uname | Linux b47fa8d047cc 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e93e361f757fe8b8ddb9b3c42c225f8ce5783dc3 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2857/1/testReport/ |
   | Max. process+thread count | 609 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2857/1/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.

[jira] [Work logged] (HDFS-15950) Remove unused hdfs.proto import

2021-04-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15950?focusedWorklogId=576402=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-576402
 ]

ASF GitHub Bot logged work on HDFS-15950:
-

Author: ASF GitHub Bot
Created on: 03/Apr/21 08:13
Start Date: 03/Apr/21 08:13
Worklog Time Spent: 10m 
  Work Description: GauthamBanasandra commented on pull request #2858:
URL: https://github.com/apache/hadoop/pull/2858#issuecomment-812832629


   This PR fixes a warning reported in PR 
https://github.com/apache/hadoop/pull/2792.
   
   ```
   inotify.proto:35:1: warning: Import hdfs.proto is unused.
   ```




Issue Time Tracking
---

Worklog Id: (was: 576402)
Time Spent: 20m  (was: 10m)

> Remove unused hdfs.proto import
> ---
>
> Key: HDFS-15950
> URL: https://issues.apache.org/jira/browse/HDFS-15950
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> hdfs.proto is imported in inotify.proto and is unused. This causes the 
> following warning to be generated -
> {code}
> inotify.proto:35:1: warning: Import hdfs.proto is unused.
> {code}






[jira] [Work logged] (HDFS-15950) Remove unused hdfs.proto import

2021-04-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15950?focusedWorklogId=576401=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-576401
 ]

ASF GitHub Bot logged work on HDFS-15950:
-

Author: ASF GitHub Bot
Created on: 03/Apr/21 08:12
Start Date: 03/Apr/21 08:12
Worklog Time Spent: 10m 
  Work Description: GauthamBanasandra opened a new pull request #2858:
URL: https://github.com/apache/hadoop/pull/2858


   * hdfs.proto is imported in inotify.proto
 and is unused. Thus, removing it.
   




Issue Time Tracking
---

Worklog Id: (was: 576401)
Remaining Estimate: 0h
Time Spent: 10m

> Remove unused hdfs.proto import
> ---
>
> Key: HDFS-15950
> URL: https://issues.apache.org/jira/browse/HDFS-15950
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> hdfs.proto is imported in inotify.proto and is unused. This causes the 
> following warning to be generated -
> {code}
> inotify.proto:35:1: warning: Import hdfs.proto is unused.
> {code}






[jira] [Updated] (HDFS-15950) Remove unused hdfs.proto import

2021-04-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-15950:
--
Labels: pull-request-available  (was: )

> Remove unused hdfs.proto import
> ---
>
> Key: HDFS-15950
> URL: https://issues.apache.org/jira/browse/HDFS-15950
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> hdfs.proto is imported in inotify.proto and is unused. This causes the 
> following warning to be generated -
> {code}
> inotify.proto:35:1: warning: Import hdfs.proto is unused.
> {code}






[jira] [Created] (HDFS-15950) Remove unused hdfs.proto import

2021-04-03 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-15950:
-

 Summary: Remove unused hdfs.proto import
 Key: HDFS-15950
 URL: https://issues.apache.org/jira/browse/HDFS-15950
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


hdfs.proto is imported in inotify.proto and is unused. This causes the 
following warning to be generated -

{code}
inotify.proto:35:1: warning: Import hdfs.proto is unused.
{code}
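The fix is simply deleting the import line the compiler flags; a sketch of the change, assuming (per the warning) the import sits at line 35 of inotify.proto:

{code}
// inotify.proto, before (sketch):
import "hdfs.proto";  // flagged: unused

// after: the line is deleted; no message in inotify.proto referenced
// anything from hdfs.proto, so the generated code is unchanged and the
// protoc warning disappears.
{code}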






[jira] [Work logged] (HDFS-15949) Fix integer overflow

2021-04-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15949?focusedWorklogId=576395=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-576395
 ]

ASF GitHub Bot logged work on HDFS-15949:
-

Author: ASF GitHub Bot
Created on: 03/Apr/21 06:21
Start Date: 03/Apr/21 06:21
Worklog Time Spent: 10m 
  Work Description: GauthamBanasandra commented on pull request #2857:
URL: https://github.com/apache/hadoop/pull/2857#issuecomment-812820177


   We can observe that `Max int64_t + 1` and `Min int64_t` are equal.




Issue Time Tracking
---

Worklog Id: (was: 576395)
Time Spent: 40m  (was: 0.5h)

> Fix integer overflow
> 
>
> Key: HDFS-15949
> URL: https://issues.apache.org/jira/browse/HDFS-15949
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> There are some instances where integer overflow warnings are reported. Need 
> to fix them.
> {code}
> [ 63%] Building CXX object 
> main/native/libhdfspp/tests/CMakeFiles/hdfs_ext_hdfspp_test_shim_static.dir/hdfs_ext_test.cc.o
> In file included from 
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/googletest-src/googletest/include/gtest/gtest.h:375,
>  from 
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/googletest-src/googlemock/include/gmock/internal/gmock-internal-utils.h:47,
>  from 
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/googletest-src/googlemock/include/gmock/gmock-actions.h:51,
>  from 
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/googletest-src/googlemock/include/gmock/gmock.h:59,
>  from 
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfspp_mini_dfs.h:24,
>  from 
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_ext_test.cc:19:
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_ext_test.cc:
>  In member function ‘virtual void 
> hdfs::HdfsExtTest_TestHosts_Test::TestBody()’:
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_ext_test.cc:456:95:
>  warning: integer overflow in expression of type ‘long int’ results in 
> ‘-9223372036854775808’ [-Woverflow]
>   456 |   EXPECT_EQ(nullptr, hdfsGetHosts(fs, filename.c_str(), 0, 
> std::numeric_limits::max()+1));
>   |
> ~~~^~
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_ext_test.cc:460:92:
>  warning: integer overflow in expression of type ‘long int’ results in 
> ‘-9223372036854775808’ [-Woverflow]
>   460 |   EXPECT_EQ(nullptr, hdfsGetHosts(fs, filename.c_str(), 
> std::numeric_limits::max()+1, std::numeric_limits::max()));
>   | 
> ~~~^~
> {code}






[jira] [Work logged] (HDFS-15949) Fix integer overflow

2021-04-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15949?focusedWorklogId=576394=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-576394
 ]

ASF GitHub Bot logged work on HDFS-15949:
-

Author: ASF GitHub Bot
Created on: 03/Apr/21 06:20
Start Date: 03/Apr/21 06:20
Worklog Time Spent: 10m 
  Work Description: GauthamBanasandra commented on pull request #2857:
URL: https://github.com/apache/hadoop/pull/2857#issuecomment-812820088


   I was able to verify this change using the following standalone C++ program -
   ```cpp
   #include <cstdint>
   #include <iostream>
   #include <limits>
   
   int main(int argc, char *argv[]) {
     std::cout << "Max int64_t = " << std::numeric_limits<int64_t>::max()
               << std::endl;
     std::cout << "Max int64_t + 1 = "
               << std::numeric_limits<int64_t>::max() + 1 << std::endl;
     std::cout << "Min int64_t = " << std::numeric_limits<int64_t>::min()
               << std::endl;
     return 0;
   }
   ```
   
   Output -
   ```
   Max int64_t = 9223372036854775807
   Max int64_t + 1 = -9223372036854775808
   Min int64_t = -9223372036854775808
   ```




Issue Time Tracking
---

Worklog Id: (was: 576394)
Time Spent: 0.5h  (was: 20m)

> Fix integer overflow
> 
>
> Key: HDFS-15949
> URL: https://issues.apache.org/jira/browse/HDFS-15949
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> There are some instances where integer overflow warnings are reported. Need 
> to fix them.
> {code}
> [ 63%] Building CXX object 
> main/native/libhdfspp/tests/CMakeFiles/hdfs_ext_hdfspp_test_shim_static.dir/hdfs_ext_test.cc.o
> In file included from 
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/googletest-src/googletest/include/gtest/gtest.h:375,
>  from 
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/googletest-src/googlemock/include/gmock/internal/gmock-internal-utils.h:47,
>  from 
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/googletest-src/googlemock/include/gmock/gmock-actions.h:51,
>  from 
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/googletest-src/googlemock/include/gmock/gmock.h:59,
>  from 
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfspp_mini_dfs.h:24,
>  from 
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_ext_test.cc:19:
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_ext_test.cc:
>  In member function ‘virtual void 
> hdfs::HdfsExtTest_TestHosts_Test::TestBody()’:
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_ext_test.cc:456:95:
>  warning: integer overflow in expression of type ‘long int’ results in 
> ‘-9223372036854775808’ [-Woverflow]
>   456 |   EXPECT_EQ(nullptr, hdfsGetHosts(fs, filename.c_str(), 0, 
> std::numeric_limits::max()+1));
>   |
> ~~~^~
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_ext_test.cc:460:92:
>  warning: integer overflow in expression of type ‘long int’ results in 
> ‘-9223372036854775808’ [-Woverflow]
>   460 |   EXPECT_EQ(nullptr, hdfsGetHosts(fs, filename.c_str(), 
> std::numeric_limits<tOffset>::max()+1, std::numeric_limits<tOffset>::max()));
>   | 
> ~~~^~
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15949) Fix integer overflow

2021-04-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15949?focusedWorklogId=576393&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-576393
 ]

ASF GitHub Bot logged work on HDFS-15949:
-

Author: ASF GitHub Bot
Created on: 03/Apr/21 06:18
Start Date: 03/Apr/21 06:18
Worklog Time Spent: 10m 
  Work Description: GauthamBanasandra commented on pull request #2857:
URL: https://github.com/apache/hadoop/pull/2857#issuecomment-812819946


   This PR fixes some warnings reported as part of 
https://github.com/apache/hadoop/pull/2792.
   
   ```
   [ 63%] Building CXX object 
main/native/libhdfspp/tests/CMakeFiles/hdfs_ext_hdfspp_test_shim_static.dir/hdfs_ext_test.cc.o
   In file included from 
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/googletest-src/googletest/include/gtest/gtest.h:375,
from 
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/googletest-src/googlemock/include/gmock/internal/gmock-internal-utils.h:47,
from 
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/googletest-src/googlemock/include/gmock/gmock-actions.h:51,
from 
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/googletest-src/googlemock/include/gmock/gmock.h:59,
from 
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfspp_mini_dfs.h:24,
from 
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_ext_test.cc:19:
   
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_ext_test.cc:
 In member function ‘virtual void hdfs::HdfsExtTest_TestHosts_Test::TestBody()’:
   
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_ext_test.cc:456:95:
 warning: integer overflow in expression of type ‘long int’ results in 
‘-9223372036854775808’ [-Woverflow]
  456 |   EXPECT_EQ(nullptr, hdfsGetHosts(fs, filename.c_str(), 0, 
std::numeric_limits<tOffset>::max()+1));
 |
~~~^~
   
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_ext_test.cc:460:92:
 warning: integer overflow in expression of type ‘long int’ results in 
‘-9223372036854775808’ [-Woverflow]
  460 |   EXPECT_EQ(nullptr, hdfsGetHosts(fs, filename.c_str(), 
std::numeric_limits<tOffset>::max()+1, std::numeric_limits<tOffset>::max()));
 | 
~~~^~
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 576393)
Time Spent: 20m  (was: 10m)

> Fix integer overflow
> 
>
> Key: HDFS-15949
> URL: https://issues.apache.org/jira/browse/HDFS-15949
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> There are some instances where integer overflow warnings are reported. Need 
> to fix them.
> {code}
> [ 63%] Building CXX object 
> main/native/libhdfspp/tests/CMakeFiles/hdfs_ext_hdfspp_test_shim_static.dir/hdfs_ext_test.cc.o
> In file included from 
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/googletest-src/googletest/include/gtest/gtest.h:375,
>  from 
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/googletest-src/googlemock/include/gmock/internal/gmock-internal-utils.h:47,
>  from 
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/googletest-src/googlemock/include/gmock/gmock-actions.h:51,
>  from 
> 

[jira] [Work logged] (HDFS-15949) Fix integer overflow

2021-04-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15949?focusedWorklogId=576392&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-576392
 ]

ASF GitHub Bot logged work on HDFS-15949:
-

Author: ASF GitHub Bot
Created on: 03/Apr/21 06:17
Start Date: 03/Apr/21 06:17
Worklog Time Spent: 10m 
  Work Description: GauthamBanasandra opened a new pull request #2857:
URL: https://github.com/apache/hadoop/pull/2857


   * There are some instances where the overflow is deliberately done in order to get the lower bound. This results in an overflow warning.
   * We fix this by using the min value directly.
   




Issue Time Tracking
---

Worklog Id: (was: 576392)
Remaining Estimate: 0h
Time Spent: 10m

> Fix integer overflow
> 
>
> Key: HDFS-15949
> URL: https://issues.apache.org/jira/browse/HDFS-15949
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There are some instances where integer overflow warnings are reported. Need 
> to fix them.
> {code}
> [ 63%] Building CXX object 
> main/native/libhdfspp/tests/CMakeFiles/hdfs_ext_hdfspp_test_shim_static.dir/hdfs_ext_test.cc.o
> In file included from 
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/googletest-src/googletest/include/gtest/gtest.h:375,
>  from 
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/googletest-src/googlemock/include/gmock/internal/gmock-internal-utils.h:47,
>  from 
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/googletest-src/googlemock/include/gmock/gmock-actions.h:51,
>  from 
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/googletest-src/googlemock/include/gmock/gmock.h:59,
>  from 
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfspp_mini_dfs.h:24,
>  from 
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_ext_test.cc:19:
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_ext_test.cc:
>  In member function ‘virtual void 
> hdfs::HdfsExtTest_TestHosts_Test::TestBody()’:
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_ext_test.cc:456:95:
>  warning: integer overflow in expression of type ‘long int’ results in 
> ‘-9223372036854775808’ [-Woverflow]
>   456 |   EXPECT_EQ(nullptr, hdfsGetHosts(fs, filename.c_str(), 0, 
> std::numeric_limits<tOffset>::max()+1));
>   |
> ~~~^~
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_ext_test.cc:460:92:
>  warning: integer overflow in expression of type ‘long int’ results in 
> ‘-9223372036854775808’ [-Woverflow]
>   460 |   EXPECT_EQ(nullptr, hdfsGetHosts(fs, filename.c_str(), 
> std::numeric_limits<tOffset>::max()+1, std::numeric_limits<tOffset>::max()));
>   | 
> ~~~^~
> {code}






[jira] [Updated] (HDFS-15949) Fix integer overflow

2021-04-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-15949:
--
Labels: pull-request-available  (was: )

> Fix integer overflow
> 
>
> Key: HDFS-15949
> URL: https://issues.apache.org/jira/browse/HDFS-15949
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There are some instances where integer overflow warnings are reported. Need 
> to fix them.
> {code}
> [ 63%] Building CXX object 
> main/native/libhdfspp/tests/CMakeFiles/hdfs_ext_hdfspp_test_shim_static.dir/hdfs_ext_test.cc.o
> In file included from 
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/googletest-src/googletest/include/gtest/gtest.h:375,
>  from 
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/googletest-src/googlemock/include/gmock/internal/gmock-internal-utils.h:47,
>  from 
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/googletest-src/googlemock/include/gmock/gmock-actions.h:51,
>  from 
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/googletest-src/googlemock/include/gmock/gmock.h:59,
>  from 
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfspp_mini_dfs.h:24,
>  from 
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_ext_test.cc:19:
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_ext_test.cc:
>  In member function ‘virtual void 
> hdfs::HdfsExtTest_TestHosts_Test::TestBody()’:
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_ext_test.cc:456:95:
>  warning: integer overflow in expression of type ‘long int’ results in 
> ‘-9223372036854775808’ [-Woverflow]
>   456 |   EXPECT_EQ(nullptr, hdfsGetHosts(fs, filename.c_str(), 0, 
> std::numeric_limits<tOffset>::max()+1));
>   |
> ~~~^~
> /mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_ext_test.cc:460:92:
>  warning: integer overflow in expression of type ‘long int’ results in 
> ‘-9223372036854775808’ [-Woverflow]
>   460 |   EXPECT_EQ(nullptr, hdfsGetHosts(fs, filename.c_str(), 
> std::numeric_limits<tOffset>::max()+1, std::numeric_limits<tOffset>::max()));
>   | 
> ~~~^~
> {code}






[jira] [Created] (HDFS-15949) Fix integer overflow

2021-04-03 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-15949:
-

 Summary: Fix integer overflow
 Key: HDFS-15949
 URL: https://issues.apache.org/jira/browse/HDFS-15949
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs++
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


There are some instances where integer overflow warnings are reported. Need to 
fix them.

{code}
[ 63%] Building CXX object 
main/native/libhdfspp/tests/CMakeFiles/hdfs_ext_hdfspp_test_shim_static.dir/hdfs_ext_test.cc.o
In file included from 
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/googletest-src/googletest/include/gtest/gtest.h:375,
 from 
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/googletest-src/googlemock/include/gmock/internal/gmock-internal-utils.h:47,
 from 
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/googletest-src/googlemock/include/gmock/gmock-actions.h:51,
 from 
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/out/build/WSL-GCC-Debug/main/native/libhdfspp/googletest-src/googlemock/include/gmock/gmock.h:59,
 from 
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfspp_mini_dfs.h:24,
 from 
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_ext_test.cc:19:
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_ext_test.cc:
 In member function ‘virtual void hdfs::HdfsExtTest_TestHosts_Test::TestBody()’:
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_ext_test.cc:456:95:
 warning: integer overflow in expression of type ‘long int’ results in 
‘-9223372036854775808’ [-Woverflow]
  456 |   EXPECT_EQ(nullptr, hdfsGetHosts(fs, filename.c_str(), 0, 
std::numeric_limits<tOffset>::max()+1));
  |
~~~^~
/mnt/d/projects/apache/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_ext_test.cc:460:92:
 warning: integer overflow in expression of type ‘long int’ results in 
‘-9223372036854775808’ [-Woverflow]
  460 |   EXPECT_EQ(nullptr, hdfsGetHosts(fs, filename.c_str(), 
std::numeric_limits<tOffset>::max()+1, std::numeric_limits<tOffset>::max()));
  | 
~~~^~
{code}


