[jira] [Commented] (HDFS-14704) RBF: NnId should not be null in NamenodeHeartbeatService

2019-08-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901744#comment-16901744
 ] 

Hadoop QA commented on HDFS-14704:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  7s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 49s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  0m 16s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m  0s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 27m 15s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 27s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 87m 26s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup |
|   | hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14704 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12976888/HDFS-14704-trunk-002.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 5d0e767ccb14 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9cd211a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/27428/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/27428/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt |
|  Test Results | 

[jira] [Commented] (HDFS-14705) Remove unused configuration dfs.min.replication

2019-08-06 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901737#comment-16901737
 ] 

CR Hota commented on HDFS-14705:


[~jojochuang] Thanks for the review. Should we commit this?

> Remove unused configuration dfs.min.replication
> ---
>
> Key: HDFS-14705
> URL: https://issues.apache.org/jira/browse/HDFS-14705
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: CR Hota
>Priority: Trivial
> Attachments: HDFS-14705.001.patch
>
>
> A few HDFS tests set a configuration property, dfs.min.replication, that is 
> not used anywhere in the code. It doesn't seem to be a leftover from legacy 
> code either. Better to clean these out. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14313) Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory instead of df/du

2019-08-06 Thread Lisheng Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901736#comment-16901736
 ] 

Lisheng Sun commented on HDFS-14313:


Thanks [~linyiqun] for all your work on this patch. I will attach patches for 
the branch-3.x and branch-2.x branches later. 

> Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory  
> instead of df/du
> 
>
> Key: HDFS-14313
> URL: https://issues.apache.org/jira/browse/HDFS-14313
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, performance
>Affects Versions: 2.6.0, 2.7.0, 2.8.0, 2.9.0, 3.0.0, 3.1.0
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14313.000.patch, HDFS-14313.001.patch, 
> HDFS-14313.002.patch, HDFS-14313.003.patch, HDFS-14313.004.patch, 
> HDFS-14313.005.patch, HDFS-14313.006.patch, HDFS-14313.007.patch, 
> HDFS-14313.008.patch, HDFS-14313.009.patch, HDFS-14313.010.patch, 
> HDFS-14313.011.patch, HDFS-14313.012.patch, HDFS-14313.013.patch, 
> HDFS-14313.014.patch
>
>
> The two existing ways of getting used space, DU and DF, are insufficient.
>  # Running DU across lots of disks is very expensive, and running all of the 
> processes at the same time creates a noticeable IO spike.
>  # Running DF is inaccurate when a disk is shared by multiple datanodes or 
> other servers.
>  Getting the HDFS used space from the in-memory 
> FsDatasetImpl#volumeMap#ReplicaInfo objects is very cheap and accurate. 
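To make the idea concrete, here is a minimal, self-contained sketch of computing used space from an in-memory replica map (hypothetical stand-in names, not the actual FsDatasetImpl code):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Stand-in for the HDFS ReplicaInfo class: only the on-disk length matters here.
class ReplicaInfo {
  private final long numBytes;
  ReplicaInfo(long numBytes) { this.numBytes = numBytes; }
  long getNumBytes() { return numBytes; }
}

public class InMemoryDfsUsed {
  // Stand-in for FsDatasetImpl#volumeMap: block id -> replica metadata.
  private final Map<Long, ReplicaInfo> volumeMap = new ConcurrentHashMap<>();

  // Sum replica lengths already tracked in memory instead of shelling
  // out to du/df; no disk IO, no process spike.
  long getDfsUsed() {
    long used = 0;
    for (ReplicaInfo r : volumeMap.values()) {
      used += r.getNumBytes();
    }
    return used;
  }

  public static void main(String[] args) {
    InMemoryDfsUsed ds = new InMemoryDfsUsed();
    ds.volumeMap.put(1L, new ReplicaInfo(128L << 20)); // 128 MB replica
    ds.volumeMap.put(2L, new ReplicaInfo(64L << 20));  // 64 MB replica
    System.out.println("used bytes = " + ds.getDfsUsed()); // 201326592
  }
}
{code}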



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=290229&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290229
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 07/Aug/19 05:33
Start Date: 07/Aug/19 05:33
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #1146: 
HDDS-1366. Add ability in Recon to track the number of small files in an Ozone 
Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r311368897
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java
 ##
 @@ -0,0 +1,241 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.tasks;
+
+import com.google.inject.Inject;
+import org.apache.commons.lang3.tuple.ImmutablePair;
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.utils.db.Table;
+import org.apache.hadoop.utils.db.TableIterator;
+import org.hadoop.ozone.recon.schema.tables.daos.FileCountBySizeDao;
+import org.hadoop.ozone.recon.schema.tables.pojos.FileCountBySize;
+import org.jooq.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.List;
+
+/**
+ * Class to iterate over the OM DB and store the counts of existing/new
+ * files binned into ranges (1KB, 2Kb..,4MB,.., 1TB,..1PB) to the Recon
+ * fileSize DB.
+ */
+public class FileSizeCountTask extends ReconDBUpdateTask {
+  private static final Logger LOG =
+      LoggerFactory.getLogger(FileSizeCountTask.class);
+
+  private int maxBinSize = -1;
+  private long maxFileSizeUpperBound = 1125899906842624L; // 1 PB
+  private long[] upperBoundCount;
+  private long oneKb = 1024L;
+  private Collection<String> tables = new ArrayList<>();
+  private FileCountBySizeDao fileCountBySizeDao;
+
+  @Inject
+  public FileSizeCountTask(OMMetadataManager omMetadataManager,
+      Configuration sqlConfiguration) {
+    super("FileSizeCountTask");
+    try {
+      tables.add(omMetadataManager.getKeyTable().getName());
+      fileCountBySizeDao = new FileCountBySizeDao(sqlConfiguration);
+    } catch (Exception e) {
+      LOG.error("Unable to fetch Key Table updates ", e);
+    }
+    upperBoundCount = new long[getMaxBinSize()];
+  }
+
+  protected long getOneKB() {
+    return oneKb;
+  }
+
+  protected long getMaxFileSizeUpperBound() {
+    return maxFileSizeUpperBound;
+  }
+
+  protected int getMaxBinSize() {
+    if (maxBinSize == -1) {
+      // extra bin to add files > 1PB.
+      maxBinSize = calculateBinIndex(maxFileSizeUpperBound) + 1;
+    }
+    return maxBinSize;
+  }
+
+  /**
+   * Read the Keys from OM snapshot DB and calculate the upper bound of
+   * File Size it belongs to.
+   *
+   * @param omMetadataManager OM Metadata instance.
+   * @return Pair
+   */
+  @Override
+  public Pair<String, Boolean> reprocess(OMMetadataManager omMetadataManager) {
+    LOG.info("Starting a 'reprocess' run of FileSizeCountTask.");
+    Table<String, OmKeyInfo> omKeyInfoTable = omMetadataManager.getKeyTable();
+    try (TableIterator<String, ? extends Table.KeyValue<String, OmKeyInfo>>
+        keyIter = omKeyInfoTable.iterator()) {
+      while (keyIter.hasNext()) {
+        Table.KeyValue<String, OmKeyInfo> kv = keyIter.next();
+        countFileSize(kv.getValue());
+      }
+    } catch (IOException ioEx) {
+      LOG.error("Unable to populate File Size Count in Recon DB. ", ioEx);
+      return new ImmutablePair<>(getTaskName(), false);
+    }
+    populateFileCountBySizeDB();
+
+    LOG.info("Completed a 'reprocess' run of FileSizeCountTask.");
+    return new ImmutablePair<>(getTaskName(), true);
+  }
+
+  @Override
+  protected Collection<String> getTaskTables() {
+    return tables;
+  }
+
+  void updateCountFromDB() {
+    // Read - Write operations to DB are in ascending order
+    // of file size upper bounds.
+    List<FileCountBySize> resultSet = fileCountBySizeDao.findAll();
+    int index = 0;
+    if 

[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=290225&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290225
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 07/Aug/19 05:33
Start Date: 07/Aug/19 05:33
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #1146: 
HDDS-1366. Add ability in Recon to track the number of small files in an Ozone 
Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r311368484
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java
 ##
 @@ -0,0 +1,241 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.tasks;
+
+import com.google.inject.Inject;
+import org.apache.commons.lang3.tuple.ImmutablePair;
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.utils.db.Table;
+import org.apache.hadoop.utils.db.TableIterator;
+import org.hadoop.ozone.recon.schema.tables.daos.FileCountBySizeDao;
+import org.hadoop.ozone.recon.schema.tables.pojos.FileCountBySize;
+import org.jooq.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.List;
+
+/**
+ * Class to iterate over the OM DB and store the counts of existing/new
+ * files binned into ranges (1KB, 2Kb..,4MB,.., 1TB,..1PB) to the Recon
+ * fileSize DB.
+ */
+public class FileSizeCountTask extends ReconDBUpdateTask {
+  private static final Logger LOG =
+      LoggerFactory.getLogger(FileSizeCountTask.class);
+
+  private int maxBinSize = -1;
+  private long maxFileSizeUpperBound = 1125899906842624L; // 1 PB
+  private long[] upperBoundCount;
+  private long oneKb = 1024L;
+  private Collection<String> tables = new ArrayList<>();
+  private FileCountBySizeDao fileCountBySizeDao;
+
+  @Inject
+  public FileSizeCountTask(OMMetadataManager omMetadataManager,
+      Configuration sqlConfiguration) {
+    super("FileSizeCountTask");
+    try {
+      tables.add(omMetadataManager.getKeyTable().getName());
+      fileCountBySizeDao = new FileCountBySizeDao(sqlConfiguration);
+    } catch (Exception e) {
+      LOG.error("Unable to fetch Key Table updates ", e);
+    }
+    upperBoundCount = new long[getMaxBinSize()];
+  }
+
+  protected long getOneKB() {
+    return oneKb;
+  }
+
+  protected long getMaxFileSizeUpperBound() {
+    return maxFileSizeUpperBound;
+  }
+
+  protected int getMaxBinSize() {
+    if (maxBinSize == -1) {
+      // extra bin to add files > 1PB.
+      maxBinSize = calculateBinIndex(maxFileSizeUpperBound) + 1;
+    }
+    return maxBinSize;
+  }
+
+  /**
+   * Read the Keys from OM snapshot DB and calculate the upper bound of
+   * File Size it belongs to.
+   *
+   * @param omMetadataManager OM Metadata instance.
+   * @return Pair
+   */
+  @Override
+  public Pair<String, Boolean> reprocess(OMMetadataManager omMetadataManager) {
+    LOG.info("Starting a 'reprocess' run of FileSizeCountTask.");
+    Table<String, OmKeyInfo> omKeyInfoTable = omMetadataManager.getKeyTable();
+    try (TableIterator<String, ? extends Table.KeyValue<String, OmKeyInfo>>
+        keyIter = omKeyInfoTable.iterator()) {
+      while (keyIter.hasNext()) {
+        Table.KeyValue<String, OmKeyInfo> kv = keyIter.next();
+        countFileSize(kv.getValue());
+      }
+    } catch (IOException ioEx) {
+      LOG.error("Unable to populate File Size Count in Recon DB. ", ioEx);
+      return new ImmutablePair<>(getTaskName(), false);
+    }
+    populateFileCountBySizeDB();
+
+    LOG.info("Completed a 'reprocess' run of FileSizeCountTask.");
+    return new ImmutablePair<>(getTaskName(), true);
+  }
+
+  @Override
+  protected Collection<String> getTaskTables() {
+    return tables;
+  }
+
+  void updateCountFromDB() {
+    // Read - Write operations to DB are in ascending order
+    // of file size upper bounds.
+    List<FileCountBySize> resultSet = fileCountBySizeDao.findAll();
+    int index = 0;
+    if 

[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=290228&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290228
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 07/Aug/19 05:33
Start Date: 07/Aug/19 05:33
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #1146: 
HDDS-1366. Add ability in Recon to track the number of small files in an Ozone 
Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r311374276
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/tasks/TestFileSizeCountTask.java
 ##
 @@ -0,0 +1,129 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.tasks;
+
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.utils.db.TypedTable;
+import org.junit.Test;
+
+import org.junit.runner.RunWith;
+import org.powermock.core.classloader.annotations.PowerMockIgnore;
+import org.powermock.core.classloader.annotations.PrepareForTest;
+import org.powermock.modules.junit4.PowerMockRunner;
+
+import java.io.IOException;
+
+import static org.junit.Assert.assertEquals;
+
+import static org.mockito.ArgumentMatchers.anyLong;
+import static org.mockito.BDDMockito.given;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.times;
+import static org.powermock.api.mockito.PowerMockito.mock;
+import static org.powermock.api.mockito.PowerMockito.when;
+
+/**
+ * Unit test for Container Key mapper task.
+ */
+@RunWith(PowerMockRunner.class)
+@PowerMockIgnore({"javax.management.*", "javax.net.ssl.*"})
+@PrepareForTest(OmKeyInfo.class)
+
+public class TestFileSizeCountTask {
+  @Test
+  public void testCalculateBinIndex() {
+    FileSizeCountTask fileSizeCountTask = mock(FileSizeCountTask.class);
+
+    when(fileSizeCountTask.getMaxFileSizeUpperBound()).
+        thenReturn(1125899906842624L);  // 1 PB
+    when(fileSizeCountTask.getOneKB()).thenReturn(1024L);
+    when(fileSizeCountTask.getMaxBinSize()).thenReturn(42);
+    when(fileSizeCountTask.calculateBinIndex(anyLong())).thenCallRealMethod();
+
+    long fileSize = 1024L;  // 1 KB
+    int binIndex = fileSizeCountTask.calculateBinIndex(fileSize);
+    assertEquals(1, binIndex);
+
+    fileSize = 1023L;
+    binIndex = fileSizeCountTask.calculateBinIndex(fileSize);
+    assertEquals(0, binIndex);
+
+    fileSize = 562949953421312L;  // 512 TB
+    binIndex = fileSizeCountTask.calculateBinIndex(fileSize);
+    assertEquals(40, binIndex);
+
+    fileSize = 562949953421313L;  // (512 TB + 1B)
+    binIndex = fileSizeCountTask.calculateBinIndex(fileSize);
+    assertEquals(40, binIndex);
+
+    fileSize = 562949953421311L;  // (512 TB - 1B)
+    binIndex = fileSizeCountTask.calculateBinIndex(fileSize);
+    assertEquals(39, binIndex);
+
+    fileSize = 1125899906842624L;  // 1 PB - last (extra) bin
+    binIndex = fileSizeCountTask.calculateBinIndex(fileSize);
+    assertEquals(41, binIndex);
+
+    fileSize = 10L;
+    binIndex = fileSizeCountTask.calculateBinIndex(fileSize);
+    assertEquals(7, binIndex);
+
+    fileSize = 1125899906842623L;
 
 Review comment:
   I suppose this is 1 PB - 1B. Can you add a comment for this one and the 
previous one as well?
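For reference, the three constants in question, with the arithmetic the reviewer is asking to document (a sketch of the suggested comments, not the actual patch):

{code:java}
public class FileSizeConstants {
  static final long ONE_PB = 1125899906842624L;           // 2^50 bytes = 1 PB
  static final long ONE_PB_MINUS_ONE = 1125899906842623L; // 1 PB - 1 B
  static final long HALF_PB = 562949953421312L;           // 2^49 bytes = 512 TB
}
{code}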
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290228)
Time Spent: 8h 10m  (was: 8h)

> Add ability in Recon to track the number of small files in an Ozone cluster.
> 
>
> Key: HDDS-1366
> URL: https://issues.apache.org/jira/browse/HDDS-1366

[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=290224&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290224
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 07/Aug/19 05:33
Start Date: 07/Aug/19 05:33
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #1146: 
HDDS-1366. Add ability in Recon to track the number of small files in an Ozone 
Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r311366032
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon-codegen/src/main/java/org/hadoop/ozone/recon/schema/UtilizationSchemaDefinition.java
 ##
 @@ -65,5 +69,12 @@ void createClusterGrowthTable(Connection conn) {
         .execute();
   }
 
-
+  void createFileSizeCount(Connection conn) {
+    DSL.using(conn).createTableIfNotExists(FILE_COUNT_BY_SIZE_TABLE_NAME)
+        .column("file_size_kb", SQLDataType.BIGINT)
 
 Review comment:
   Aren't we storing file size in bytes? Can we change this to just file_size?
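Under the assumption that the column does store bytes, the suggested rename would look roughly like this (a sketch only; the count column and constraint name are assumptions, not the actual patch):

{code:java}
import java.sql.Connection;
import org.jooq.impl.DSL;
import org.jooq.impl.SQLDataType;

class FileCountTableSketch {
  static final String FILE_COUNT_BY_SIZE_TABLE_NAME = "FILE_COUNT_BY_SIZE";

  static void createFileSizeCount(Connection conn) {
    // One row per bin: the bin's upper bound in bytes and its file count.
    DSL.using(conn).createTableIfNotExists(FILE_COUNT_BY_SIZE_TABLE_NAME)
        .column("file_size", SQLDataType.BIGINT)  // bytes, not KB
        .column("count", SQLDataType.BIGINT)
        .constraint(DSL.constraint("pk_file_size").primaryKey("file_size"))
        .execute();
  }
}
{code}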
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290224)

> Add ability in Recon to track the number of small files in an Ozone cluster.
> 
>
> Key: HDDS-1366
> URL: https://issues.apache.org/jira/browse/HDDS-1366
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Shweta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h 40m
>  Remaining Estimate: 0h
>
> Ozone users may want to track the number of small files they have in their 
> cluster and where they are present. Recon can help them with the information 
> by iterating the OM Key Table and dividing the keys into different buckets 
> based on the data size. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=290226&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290226
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 07/Aug/19 05:33
Start Date: 07/Aug/19 05:33
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #1146: 
HDDS-1366. Add ability in Recon to track the number of small files in an Ozone 
Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r311369007
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java
 ##
 @@ -0,0 +1,241 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.tasks;
+
+import com.google.inject.Inject;
+import org.apache.commons.lang3.tuple.ImmutablePair;
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.utils.db.Table;
+import org.apache.hadoop.utils.db.TableIterator;
+import org.hadoop.ozone.recon.schema.tables.daos.FileCountBySizeDao;
+import org.hadoop.ozone.recon.schema.tables.pojos.FileCountBySize;
+import org.jooq.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.List;
+
+/**
+ * Class to iterate over the OM DB and store the counts of existing/new
+ * files binned into ranges (1KB, 2Kb..,4MB,.., 1TB,..1PB) to the Recon
+ * fileSize DB.
+ */
+public class FileSizeCountTask extends ReconDBUpdateTask {
+  private static final Logger LOG =
+      LoggerFactory.getLogger(FileSizeCountTask.class);
+
+  private int maxBinSize = -1;
+  private long maxFileSizeUpperBound = 1125899906842624L; // 1 PB
+  private long[] upperBoundCount;
+  private long oneKb = 1024L;
+  private Collection<String> tables = new ArrayList<>();
+  private FileCountBySizeDao fileCountBySizeDao;
+
+  @Inject
+  public FileSizeCountTask(OMMetadataManager omMetadataManager,
+      Configuration sqlConfiguration) {
+    super("FileSizeCountTask");
+    try {
+      tables.add(omMetadataManager.getKeyTable().getName());
+      fileCountBySizeDao = new FileCountBySizeDao(sqlConfiguration);
+    } catch (Exception e) {
+      LOG.error("Unable to fetch Key Table updates ", e);
+    }
+    upperBoundCount = new long[getMaxBinSize()];
+  }
+
+  protected long getOneKB() {
+    return oneKb;
+  }
+
+  protected long getMaxFileSizeUpperBound() {
+    return maxFileSizeUpperBound;
+  }
+
+  protected int getMaxBinSize() {
+    if (maxBinSize == -1) {
+      // extra bin to add files > 1PB.
+      maxBinSize = calculateBinIndex(maxFileSizeUpperBound) + 1;
+    }
+    return maxBinSize;
+  }
+
+  /**
+   * Read the Keys from OM snapshot DB and calculate the upper bound of
+   * File Size it belongs to.
+   *
+   * @param omMetadataManager OM Metadata instance.
+   * @return Pair
+   */
+  @Override
+  public Pair<String, Boolean> reprocess(OMMetadataManager omMetadataManager) {
+    LOG.info("Starting a 'reprocess' run of FileSizeCountTask.");
+    Table<String, OmKeyInfo> omKeyInfoTable = omMetadataManager.getKeyTable();
+    try (TableIterator<String, ? extends Table.KeyValue<String, OmKeyInfo>>
+        keyIter = omKeyInfoTable.iterator()) {
+      while (keyIter.hasNext()) {
+        Table.KeyValue<String, OmKeyInfo> kv = keyIter.next();
+        countFileSize(kv.getValue());
+      }
+    } catch (IOException ioEx) {
+      LOG.error("Unable to populate File Size Count in Recon DB. ", ioEx);
+      return new ImmutablePair<>(getTaskName(), false);
+    }
+    populateFileCountBySizeDB();
+
+    LOG.info("Completed a 'reprocess' run of FileSizeCountTask.");
+    return new ImmutablePair<>(getTaskName(), true);
+  }
+
+  @Override
+  protected Collection<String> getTaskTables() {
+    return tables;
+  }
+
+  void updateCountFromDB() {
+    // Read - Write operations to DB are in ascending order
+    // of file size upper bounds.
+    List<FileCountBySize> resultSet = fileCountBySizeDao.findAll();
+    int index = 0;
+    if 

[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=290223&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290223
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 07/Aug/19 05:33
Start Date: 07/Aug/19 05:33
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #1146: 
HDDS-1366. Add ability in Recon to track the number of small files in an Ozone 
Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r311369498
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java
 ##
 @@ -0,0 +1,241 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.tasks;
+
+import com.google.inject.Inject;
+import org.apache.commons.lang3.tuple.ImmutablePair;
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.utils.db.Table;
+import org.apache.hadoop.utils.db.TableIterator;
+import org.hadoop.ozone.recon.schema.tables.daos.FileCountBySizeDao;
+import org.hadoop.ozone.recon.schema.tables.pojos.FileCountBySize;
+import org.jooq.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.List;
+
+/**
+ * Class to iterate over the OM DB and store the counts of existing/new
+ * files binned into ranges (1KB, 2Kb..,4MB,.., 1TB,..1PB) to the Recon
+ * fileSize DB.
+ */
+public class FileSizeCountTask extends ReconDBUpdateTask {
+  private static final Logger LOG =
+      LoggerFactory.getLogger(FileSizeCountTask.class);
+
+  private int maxBinSize = -1;
+  private long maxFileSizeUpperBound = 1125899906842624L; // 1 PB
+  private long[] upperBoundCount;
+  private long oneKb = 1024L;
+  private Collection<String> tables = new ArrayList<>();
+  private FileCountBySizeDao fileCountBySizeDao;
+
+  @Inject
+  public FileSizeCountTask(OMMetadataManager omMetadataManager,
+      Configuration sqlConfiguration) {
+    super("FileSizeCountTask");
+    try {
+      tables.add(omMetadataManager.getKeyTable().getName());
+      fileCountBySizeDao = new FileCountBySizeDao(sqlConfiguration);
+    } catch (Exception e) {
+      LOG.error("Unable to fetch Key Table updates ", e);
+    }
+    upperBoundCount = new long[getMaxBinSize()];
+  }
+
+  protected long getOneKB() {
+    return oneKb;
+  }
+
+  protected long getMaxFileSizeUpperBound() {
+    return maxFileSizeUpperBound;
+  }
+
+  protected int getMaxBinSize() {
+    if (maxBinSize == -1) {
+      // extra bin to add files > 1PB.
+      maxBinSize = calculateBinIndex(maxFileSizeUpperBound) + 1;
+    }
+    return maxBinSize;
+  }
+
+  /**
+   * Read the Keys from OM snapshot DB and calculate the upper bound of
+   * File Size it belongs to.
+   *
+   * @param omMetadataManager OM Metadata instance.
+   * @return Pair
+   */
+  @Override
+  public Pair<String, Boolean> reprocess(OMMetadataManager omMetadataManager) {
+    LOG.info("Starting a 'reprocess' run of FileSizeCountTask.");
+    Table<String, OmKeyInfo> omKeyInfoTable = omMetadataManager.getKeyTable();
+    try (TableIterator<String, ? extends Table.KeyValue<String, OmKeyInfo>>
+        keyIter = omKeyInfoTable.iterator()) {
+      while (keyIter.hasNext()) {
+        Table.KeyValue<String, OmKeyInfo> kv = keyIter.next();
+        countFileSize(kv.getValue());
+      }
+    } catch (IOException ioEx) {
+      LOG.error("Unable to populate File Size Count in Recon DB. ", ioEx);
+      return new ImmutablePair<>(getTaskName(), false);
+    }
+    populateFileCountBySizeDB();
+
+    LOG.info("Completed a 'reprocess' run of FileSizeCountTask.");
+    return new ImmutablePair<>(getTaskName(), true);
+  }
+
+  @Override
+  protected Collection<String> getTaskTables() {
+    return tables;
+  }
+
+  void updateCountFromDB() {
+    // Read - Write operations to DB are in ascending order
+    // of file size upper bounds.
+    List<FileCountBySize> resultSet = fileCountBySizeDao.findAll();
+    int index = 0;
+    if 

[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=290221&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290221
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 07/Aug/19 05:33
Start Date: 07/Aug/19 05:33
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #1146: 
HDDS-1366. Add ability in Recon to track the number of small files in an Ozone 
Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r311371872
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestUtilizationService.java
 ##
 @@ -0,0 +1,108 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.api;
+
+import org.apache.hadoop.ozone.recon.ReconUtils;
+import org.hadoop.ozone.recon.schema.tables.daos.FileCountBySizeDao;
+import org.hadoop.ozone.recon.schema.tables.pojos.FileCountBySize;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.mockito.Mock;
+import org.powermock.core.classloader.annotations.PowerMockIgnore;
+import org.powermock.core.classloader.annotations.PrepareForTest;
+import org.powermock.modules.junit4.PowerMockRunner;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import static org.junit.Assert.assertEquals;
+import static org.powermock.api.mockito.PowerMockito.mock;
+import static org.powermock.api.mockito.PowerMockito.when;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.verify;
+
+/**
+ * Test for File size count service.
+ */
+@RunWith(PowerMockRunner.class)
+@PowerMockIgnore({"javax.management.*", "javax.net.ssl.*"})
+@PrepareForTest(ReconUtils.class)
+public class TestUtilizationService {
+  private UtilizationService utilizationService;
+  @Mock private FileCountBySizeDao fileCountBySizeDao;
+  private List<FileCountBySize> resultList = new ArrayList<>();
+  private int oneKb = 1024;
+  private int maxBinSize = 41;
+
+  public void setUpResultList() {
+    for (int i = 0; i < 41; i++) {
+      resultList.add(new FileCountBySize((long) Math.pow(2, (10 + i)),
+          (long) i));
+    }
+  }
+
+  @Test
+  public void testGetFileCounts() throws IOException {
+    setUpResultList();
+
+    utilizationService = mock(UtilizationService.class);
+    when(utilizationService.getFileCounts()).thenCallRealMethod();
+    when(utilizationService.getDao()).thenReturn(fileCountBySizeDao);
+    when(fileCountBySizeDao.findAll()).thenReturn(resultList);
+
+    utilizationService.getFileCounts();
+    verify(utilizationService, times(1)).getFileCounts();
+    verify(fileCountBySizeDao, times(1)).findAll();
+
+    assertEquals(41, resultList.size());
+    long fileSize = 4096L;
+    int index = findIndex(fileSize);
+    long count = resultList.get(index).getCount();
+    assertEquals(index, count);
+
+    fileSize = 1125899906842624L;
+    index = findIndex(fileSize);
+    if (index == Integer.MIN_VALUE) {
 
 Review comment:
   This is not required
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290221)
Time Spent: 7.5h  (was: 7h 20m)

> Add ability in Recon to track the number of small files in an Ozone cluster.
> 
>
> Key: HDDS-1366
> URL: https://issues.apache.org/jira/browse/HDDS-1366
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Shweta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7.5h
>  Remaining Estimate: 0h
>
> Ozone users may want to track the number of small files they have in their 
> cluster and where they are present. 
[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=290222&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290222
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 07/Aug/19 05:33
Start Date: 07/Aug/19 05:33
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #1146: 
HDDS-1366. Add ability in Recon to track the number of small files in an Ozone 
Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r311372538
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/test/java/org/apache/hadoop/ozone/recon/api/TestUtilizationService.java
 ##
 @@ -0,0 +1,108 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.api;
+
+import org.apache.hadoop.ozone.recon.ReconUtils;
+import org.hadoop.ozone.recon.schema.tables.daos.FileCountBySizeDao;
+import org.hadoop.ozone.recon.schema.tables.pojos.FileCountBySize;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.mockito.Mock;
+import org.powermock.core.classloader.annotations.PowerMockIgnore;
+import org.powermock.core.classloader.annotations.PrepareForTest;
+import org.powermock.modules.junit4.PowerMockRunner;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+
+import static org.junit.Assert.assertEquals;
+import static org.powermock.api.mockito.PowerMockito.mock;
+import static org.powermock.api.mockito.PowerMockito.when;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.verify;
+
+/**
+ * Test for File size count service.
+ */
+@RunWith(PowerMockRunner.class)
+@PowerMockIgnore({"javax.management.*", "javax.net.ssl.*"})
+@PrepareForTest(ReconUtils.class)
+public class TestUtilizationService {
+  private UtilizationService utilizationService;
+  @Mock private FileCountBySizeDao fileCountBySizeDao;
+  private List<FileCountBySize> resultList = new ArrayList<>();
+  private int oneKb = 1024;
+  private int maxBinSize = 41;
+
+  public void setUpResultList() {
+    for (int i = 0; i < 41; i++) {
+      resultList.add(new FileCountBySize((long) Math.pow(2, (10 + i)),
+          (long) i));
+    }
+  }
+
+  @Test
+  public void testGetFileCounts() throws IOException {
+    setUpResultList();
+
+    utilizationService = mock(UtilizationService.class);
+    when(utilizationService.getFileCounts()).thenCallRealMethod();
+    when(utilizationService.getDao()).thenReturn(fileCountBySizeDao);
+    when(fileCountBySizeDao.findAll()).thenReturn(resultList);
+
+    utilizationService.getFileCounts();
+    verify(utilizationService, times(1)).getFileCounts();
+    verify(fileCountBySizeDao, times(1)).findAll();
+
+    assertEquals(41, resultList.size());
+    long fileSize = 4096L;
+    int index = findIndex(fileSize);
+    long count = resultList.get(index).getCount();
+    assertEquals(index, count);
+
+    fileSize = 1125899906842624L;
+    index = findIndex(fileSize);
+    if (index == Integer.MIN_VALUE) {
+      throw new IOException("File Size larger than permissible file size");
+    }
+
+    fileSize = 1025L;
+    index = findIndex(fileSize);
+    count = resultList.get(index).getCount();
+    assertEquals(index, count);
+
+    fileSize = 25L;
+    index = findIndex(fileSize);
+    count = resultList.get(index).getCount();
+    assertEquals(index, count);
+  }
+
+  public int findIndex(long dataSize) {
+    int logValue = (int) Math.ceil(Math.log(dataSize) / Math.log(2));
+    if (logValue < 10) {
+      return 0;
+    } else {
+      int index = logValue - 10;
+      if (index > maxBinSize) {
+        return Integer.MIN_VALUE;
 
 Review comment:
   This needs to be updated.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290222)
Time Spent: 7h 40m  (was: 7.5h)

> Add ability in Recon to track the number of small files in an Ozone cluster.

[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=290227&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290227
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 07/Aug/19 05:33
Start Date: 07/Aug/19 05:33
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #1146: 
HDDS-1366. Add ability in Recon to track the number of small files in an Ozone 
Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r311371119
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java
 ##
 @@ -0,0 +1,241 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.tasks;
+
+import com.google.inject.Inject;
+import org.apache.commons.lang3.tuple.ImmutablePair;
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.utils.db.Table;
+import org.apache.hadoop.utils.db.TableIterator;
+import org.hadoop.ozone.recon.schema.tables.daos.FileCountBySizeDao;
+import org.hadoop.ozone.recon.schema.tables.pojos.FileCountBySize;
+import org.jooq.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.List;
+
+/**
+ * Class to iterate over the OM DB and store the counts of existing/new
+ * files binned into ranges (1KB, 2Kb..,4MB,.., 1TB,..1PB) to the Recon
+ * fileSize DB.
+ */
+public class FileSizeCountTask extends ReconDBUpdateTask {
+  private static final Logger LOG =
+      LoggerFactory.getLogger(FileSizeCountTask.class);
+
+  private int maxBinSize = -1;
+  private long maxFileSizeUpperBound = 1125899906842624L; // 1 PB
+  private long[] upperBoundCount;
+  private long oneKb = 1024L;
+  private Collection<String> tables = new ArrayList<>();
+  private FileCountBySizeDao fileCountBySizeDao;
+
+  @Inject
+  public FileSizeCountTask(OMMetadataManager omMetadataManager,
+      Configuration sqlConfiguration) {
+    super("FileSizeCountTask");
+    try {
+      tables.add(omMetadataManager.getKeyTable().getName());
+      fileCountBySizeDao = new FileCountBySizeDao(sqlConfiguration);
+    } catch (Exception e) {
+      LOG.error("Unable to fetch Key Table updates ", e);
+    }
+    upperBoundCount = new long[getMaxBinSize()];
+  }
+
+  protected long getOneKB() {
+    return oneKb;
+  }
+
+  protected long getMaxFileSizeUpperBound() {
+    return maxFileSizeUpperBound;
+  }
+
+  protected int getMaxBinSize() {
+    if (maxBinSize == -1) {
+      // extra bin to add files > 1PB.
+      maxBinSize = calculateBinIndex(maxFileSizeUpperBound) + 1;
+    }
+    return maxBinSize;
+  }
+
+  /**
+   * Read the Keys from OM snapshot DB and calculate the upper bound of
+   * File Size it belongs to.
+   *
+   * @param omMetadataManager OM Metadata instance.
+   * @return Pair
+   */
+  @Override
+  public Pair<String, Boolean> reprocess(OMMetadataManager omMetadataManager) {
+    LOG.info("Starting a 'reprocess' run of FileSizeCountTask.");
+    Table<String, OmKeyInfo> omKeyInfoTable = omMetadataManager.getKeyTable();
+    try (TableIterator<String, ? extends Table.KeyValue<String, OmKeyInfo>>
+        keyIter = omKeyInfoTable.iterator()) {
+      while (keyIter.hasNext()) {
+        Table.KeyValue<String, OmKeyInfo> kv = keyIter.next();
+        countFileSize(kv.getValue());
+      }
+    } catch (IOException ioEx) {
+      LOG.error("Unable to populate File Size Count in Recon DB. ", ioEx);
+      return new ImmutablePair<>(getTaskName(), false);
+    }
+    populateFileCountBySizeDB();
+
+    LOG.info("Completed a 'reprocess' run of FileSizeCountTask.");
+    return new ImmutablePair<>(getTaskName(), true);
+  }
+
+  @Override
+  protected Collection<String> getTaskTables() {
+    return tables;
+  }
+
+  void updateCountFromDB() {
+    // Read - Write operations to DB are in ascending order
+    // of file size upper bounds.
+    List<FileCountBySize> resultSet = fileCountBySizeDao.findAll();
+    int index = 0;
+    if 

[jira] [Commented] (HDDS-1921) TestOzoneManagerDoubleBufferWithOMResponse is flaky

2019-08-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901687#comment-16901687
 ] 

Hudson commented on HDDS-1921:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17056 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17056/])
HDDS-1921. TestOzoneManagerDoubleBufferWithOMResponse is flaky (#1238) (bharat: 
rev 9cd211ac86bb1124bdee572fddb6f86655b19b73)
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerDoubleBufferWithOMResponse.java


> TestOzoneManagerDoubleBufferWithOMResponse is flaky
> ---
>
> Key: HDDS-1921
> URL: https://issues.apache.org/jira/browse/HDDS-1921
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> {noformat:title=https://ci.anzix.net/job/ozone/17588/testReport/org.apache.hadoop.ozone.om.ratis/TestOzoneManagerDoubleBufferWithOMResponse/testDoubleBuffer/}
> java.lang.AssertionError: expected:<11> but was:<9>
> ...
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:362)
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:104)
> {noformat}
> {noformat:title=https://ci.anzix.net/job/ozone/17587/testReport/org.apache.hadoop.ozone.om.ratis/TestOzoneManagerDoubleBufferWithOMResponse/unit___testDoubleBuffer/}
> java.lang.AssertionError: expected:<11> but was:<3>
> ...
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:362)
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:104)
> {noformat}
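A common way to deflake this kind of count assertion (shown as an illustrative sketch, not necessarily the actual HDDS-1921 patch) is to poll for the expected state instead of asserting immediately after submitting work:

{code:java}
import java.util.concurrent.TimeoutException;
import org.apache.hadoop.test.GenericTestUtils;

public class WaitForFlushExample {
  // Hypothetical stand-in for the double buffer's flushed-transaction count.
  private static volatile long flushedTransactionCount = 0;

  static void assertEventuallyFlushed(long expected)
      throws TimeoutException, InterruptedException {
    // Poll every 100 ms; fail only if the count has not reached the
    // expected value within 30 s, instead of racing the flush thread.
    GenericTestUtils.waitFor(
        () -> flushedTransactionCount >= expected, 100, 30_000);
  }
}
{code}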



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1900) Remove UpdateBucket handler which supports add/remove Acl

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1900?focusedWorklogId=290211&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290211
 ]

ASF GitHub Bot logged work on HDDS-1900:


Author: ASF GitHub Bot
Created on: 07/Aug/19 04:30
Start Date: 07/Aug/19 04:30
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1219: 
HDDS-1900. Remove UpdateBucket handler which supports add/remove Acl.
URL: https://github.com/apache/hadoop/pull/1219#discussion_r311364611
 
 

 ##
 File path: hadoop-hdds/docs/content/shell/BucketCommands.md
 ##
 @@ -26,7 +26,6 @@ Ozone shell supports the following bucket commands.
   * [delete](#delete)
   * [info](#info)
   * [list](#list)
-  * [update](#update)
 
 Review comment:
   https://issues.apache.org/jira/browse/HDDS-1913
   Will address this and also fixing Bucket and RpcClient API's.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290211)
Time Spent: 2h  (was: 1h 50m)

> Remove UpdateBucket handler which supports add/remove Acl
> -
>
> Key: HDDS-1900
> URL: https://issues.apache.org/jira/browse/HDDS-1900
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> This Jira is to remove the bucket update handler.
> To add or remove an ACL we should use ozone sh bucket addacl / ozone sh 
> bucket removeacl.
>  
> When security is enabled, the old bucket update handler uses 
> setBucketProperty, which checks ACL access for WRITE, whereas adding or 
> removing an ACL should check access for WRITE_ACL.
>  
> If both paths exist, a user who does not have WRITE_ACL can still add or 
> remove ACLs on a bucket.
>  
> This Jira cleans up the old code.
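To illustrate the hole, here is a self-contained sketch with hypothetical names (not the actual BucketManagerImpl code):

{code:java}
import java.util.Set;

public class BucketAclGuard {
  enum ACLType { READ, WRITE, READ_ACL, WRITE_ACL }

  private final Set<ACLType> granted;

  BucketAclGuard(Set<ACLType> granted) {
    this.granted = granted;
  }

  private void checkAccess(ACLType required) {
    if (!granted.contains(required)) {
      throw new SecurityException("missing permission: " + required);
    }
  }

  void addAcl(String acl) {
    // Correct path: mutating ACLs is gated on WRITE_ACL.
    checkAccess(ACLType.WRITE_ACL);
    // ... apply the ACL ...
  }

  void setBucketProperty(String property) {
    // Old update handler: only WRITE is checked, so routing ACL
    // changes through here bypasses the WRITE_ACL check.
    checkAccess(ACLType.WRITE);
    // ... apply the property (possibly including ACLs) ...
  }
}
{code}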



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1913) Fix OzoneBucket and RpcClient APIS for acl

2019-08-06 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1913:
-
Description: 
Fix addAcl/removeAcl in OzoneBucket to use the ACL APIs addAcl/removeAcl 
newly added as part of HDDS-1739.

Remove addBucketAcls and removeBucketAcls from RpcClient. We should use 
addAcl/removeAcl instead.

 

Also address @xiaoyu's comment on the HDDS-1900 jira: should 
BucketManagerImpl#setBucketProperty() now require a different permission 
(WRITE_ACL instead of WRITE)?

  was:
Fix addAcl,removeAcl in OzoneBucket to use newly added acl API's 
addAcl/removeAcl as part of HDDS-1739.

Remove addBucketAcls, removeBucketAcls from RpcClient. We should use 
addAcl/removeAcl.


> Fix OzoneBucket and RpcClient APIS for acl
> --
>
> Key: HDDS-1913
> URL: https://issues.apache.org/jira/browse/HDDS-1913
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> Fix addAcl/removeAcl in OzoneBucket to use the ACL APIs addAcl/removeAcl 
> newly added as part of HDDS-1739.
> Remove addBucketAcls and removeBucketAcls from RpcClient. We should use 
> addAcl/removeAcl instead.
>  
> Also address @xiaoyu's comment on the HDDS-1900 jira: should 
> BucketManagerImpl#setBucketProperty() now require a different permission 
> (WRITE_ACL instead of WRITE)?



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1921) TestOzoneManagerDoubleBufferWithOMResponse is flaky

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1921?focusedWorklogId=290209&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290209
 ]

ASF GitHub Bot logged work on HDDS-1921:


Author: ASF GitHub Bot
Created on: 07/Aug/19 04:25
Start Date: 07/Aug/19 04:25
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1238: HDDS-1921. 
TestOzoneManagerDoubleBufferWithOMResponse is flaky
URL: https://github.com/apache/hadoop/pull/1238#issuecomment-518932392
 
 
   Thank you @adoroszlai for the fix.
   I will commit this to the trunk and 0.4 branches.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290209)
Time Spent: 1h 40m  (was: 1.5h)

> TestOzoneManagerDoubleBufferWithOMResponse is flaky
> ---
>
> Key: HDDS-1921
> URL: https://issues.apache.org/jira/browse/HDDS-1921
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> {noformat:title=https://ci.anzix.net/job/ozone/17588/testReport/org.apache.hadoop.ozone.om.ratis/TestOzoneManagerDoubleBufferWithOMResponse/testDoubleBuffer/}
> java.lang.AssertionError: expected:<11> but was:<9>
> ...
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:362)
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:104)
> {noformat}
> {noformat:title=https://ci.anzix.net/job/ozone/17587/testReport/org.apache.hadoop.ozone.om.ratis/TestOzoneManagerDoubleBufferWithOMResponse/unit___testDoubleBuffer/}
> java.lang.AssertionError: expected:<11> but was:<3>
> ...
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:362)
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:104)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1918) hadoop-ozone-tools has integration tests run as unit

2019-08-06 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1918:
-
Fix Version/s: 0.4.1

> hadoop-ozone-tools has integration tests run as unit
> 
>
> Key: HDDS-1918
> URL: https://issues.apache.org/jira/browse/HDDS-1918
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build, test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> HDDS-1735 created separate test runner scripts for unit and integration tests.
> Problem: {{hadoop-ozone-tools}} tests are currently run as part of the unit 
> tests, but most of them start a {{MiniOzoneCluster}}, which is defined in 
> {{hadoop-ozone-integration-test}}.  Thus I think these tests are really 
> integration tests, and should be run by {{integration.sh}} instead.  There 
> are currently only 3 real unit tests:
> {noformat}
> hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/audit/parser/TestAuditParser.java
> hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/freon/TestProgressBar.java
> hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/genconf/TestGenerateOzoneRequiredConfigurations.java
> {noformat}
> {{hadoop-ozone-tools}} tests take ~6 minutes.
> Possible solutions in order of increasing complexity:
> # Run {{hadoop-ozone-tools}} tests in {{integration.sh}} instead of 
> {{unit.sh}} (This is similar to {{hadoop-ozone-filesystem}}, which is already 
> run by {{integration.sh}} and has 2 real unit tests.)
> # Move all integration test classes to the {{hadoop-ozone-integration-test}} 
> module, and make it depend on {{hadoop-ozone-tools}} and 
> {{hadoop-ozone-filesystem}} instead of the other way around.
> # Rename integration test classes to {{\*IT.java}} or {{IT\*.java}}, and add 
> filters for Surefire runs (a POM sketch follows below).
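
For option 3, the Surefire filtering could look roughly like the POM fragment 
below; the patterns mirror the proposed {{\*IT.java}} / {{IT\*.java}} naming, 
and nothing here is the committed build change:

{code:xml}
<!-- Sketch only: exclude integration test classes from the default
     Surefire (unit) run; an integration profile would include the same
     patterns instead. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <excludes>
      <exclude>**/*IT.java</exclude>
      <exclude>**/IT*.java</exclude>
    </excludes>
  </configuration>
</plugin>
{code}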



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1921) TestOzoneManagerDoubleBufferWithOMResponse is flaky

2019-08-06 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1921:
-
   Resolution: Fixed
Fix Version/s: 0.5.0
   0.4.1
   Status: Resolved  (was: Patch Available)

> TestOzoneManagerDoubleBufferWithOMResponse is flaky
> ---
>
> Key: HDDS-1921
> URL: https://issues.apache.org/jira/browse/HDDS-1921
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> {noformat:title=https://ci.anzix.net/job/ozone/17588/testReport/org.apache.hadoop.ozone.om.ratis/TestOzoneManagerDoubleBufferWithOMResponse/testDoubleBuffer/}
> java.lang.AssertionError: expected:<11> but was:<9>
> ...
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:362)
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:104)
> {noformat}
> {noformat:title=https://ci.anzix.net/job/ozone/17587/testReport/org.apache.hadoop.ozone.om.ratis/TestOzoneManagerDoubleBufferWithOMResponse/unit___testDoubleBuffer/}
> java.lang.AssertionError: expected:<11> but was:<3>
> ...
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:362)
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:104)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14704) RBF: NnId should not be null in NamenodeHeartbeatService

2019-08-06 Thread xuzq (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuzq updated HDFS-14704:

Attachment: HDFS-14704-trunk-002.patch

> RBF: NnId should not be null in NamenodeHeartbeatService
> 
>
> Key: HDFS-14704
> URL: https://issues.apache.org/jira/browse/HDFS-14704
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14704-trunk-001.patch, HDFS-14704-trunk-002.patch
>
>
> NnId should not be null in NamenodeHeartbeatService.
> If NnId is null, it will also print the error message like:
> {code:java}
> 2019-08-06 10:38:07,455 ERROR router.NamenodeHeartbeatService 
> (NamenodeHeartbeatService.java:updateState(229)) - Unhandled exception 
> updating NN registration for ns1:null
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos$NamenodeMembershipRecordProto$Builder.setServiceAddress(HdfsServerFederationProtos.java:3831)
> at 
> org.apache.hadoop.hdfs.server.federation.store.records.impl.pb.MembershipStatePBImpl.setServiceAddress(MembershipStatePBImpl.java:119)
> at 
> org.apache.hadoop.hdfs.server.federation.store.records.MembershipState.newInstance(MembershipState.java:108)
> at 
> org.apache.hadoop.hdfs.server.federation.resolver.MembershipNamenodeResolver.registerNamenode(MembershipNamenodeResolver.java:267)
> at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.updateState(NamenodeHeartbeatService.java:223)
> at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.periodicInvoke(NamenodeHeartbeatService.java:159)
> at 
> org.apache.hadoop.hdfs.server.federation.router.PeriodicService$1.run(PeriodicService.java:178)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748){code}
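
Below is a minimal sketch of the kind of null guard the description argues 
for; the accessor, field names, and log message are hypothetical, and this is 
not the attached patch:

{code:java}
// Hypothetical guard in NamenodeHeartbeatService#updateState(): skip the
// registration instead of letting protobuf's setServiceAddress(null) throw
// a NullPointerException deep inside the membership record builder.
String serviceAddress = report.getServiceAddress();  // assumed accessor
if (serviceAddress == null || serviceAddress.isEmpty()) {
  LOG.error("Skipping NN registration for {}:{}: missing service address",
      nameserviceId, namenodeId);
  return;
}
{code}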



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14674) [SBN read] Got an unexpected txid when tail editlog

2019-08-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901685#comment-16901685
 ] 

Hadoop QA commented on HDFS-14674:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
55s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 25s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}168m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestLargeBlockReport |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14674 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976880/HDFS-14674-005.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5b56c7345337 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 38e6968 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27425/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27425/testReport/ |
| Max. process+thread count | 2873 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27425/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org

[jira] [Work logged] (HDDS-1921) TestOzoneManagerDoubleBufferWithOMResponse is flaky

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1921?focusedWorklogId=290207&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290207
 ]

ASF GitHub Bot logged work on HDDS-1921:


Author: ASF GitHub Bot
Created on: 07/Aug/19 04:14
Start Date: 07/Aug/19 04:14
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1238: 
HDDS-1921. TestOzoneManagerDoubleBufferWithOMResponse is flaky
URL: https://github.com/apache/hadoop/pull/1238
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290207)
Time Spent: 1.5h  (was: 1h 20m)

> TestOzoneManagerDoubleBufferWithOMResponse is flaky
> ---
>
> Key: HDDS-1921
> URL: https://issues.apache.org/jira/browse/HDDS-1921
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> {noformat:title=https://ci.anzix.net/job/ozone/17588/testReport/org.apache.hadoop.ozone.om.ratis/TestOzoneManagerDoubleBufferWithOMResponse/testDoubleBuffer/}
> java.lang.AssertionError: expected:<11> but was:<9>
> ...
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:362)
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:104)
> {noformat}
> {noformat:title=https://ci.anzix.net/job/ozone/17587/testReport/org.apache.hadoop.ozone.om.ratis/TestOzoneManagerDoubleBufferWithOMResponse/unit___testDoubleBuffer/}
> java.lang.AssertionError: expected:<11> but was:<3>
> ...
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:362)
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:104)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1921) TestOzoneManagerDoubleBufferWithOMResponse is flaky

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1921?focusedWorklogId=290206&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290206
 ]

ASF GitHub Bot logged work on HDDS-1921:


Author: ASF GitHub Bot
Created on: 07/Aug/19 04:14
Start Date: 07/Aug/19 04:14
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1238: HDDS-1921. 
TestOzoneManagerDoubleBufferWithOMResponse is flaky
URL: https://github.com/apache/hadoop/pull/1238#issuecomment-518932392
 
 
   I will commit this to the trunk and 0.4 branch.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290206)
Time Spent: 1h 20m  (was: 1h 10m)

> TestOzoneManagerDoubleBufferWithOMResponse is flaky
> ---
>
> Key: HDDS-1921
> URL: https://issues.apache.org/jira/browse/HDDS-1921
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> {noformat:title=https://ci.anzix.net/job/ozone/17588/testReport/org.apache.hadoop.ozone.om.ratis/TestOzoneManagerDoubleBufferWithOMResponse/testDoubleBuffer/}
> java.lang.AssertionError: expected:<11> but was:<9>
> ...
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:362)
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:104)
> {noformat}
> {noformat:title=https://ci.anzix.net/job/ozone/17587/testReport/org.apache.hadoop.ozone.om.ratis/TestOzoneManagerDoubleBufferWithOMResponse/unit___testDoubleBuffer/}
> java.lang.AssertionError: expected:<11> but was:<3>
> ...
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:362)
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:104)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1921) TestOzoneManagerDoubleBufferWithOMResponse is flaky

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1921?focusedWorklogId=290204&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290204
 ]

ASF GitHub Bot logged work on HDDS-1921:


Author: ASF GitHub Bot
Created on: 07/Aug/19 04:11
Start Date: 07/Aug/19 04:11
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1238: 
HDDS-1921. TestOzoneManagerDoubleBufferWithOMResponse is flaky
URL: https://github.com/apache/hadoop/pull/1238#discussion_r311362101
 
 

 ##
 File path: hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerDoubleBufferWithOMResponse.java
 ##
 @@ -345,21 +345,23 @@ public void testDoubleBuffer(int iterations, int bucketCount)
   }
 
   // We are doing +1 for volume transaction.
-  GenericTestUtils.waitFor(() ->
-  doubleBuffer.getFlushedTransactionCount() ==
-  (bucketCount + 1) * iterations, 100,
-  12);
+  long expectedTransactions = (bucketCount + 1) * iterations;
+  GenericTestUtils.waitFor(() -> lastAppliedIndex == expectedTransactions,
+  100, 12);
 
-  Assert.assertTrue(omMetadataManager.countRowsInTable(
-  omMetadataManager.getVolumeTable()) == iterations);
+  Assert.assertEquals(expectedTransactions,
+  doubleBuffer.getFlushedTransactionCount()
+  );
 
-  Assert.assertTrue(omMetadataManager.countRowsInTable(
-  omMetadataManager.getBucketTable()) == (bucketCount) * iterations);
+  Assert.assertEquals(iterations,
+  omMetadataManager.countRowsInTable(omMetadataManager.getVolumeTable())
+  );
 
-  Assert.assertTrue(doubleBuffer.getFlushIterations() > 0);
+  Assert.assertEquals(bucketCount * iterations,
+  omMetadataManager.countRowsInTable(omMetadataManager.getBucketTable())
+  );
 
-  // Check lastAppliedIndex is updated correctly or not.
-  Assert.assertEquals((bucketCount + 1) * iterations, lastAppliedIndex);
 
 Review comment:
   Yes, you are right, I had missed it. Thanks for the pointer.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290204)
Time Spent: 1h 10m  (was: 1h)

> TestOzoneManagerDoubleBufferWithOMResponse is flaky
> ---
>
> Key: HDDS-1921
> URL: https://issues.apache.org/jira/browse/HDDS-1921
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> {noformat:title=https://ci.anzix.net/job/ozone/17588/testReport/org.apache.hadoop.ozone.om.ratis/TestOzoneManagerDoubleBufferWithOMResponse/testDoubleBuffer/}
> java.lang.AssertionError: expected:<11> but was:<9>
> ...
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:362)
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:104)
> {noformat}
> {noformat:title=https://ci.anzix.net/job/ozone/17587/testReport/org.apache.hadoop.ozone.om.ratis/TestOzoneManagerDoubleBufferWithOMResponse/unit___testDoubleBuffer/}
> java.lang.AssertionError: expected:<11> but was:<3>
> ...
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:362)
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:104)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1553) Add metrics in rack aware container placement policy

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1553?focusedWorklogId=290203&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290203
 ]

ASF GitHub Bot logged work on HDDS-1553:


Author: ASF GitHub Bot
Created on: 07/Aug/19 04:07
Start Date: 07/Aug/19 04:07
Worklog Time Spent: 10m 
  Work Description: chenjunjiedada commented on issue #1242: HDDS-1553: Add 
metric for rack aware placement policy
URL: https://github.com/apache/hadoop/pull/1242#issuecomment-518931264
 
 
   @ChenSammi, could you please take a look? Does it satisfy your requirement? 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290203)
Time Spent: 40m  (was: 0.5h)

> Add metrics in rack aware container placement policy
> 
>
> Key: HDDS-1553
> URL: https://issues.apache.org/jira/browse/HDDS-1553
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Junjie Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> To collect the following statistics:
> 1. total requested datanode count (A)
> 2. successfully allocated datanode count without constraint compromise (B)
> 3. successfully allocated datanode count with some constraint compromise (C)
> B includes C; failed allocations = (A - B). (A worked example and a metrics 
> sketch follow below.)
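
As a worked example of these statistics: if A = 100 datanodes are requested 
and B = 95 allocations succeed, of which C = 8 succeeded only by compromising 
a placement constraint, then failed allocations = A - B = 5 and fully 
constraint-satisfying allocations = B - C = 87. Below is a hedged sketch of 
the counters using Hadoop's metrics2 annotations; the class and metric names 
are illustrative, not the committed ones.

{code:java}
// Sketch only: three counters matching A, B and C above.
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

@Metrics(about = "SCM container placement metrics", context = "ozone")
public class SCMContainerPlacementMetrics {
  // A: every datanode requested from the placement policy
  @Metric private MutableCounterLong datanodeRequestCount;
  // B: every successful allocation (B includes C)
  @Metric private MutableCounterLong datanodeChooseSuccessCount;
  // C: successful allocations that compromised a constraint
  @Metric private MutableCounterLong datanodeChooseFallbackCount;

  public void incrDatanodeRequestCount(long count) {
    datanodeRequestCount.incr(count);
  }

  public void incrDatanodeChooseSuccessCount() {
    datanodeChooseSuccessCount.incr();
  }

  public void incrDatanodeChooseFallbackCount() {
    datanodeChooseFallbackCount.incr();
  }
}
{code}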



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1553) Add metrics in rack aware container placement policy

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1553?focusedWorklogId=290202&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290202
 ]

ASF GitHub Bot logged work on HDDS-1553:


Author: ASF GitHub Bot
Created on: 07/Aug/19 04:04
Start Date: 07/Aug/19 04:04
Worklog Time Spent: 10m 
  Work Description: chenjunjiedada commented on pull request #1242: 
HDDS-1553: Add metric for rack aware placement policy
URL: https://github.com/apache/hadoop/pull/1242
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290202)
Time Spent: 0.5h  (was: 20m)

> Add metrics in rack aware container placement policy
> 
>
> Key: HDDS-1553
> URL: https://issues.apache.org/jira/browse/HDDS-1553
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Junjie Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> To collect the following statistics:
> 1. total requested datanode count (A)
> 2. successfully allocated datanode count without constraint compromise (B)
> 3. successfully allocated datanode count with some constraint compromise (C)
> B includes C; failed allocations = (A - B)



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1553) Add metrics in rack aware container placement policy

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1553?focusedWorklogId=290201&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290201
 ]

ASF GitHub Bot logged work on HDDS-1553:


Author: ASF GitHub Bot
Created on: 07/Aug/19 04:01
Start Date: 07/Aug/19 04:01
Worklog Time Spent: 10m 
  Work Description: chenjunjiedada commented on pull request #1241: 
HDDS-1553: Add metric for rack aware placement policy
URL: https://github.com/apache/hadoop/pull/1241
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290201)
Time Spent: 20m  (was: 10m)

> Add metrics in rack aware container placement policy
> 
>
> Key: HDDS-1553
> URL: https://issues.apache.org/jira/browse/HDDS-1553
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Junjie Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> To collect the following statistics:
> 1. total requested datanode count (A)
> 2. successfully allocated datanode count without constraint compromise (B)
> 3. successfully allocated datanode count with some constraint compromise (C)
> B includes C; failed allocations = (A - B)



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1553) Add metrics in rack aware container placement policy

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1553:
-
Labels: pull-request-available  (was: )

> Add metrics in rack aware container placement policy
> 
>
> Key: HDDS-1553
> URL: https://issues.apache.org/jira/browse/HDDS-1553
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Junjie Chen
>Priority: Major
>  Labels: pull-request-available
>
> To collect the following statistics:
> 1. total requested datanode count (A)
> 2. successfully allocated datanode count without constraint compromise (B)
> 3. successfully allocated datanode count with some constraint compromise (C)
> B includes C; failed allocations = (A - B)



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1553) Add metrics in rack aware container placement policy

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1553?focusedWorklogId=290199&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290199
 ]

ASF GitHub Bot logged work on HDDS-1553:


Author: ASF GitHub Bot
Created on: 07/Aug/19 04:00
Start Date: 07/Aug/19 04:00
Worklog Time Spent: 10m 
  Work Description: chenjunjiedada commented on pull request #1241: 
HDDS-1553: Add metric for rack aware placement policy
URL: https://github.com/apache/hadoop/pull/1241
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290199)
Time Spent: 10m
Remaining Estimate: 0h

> Add metrics in rack aware container placement policy
> 
>
> Key: HDDS-1553
> URL: https://issues.apache.org/jira/browse/HDDS-1553
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Junjie Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> To collect the following statistics:
> 1. total requested datanode count (A)
> 2. successfully allocated datanode count without constraint compromise (B)
> 3. successfully allocated datanode count with some constraint compromise (C)
> B includes C; failed allocations = (A - B)



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1894) Support listPipelines by filters in scmcli

2019-08-06 Thread Junjie Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junjie Chen reassigned HDDS-1894:
-

Assignee: Li Cheng  (was: Junjie Chen)

Hi Timmy,

Could you please take a look at this? I have no time recently. 

> Support listPipelines by filters in scmcli
> --
>
> Key: HDDS-1894
> URL: https://issues.apache.org/jira/browse/HDDS-1894
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Li Cheng
>Priority: Major
>
> Today scmcli has a subcommand that allows listing all pipelines. This ticket 
> is opened to filter the results by switches, e.g., filter by Factor: THREE 
> and State: OPEN. This will be useful for troubleshooting in large clusters 
> (see the client-side sketch after the example output below).
>  
> {code}
> bin/ozone scmcli listPipelines
> Pipeline[ Id: a8d1b0c9-e1d4-49ea-8746-3f61dfb5ee3f, Nodes: 
> cce44fde-bc8d-4063-97b3-6f557af756e1\{ip: 10.17.112.65, host: 
> ia0230.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}, Type:RATIS, Factor:ONE, State:OPEN]
> Pipeline[ Id: c9c453d1-d74c-4414-b87f-1d3585d78a7c, Nodes: 
> 0b7b0b93-8323-4b82-8cc0-a9a5c10ab827\{ip: 10.17.112.29, host: 
> ia0138.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}c756a0e0-5a1b-4d03-ba5b-cafbcabac877\{ip: 10.17.112.27, host: 
> ia0134.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}bee45bd7-1ee6-4726-b3d1-81476dc1eb49\{ip: 10.17.112.28, host: 
> ia0136.halxg.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}, Type:RATIS, Factor:THREE, State:OPEN]
> {code}
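
Until such switches exist, the same filtering can be expressed client-side 
over the full listing. A minimal sketch, assuming the existing 
{{ScmClient#listPipelines}} call and the {{Pipeline}} accessors implied by 
the output above:

{code:java}
// Sketch only: keep pipelines with Factor:THREE and State:OPEN.
List<Pipeline> matches = scmClient.listPipelines().stream()
    .filter(p -> p.getFactor() == HddsProtos.ReplicationFactor.THREE)
    .filter(p -> p.getPipelineState() == Pipeline.PipelineState.OPEN)
    .collect(Collectors.toList());
matches.forEach(System.out::println);
{code}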



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1921) TestOzoneManagerDoubleBufferWithOMResponse is flaky

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1921?focusedWorklogId=290196&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290196
 ]

ASF GitHub Bot logged work on HDDS-1921:


Author: ASF GitHub Bot
Created on: 07/Aug/19 03:51
Start Date: 07/Aug/19 03:51
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #1238: HDDS-1921. 
TestOzoneManagerDoubleBufferWithOMResponse is flaky
URL: https://github.com/apache/hadoop/pull/1238#discussion_r311359109
 
 

 ##
 File path: hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerDoubleBufferWithOMResponse.java
 ##
 @@ -345,21 +345,23 @@ public void testDoubleBuffer(int iterations, int bucketCount)
   }
 
   // We are doing +1 for volume transaction.
-  GenericTestUtils.waitFor(() ->
-  doubleBuffer.getFlushedTransactionCount() ==
-  (bucketCount + 1) * iterations, 100,
-  12);
+  long expectedTransactions = (bucketCount + 1) * iterations;
+  GenericTestUtils.waitFor(() -> lastAppliedIndex == expectedTransactions,
+  100, 12);
 
-  Assert.assertTrue(omMetadataManager.countRowsInTable(
-  omMetadataManager.getVolumeTable()) == iterations);
+  Assert.assertEquals(expectedTransactions,
+  doubleBuffer.getFlushedTransactionCount()
+  );
 
-  Assert.assertTrue(omMetadataManager.countRowsInTable(
-  omMetadataManager.getBucketTable()) == (bucketCount) * iterations);
+  Assert.assertEquals(iterations,
+  omMetadataManager.countRowsInTable(omMetadataManager.getVolumeTable())
+  );
 
-  Assert.assertTrue(doubleBuffer.getFlushIterations() > 0);
+  Assert.assertEquals(bucketCount * iterations,
+  omMetadataManager.countRowsInTable(omMetadataManager.getBucketTable())
+  );
 
-  // Check lastAppliedIndex is updated correctly or not.
-  Assert.assertEquals((bucketCount + 1) * iterations, lastAppliedIndex);
 
 Review comment:
   The waitFor() moved above should already guarantee this condition. 
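
For context on why the original assertions were flaky: the double buffer 
flushes transactions asynchronously, so asserting counts immediately races 
with the flush thread (hence failures like expected:<11> but was:<9>). 
Polling until the applied index catches up removes the race. A condensed 
sketch of the pattern from the quoted diff, where lastAppliedIndex is a field 
updated by the flush thread; the timeout value is illustrative:

{code:java}
// Wait until the async flush catches up, then assert exact counts.
long expectedTransactions = (bucketCount + 1) * iterations;
GenericTestUtils.waitFor(
    () -> lastAppliedIndex == expectedTransactions,  // condition to poll
    100,       // check interval (ms)
    120000);   // overall timeout (ms), illustrative value
Assert.assertEquals(expectedTransactions,
    doubleBuffer.getFlushedTransactionCount());
{code}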
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290196)
Time Spent: 1h  (was: 50m)

> TestOzoneManagerDoubleBufferWithOMResponse is flaky
> ---
>
> Key: HDDS-1921
> URL: https://issues.apache.org/jira/browse/HDDS-1921
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {noformat:title=https://ci.anzix.net/job/ozone/17588/testReport/org.apache.hadoop.ozone.om.ratis/TestOzoneManagerDoubleBufferWithOMResponse/testDoubleBuffer/}
> java.lang.AssertionError: expected:<11> but was:<9>
> ...
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:362)
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:104)
> {noformat}
> {noformat:title=https://ci.anzix.net/job/ozone/17587/testReport/org.apache.hadoop.ozone.om.ratis/TestOzoneManagerDoubleBufferWithOMResponse/unit___testDoubleBuffer/}
> java.lang.AssertionError: expected:<11> but was:<3>
> ...
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:362)
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:104)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14662) Document the usage of the new Balancer "asService" parameter

2019-08-06 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901660#comment-16901660
 ] 

Ayush Saxena commented on HDFS-14662:
-

Apart from the whitespace issue, LGTM.

> Document the usage of the new Balancer "asService" parameter
> 
>
> Key: HDFS-14662
> URL: https://issues.apache.org/jira/browse/HDFS-14662
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14662.001.patch, HDFS-14662.002.patch
>
>
> See HDFS-13783; this jira adds documentation for how to run the balancer as 
> a long-running service.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14608) DataNode$DataTransfer should be named

2019-08-06 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901653#comment-16901653
 ] 

Ayush Saxena commented on HDFS-14608:
-

Well, all seem to be unrelated, but I have retriggered the build; no harm in 
being doubly sure.

> DataNode$DataTransfer should be named
> -
>
> Key: HDFS-14608
> URL: https://issues.apache.org/jira/browse/HDFS-14608
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-14608.000.patch, HDFS-14608.001.patch
>
>
> Currently, the {{DataTransfer}} thread has no name and it just outputs the 
> default {{toString()}}.
> This shows in the logs in jstack as something like:
> {code}
> 2019-06-25 11:01:01,211 INFO 
> [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@609ed67a] 
> org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at 
> CO4AEAPC1AF:10010: Transmitted 
> BP-1191059133-10.1.2.3-145702348:blk_1113379522_69745835 
> (numBytes=485214) to 10.1.2.3/10.1.2.3:10010
> {code}
> As this uses the {{Daemon}} class, the name is set based on:
> {code}
>   public Daemon(Runnable runnable) {
> super(runnable);
> this.runnable = runnable;
> this.setName(((Object)runnable).toString());
>   }
> {code}
> We should implement toString to at least have the name of the block being 
> transferred, or something similar to what DataXceiver does (e.g., HDFS-3375).
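
A minimal sketch of the suggested direction, assuming the {{DataTransfer}} 
fields {{b}} (the block) and {{targets}} from the surrounding code; this is 
illustrative only, not the attached patches:

{code:java}
// Daemon uses runnable.toString() as the thread name, so overriding
// toString() in DataTransfer gives the thread a descriptive name.
@Override
public String toString() {
  return "DataTransfer " + b + " to " + Arrays.asList(targets);
}
{code}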



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14704) RBF: NnId should not be null in NamenodeHeartbeatService

2019-08-06 Thread xuzq (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901654#comment-16901654
 ] 

xuzq commented on HDFS-14704:
-

[~elgoiri] [~crh]  Thanks for the comment.
{quote}The reason for allowing nn id to be null is because we had setups with 
multiple subclusters (nameservices) but not HA.
In that case, there is no nn identifier.
{quote}
Maybe NsId and NnId can be null in NamenodeHeartbeatService, but 
ServiceAddress should not be null.

If ServiceAddress is null or invalid, it will always throw an exception in 
updateState.

 

> RBF: NnId should not be null in NamenodeHeartbeatService
> 
>
> Key: HDFS-14704
> URL: https://issues.apache.org/jira/browse/HDFS-14704
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: xuzq
>Assignee: xuzq
>Priority: Major
> Attachments: HDFS-14704-trunk-001.patch
>
>
> NnId should not be null in NamenodeHeartbeatService.
> If NnId is null, it will also print the error message like:
> {code:java}
> 2019-08-06 10:38:07,455 ERROR router.NamenodeHeartbeatService 
> (NamenodeHeartbeatService.java:updateState(229)) - Unhandled exception 
> updating NN registration for ns1:null
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos$NamenodeMembershipRecordProto$Builder.setServiceAddress(HdfsServerFederationProtos.java:3831)
> at 
> org.apache.hadoop.hdfs.server.federation.store.records.impl.pb.MembershipStatePBImpl.setServiceAddress(MembershipStatePBImpl.java:119)
> at 
> org.apache.hadoop.hdfs.server.federation.store.records.MembershipState.newInstance(MembershipState.java:108)
> at 
> org.apache.hadoop.hdfs.server.federation.resolver.MembershipNamenodeResolver.registerNamenode(MembershipNamenodeResolver.java:267)
> at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.updateState(NamenodeHeartbeatService.java:223)
> at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.periodicInvoke(NamenodeHeartbeatService.java:159)
> at 
> org.apache.hadoop.hdfs.server.federation.router.PeriodicService$1.run(PeriodicService.java:178)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748){code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14616) Add the warn log when the volume available space isn't enough

2019-08-06 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901650#comment-16901650
 ] 

Wei-Chiu Chuang commented on HDFS-14616:


+1 from me

> Add the warn log when the volume available space isn't enough
> -
>
> Key: HDFS-14616
> URL: https://issues.apache.org/jira/browse/HDFS-14616
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.7.2
>Reporter: liying
>Assignee: liying
>Priority: Minor
> Attachments: HDFS-14616.001.patch, HDFS-14616.002.patch, 
> HDFS-14616.003.patch
>
>
> In the hadoop2 version, there is no warning log when the disk is not 
> available while using the disk. Therefore, the datanode log cannot be used 
> to check whether the disk was unavailable at a certain time or for other 
> problems.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14616) Add the warn log when the volume available space isn't enough

2019-08-06 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901649#comment-16901649
 ] 

Ayush Saxena commented on HDFS-14616:
-

Thanx [~alexking_lee] for the patch. v003 LGTM
[~jojochuang] any further comments? 

> Add the warn log when the volume available space isn't enough
> -
>
> Key: HDFS-14616
> URL: https://issues.apache.org/jira/browse/HDFS-14616
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.7.2
>Reporter: liying
>Assignee: liying
>Priority: Minor
> Attachments: HDFS-14616.001.patch, HDFS-14616.002.patch, 
> HDFS-14616.003.patch
>
>
> In the hadoop2 version, there is no warning log when the disk is not 
> available while using the disk. Therefore, the datanode log cannot be used 
> to check whether the disk was unavailable at a certain time or for other 
> problems.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14662) Document the usage of the new Balancer "asService" parameter

2019-08-06 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901648#comment-16901648
 ] 

Wei-Chiu Chuang commented on HDFS-14662:


LGTM. Anyone else like to review before I commit?

> Document the usage of the new Balancer "asService" parameter
> 
>
> Key: HDFS-14662
> URL: https://issues.apache.org/jira/browse/HDFS-14662
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14662.001.patch, HDFS-14662.002.patch
>
>
> See HDFS-13783; this jira adds documentation for how to run the balancer as 
> a long-running service.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14370) Edit log tailing fast-path should allow for backoff

2019-08-06 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901646#comment-16901646
 ] 

Ayush Saxena commented on HDFS-14370:
-

Thanx [~xkrogen] for the patch.

{code:java}
+  maxSleepTimeMsTemp, DFSConfigKeys.DFS_HA_TAILEDITS_PERIOD_KEY);
+  maxSleepTimeMs = -1;
{code}

Here also we should set 0, to be consistent with other parts of the code.


> Edit log tailing fast-path should allow for backoff
> ---
>
> Key: HDFS-14370
> URL: https://issues.apache.org/jira/browse/HDFS-14370
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode, qjm
>Affects Versions: 3.3.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-14370.000.patch, HDFS-14370.001.patch, 
> HDFS-14370.002.patch, HDFS-14370.003.patch, HDFS-14370.004.patch
>
>
> As part of HDFS-13150, in-progress edit log tailing was changed to use an 
> RPC-based mechanism, thus allowing the edit log tailing frequency to be 
> turned way down, and allowing standby/observer NameNodes to be only a few 
> milliseconds stale as compared to the Active NameNode.
> When there is a high volume of transactions on the system, each RPC fetches 
> transactions and takes some time to process them, self-rate-limiting how 
> frequently an RPC is submitted. In a lightly loaded cluster, however, most of 
> these RPCs return an empty set of transactions, consuming a high 
> (de)serialization overhead for very little benefit. This was reported by 
> [~jojochuang] in HDFS-14276 and I have also seen it on a test cluster where 
> the SbNN was submitting 8000 RPCs per second that returned empty.
> I propose we add some sort of backoff to the tailing, so that if an empty 
> response is received, it will wait a longer period of time before submitting 
> a new RPC.
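
A condensed sketch of such a backoff loop; the names and the doubling policy 
are illustrative, not the attached patches:

{code:java}
// Double the sleep after an empty tail, reset once edits flow again.
long sleepMs = initialSleepMs;
while (shouldRun) {
  long editsLoaded = doTailEdits();  // hypothetical: one tailing RPC round
  if (editsLoaded == 0) {
    sleepMs = Math.min(sleepMs * 2, maxSleepMs);  // back off when idle
  } else {
    sleepMs = initialSleepMs;  // busy again: return to the fast path
  }
  Thread.sleep(sleepMs);
}
{code}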



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14476) lock too long when fix inconsistent blocks between disk and in-memory

2019-08-06 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901645#comment-16901645
 ] 

Wei-Chiu Chuang commented on HDFS-14476:


[~seanlook] please follow the How to Contribute wiki 
[https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute] and name 
the patch file according to the branch where it applies. You will also need to 
submit the patch to kick off the precommit check. Thank you.

> lock too long when fix inconsistent blocks between disk and in-memory
> -
>
> Key: HDFS-14476
> URL: https://issues.apache.org/jira/browse/HDFS-14476
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Sean Chow
>Assignee: Sean Chow
>Priority: Major
> Attachments: HDFS-14476.00.patch, datanode-with-patch-14476.png
>
>
> When DirectoryScanner has the results of differences between on-disk and 
> in-memory blocks, it will try to run {{checkAndUpdate}} to fix them. However, 
> {{FsDatasetImpl.checkAndUpdate}} is a synchronized call.
> As I have about 6 million blocks on every datanode, each 6-hourly scan 
> produces about 25000 abnormal blocks to fix. That leads to a long lock being 
> held on the FsDatasetImpl object.
> Let's assume every block needs 10ms to fix (because of SAS disk latency); 
> that will take 250 seconds to finish. That means all reads and writes will be 
> blocked for over 4 minutes on that datanode.
>  
> {code:java}
> 2019-05-06 08:06:51,704 INFO 
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool 
> BP-1644920766-10.223.143.220-1450099987967 Total blocks: 6850197, missing 
> metadata files:23574, missing block files:23574, missing blocks in 
> memory:47625, mismatched blocks:0
> ...
> 2019-05-06 08:16:41,625 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Took 588402ms to process 1 commands from NN
> {code}
> It takes a long time to process commands from the NN because threads are 
> blocked, and the namenode will see a long lastContact time for this datanode.
> This may affect all HDFS versions.
> *how to fix:*
> Just like the invalidate command from the namenode is processed with a batch 
> size of 1000, fixing these abnormal blocks should be batched too, sleeping 2 
> seconds between batches to allow normal block reads and writes (a sketch 
> follows below).
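
A sketch of the proposed batching; the {{diffs}} list, {{dataset}}, {{bpid}} 
and the {{checkAndUpdate}} call shape are assumptions for illustration, while 
the batch size of 1000 and the 2-second sleep are the values suggested above:

{code:java}
// Reconcile scanner differences in batches so the FsDatasetImpl lock is
// released periodically for normal block I/O.
final int batchSize = 1000;
for (int start = 0; start < diffs.size(); start += batchSize) {
  List<ScanInfo> batch =
      diffs.subList(start, Math.min(start + batchSize, diffs.size()));
  synchronized (dataset) {
    for (ScanInfo info : batch) {
      dataset.checkAndUpdate(bpid, info);  // assumed call shape
    }
  }
  Thread.sleep(2000);  // 2s pause so blocked readers/writers can proceed
}
{code}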



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1740) Handle Failure to Update Ozone Container YAML

2019-08-06 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reassigned HDDS-1740:
---

Assignee: Supratim Deka

> Handle Failure to Update Ozone Container YAML
> -
>
> Key: HDDS-1740
> URL: https://issues.apache.org/jira/browse/HDDS-1740
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>
> Ensure consistent state in-memory and in the persistent YAML file for the 
> Container.
> If an update to the YAML fails, then the in-memory state also does not change.
> This ensures that in every container report, the SCM continues to see that 
> the specific container is still in the old state, which triggers a retry of 
> the state change operation from the SCM (see the sketch below).
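
A sketch of the invariant described above; every identifier in it is 
illustrative, not committed code:

{code:java}
// If the YAML update fails, roll back the in-memory change so both views
// stay in the old state; SCM keeps seeing the old state in container
// reports and retries the operation.
void updateContainerState(ContainerData data, State newState)
    throws IOException {
  State oldState = data.getState();
  data.setState(newState);
  try {
    persistContainerYaml(data);   // hypothetical persist helper
  } catch (IOException e) {
    data.setState(oldState);      // restore the in-memory state
    throw e;
  }
}
{code}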



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-1739) Handle Apply Transaction Failure in State Machine

2019-08-06 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDDS-1739.
-
Resolution: Duplicate

> Handle Apply Transaction Failure in State Machine
> -
>
> Key: HDDS-1739
> URL: https://issues.apache.org/jira/browse/HDDS-1739
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>
> Scope of this jira is to handle failure of applyTransaction() for the 
> Container State Machine.
> 1. Introduce new Replica state - STALE to indicate container is missing 
> transactions. Mark failed container as STALE.
> 2. Trigger immediate ICR to SCM
> 3. Fail new transactions on STALE container
> 4. Notify volume error to the DN (to trigger a background volume check); a 
> condensed sketch of these steps follows below.
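
Condensing the four steps into one handler; since this issue was resolved as 
a duplicate, no committed code corresponds to this sketch, and all names are 
illustrative:

{code:java}
// Illustrative applyTransaction failure path for the Container State
// Machine; every identifier below is assumed for the sketch.
private void handleApplyTransactionFailure(Container container) {
  container.markStale();             // 1. new replica state: STALE
  sendIncrementalReport(container);  // 2. immediate ICR to SCM
  container.rejectNewTransactions(); // 3. fail new txns on STALE container
  volumeChecker.checkVolume(container.getVolume()); // 4. volume check
}
{code}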



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12914) Block report leases cause missing blocks until next report

2019-08-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901639#comment-16901639
 ] 

Hadoop QA commented on HDFS-12914:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 
52s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.8 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
23s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m  0s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
24s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}112m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:1 |
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.TestSafeMode |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.TestRollingUpgrade |
| Timed out junit tests | org.apache.hadoop.hdfs.TestPread |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:b93746a0168 |
| JIRA Issue | HDFS-12914 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976878/HDFS-12914.branch-2.8.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4a6f8f62fab2 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| 

[jira] [Commented] (HDFS-14313) Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory instead of df/du

2019-08-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901635#comment-16901635
 ] 

Hudson commented on HDFS-14313:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17055 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17055/])
HDFS-14313. Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo 
(yqlin: rev a5bb1e8ee871dfff77d0f6921b13c8ffb50e)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/FSCachingGetSpaceUsed.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ReplicaCachingGetSpaceUsed.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/GetSpaceUsed.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/TestReplicaCachingGetSpaceUsed.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalDatasetImpl.java
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml


> Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory  
> instead of df/du
> 
>
> Key: HDFS-14313
> URL: https://issues.apache.org/jira/browse/HDFS-14313
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, performance
>Affects Versions: 2.6.0, 2.7.0, 2.8.0, 2.9.0, 3.0.0, 3.1.0
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14313.000.patch, HDFS-14313.001.patch, 
> HDFS-14313.002.patch, HDFS-14313.003.patch, HDFS-14313.004.patch, 
> HDFS-14313.005.patch, HDFS-14313.006.patch, HDFS-14313.007.patch, 
> HDFS-14313.008.patch, HDFS-14313.009.patch, HDFS-14313.010.patch, 
> HDFS-14313.011.patch, HDFS-14313.012.patch, HDFS-14313.013.patch, 
> HDFS-14313.014.patch
>
>
> There are two existing ways of getting used space, DU and DF, and both are 
> insufficient.
>  # Running DU across lots of disks is very expensive, and running all of 
> the processes at the same time creates a noticeable IO spike.
>  # Running DF is inaccurate when the disk is shared by multiple datanodes 
> or other servers.
> Getting the HDFS used space from the in-memory 
> FsDatasetImpl#volumeMap#ReplicaInfos has very small overhead and is accurate.
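
To make the approach concrete, here is a minimal sketch of the idea (the class 
and field names below are illustrative stand-ins, not the actual HDFS-14313 
code, which wires this through FSCachingGetSpaceUsed and 
ReplicaCachingGetSpaceUsed): used space becomes a sum over an in-memory 
replica map rather than the output of a forked du/df process.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: compute used space by summing an in-memory
// replica map (a stand-in for FsDatasetImpl#volumeMap's ReplicaInfo
// entries) instead of forking du/df per volume.
public class ReplicaMapSpaceUsed {

  // blockId -> bytes on disk for that replica.
  private final Map<Long, Long> replicaBytes = new ConcurrentHashMap<>();

  public void addReplica(long blockId, long bytesOnDisk) {
    replicaBytes.put(blockId, bytesOnDisk);
  }

  // A pure in-memory scan: no IO spike, and exact for the bytes the
  // datanode itself tracks.
  public long getUsed() {
    long used = 0;
    for (long bytes : replicaBytes.values()) {
      used += bytes;
    }
    return used;
  }

  public static void main(String[] args) {
    ReplicaMapSpaceUsed spaceUsed = new ReplicaMapSpaceUsed();
    spaceUsed.addReplica(1001L, 128L * 1024 * 1024);
    spaceUsed.addReplica(1002L, 64L * 1024 * 1024);
    System.out.println("used = " + spaceUsed.getUsed()); // 201326592
  }
}
{code}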



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14313) Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory instead of df/du

2019-08-06 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901632#comment-16901632
 ] 

Yiqun Lin edited comment on HDFS-14313 at 8/7/19 2:29 AM:
--

I have committed this to trunk, but found conflicts when backporting to the 
branch-3.x and branch-2.x branches. [~leosun08], would you mind attaching the 
patch for those branches? I think this improvement is very helpful and can be 
backported to these versions.


was (Author: linyiqun):
I have committed this to trunk, but found conflicts when backporting to the 
branch-3.x and branch-2.x branches. [~leosun08], would you mind attach the 
patch for those branches? I think this improvement is very helpful and can be 
backported to these versions.

> Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory  
> instead of df/du
> 
>
> Key: HDFS-14313
> URL: https://issues.apache.org/jira/browse/HDFS-14313
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, performance
>Affects Versions: 2.6.0, 2.7.0, 2.8.0, 2.9.0, 3.0.0, 3.1.0
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14313.000.patch, HDFS-14313.001.patch, 
> HDFS-14313.002.patch, HDFS-14313.003.patch, HDFS-14313.004.patch, 
> HDFS-14313.005.patch, HDFS-14313.006.patch, HDFS-14313.007.patch, 
> HDFS-14313.008.patch, HDFS-14313.009.patch, HDFS-14313.010.patch, 
> HDFS-14313.011.patch, HDFS-14313.012.patch, HDFS-14313.013.patch, 
> HDFS-14313.014.patch
>
>
> There are two existing ways of getting used space, DU and DF, and both are 
> insufficient.
>  # Running DU across lots of disks is very expensive, and running all of 
> the processes at the same time creates a noticeable IO spike.
>  # Running DF is inaccurate when the disk is shared by multiple datanodes 
> or other servers.
> Getting the HDFS used space from the in-memory 
> FsDatasetImpl#volumeMap#ReplicaInfos has very small overhead and is accurate.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1610) applyTransaction failure should not be lost on restart

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1610?focusedWorklogId=290179=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290179
 ]

ASF GitHub Bot logged work on HDDS-1610:


Author: ASF GitHub Bot
Created on: 07/Aug/19 02:29
Start Date: 07/Aug/19 02:29
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on issue #1226: HDDS-1610. 
applyTransaction failure should not be lost on restart.
URL: https://github.com/apache/hadoop/pull/1226#issuecomment-518913410
 
 
   Thanks @mukul1987. In Ratis, as far as my understanding goes, before 
taking a snapshot we wait for all the pending applyTransaction futures to 
complete. Since the patch now propagates the applyTransaction exception to 
Ratis, snapshot creation should ideally fail in Ratis.
   
   I will add a test case to verify this.
   I will address the remaining review comments as part of the next patch.
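
The interaction described above can be sketched with plain CompletableFutures 
(an illustration of the expected behavior only, not the Ratis API): if any 
pending applyTransaction future completed exceptionally, waiting on all of 
them before snapshotting makes the snapshot fail.

{code:java}
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;

// Illustration only, not the Ratis API: a snapshot that waits on all
// pending applyTransaction futures fails if any of them failed.
public class SnapshotGate {
  public static void main(String[] args) {
    CompletableFuture<Void> ok = CompletableFuture.completedFuture(null);
    CompletableFuture<Void> failed = new CompletableFuture<>();
    failed.completeExceptionally(new RuntimeException("chunk write failed"));

    List<CompletableFuture<Void>> pending = Arrays.asList(ok, failed);
    try {
      CompletableFuture.allOf(pending.toArray(new CompletableFuture[0]))
          .join(); // wait for all pending applies before snapshotting
      System.out.println("snapshot taken");
    } catch (CompletionException e) {
      System.out.println("snapshot aborted: " + e.getCause().getMessage());
    }
  }
}
{code}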
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290179)
Time Spent: 3h 40m  (was: 3.5h)

> applyTransaction failure should not be lost on restart
> --
>
> Key: HDDS-1610
> URL: https://issues.apache.org/jira/browse/HDDS-1610
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> If applyTransaction fails in the ContainerStateMachine, then the 
> container should not accept new writes after a restart.
> This can occur if:
> # a chunk write applyTransaction fails
> # the container state update to UNHEALTHY also fails
> # a Ratis snapshot is taken
> # the node restarts
> # the container accepts new transactions
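
To illustrate why step 3 is the dangerous one, here is a small sketch 
(hypothetical names, not the ContainerStateMachine code): if the apply 
failure is latched and snapshots are refused while it is set, the failed 
transaction stays in the Ratis log and is replayed after a restart, so the 
failure cannot be silently lost.

{code:java}
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch: latch the first applyTransaction failure and
// refuse snapshots while it is set, so a restart replays (and re-fails)
// the bad transaction instead of skipping past it.
public class FailureLatchingStateMachine {

  private final AtomicReference<Throwable> firstApplyFailure =
      new AtomicReference<>();

  public void applyTransaction(Runnable op) {
    try {
      op.run();
    } catch (RuntimeException e) {
      firstApplyFailure.compareAndSet(null, e); // remember, don't swallow
      throw e;
    }
  }

  public void takeSnapshot() {
    Throwable t = firstApplyFailure.get();
    if (t != null) {
      // No snapshot past a failed apply: the log entry survives restart.
      throw new IllegalStateException("applyTransaction failed earlier", t);
    }
    // ... persist the snapshot ...
  }
}
{code}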



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14313) Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory instead of df/du

2019-08-06 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-14313:
-
Fix Version/s: 3.3.0

> Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory  
> instead of df/du
> 
>
> Key: HDFS-14313
> URL: https://issues.apache.org/jira/browse/HDFS-14313
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, performance
>Affects Versions: 2.6.0, 2.7.0, 2.8.0, 2.9.0, 3.0.0, 3.1.0
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14313.000.patch, HDFS-14313.001.patch, 
> HDFS-14313.002.patch, HDFS-14313.003.patch, HDFS-14313.004.patch, 
> HDFS-14313.005.patch, HDFS-14313.006.patch, HDFS-14313.007.patch, 
> HDFS-14313.008.patch, HDFS-14313.009.patch, HDFS-14313.010.patch, 
> HDFS-14313.011.patch, HDFS-14313.012.patch, HDFS-14313.013.patch, 
> HDFS-14313.014.patch
>
>
> There are two existing ways of getting used space, DU and DF, and both are 
> insufficient.
>  # Running DU across lots of disks is very expensive, and running all of 
> the processes at the same time creates a noticeable IO spike.
>  # Running DF is inaccurate when the disk is shared by multiple datanodes 
> or other servers.
> Getting the HDFS used space from the in-memory 
> FsDatasetImpl#volumeMap#ReplicaInfos has very small overhead and is accurate.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1610) applyTransaction failure should not be lost on restart

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1610?focusedWorklogId=290177=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290177
 ]

ASF GitHub Bot logged work on HDDS-1610:


Author: ASF GitHub Bot
Created on: 07/Aug/19 02:26
Start Date: 07/Aug/19 02:26
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #1226: HDDS-1610. 
applyTransaction failure should not be lost on restart.
URL: https://github.com/apache/hadoop/pull/1226#discussion_r311346307
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
 ##
 @@ -609,6 +609,16 @@ void handleNoLeader(RaftGroupId groupId, RoleInfoProto roleInfoProto) {
     handlePipelineFailure(groupId, roleInfoProto);
   }
 
+  void handleApplyTransactionFailure(RaftGroupId groupId,
+      RaftProtos.RaftPeerRole role) {
+    UUID dnId = RatisHelper.toDatanodeId(getServer().getId());
+    String msg =
+        "Ratis Transaction failure in datanode " + dnId + " with role " + role
+            + ". Triggering pipeline close action.";
+    triggerPipelineClose(groupId, msg,
+        ClosePipelineInfo.Reason.PIPELINE_FAILED, false);
+    stop();
 
 Review comment:
   As far as I know from previous discussions, the decision was not to take 
any other transactions on this pipeline at all and to kill the RaftServerImpl 
instance. Any deviation from that conclusion?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290177)
Time Spent: 3.5h  (was: 3h 20m)

> applyTransaction failure should not be lost on restart
> --
>
> Key: HDDS-1610
> URL: https://issues.apache.org/jira/browse/HDDS-1610
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> If applyTransaction fails in the ContainerStateMachine, then the 
> container should not accept new writes after a restart.
> This can occur if:
> # a chunk write applyTransaction fails
> # the container state update to UNHEALTHY also fails
> # a Ratis snapshot is taken
> # the node restarts
> # the container accepts new transactions



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14313) Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory instead of df/du

2019-08-06 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901632#comment-16901632
 ] 

Yiqun Lin commented on HDFS-14313:
--

I have committed this to trunk, but found conflicts when backporting to the 
branch-3.x and branch-2.x branches. [~leosun08], would you mind attaching the 
patch for those branches? I think this improvement is very helpful and can be 
backported to these versions.

> Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory  
> instead of df/du
> 
>
> Key: HDFS-14313
> URL: https://issues.apache.org/jira/browse/HDFS-14313
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, performance
>Affects Versions: 2.6.0, 2.7.0, 2.8.0, 2.9.0, 3.0.0, 3.1.0
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14313.000.patch, HDFS-14313.001.patch, 
> HDFS-14313.002.patch, HDFS-14313.003.patch, HDFS-14313.004.patch, 
> HDFS-14313.005.patch, HDFS-14313.006.patch, HDFS-14313.007.patch, 
> HDFS-14313.008.patch, HDFS-14313.009.patch, HDFS-14313.010.patch, 
> HDFS-14313.011.patch, HDFS-14313.012.patch, HDFS-14313.013.patch, 
> HDFS-14313.014.patch
>
>
> There are two existing ways of getting used space, DU and DF, and both are 
> insufficient.
>  # Running DU across lots of disks is very expensive, and running all of 
> the processes at the same time creates a noticeable IO spike.
>  # Running DF is inaccurate when the disk is shared by multiple datanodes 
> or other servers.
> Getting the HDFS used space from the in-memory 
> FsDatasetImpl#volumeMap#ReplicaInfos has very small overhead and is accurate.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14708) TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in trunk

2019-08-06 Thread Lisheng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun reassigned HDFS-14708:
--

Assignee: Lisheng Sun

> TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in 
> trunk
> 
>
> Key: HDFS-14708
> URL: https://issues.apache.org/jira/browse/HDFS-14708
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
>
> {code:java}
> [ERROR] 
> testBlockReportSucceedsWithLargerLengthLimit(org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport)
>   Time elapsed: 47.956 s  <<< ERROR!
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
> java.lang.IllegalStateException: 
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
> large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
> the size limit.
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.runBlockOp(BlockManager.java:5011)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:1581)
>   at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReport(DatanodeProtocolServerSideTranslatorPB.java:181)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:31664)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:529)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1001)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:929)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2921)
> Caused by: java.lang.IllegalStateException: 
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
> large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase 
> the size limit.
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:424)
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:396)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.reportDiffSorted(BlockManager.java:2952)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2787)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2655)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.lambda$blockReport$0(NameNodeRpcServer.java:1582)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:5089)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:5068)
> Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol 
> message was too large.  May be malicious.  Use 
> CodedInputStream.setSizeLimit() to increase the size limit.
>   at 
> com.google.protobuf.InvalidProtocolBufferException.sizeLimitExceeded(InvalidProtocolBufferException.java:110)
>   at 
> com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:755)
>   at 
> com.google.protobuf.CodedInputStream.readRawByte(CodedInputStream.java:769)
>   at 
> com.google.protobuf.CodedInputStream.readRawVarint64(CodedInputStream.java:462)
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:420)
>   ... 8 more
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1553)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1499)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1396)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
>   at com.sun.proxy.$Proxy25.blockReport(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.blockReport(DatanodeProtocolClientSideTranslatorPB.java:218)
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport.testBlockReportSucceedsWithLargerLengthLimit(TestLargeBlockReport.java:97)
>   at 

[jira] [Work logged] (HDDS-1610) applyTransaction failure should not be lost on restart

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1610?focusedWorklogId=290176=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290176
 ]

ASF GitHub Bot logged work on HDDS-1610:


Author: ASF GitHub Bot
Created on: 07/Aug/19 02:24
Start Date: 07/Aug/19 02:24
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on issue #1226: HDDS-1610. 
applyTransaction failure should not be lost on restart.
URL: https://github.com/apache/hadoop/pull/1226#issuecomment-518913410
 
 
   Thanks @mukul1987. In Ratis, as far as my understanding goes, before 
taking a snapshot we wait for all the pending applyTransaction futures to 
complete. Since the patch now propagates the applyTransaction exception to 
Ratis, snapshot creation should ideally fail in Ratis.
   
   I will address the remaining review comments as part of the next patch.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290176)
Time Spent: 3h 20m  (was: 3h 10m)

> applyTransaction failure should not be lost on restart
> --
>
> Key: HDDS-1610
> URL: https://issues.apache.org/jira/browse/HDDS-1610
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> If applyTransaction fails in the ContainerStateMachine, then the 
> container should not accept new writes after a restart.
> This can occur if:
> # a chunk write applyTransaction fails
> # the container state update to UNHEALTHY also fails
> # a Ratis snapshot is taken
> # the node restarts
> # the container accepts new transactions



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1610) applyTransaction failure should not be lost on restart

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1610?focusedWorklogId=290175=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290175
 ]

ASF GitHub Bot logged work on HDDS-1610:


Author: ASF GitHub Bot
Created on: 07/Aug/19 02:24
Start Date: 07/Aug/19 02:24
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on issue #1226: HDDS-1610. 
applyTransaction failure should not be lost on restart.
URL: https://github.com/apache/hadoop/pull/1226#issuecomment-518913410
 
 
   Thanks @mukul1987. In Ratis, as far as my understanding goes, before 
taking a snapshot we wait for all the pending applyTransaction futures to 
complete. Since the patch now propagates the applyTransaction exception to 
Ratis, snapshot creation should ideally fail in Ratis.
   
   I will address the remaining review comments as part of the next patch.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290175)
Time Spent: 3h 10m  (was: 3h)

> applyTransaction failure should not be lost on restart
> --
>
> Key: HDDS-1610
> URL: https://issues.apache.org/jira/browse/HDDS-1610
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> If applyTransaction fails in the ContainerStateMachine, then the 
> container should not accept new writes after a restart.
> This can occur if:
> # a chunk write applyTransaction fails
> # the container state update to UNHEALTHY also fails
> # a Ratis snapshot is taken
> # the node restarts
> # the container accepts new transactions



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14707) Add JAVA_LIBRARY_PATH to HTTPFS startup options in branch-2

2019-08-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901631#comment-16901631
 ] 

Hadoop QA commented on HDFS-14707:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
59s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
12s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:da67579 |
| JIRA Issue | HDFS-14707 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976882/HDFS-14707-branch-2.001.patch
 |
| Optional Tests |  dupname  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux f2c657c2efba 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / 4a9fc45 |
| maven | version: Apache Maven 3.3.9 |
| shellcheck | v0.4.6 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27426/testReport/ |
| Max. process+thread count | 94 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: 
hadoop-hdfs-project/hadoop-hdfs-httpfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27426/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



>  Add JAVA_LIBRARY_PATH to HTTPFS startup options in branch-2
> 
>
> Key: HDFS-14707
> URL: https://issues.apache.org/jira/browse/HDFS-14707
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
> Attachments: HDFS-14707-branch-2.001.patch
>
>
> Currently HTTPFS does not load the Hadoop native library, since 
> java.library.path is not set on Tomcat startup.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14313) Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory instead of df/du

2019-08-06 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901628#comment-16901628
 ] 

Yiqun Lin commented on HDFS-14313:
--

LGTM, +1. Committing this.

> Get hdfs used space from FsDatasetImpl#volumeMap#ReplicaInfo in memory  
> instead of df/du
> 
>
> Key: HDFS-14313
> URL: https://issues.apache.org/jira/browse/HDFS-14313
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, performance
>Affects Versions: 2.6.0, 2.7.0, 2.8.0, 2.9.0, 3.0.0, 3.1.0
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14313.000.patch, HDFS-14313.001.patch, 
> HDFS-14313.002.patch, HDFS-14313.003.patch, HDFS-14313.004.patch, 
> HDFS-14313.005.patch, HDFS-14313.006.patch, HDFS-14313.007.patch, 
> HDFS-14313.008.patch, HDFS-14313.009.patch, HDFS-14313.010.patch, 
> HDFS-14313.011.patch, HDFS-14313.012.patch, HDFS-14313.013.patch, 
> HDFS-14313.014.patch
>
>
> There are two existing ways of getting used space, DU and DF, and both are 
> insufficient.
>  # Running DU across lots of disks is very expensive, and running all of 
> the processes at the same time creates a noticeable IO spike.
>  # Running DF is inaccurate when the disk is shared by multiple datanodes 
> or other servers.
> Getting the HDFS used space from the in-memory 
> FsDatasetImpl#volumeMap#ReplicaInfos has very small overhead and is accurate.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14707) Add JAVA_LIBRARY_PATH to HTTPFS startup options in branch-2

2019-08-06 Thread Masatake Iwasaki (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-14707:

Status: Patch Available  (was: Open)

>  Add JAVA_LIBRARY_PATH to HTTPFS startup options in branch-2
> 
>
> Key: HDFS-14707
> URL: https://issues.apache.org/jira/browse/HDFS-14707
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
> Attachments: HDFS-14707-branch-2.001.patch
>
>
> Currently HTTPFS does not load the Hadoop native library, since 
> java.library.path is not set on Tomcat startup.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14708) TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in trunk

2019-08-06 Thread Lisheng Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14708:
---
Description: 
{code:java}
[ERROR] 
testBlockReportSucceedsWithLargerLengthLimit(org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport)
  Time elapsed: 47.956 s  <<< ERROR!
org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
java.lang.IllegalStateException: 
com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase the 
size limit.
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.runBlockOp(BlockManager.java:5011)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:1581)
at 
org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReport(DatanodeProtocolServerSideTranslatorPB.java:181)
at 
org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:31664)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:529)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1001)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:929)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2921)
Caused by: java.lang.IllegalStateException: 
com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase the 
size limit.
at 
org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:424)
at 
org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:396)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.reportDiffSorted(BlockManager.java:2952)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2787)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2655)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.lambda$blockReport$0(NameNodeRpcServer.java:1582)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:5089)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:5068)
Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol message 
was too large.  May be malicious.  Use CodedInputStream.setSizeLimit() to 
increase the size limit.
at 
com.google.protobuf.InvalidProtocolBufferException.sizeLimitExceeded(InvalidProtocolBufferException.java:110)
at 
com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:755)
at 
com.google.protobuf.CodedInputStream.readRawByte(CodedInputStream.java:769)
at 
com.google.protobuf.CodedInputStream.readRawVarint64(CodedInputStream.java:462)
at 
org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:420)
... 8 more

at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1553)
at org.apache.hadoop.ipc.Client.call(Client.java:1499)
at org.apache.hadoop.ipc.Client.call(Client.java:1396)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
at com.sun.proxy.$Proxy25.blockReport(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.blockReport(DatanodeProtocolClientSideTranslatorPB.java:218)
at 
org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport.testBlockReportSucceedsWithLargerLengthLimit(TestLargeBlockReport.java:97)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 

[jira] [Updated] (HDFS-14707) Add JAVA_LIBRARY_PATH to HTTPFS startup options in branch-2

2019-08-06 Thread Masatake Iwasaki (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-14707:

Attachment: HDFS-14707-branch-2.001.patch

>  Add JAVA_LIBRARY_PATH to HTTPFS startup options in branch-2
> 
>
> Key: HDFS-14707
> URL: https://issues.apache.org/jira/browse/HDFS-14707
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
> Attachments: HDFS-14707-branch-2.001.patch
>
>
> Currently HTTPFS does not load the Hadoop native library, since 
> java.library.path is not set on Tomcat startup.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14708) TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in trunk

2019-08-06 Thread Lisheng Sun (JIRA)
Lisheng Sun created HDFS-14708:
--

 Summary: 
TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in trunk
 Key: HDFS-14708
 URL: https://issues.apache.org/jira/browse/HDFS-14708
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Lisheng Sun


{code:java}
2019-08-07 09:56:26,082 [IPC Server handler 7 on default port 49613] INFO 
ipc.Server (Server.java:logException(2982)) - IPC Server handler 7 on default 
port 49613, call Call#7 Retry#0 
org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol.blockReport from 
127.0.0.1:49618
java.io.IOException: java.lang.IllegalStateException: 
com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
large. May be malicious. Use CodedInputStream.setSizeLimit() to increase the 
size limit.
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.runBlockOp(BlockManager.java:5011)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:1581)
at 
org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReport(DatanodeProtocolServerSideTranslatorPB.java:181)
at 
org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:31664)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:529)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1001)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:929)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1891)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2921)
Caused by: java.lang.IllegalStateException: 
com.google.protobuf.InvalidProtocolBufferException: Protocol message was too 
large. May be malicious. Use CodedInputStream.setSizeLimit() to increase the 
size limit.
at 
org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:424)
at 
org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:396)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.reportDiffSorted(BlockManager.java:2952)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2787)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2655)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.lambda$blockReport$0(NameNodeRpcServer.java:1582)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:5089)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:5068)
Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol message 
was too large. May be malicious. Use CodedInputStream.setSizeLimit() to 
increase the size limit.
at 
com.google.protobuf.InvalidProtocolBufferException.sizeLimitExceeded(InvalidProtocolBufferException.java:110)
at com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:755)
at com.google.protobuf.CodedInputStream.readRawByte(CodedInputStream.java:769)
at 
com.google.protobuf.CodedInputStream.readRawVarint64(CodedInputStream.java:462)
at 
org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:420)
... 8 more
{code}
Ref :: 
[https://builds.apache.org/job/PreCommit-HDFS-Build/27416/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt]
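
For context on the error itself: protobuf 2.x caps a decoded message at 64 MB 
by default, and the CodedInputStream.setSizeLimit() call named in the message 
raises that cap per stream. A minimal sketch of the knob (illustration only; 
where Hadoop should apply it is exactly what this JIRA has to decide):

{code:java}
import com.google.protobuf.CodedInputStream;
import java.io.ByteArrayInputStream;

// Minimal sketch of the knob named in the error above: protobuf 2.x caps
// decoded messages at 64 MB by default, and setSizeLimit() raises that
// cap for a single CodedInputStream.
public class SizeLimitDemo {
  public static void main(String[] args) {
    CodedInputStream cis =
        CodedInputStream.newInstance(new ByteArrayInputStream(new byte[0]));
    cis.setSizeLimit(Integer.MAX_VALUE); // effectively unlimited
    System.out.println("size limit raised for this stream");
  }
}
{code}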



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=290156=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290156
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 07/Aug/19 01:55
Start Date: 07/Aug/19 01:55
Worklog Time Spent: 10m 
  Work Description: shwetayakkali commented on pull request #1146: 
HDDS-1366. Add ability in Recon to track the number of small files in an Ozone 
Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r311341219
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java
 ##
 @@ -155,70 +164,70 @@ private void fetchUpperBoundCount(String type) {
 LOG.error("Unexpected exception while updating key data : {} {}",
 updatedKey, e.getMessage());
 return new ImmutablePair<>(getTaskName(), false);
-  } finally {
-populateFileCountBySizeDB();
   }
+  populateFileCountBySizeDB();
 }
 LOG.info("Completed a 'process' run of FileSizeCountTask.");
 return new ImmutablePair<>(getTaskName(), true);
   }
 
   /**
* Calculate the bin index based on size of the Key.
+   * index is calculated as the number of right shifts
+   * needed until dataSize becomes zero.
*
* @param dataSize Size of the key.
* @return int bin index in upperBoundCount
*/
-  private int calcBinIndex(long dataSize) {
-if(dataSize >= maxFileSizeUpperBound) {
-  return Integer.MIN_VALUE;
-} else if (dataSize > SIZE_512_TB) {
-  //given the small difference in 512TB and 512TB + 1B, index for both
-  //would return same, to differentiate specific condition added.
-  return maxBinSize - 1;
-}
-int logValue = (int) Math.ceil(Math.log(dataSize)/Math.log(2));
-if(logValue < 10){
-  return 0;
-} else{
-  return (dataSize % ONE_KB == 0) ? logValue - 10 + 1: logValue - 10;
+  int calculateBinIndex(long dataSize) {
+int index = 0;
+while(dataSize != 0) {
+  dataSize >>= 1;
+  index += 1;
 }
+return index < 10 ? 0 : index - 10;
   }
 
-  private void countFileSize(OmKeyInfo omKeyInfo) throws IOException{
-int index = calcBinIndex(omKeyInfo.getDataSize());
-if(index == Integer.MIN_VALUE) {
-  throw new IOException("File Size larger than permissible file size "
-  + maxFileSizeUpperBound +" bytes");
+  void countFileSize(OmKeyInfo omKeyInfo) {
+int index;
+if (omKeyInfo.getDataSize() >= maxFileSizeUpperBound) {
+  index = maxBinSize - 1;
+} else {
+  index = calculateBinIndex(omKeyInfo.getDataSize());
 }
 upperBoundCount[index]++;
   }
 
-  private void populateFileCountBySizeDB() {
+  /**
+   * Populate DB with the counts of file sizes calculated
+   * using the dao.
+   *
+   */
+  void populateFileCountBySizeDB() {
 for (int i = 0; i < upperBoundCount.length; i++) {
   long fileSizeUpperBound = (long) Math.pow(2, (10 + i));
   FileCountBySize fileCountRecord =
   fileCountBySizeDao.findById(fileSizeUpperBound);
   FileCountBySize newRecord = new
   FileCountBySize(fileSizeUpperBound, upperBoundCount[i]);
-  if(fileCountRecord == null){
+  if (fileCountRecord == null) {
 
 Review comment:
   Done.
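
For readers following the hunk above: calculateBinIndex counts how many right 
shifts it takes for dataSize to reach zero (its bit length), then subtracts 10 
so that all sizes below 1 KB share the first bin. A standalone check of that 
logic, copied out of the patch so it runs on its own:

{code:java}
// Standalone check of the calculateBinIndex logic from the hunk above.
public class BinIndexCheck {

  static int calculateBinIndex(long dataSize) {
    int index = 0;
    while (dataSize != 0) {
      dataSize >>= 1; // one shift per significant bit
      index += 1;
    }
    // bit lengths up to 10 all map to bin 0, so every size below
    // 1 KB lands in the first bin
    return index < 10 ? 0 : index - 10;
  }

  public static void main(String[] args) {
    System.out.println(calculateBinIndex(500L));          // 0 (under 1 KB)
    System.out.println(calculateBinIndex(1023L));         // 0 (still under 1 KB)
    System.out.println(calculateBinIndex(1024L));         // 1 (exactly 1 KB)
    System.out.println(calculateBinIndex(1024L * 1024L)); // 11 (1 MB)
  }
}
{code}

Each bin i is then persisted with fileSizeUpperBound = 2^(10 + i), matching 
populateFileCountBySizeDB in the same hunk.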
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290156)
Time Spent: 7h 20m  (was: 7h 10m)

> Add ability in Recon to track the number of small files in an Ozone cluster.
> 
>
> Key: HDDS-1366
> URL: https://issues.apache.org/jira/browse/HDDS-1366
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Shweta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h 20m
>  Remaining Estimate: 0h
>
> Ozone users may want to track the number of small files they have in their 
> cluster and where they are present. Recon can provide this information by 
> iterating over the OM Key Table and dividing the keys into different 
> buckets based on data size.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14707) Add JAVA_LIBRARY_PATH to HTTPFS startup options in branch-2

2019-08-06 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901616#comment-16901616
 ] 

Masatake Iwasaki commented on HDFS-14707:
-

Same as done for KMS in HADOOP-11329.
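
A quick way to check whether the fix takes effect is to ask Hadoop's 
NativeCodeLoader directly from a JVM started with the same options as the 
HTTPFS/Tomcat process (a small probe, not part of the patch):

{code:java}
import org.apache.hadoop.util.NativeCodeLoader;

// Small probe, not part of the patch: reports whether libhadoop was
// found on java.library.path for this JVM.
public class NativeProbe {
  public static void main(String[] args) {
    System.out.println("java.library.path = "
        + System.getProperty("java.library.path"));
    System.out.println("native hadoop loaded = "
        + NativeCodeLoader.isNativeCodeLoaded());
  }
}
{code}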

>  Add JAVA_LIBRARY_PATH to HTTPFS startup options in branch-2
> 
>
> Key: HDFS-14707
> URL: https://issues.apache.org/jira/browse/HDFS-14707
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>
> Currently HTTPFS does not load the Hadoop native library, since 
> java.library.path is not set on Tomcat startup.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14707) Add JAVA_LIBRARY_PATH to HTTPFS startup options in branch-2

2019-08-06 Thread Masatake Iwasaki (JIRA)
Masatake Iwasaki created HDFS-14707:
---

 Summary:  Add JAVA_LIBRARY_PATH to HTTPFS startup options in 
branch-2
 Key: HDFS-14707
 URL: https://issues.apache.org/jira/browse/HDFS-14707
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: httpfs
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki


Currently HTTPFS does not load the Hadoop native library, since 
java.library.path is not set on Tomcat startup.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=290153=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290153
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 07/Aug/19 01:53
Start Date: 07/Aug/19 01:53
Worklog Time Spent: 10m 
  Work Description: shwetayakkali commented on pull request #1146: 
HDDS-1366. Add ability in Recon to track the number of small files in an Ozone 
Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r311308069
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java
 ##
 @@ -155,70 +164,70 @@ private void fetchUpperBoundCount(String type) {
 LOG.error("Unexpected exception while updating key data : {} {}",
 updatedKey, e.getMessage());
 return new ImmutablePair<>(getTaskName(), false);
-  } finally {
-populateFileCountBySizeDB();
   }
+  populateFileCountBySizeDB();
 }
 LOG.info("Completed a 'process' run of FileSizeCountTask.");
 return new ImmutablePair<>(getTaskName(), true);
   }
 
   /**
* Calculate the bin index based on size of the Key.
+   * index is calculated as the number of right shifts
+   * needed until dataSize becomes zero.
*
* @param dataSize Size of the key.
* @return int bin index in upperBoundCount
*/
-  private int calcBinIndex(long dataSize) {
-if(dataSize >= maxFileSizeUpperBound) {
-  return Integer.MIN_VALUE;
-} else if (dataSize > SIZE_512_TB) {
-  //given the small difference in 512TB and 512TB + 1B, index for both
-  //would return same, to differentiate specific condition added.
-  return maxBinSize - 1;
-}
-int logValue = (int) Math.ceil(Math.log(dataSize)/Math.log(2));
-if(logValue < 10){
-  return 0;
-} else{
-  return (dataSize % ONE_KB == 0) ? logValue - 10 + 1: logValue - 10;
+  int calculateBinIndex(long dataSize) {
+int index = 0;
+while(dataSize != 0) {
+  dataSize >>= 1;
+  index += 1;
 }
+return index < 10 ? 0 : index - 10;
   }
 
-  private void countFileSize(OmKeyInfo omKeyInfo) throws IOException{
-int index = calcBinIndex(omKeyInfo.getDataSize());
-if(index == Integer.MIN_VALUE) {
-  throw new IOException("File Size larger than permissible file size "
-  + maxFileSizeUpperBound +" bytes");
+  void countFileSize(OmKeyInfo omKeyInfo) {
+int index;
+if (omKeyInfo.getDataSize() >= maxFileSizeUpperBound) {
+  index = maxBinSize - 1;
+} else {
+  index = calculateBinIndex(omKeyInfo.getDataSize());
 }
 upperBoundCount[index]++;
   }
 
-  private void populateFileCountBySizeDB() {
+  /**
+   * Populate DB with the counts of file sizes calculated
+   * using the dao.
+   *
+   */
+  void populateFileCountBySizeDB() {
 for (int i = 0; i < upperBoundCount.length; i++) {
   long fileSizeUpperBound = (long) Math.pow(2, (10 + i));
   FileCountBySize fileCountRecord =
   fileCountBySizeDao.findById(fileSizeUpperBound);
   FileCountBySize newRecord = new
   FileCountBySize(fileSizeUpperBound, upperBoundCount[i]);
-  if(fileCountRecord == null){
+  if (fileCountRecord == null) {
 fileCountBySizeDao.insert(newRecord);
-  } else{
+  } else {
 fileCountBySizeDao.update(newRecord);
   }
 }
   }
 
   private void updateUpperBoundCount(OmKeyInfo value, String operation)
   throws IOException {
-int binIndex = calcBinIndex(value.getDataSize());
-if(binIndex == Integer.MIN_VALUE) {
+int binIndex = calculateBinIndex(value.getDataSize());
+if (binIndex == Integer.MIN_VALUE) {
 
 Review comment:
   Yes, it was left over from a previous check that threw an exception for 
file sizes greater than the permitted value of 1 PB.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290153)
Time Spent: 7h 10m  (was: 7h)

> Add ability in Recon to track the number of small files in an Ozone cluster.
> 
>
> Key: HDDS-1366
> URL: https://issues.apache.org/jira/browse/HDDS-1366
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Shweta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> Ozone 

[jira] [Commented] (HDFS-12491) Support wildcard in CLASSPATH for libhdfs

2019-08-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901610#comment-16901610
 ] 

Hadoop QA commented on HDFS-12491:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
2s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
37m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  3m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}139m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m  6s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}201m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed CTEST tests | test_test_libhdfs_ops_hdfs_static |
|   | test_test_libhdfs_threaded_hdfs_static |
|   | test_test_libhdfs_zerocopy_hdfs_static |
|   | test_test_native_mini_dfs |
|   | test_libhdfs_threaded_hdfspp_test_shim_static |
|   | test_hdfspp_mini_dfs_smoke_hdfspp_test_shim_static |
|   | libhdfs_mini_stress_valgrind_hdfspp_test_static |
|   | memcheck_libhdfs_mini_stress_valgrind_hdfspp_test_static |
|   | test_libhdfs_mini_stress_hdfspp_test_shim_static |
|   | test_hdfs_ext_hdfspp_test_shim_static |
| Failed junit tests | hadoop.hdfs.server.datanode.TestIncrementalBrVariations |
|   | hadoop.hdfs.TestMultipleNNPortQOP |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.TestDFSClientRetries |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
|   | hadoop.hdfs.server.datanode.TestLargeBlockReport |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-12491 |
| JIRA Patch URL | 

[jira] [Resolved] (HDFS-11273) Move TransferFsImage#doGetUrl function to a Util class

2019-08-06 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-11273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-11273.

Resolution: Fixed

> Move TransferFsImage#doGetUrl function to a Util class
> --
>
> Key: HDFS-11273
> URL: https://issues.apache.org/jira/browse/HDFS-11273
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 3.0.0-alpha2
>
> Attachments: HDFS-11273-branch-2.001.patch, 
> HDFS-11273-branch-2.002.patch, HDFS-11273-branch-2.003.patch, 
> HDFS-11273.000.patch, HDFS-11273.001.patch, HDFS-11273.002.patch, 
> HDFS-11273.003.patch, HDFS-11273.004.patch
>
>
> TransferFsImage#doGetUrl downloads files from the specified url and stores 
> them in the specified storage location. HDFS-4025 plans to synchronize the 
> log segments in JournalNodes. If a log segment is missing from a JN, the JN 
> downloads it from another JN which has the required log segment. We need 
> TransferFsImage#doGetUrl and TransferFsImage#receiveFile to accomplish this. 
> So we propose to move these functions to a utility class so that they can be 
> used for JournalNode syncing as well, without duplicating code.
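
As an illustration of the proposed refactoring, here is a minimal sketch of 
such a utility class. The class name ImageTransferUtil and the method body are 
assumptions for illustration only; they are not taken from the attached patches:

{code:java}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URL;

// Hypothetical utility class holding the download logic formerly private to
// TransferFsImage, so JournalNode syncing can reuse it without duplication.
public final class ImageTransferUtil {

  private ImageTransferUtil() { }  // static-only helper class

  // Streams the content at 'url' into 'localFile'.
  public static void doGetUrl(URL url, File localFile) throws IOException {
    try (InputStream in = url.openStream();
         OutputStream out = new FileOutputStream(localFile)) {
      byte[] buf = new byte[8192];
      int n;
      while ((n = in.read(buf)) != -1) {
        out.write(buf, 0, n);
      }
    }
  }
}
{code}

Both TransferFsImage and the JournalNode sync code could then delegate to the 
shared helper instead of carrying their own copies.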



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13916) Distcp SnapshotDiff to support WebHDFS

2019-08-06 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901605#comment-16901605
 ] 

Xiaoyu Yao commented on HDFS-13916:
---

Thanks [~jojochuang] for the update. The latest patch LGTM overall. Just a few 
minor comments. I will check the unit tests and post comments on those later. 

DistCpSync.java
Line 75: NIT: " DFS or WebHdfs." should be "HDFS, WebHdfs or SWebHdfs".

Line 98: can we wrap this logic into something like 
checkSnapshotDiffSupported() to avoid duplication and make future changes 
easier? 
This check should also be done at the beginning of the public API 
DistCpSync#sync() and the private API checkNoChange(), because the else branch 
assumes WebHDFS.
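
A minimal sketch of what the suggested helper might look like, assuming the 
supported types are DistributedFileSystem and WebHdfsFileSystem (which 
SWebHdfsFileSystem extends); the class wrapper and message text are 
illustrative, not the actual patch:

{code:java}
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.web.WebHdfsFileSystem;

// Illustrative helper: one place to answer "does this FileSystem support
// snapshotDiff?", reusable from preSyncCheck(), sync() and checkNoChange().
final class SnapshotDiffSupport {
  static void checkSnapshotDiffSupported(FileSystem fs) {
    // SWebHdfsFileSystem extends WebHdfsFileSystem, so it is covered too.
    boolean supported = fs instanceof DistributedFileSystem
        || fs instanceof WebHdfsFileSystem;
    if (!supported) {
      throw new IllegalArgumentException(fs.getUri()
          + " does not support snapshot-diff-based distcp sync.");
    }
  }
}
{code}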



> Distcp SnapshotDiff to support WebHDFS
> --
>
> Key: HDFS-13916
> URL: https://issues.apache.org/jira/browse/HDFS-13916
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: distcp, webhdfs
>Affects Versions: 3.0.1, 3.1.1
>Reporter: Xun REN
>Assignee: Xun REN
>Priority: Major
>  Labels: easyfix, newbie, patch
> Attachments: HDFS-13916.002.patch, HDFS-13916.003.patch, 
> HDFS-13916.004.patch, HDFS-13916.005.patch, HDFS-13916.006.patch, 
> HDFS-13916.patch
>
>
> [~ljain] worked on HDFS-13052 to make it possible to run DistCp with 
> SnapshotDiff over WebHdfsFileSystem. However, the patch does not modify the 
> actual Java class that is used when launching the command "hadoop distcp ..."
>  
> You can check in the latest version here:
> [https://github.com/apache/hadoop/blob/branch-3.1.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java#L96-L100]
> In the method "preSyncCheck" of the class "DistCpSync", we still check whether 
> the file system is DFS. 
> So I propose to change DistCpSync to take into account what was committed by 
> Lokesh Jain.
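
For context, the check being objected to has roughly this shape (simplified; 
the variable names and message text are paraphrased rather than copied from 
the linked source):

{code:java}
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;

// Simplified shape of DistCpSync#preSyncCheck before the proposed change:
// both ends must be DistributedFileSystem, so a webhdfs:// source or target
// is rejected even though HDFS-13052 added snapshotDiff support to WebHDFS.
final class PreSyncCheckShape {
  static void preSyncCheck(FileSystem srcFs, FileSystem tgtFs) {
    if (!(srcFs instanceof DistributedFileSystem)
        || !(tgtFs instanceof DistributedFileSystem)) {
      throw new IllegalArgumentException(
          "The FileSystems need to be DistributedFileSystem for using"
          + " snapshot-diff-based distcp");
    }
  }
}
{code}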



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14674) [SBN read] Got an unexpected txid when tail editlog

2019-08-06 Thread wangzhaohui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangzhaohui updated HDFS-14674:
---
Attachment: HDFS-14674-005.patch

> [SBN read] Got an unexpected txid when tail editlog
> ---
>
> Key: HDFS-14674
> URL: https://issues.apache.org/jira/browse/HDFS-14674
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: wangzhaohui
>Assignee: wangzhaohui
>Priority: Blocker
> Attachments: HDFS-14674-001.patch, HDFS-14674-003.patch, 
> HDFS-14674-004.patch, HDFS-14674-005.patch, image-2019-07-26-11-34-23-405.png
>
>
> Add the following configuration
> !image-2019-07-26-11-34-23-405.png!
> error:
> {code:java}
> //
> [2019-07-17T11:50:21.048+08:00] [INFO] [Edit log tailer] : replaying edit 
> log: 1/20512836 transactions completed. (0%) [2019-07-17T11:50:21.059+08:00] 
> [INFO] [Edit log tailer] : Edits file 
> http://ip/getJournal?jid=ns1003=232056426162=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232056426162=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232056426162=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH
>  of size 3126782311 edits # 500 loaded in 3 seconds 
> [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log tailer] : Reading 
> org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@51ceb7bc 
> expecting start txid #232056752162 [2019-07-17T11:50:21.059+08:00] [INFO] 
> [Edit log tailer] : Start loading edits file 
> http://ip/getJournal?ipjid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH
>  maxTxnipsToRead = 500 [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log 
> tailer] : Fast-forwarding stream 
> 'http://ip/getJournal?jid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?ipjid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH'
>  to transaction ID 232056751662 [2019-07-17T11:50:21.059+08:00] [INFO] [Edit 
> log tailer] ip: Fast-forwarding stream 
> 'http://ip/getJournal?jid=ns1003=232077264498=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH'
>  to transaction ID 232056751662 [2019-07-17T11:50:21.061+08:00] [ERROR] [Edit 
> log tailer] : Unknown error encountered while tailing edits. Shutting down 
> standby NN. java.io.IOException: There appears to be a gap in the edit log. 
> We expected txid 232056752162, but got txid 232077264498. at 
> org.apache.hadoop.hdfs.server.namenode.MetaRecoveryContext.editLogLoaderPrompt(MetaRecoveryContext.java:94)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:239)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:161)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:895) at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:321)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>  at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:414)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
>  [2019-07-17T11:50:21.064+08:00] [INFO] [Edit log tailer] : Exiting with 
> status 1 [2019-07-17T11:50:21.066+08:00] [INFO] [Thread-1] : SHUTDOWN_MSG: 
> / SHUTDOWN_MSG: 
> Shutting down NameNode at ip 
> /
> {code}
>  
> If dfs.ha.tail-edits.max-txns-per-lock is set to 500, then once the NameNode 
> has loaded 500 transactions from the current edit log it moves on to the next 
> one, even though the current edit log contains more than 500 transactions. As 
> a result, the NameNode gets an unexpected txid when tailing the edit log.
>  
>  
> {code:java}
> //
> [2019-07-17T11:50:21.059+08:00] [INFO] [Edit log tailer] : Edits file 
> http://ip/getJournal?jid=ns1003=232056426162=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> http://ip/getJournal?jid=ns1003=232056426162=-63%3A1902204348%3A0%3ACID-hope-20180214-20161018-SQYH,
>  
> 
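
The arithmetic behind the failure can be reproduced in isolation. This 
standalone sketch (not Hadoop source) mimics the gap check that produced the 
exception in the log, using the txids quoted above:

{code:java}
import java.io.IOException;

// Standalone illustration of the edit-log gap check. The standby tracks the
// next txid it expects; when the stream it is handed starts at a later txid,
// loading aborts with the "gap in the edit log" error quoted above.
public class EditLogGapDemo {

  static void loadEdits(long expectedTxId, long streamStartTxId)
      throws IOException {
    if (streamStartTxId != expectedTxId) {
      throw new IOException("There appears to be a gap in the edit log. "
          + "We expected txid " + expectedTxId + ", but got txid "
          + streamStartTxId + ".");
    }
  }

  public static void main(String[] args) throws IOException {
    // From the log: the 500-txn batch ended expecting 232056752162, but the
    // next selected stream began at 232077264498.
    loadEdits(232056752162L, 232077264498L);
  }
}
{code}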

[jira] [Commented] (HDDS-1919) Fix Javadoc in TestAuditParser

2019-08-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901598#comment-16901598
 ] 

Hudson commented on HDDS-1919:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17054 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17054/])
HDDS-1919. Fix Javadoc in TestAuditParser (#1240) (bharat: rev 
38e6968647fd06bb6d7bcd00d6351ec63e9ac132)
* (edit) 
hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/audit/parser/TestAuditParser.java


> Fix Javadoc in TestAuditParser
> --
>
> Key: HDDS-1919
> URL: https://issues.apache.org/jira/browse/HDDS-1919
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Dinesh Chitlangia
>Assignee: kevin su
>Priority: Minor
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The Javadoc for TestAuditParser mentions an incorrect class name.
> {code:java}
> /**
>  * Tests GenerateOzoneRequiredConfigurations.
>  */
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1919) Fix Javadoc in TestAuditParser

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1919?focusedWorklogId=290128=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290128
 ]

ASF GitHub Bot logged work on HDDS-1919:


Author: ASF GitHub Bot
Created on: 07/Aug/19 01:07
Start Date: 07/Aug/19 01:07
Worklog Time Spent: 10m 
  Work Description: pingsutw commented on issue #1240: HDDS-1919. Fix 
Javadoc in TestAuditParser
URL: https://github.com/apache/hadoop/pull/1240#issuecomment-518898781
 
 
   @bharatviswa504 Thank you so much 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290128)
Time Spent: 1.5h  (was: 1h 20m)

> Fix Javadoc in TestAuditParser
> --
>
> Key: HDDS-1919
> URL: https://issues.apache.org/jira/browse/HDDS-1919
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Dinesh Chitlangia
>Assignee: kevin su
>Priority: Minor
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The Javadoc for TestAuditParser mentions an incorrect class name.
> {code:java}
> /**
>  * Tests GenerateOzoneRequiredConfigurations.
>  */
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1921) TestOzoneManagerDoubleBufferWithOMResponse is flaky

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1921?focusedWorklogId=290126=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290126
 ]

ASF GitHub Bot logged work on HDDS-1921:


Author: ASF GitHub Bot
Created on: 07/Aug/19 01:03
Start Date: 07/Aug/19 01:03
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1238: HDDS-1921. 
TestOzoneManagerDoubleBufferWithOMResponse is flaky
URL: https://github.com/apache/hadoop/pull/1238#issuecomment-518898029
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 128 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 826 | trunk passed |
   | +1 | compile | 462 | trunk passed |
   | +1 | checkstyle | 106 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1178 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 226 | trunk passed |
   | 0 | spotbugs | 520 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 782 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 722 | the patch passed |
   | +1 | compile | 445 | the patch passed |
   | +1 | javac | 445 | the patch passed |
   | +1 | checkstyle | 83 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 735 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 193 | the patch passed |
   | +1 | findbugs | 663 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 342 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2424 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 64 | The patch does not generate ASF License warnings. |
   | | | 9555 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.0 Server=19.03.0 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1238/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1238 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux f59460a8e48f 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 8cef9f8 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1238/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1238/1/testReport/ |
   | Max. process+thread count | 4111 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1238/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290126)
Time Spent: 50m  (was: 40m)

> TestOzoneManagerDoubleBufferWithOMResponse is flaky
> ---
>
> Key: HDDS-1921
> URL: https://issues.apache.org/jira/browse/HDDS-1921
> Project: Hadoop Distributed Data Store
>  

[jira] [Work logged] (HDDS-1919) Fix Javadoc in TestAuditParser

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1919?focusedWorklogId=290125=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290125
 ]

ASF GitHub Bot logged work on HDDS-1919:


Author: ASF GitHub Bot
Created on: 07/Aug/19 01:00
Start Date: 07/Aug/19 01:00
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1240: 
HDDS-1919. Fix Javadoc in TestAuditParser
URL: https://github.com/apache/hadoop/pull/1240
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290125)
Time Spent: 1h 20m  (was: 1h 10m)

> Fix Javadoc in TestAuditParser
> --
>
> Key: HDDS-1919
> URL: https://issues.apache.org/jira/browse/HDDS-1919
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Dinesh Chitlangia
>Assignee: kevin su
>Priority: Minor
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> The Javadoc for TestAuditParser mentions an incorrect class name.
> {code:java}
> /**
>  * Tests GenerateOzoneRequiredConfigurations.
>  */
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1919) Fix Javadoc in TestAuditParser

2019-08-06 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1919:
-
   Resolution: Fixed
Fix Version/s: 0.5.0
   Status: Resolved  (was: Patch Available)

> Fix Javadoc in TestAuditParser
> --
>
> Key: HDDS-1919
> URL: https://issues.apache.org/jira/browse/HDDS-1919
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Dinesh Chitlangia
>Assignee: kevin su
>Priority: Minor
>  Labels: newbie, pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> The Javadoc for TestAuditParser mentions an incorrect class name.
> {code:java}
> /**
>  * Tests GenerateOzoneRequiredConfigurations.
>  */
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1919) Fix Javadoc in TestAuditParser

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1919?focusedWorklogId=290124=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290124
 ]

ASF GitHub Bot logged work on HDDS-1919:


Author: ASF GitHub Bot
Created on: 07/Aug/19 01:00
Start Date: 07/Aug/19 01:00
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1240: HDDS-1919. Fix 
Javadoc in TestAuditParser
URL: https://github.com/apache/hadoop/pull/1240#issuecomment-518897505
 
 
   Merging this without CI, as it only updates a Javadoc comment.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290124)
Time Spent: 1h 10m  (was: 1h)

> Fix Javadoc in TestAuditParser
> --
>
> Key: HDDS-1919
> URL: https://issues.apache.org/jira/browse/HDDS-1919
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Dinesh Chitlangia
>Assignee: kevin su
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The Javadoc for TestAuditParser mentions an incorrect class name.
> {code:java}
> /**
>  * Tests GenerateOzoneRequiredConfigurations.
>  */
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1907) TestOzoneRpcClientWithRatis is failing with ACL errors

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1907?focusedWorklogId=290106=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290106
 ]

ASF GitHub Bot logged work on HDDS-1907:


Author: ASF GitHub Bot
Created on: 07/Aug/19 00:41
Start Date: 07/Aug/19 00:41
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1239: HDDS-1907. 
TestOzoneRpcClientWithRatis is failing with ACL errors. Co…
URL: https://github.com/apache/hadoop/pull/1239#issuecomment-518894346
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 44 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 616 | trunk passed |
   | +1 | compile | 360 | trunk passed |
   | +1 | checkstyle | 65 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 814 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 165 | trunk passed |
   | 0 | spotbugs | 418 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 617 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 546 | the patch passed |
   | +1 | compile | 371 | the patch passed |
   | +1 | javac | 371 | the patch passed |
   | +1 | checkstyle | 81 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 678 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 163 | the patch passed |
   | +1 | findbugs | 636 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 297 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1889 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 7554 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1239/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1239 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux e333bede783b 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 954ff36 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1239/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1239/2/testReport/ |
   | Max. process+thread count | 5390 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/integration-test U: 
hadoop-ozone/integration-test |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1239/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290106)
Time Spent: 0.5h  (was: 20m)

> TestOzoneRpcClientWithRatis is failing with ACL errors
> --
>
> Key: HDDS-1907
> URL: https://issues.apache.org/jira/browse/HDDS-1907
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: 

[jira] [Updated] (HDFS-12914) Block report leases cause missing blocks until next report

2019-08-06 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-12914:
---
Attachment: HDFS-12914.branch-2.8.002.patch

> Block report leases cause missing blocks until next report
> --
>
> Key: HDFS-12914
> URL: https://issues.apache.org/jira/browse/HDFS-12914
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0, 2.9.2
>Reporter: Daryn Sharp
>Assignee: Santosh Marella
>Priority: Critical
> Fix For: 2.10.0, 3.0.4, 3.3.0, 3.2.1, 2.9.3, 3.1.3
>
> Attachments: HDFS-12914-branch-2.001.patch, 
> HDFS-12914-trunk.00.patch, HDFS-12914-trunk.01.patch, HDFS-12914.005.patch, 
> HDFS-12914.006.patch, HDFS-12914.007.patch, HDFS-12914.008.patch, 
> HDFS-12914.009.patch, HDFS-12914.branch-2.000.patch, 
> HDFS-12914.branch-2.001.patch, HDFS-12914.branch-2.002.patch, 
> HDFS-12914.branch-2.8.001.patch, HDFS-12914.branch-2.8.002.patch, 
> HDFS-12914.branch-2.patch, HDFS-12914.branch-3.0.patch, 
> HDFS-12914.branch-3.1.001.patch, HDFS-12914.branch-3.1.002.patch, 
> HDFS-12914.branch-3.2.patch, HDFS-12914.utfix.patch
>
>
> {{BlockReportLeaseManager#checkLease}} will reject FBRs from DNs under 
> conditions such as "unknown datanode", "not in pending set", "lease has 
> expired", wrong lease id, etc. Lease rejection does not throw an exception; 
> it returns false, which bubbles up to {{NameNodeRpcServer#blockReport}} and 
> is interpreted as {{noStaleStorages}}.
> A re-registering node whose FBR is rejected due to an invalid lease becomes 
> active with _no blocks_. A replication storm ensues, possibly causing DNs to 
> temporarily go dead (HDFS-12645), leading to more FBR lease rejections on 
> re-registration. The cluster will have many "missing blocks" until the DN's 
> next FBR is sent and/or forced.
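
A standalone sketch (not Hadoop source) of the ambiguity described above; the 
method shapes are simplified assumptions that only mirror the quoted class and 
method names:

{code:java}
// Illustrates why a rejected lease looks like a normal result: checkLease
// signals rejection with a plain 'false' instead of an exception, and the
// same boolean channel carries the legitimate noStaleStorages answer.
public class BlockReportLeaseDemo {

  // Mimics BlockReportLeaseManager#checkLease: false on "unknown datanode",
  // "not in pending set", "lease has expired", wrong lease id, etc.
  static boolean checkLease(long leaseId, long expectedLeaseId) {
    return leaseId == expectedLeaseId;
  }

  // Mimics the path up through NameNodeRpcServer#blockReport.
  static boolean blockReport(long leaseId, long expectedLeaseId,
      boolean noStaleStorages) {
    if (!checkLease(leaseId, expectedLeaseId)) {
      return false;  // FBR silently dropped, yet indistinguishable from a
                     // processed report whose genuine answer is 'false'
    }
    return noStaleStorages;  // genuine result of processing the report
  }

  public static void main(String[] args) {
    System.out.println(blockReport(1L, 2L, true));   // false: lease rejected
    System.out.println(blockReport(2L, 2L, false));  // false: real result
    // Identical return values for very different situations.
  }
}
{code}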



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1919) Fix Javadoc in TestAuditParser

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1919?focusedWorklogId=290104=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290104
 ]

ASF GitHub Bot logged work on HDDS-1919:


Author: ASF GitHub Bot
Created on: 07/Aug/19 00:34
Start Date: 07/Aug/19 00:34
Worklog Time Spent: 10m 
  Work Description: pingsutw commented on pull request #1240: HDDS-1919. 
Fix Javadoc in TestAuditParser
URL: https://github.com/apache/hadoop/pull/1240#discussion_r311328200
 
 

 ##
 File path: 
hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/audit/parser/TestAuditParser.java
 ##
 @@ -42,9 +42,9 @@
 import java.util.List;
 
 /**
- * Tests GenerateOzoneRequiredConfigurations.
+ * Tests TestAuditParser.
  */
-public class TestAuditParser {
+public class AuditParser {
 
 Review comment:
   Sorry for my mistake, I have updated the patch.
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290104)
Time Spent: 1h  (was: 50m)

> Fix Javadoc in TestAuditParser
> --
>
> Key: HDDS-1919
> URL: https://issues.apache.org/jira/browse/HDDS-1919
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Dinesh Chitlangia
>Assignee: kevin su
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The Javadoc for TestAuditParser mentions an incorrect class name.
> {code:java}
> /**
>  * Tests GenerateOzoneRequiredConfigurations.
>  */
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14705) Remove unused configuration dfs.min.replication

2019-08-06 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901581#comment-16901581
 ] 

Wei-Chiu Chuang commented on HDFS-14705:


LGTM +1 

> Remove unused configuration dfs.min.replication
> ---
>
> Key: HDFS-14705
> URL: https://issues.apache.org/jira/browse/HDFS-14705
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: CR Hota
>Priority: Trivial
> Attachments: HDFS-14705.001.patch
>
>
> A few HDFS tests set the configuration property dfs.min.replication, which is 
> not used anywhere in the code. It doesn't seem to be a leftover from legacy 
> code either. Better to clean these usages out. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1907) TestOzoneRpcClientWithRatis is failing with ACL errors

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1907?focusedWorklogId=290103=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290103
 ]

ASF GitHub Bot logged work on HDDS-1907:


Author: ASF GitHub Bot
Created on: 07/Aug/19 00:29
Start Date: 07/Aug/19 00:29
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1239: HDDS-1907. 
TestOzoneRpcClientWithRatis is failing with ACL errors. Co…
URL: https://github.com/apache/hadoop/pull/1239#issuecomment-518892268
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 49 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 81 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 14 | hadoop-ozone in trunk failed. |
   | +1 | compile | 420 | trunk passed |
   | +1 | checkstyle | 66 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 842 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 164 | trunk passed |
   | 0 | spotbugs | 440 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 638 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 543 | the patch passed |
   | +1 | compile | 381 | the patch passed |
   | +1 | javac | 381 | the patch passed |
   | +1 | checkstyle | 79 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | whitespace | 0 | The patch has 59 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | -1 | whitespace | 1 | The patch 600 line(s) with tabs. |
   | +1 | shadedclient | 689 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 168 | the patch passed |
   | +1 | findbugs | 643 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 291 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1914 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 7183 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.om.TestScmSafeMode |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1239/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1239 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 3610886b8fcf 4.4.0-157-generic #185-Ubuntu SMP Tue Jul 23 
09:17:01 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 954ff36 |
   | Default Java | 1.8.0_212 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1239/1/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1239/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1239/1/artifact/out/whitespace-eol.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1239/1/artifact/out/whitespace-tabs.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1239/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1239/1/testReport/ |
   | Max. process+thread count | 5369 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/integration-test U: 
hadoop-ozone/integration-test |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1239/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, 

[jira] [Commented] (HDDS-1918) hadoop-ozone-tools has integration tests run as unit

2019-08-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901573#comment-16901573
 ] 

Hudson commented on HDDS-1918:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17052 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17052/])
HDDS-1918. hadoop-ozone-tools has integration tests run as unit (#1236) 
(bharat: rev c4d97ae500606d28b67596d2842038e842d42067)
* (edit) hadoop-ozone/dev-support/checks/integration.sh
* (edit) hadoop-ozone/dev-support/checks/unit.sh


> hadoop-ozone-tools has integration tests run as unit
> 
>
> Key: HDDS-1918
> URL: https://issues.apache.org/jira/browse/HDDS-1918
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build, test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> HDDS-1735 created separate test runner scripts for unit and integration tests.
> Problem: {{hadoop-ozone-tools}} tests are currently run as part of the unit 
> tests, but most of them start a {{MiniOzoneCluster}}, which is defined in 
> {{hadoop-ozone-integration-test}}.  Thus I think these tests are really 
> integration tests, and should be run by {{integration.sh}} instead.  There 
> are currently only 3 real unit tests:
> {noformat}
> hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/audit/parser/TestAuditParser.java
> hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/freon/TestProgressBar.java
> hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/genconf/TestGenerateOzoneRequiredConfigurations.java
> {noformat}
> {{hadoop-ozone-tools}} tests take ~6 minutes.
> Possible solutions in order of increasing complexity:
> # Run {{hadoop-ozone-tools}} tests in {{integration.sh}} instead of 
> {{unit.sh}} (This is similar to {{hadoop-ozone-filesystem}}, which is already 
> run by {{integration.sh}} and has 2 real unit tests.)
> # Move all integration test classes to the {{hadoop-ozone-integration-test}} 
> module, and make it depend on {{hadoop-ozone-tools}} and 
> {{hadoop-ozone-filesystem}} instead of the other way around.
> # Rename integration test classes to {{\*IT.java}} or {{IT\*.java}}, add 
> filters for Surefire runs.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1915) Remove hadoop script from ozone distribution

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1915?focusedWorklogId=290086=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290086
 ]

ASF GitHub Bot logged work on HDDS-1915:


Author: ASF GitHub Bot
Created on: 07/Aug/19 00:06
Start Date: 07/Aug/19 00:06
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1233: HDDS-1915. 
Remove hadoop script from ozone distribution
URL: https://github.com/apache/hadoop/pull/1233#issuecomment-518887905
 
 
   @arp7 can you also take a look at this change?
   I will wait a day for others to review; if there are no more comments I will 
commit it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290086)
Time Spent: 0.5h  (was: 20m)

> Remove hadoop script from ozone distribution
> 
>
> Key: HDDS-1915
> URL: https://issues.apache.org/jira/browse/HDDS-1915
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The /bin/hadoop script is included in the ozone distribution even though we 
> have a dedicated /bin/ozone.
> [~arp] reported that this can be confusing: for example, "hadoop classpath" 
> returns a bad classpath ("ozone classpath " should be used 
> instead).
> To avoid such confusion I suggest removing the hadoop script from the 
> distribution, as the ozone script already provides all the functionality.
> It also helps us reduce the dependencies between hadoop 3.2-SNAPSHOT and 
> ozone, as we currently use the snapshot hadoop script.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1918) hadoop-ozone-tools has integration tests run as unit

2019-08-06 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1918:
-
   Resolution: Fixed
Fix Version/s: 0.5.0
   Status: Resolved  (was: Patch Available)

> hadoop-ozone-tools has integration tests run as unit
> 
>
> Key: HDDS-1918
> URL: https://issues.apache.org/jira/browse/HDDS-1918
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build, test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> HDDS-1735 created separate test runner scripts for unit and integration tests.
> Problem: {{hadoop-ozone-tools}} tests are currently run as part of the unit 
> tests, but most of them start a {{MiniOzoneCluster}}, which is defined in 
> {{hadoop-ozone-integration-test}}.  Thus I think these tests are really 
> integration tests, and should be run by {{integration.sh}} instead.  There 
> are currently only 3 real unit tests:
> {noformat}
> hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/audit/parser/TestAuditParser.java
> hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/freon/TestProgressBar.java
> hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/genconf/TestGenerateOzoneRequiredConfigurations.java
> {noformat}
> {{hadoop-ozone-tools}} tests take ~6 minutes.
> Possible solutions in order of increasing complexity:
> # Run {{hadoop-ozone-tools}} tests in {{integration.sh}} instead of 
> {{unit.sh}} (This is similar to {{hadoop-ozone-filesystem}}, which is already 
> run by {{integration.sh}} and has 2 real unit tests.)
> # Move all integration test classes to the {{hadoop-ozone-integration-test}} 
> module, and make it depend on {{hadoop-ozone-tools}} and 
> {{hadoop-ozone-filesystem}} instead of the other way around.
> # Rename integration test classes to {{\*IT.java}} or {{IT\*.java}}, add 
> filters for Surefire runs.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1918) hadoop-ozone-tools has integration tests run as unit

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1918?focusedWorklogId=290083=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290083
 ]

ASF GitHub Bot logged work on HDDS-1918:


Author: ASF GitHub Bot
Created on: 07/Aug/19 00:01
Start Date: 07/Aug/19 00:01
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1236: 
HDDS-1918. hadoop-ozone-tools has integration tests run as unit
URL: https://github.com/apache/hadoop/pull/1236
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290083)
Time Spent: 40m  (was: 0.5h)

> hadoop-ozone-tools has integration tests run as unit
> 
>
> Key: HDDS-1918
> URL: https://issues.apache.org/jira/browse/HDDS-1918
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build, test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> HDDS-1735 created separate test runner scripts for unit and integration tests.
> Problem: {{hadoop-ozone-tools}} tests are currently run as part of the unit 
> tests, but most of them start a {{MiniOzoneCluster}}, which is defined in 
> {{hadoop-ozone-integration-test}}.  Thus I think these tests are really 
> integration tests, and should be run by {{integration.sh}} instead.  There 
> are currently only 3 real unit tests:
> {noformat}
> hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/audit/parser/TestAuditParser.java
> hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/freon/TestProgressBar.java
> hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/genconf/TestGenerateOzoneRequiredConfigurations.java
> {noformat}
> {{hadoop-ozone-tools}} tests take ~6 minutes.
> Possible solutions in order of increasing complexity:
> # Run {{hadoop-ozone-tools}} tests in {{integration.sh}} instead of 
> {{unit.sh}} (This is similar to {{hadoop-ozone-filesystem}}, which is already 
> run by {{integration.sh}} and has 2 real unit tests.)
> # Move all integration test classes to the {{hadoop-ozone-integration-test}} 
> module, and make it depend on {{hadoop-ozone-tools}} and 
> {{hadoop-ozone-filesystem}} instead of the other way around.
> # Rename integration test classes to {{\*IT.java}} or {{IT\*.java}}, add 
> filters for Surefire runs.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1918) hadoop-ozone-tools has integration tests run as unit

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1918?focusedWorklogId=290084=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290084
 ]

ASF GitHub Bot logged work on HDDS-1918:


Author: ASF GitHub Bot
Created on: 07/Aug/19 00:01
Start Date: 07/Aug/19 00:01
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1236: HDDS-1918. 
hadoop-ozone-tools has integration tests run as unit
URL: https://github.com/apache/hadoop/pull/1236#issuecomment-518886984
 
 
   Thank You @adoroszlai for the contribution.
   I have committed this to the trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290084)
Time Spent: 50m  (was: 40m)

> hadoop-ozone-tools has integration tests run as unit
> 
>
> Key: HDDS-1918
> URL: https://issues.apache.org/jira/browse/HDDS-1918
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build, test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> HDDS-1735 created separate test runner scripts for unit and integration tests.
> Problem: {{hadoop-ozone-tools}} tests are currently run as part of the unit 
> tests, but most of them start a {{MiniOzoneCluster}}, which is defined in 
> {{hadoop-ozone-integration-test}}.  Thus I think these tests are really 
> integration tests, and should be run by {{integration.sh}} instead.  There 
> are currently only 3 real unit tests:
> {noformat}
> hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/audit/parser/TestAuditParser.java
> hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/freon/TestProgressBar.java
> hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/genconf/TestGenerateOzoneRequiredConfigurations.java
> {noformat}
> {{hadoop-ozone-tools}} tests take ~6 minutes.
> Possible solutions in order of increasing complexity:
> # Run {{hadoop-ozone-tools}} tests in {{integration.sh}} instead of 
> {{unit.sh}} (This is similar to {{hadoop-ozone-filesystem}}, which is already 
> run by {{integration.sh}} and has 2 real unit tests.)
> # Move all integration test classes to the {{hadoop-ozone-integration-test}} 
> module, and make it depend on {{hadoop-ozone-tools}} and 
> {{hadoop-ozone-filesystem}} instead of the other way around.
> # Rename integration test classes to {{\*IT.java}} or {{IT\*.java}}, add 
> filters for Surefire runs.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1916) Only contract tests are run in ozonefs module

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1916?focusedWorklogId=290082=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290082
 ]

ASF GitHub Bot logged work on HDDS-1916:


Author: ASF GitHub Bot
Created on: 06/Aug/19 23:59
Start Date: 06/Aug/19 23:59
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1235: HDDS-1916. 
Only contract tests are run in ozonefs module
URL: https://github.com/apache/hadoop/pull/1235#issuecomment-518886605
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290082)
Time Spent: 40m  (was: 0.5h)

> Only contract tests are run in ozonefs module
> -
>
> Key: HDDS-1916
> URL: https://issues.apache.org/jira/browse/HDDS-1916
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.3.0
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {{hadoop-ozone-filesystem}} has 6 test classes that are not being run:
> {code}
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestFilteredClassLoader.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSInputStream.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystem.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileSystemWithMocks.java
> hadoop-ozone/ozonefs/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFsRenameDir.java
> {code}
> {code:title=https://raw.githubusercontent.com/elek/ozone-ci/master/byscane/byscane-nightly-vxsck/integration/output.log}
> [INFO] ---
> [INFO]  T E S T S
> [INFO] ---
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDelete
> [INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.956 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDelete
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractMkdir
> [INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.528 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractMkdir
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractSeek
> [INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 42.245 s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractSeek
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractOpen
> [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.996 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractOpen
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRename
> [INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.816 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRename
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDistCp
> [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 59.418 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractDistCp
> [INFO] Running 
> org.apache.hadoop.fs.ozone.contract.ITestOzoneContractGetFileStatus
> [INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 35.042 s - in 
> org.apache.hadoop.fs.ozone.contract.ITestOzoneContractGetFileStatus
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractCreate
> [WARNING] Tests run: 11, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 
> 35.144 s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractCreate
> [INFO] Running org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRootDir
> [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.986 
> s - in org.apache.hadoop.fs.ozone.contract.ITestOzoneContractRootDir
> [INFO] 
> [INFO] Results:
> [INFO] 
> [WARNING] Tests run: 92, Failures: 0, Errors: 0, Skipped: 2
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1919) Fix Javadoc in TestAuditParser

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1919?focusedWorklogId=290081=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290081
 ]

ASF GitHub Bot logged work on HDDS-1919:


Author: ASF GitHub Bot
Created on: 06/Aug/19 23:58
Start Date: 06/Aug/19 23:58
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1240: 
HDDS-1919. Fix Javadoc in TestAuditParser
URL: https://github.com/apache/hadoop/pull/1240#discussion_r311321684
 
 

 ##
 File path: 
hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/audit/parser/TestAuditParser.java
 ##
 @@ -42,9 +42,9 @@
 import java.util.List;
 
 /**
- * Tests GenerateOzoneRequiredConfigurations.
+ * Tests TestAuditParser.
  */
-public class TestAuditParser {
+public class AuditParser {
 
 Review comment:
   Hi @pingsutw 
   The class name should be TestAuditParser.
   The comment for the class should be Test AuditParser.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290081)
Time Spent: 50m  (was: 40m)

> Fix Javadoc in TestAuditParser
> --
>
> Key: HDDS-1919
> URL: https://issues.apache.org/jira/browse/HDDS-1919
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Dinesh Chitlangia
>Assignee: kevin su
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The Javadoc for TestAuditParser mentions an incorrect class name.
> {code:java}
> /**
>  * Tests GenerateOzoneRequiredConfigurations.
>  */
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14705) Remove unused configuration dfs.min.replication

2019-08-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901563#comment-16901563
 ] 

Hadoop QA commented on HDFS-14705:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}142m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestLargeBlockReport |
|   | hadoop.hdfs.server.balancer.TestBalancer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14705 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976859/HDFS-14705.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5a7194c57c4f 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 22430c1 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27421/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27421/testReport/ |
| Max. process+thread count | 4094 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27421/console |
| Powered by | Apache 

[jira] [Work logged] (HDDS-1619) Support volume acl operations for OM HA.

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1619?focusedWorklogId=290068&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290068
 ]

ASF GitHub Bot logged work on HDDS-1619:


Author: ASF GitHub Bot
Created on: 06/Aug/19 23:40
Start Date: 06/Aug/19 23:40
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1147: 
HDDS-1619. Support volume acl operations for OM HA. Contributed by…
URL: https://github.com/apache/hadoop/pull/1147#discussion_r311318331
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/volume/acl/OMVolumeAddAclRequest.java
 ##
 @@ -0,0 +1,110 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.request.volume.acl;
+
+import com.google.common.base.Preconditions;
+import com.google.common.collect.Lists;
+import org.apache.hadoop.hdds.scm.storage.CheckedBiFunction;
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.volume.OMVolumeAclOpResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMResponse;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.List;
+
+/**
+ * Handles volume add acl request.
+ */
+public class OMVolumeAddAclRequest extends OMVolumeAclRequest {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMVolumeAddAclRequest.class);
+
+  private static CheckedBiFunction<List<OzoneAcl>,
+  OmVolumeArgs, IOException> volumeAddAclOp;
+
+  static {
+volumeAddAclOp = (acls, volArgs) -> volArgs.addAcl(acls.get(0));
+  }
+
+  private List<OzoneAcl> ozoneAcls;
+  private String volumeName;
+
+  public OMVolumeAddAclRequest(OMRequest omRequest) {
+super(omRequest, volumeAddAclOp);
+OzoneManagerProtocolProtos.AddAclRequest addAclRequest =
+getOmRequest().getAddAclRequest();
+Preconditions.checkNotNull(addAclRequest);
+ozoneAcls = Lists.newArrayList(
+OzoneAcl.fromProtobuf(addAclRequest.getAcl()));
+volumeName = addAclRequest.getObj().getPath().substring(1);
+  }
+
+  @Override
+  public List<OzoneAcl> getAcls() {
+return ozoneAcls;
+  }
+
+  @Override
+  public String getVolumeName() {
+return volumeName;
+  }
+
+  private OzoneAcl getAcl() {
+return ozoneAcls.get(0);
+  }
+
+
+  @Override
+  OMResponse.Builder onInit() {
+return OMResponse.newBuilder().setCmdType(
+OzoneManagerProtocolProtos.Type.AddAcl)
+.setStatus(OzoneManagerProtocolProtos.Status.OK).setSuccess(true);
+  }
+
+  @Override
+  OMClientResponse onSuccess(OMResponse.Builder omResponse,
+  OmVolumeArgs omVolumeArgs, boolean result){
+omResponse.setAddAclResponse(OzoneManagerProtocolProtos.AddAclResponse
+.newBuilder().setResponse(result).build());
+return new OMVolumeAclOpResponse(omVolumeArgs, omResponse.build());
+  }
+
+  @Override
+  OMClientResponse onFailure(OMResponse.Builder omResponse,
+  IOException ex) {
+return new OMVolumeAclOpResponse(null,
+createErrorOMResponse(omResponse, ex));
+  }
+
+  @Override
+  void onComplete(IOException ex) {
 
 Review comment:
   We should call setSuccess with the operationResult flag, because onInit() sets 
it to true.
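   A rough sketch of the suggestion, with an assumed accessor and signature (illustrative only, not the actual patch):
   
   {code:java}
   @Override
   void onComplete(boolean operationResult, IOException ex) {
     // onInit() pre-populates setSuccess(true), so reflect the real
     // outcome of the ACL operation here. getOmResponse() is a
     // hypothetical accessor for the response builder.
     getOmResponse().setSuccess(operationResult);
   }
   {code}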
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290068)
Time Spent: 9.5h  (was: 9h 20m)

> Support volume acl operations for OM HA.
> 
>
> Key: HDDS-1619
> URL: 

[jira] [Work logged] (HDDS-1619) Support volume acl operations for OM HA.

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1619?focusedWorklogId=290065&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290065
 ]

ASF GitHub Bot logged work on HDDS-1619:


Author: ASF GitHub Bot
Created on: 06/Aug/19 23:37
Start Date: 06/Aug/19 23:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1147: HDDS-1619. 
Support volume acl operations for OM HA. Contributed by…
URL: https://github.com/apache/hadoop/pull/1147#issuecomment-518882223
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 45 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 582 | trunk passed |
   | +1 | compile | 360 | trunk passed |
   | +1 | checkstyle | 72 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 805 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 148 | trunk passed |
   | 0 | spotbugs | 419 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 606 | trunk passed |
   | -0 | patch | 456 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 565 | the patch passed |
   | +1 | compile | 367 | the patch passed |
   | +1 | javac | 367 | the patch passed |
   | +1 | checkstyle | 74 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 657 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 156 | the patch passed |
   | +1 | findbugs | 691 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 290 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2975 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 8582 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.pipeline.TestNodeFailure |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestMultiBlockWritesWithDnFailures |
   |   | hadoop.ozone.client.rpc.Test2WayCommitInRatis |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.hdds.scm.pipeline.TestNode2PipelineMap |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/16/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1147 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 21d94a3cc585 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 22430c1 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/16/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/16/testReport/ |
   | Max. process+thread count | 4167 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1147/16/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290065)
Time Spent: 9h 20m  (was: 9h 10m)

> Support volume acl operations for OM HA.
> 
>
> Key: HDDS-1619
>  

[jira] [Work logged] (HDDS-1919) Fix Javadoc in TestAuditParser

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1919?focusedWorklogId=290063&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290063
 ]

ASF GitHub Bot logged work on HDDS-1919:


Author: ASF GitHub Bot
Created on: 06/Aug/19 23:32
Start Date: 06/Aug/19 23:32
Worklog Time Spent: 10m 
  Work Description: pingsutw commented on pull request #1240: HDDS-1919. 
Fix Javadoc in TestAuditParser
URL: https://github.com/apache/hadoop/pull/1240#discussion_r311316711
 
 

 ##
 File path: 
hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/audit/parser/TestAuditParser.java
 ##
 @@ -42,7 +42,7 @@
 import java.util.List;
 
 /**
- * Tests GenerateOzoneRequiredConfigurations.
+ * Tests TestAuditParser.
 
 Review comment:
   @bharatviswa504 Thanks for your review. 
   I have already updated my patch. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290063)
Time Spent: 40m  (was: 0.5h)

> Fix Javadoc in TestAuditParser
> --
>
> Key: HDDS-1919
> URL: https://issues.apache.org/jira/browse/HDDS-1919
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Dinesh Chitlangia
>Assignee: kevin su
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The Javadoc for TestAuditParser mentions an incorrect class name.
> {code:java}
> /**
>  * Tests GenerateOzoneRequiredConfigurations.
>  */
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1919) Fix Javadoc in TestAuditParser

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1919?focusedWorklogId=290057&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290057
 ]

ASF GitHub Bot logged work on HDDS-1919:


Author: ASF GitHub Bot
Created on: 06/Aug/19 23:25
Start Date: 06/Aug/19 23:25
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1240: 
HDDS-1919. Fix Javadoc in TestAuditParser
URL: https://github.com/apache/hadoop/pull/1240#discussion_r311315299
 
 

 ##
 File path: 
hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/audit/parser/TestAuditParser.java
 ##
 @@ -42,7 +42,7 @@
 import java.util.List;
 
 /**
- * Tests GenerateOzoneRequiredConfigurations.
+ * Tests TestAuditParser.
 
 Review comment:
   It should be mentioned as AuditParser.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290057)
Time Spent: 0.5h  (was: 20m)

> Fix Javadoc in TestAuditParser
> --
>
> Key: HDDS-1919
> URL: https://issues.apache.org/jira/browse/HDDS-1919
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Dinesh Chitlangia
>Assignee: kevin su
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The Javadoc for TestAuditParser mentions an incorrect class name.
> {code:java}
> /**
>  * Tests GenerateOzoneRequiredConfigurations.
>  */
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1919) Fix Javadoc in TestAuditParser

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1919?focusedWorklogId=290056&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290056
 ]

ASF GitHub Bot logged work on HDDS-1919:


Author: ASF GitHub Bot
Created on: 06/Aug/19 23:25
Start Date: 06/Aug/19 23:25
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1240: 
HDDS-1919. Fix Javadoc in TestAuditParser
URL: https://github.com/apache/hadoop/pull/1240#discussion_r311315299
 
 

 ##
 File path: 
hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/audit/parser/TestAuditParser.java
 ##
 @@ -42,7 +42,7 @@
 import java.util.List;
 
 /**
- * Tests GenerateOzoneRequiredConfigurations.
+ * Tests TestAuditParser.
 
 Review comment:
   It should be AuditParser.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290056)
Time Spent: 20m  (was: 10m)

> Fix Javadoc in TestAuditParser
> --
>
> Key: HDDS-1919
> URL: https://issues.apache.org/jira/browse/HDDS-1919
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Dinesh Chitlangia
>Assignee: kevin su
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The Javadoc for TestAuditParser mentions an incorrect class name.
> {code:java}
> /**
>  * Tests GenerateOzoneRequiredConfigurations.
>  */
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1919) Fix Javadoc in TestAuditParser

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1919:
-
Labels: newbie pull-request-available  (was: newbie)

> Fix Javadoc in TestAuditParser
> --
>
> Key: HDDS-1919
> URL: https://issues.apache.org/jira/browse/HDDS-1919
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Dinesh Chitlangia
>Assignee: kevin su
>Priority: Minor
>  Labels: newbie, pull-request-available
>
> The Javadoc for TestAuditParser mentions an incorrect class name.
> {code:java}
> /**
>  * Tests GenerateOzoneRequiredConfigurations.
>  */
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1919) Fix Javadoc in TestAuditParser

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1919?focusedWorklogId=290055&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290055
 ]

ASF GitHub Bot logged work on HDDS-1919:


Author: ASF GitHub Bot
Created on: 06/Aug/19 23:21
Start Date: 06/Aug/19 23:21
Worklog Time Spent: 10m 
  Work Description: pingsutw commented on pull request #1240: HDDS-1919. 
Fix Javadoc in TestAuditParser
URL: https://github.com/apache/hadoop/pull/1240
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290055)
Time Spent: 10m
Remaining Estimate: 0h

> Fix Javadoc in TestAuditParser
> --
>
> Key: HDDS-1919
> URL: https://issues.apache.org/jira/browse/HDDS-1919
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Dinesh Chitlangia
>Assignee: kevin su
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The Javadoc for TestAuditParser mentions an incorrect class name.
> {code:java}
> /**
>  * Tests GenerateOzoneRequiredConfigurations.
>  */
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1919) Fix Javadoc in TestAuditParser

2019-08-06 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su updated HDDS-1919:
---
Status: Patch Available  (was: Open)

> Fix Javadoc in TestAuditParser
> --
>
> Key: HDDS-1919
> URL: https://issues.apache.org/jira/browse/HDDS-1919
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Dinesh Chitlangia
>Assignee: kevin su
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The Javadoc for TestAuditParser mentions an incorrect class name.
> {code:java}
> /**
>  * Tests GenerateOzoneRequiredConfigurations.
>  */
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12914) Block report leases cause missing blocks until next report

2019-08-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901544#comment-16901544
 ] 

Hadoop QA commented on HDFS-12914:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m  
5s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.8 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
45s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
39s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
42s{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 42s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
37s{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_222. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_222. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
41s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 40s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
20s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:b93746a |
| JIRA Issue | HDFS-12914 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976873/HDFS-12914.branch-2.8.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3ec22f8c62ea 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2.8 / 8e302a0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| Multi-JDK versions |  /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_95 

[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=290046&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290046
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 06/Aug/19 22:57
Start Date: 06/Aug/19 22:57
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #1146: 
HDDS-1366. Add ability in Recon to track the number of small files in an Ozone 
Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r311309493
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java
 ##
 @@ -155,70 +164,70 @@ private void fetchUpperBoundCount(String type) {
 LOG.error("Unexpected exception while updating key data : {} {}",
 updatedKey, e.getMessage());
 return new ImmutablePair<>(getTaskName(), false);
-  } finally {
-populateFileCountBySizeDB();
   }
+  populateFileCountBySizeDB();
 }
 LOG.info("Completed a 'process' run of FileSizeCountTask.");
 return new ImmutablePair<>(getTaskName(), true);
   }
 
   /**
* Calculate the bin index based on size of the Key.
+   * index is calculated as the number of right shifts
+   * needed until dataSize becomes zero.
*
* @param dataSize Size of the key.
* @return int bin index in upperBoundCount
*/
-  private int calcBinIndex(long dataSize) {
-if(dataSize >= maxFileSizeUpperBound) {
-  return Integer.MIN_VALUE;
-} else if (dataSize > SIZE_512_TB) {
-  //given the small difference in 512TB and 512TB + 1B, index for both 
would
-  //return same, to differentiate specific condition added.
-  return maxBinSize - 1;
-}
-int logValue = (int) Math.ceil(Math.log(dataSize)/Math.log(2));
-if(logValue < 10){
-  return 0;
-} else{
-  return (dataSize % ONE_KB == 0) ? logValue - 10 + 1: logValue - 10;
+  int calculateBinIndex(long dataSize) {
+int index = 0;
+while(dataSize != 0) {
+  dataSize >>= 1;
+  index += 1;
 }
+return index < 10 ? 0 : index - 10;
   }
 
-  private void countFileSize(OmKeyInfo omKeyInfo) throws IOException{
-int index = calcBinIndex(omKeyInfo.getDataSize());
-if(index == Integer.MIN_VALUE) {
-  throw new IOException("File Size larger than permissible file size "
-  + maxFileSizeUpperBound +" bytes");
+  void countFileSize(OmKeyInfo omKeyInfo) {
+int index;
+if (omKeyInfo.getDataSize() >= maxFileSizeUpperBound) {
+  index = maxBinSize - 1;
+} else {
+  index = calculateBinIndex(omKeyInfo.getDataSize());
 }
 upperBoundCount[index]++;
   }
 
-  private void populateFileCountBySizeDB() {
+  /**
+   * Populate DB with the counts of file sizes calculated
+   * using the dao.
+   *
+   */
+  void populateFileCountBySizeDB() {
 for (int i = 0; i < upperBoundCount.length; i++) {
   long fileSizeUpperBound = (long) Math.pow(2, (10 + i));
   FileCountBySize fileCountRecord =
   fileCountBySizeDao.findById(fileSizeUpperBound);
   FileCountBySize newRecord = new
   FileCountBySize(fileSizeUpperBound, upperBoundCount[i]);
-  if(fileCountRecord == null){
+  if (fileCountRecord == null) {
 
 Review comment:
   Yes, it should be `Long.MAX_VALUE`.
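   To make the shift-based binning concrete, a small self-contained sketch mirroring the quoted patch (the constant 10 comes from the diff above; bin 0 holds keys under 1 KB and each later bin doubles the upper bound):
   
   {code:java}
   public class BinIndexDemo {
     // Same logic as calculateBinIndex in the quoted FileSizeCountTask diff.
     static int calculateBinIndex(long dataSize) {
       int index = 0;
       while (dataSize != 0) {
         dataSize >>= 1;  // one right shift per significant bit
         index += 1;
       }
       return index < 10 ? 0 : index - 10;
     }
   
     public static void main(String[] args) {
       System.out.println(calculateBinIndex(1023L)); // 10 shifts -> bin 0 (< 1 KB)
       System.out.println(calculateBinIndex(1024L)); // 11 shifts -> bin 1 (1 KB .. 2 KB - 1)
       System.out.println(calculateBinIndex(4096L)); // 13 shifts -> bin 3
     }
   }
   {code}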
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290046)
Time Spent: 7h  (was: 6h 50m)

> Add ability in Recon to track the number of small files in an Ozone cluster.
> 
>
> Key: HDDS-1366
> URL: https://issues.apache.org/jira/browse/HDDS-1366
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Shweta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h
>  Remaining Estimate: 0h
>
> Ozone users may want to track the number of small files they have in their 
> cluster and where they are present. Recon can help them with the information 
> by iterating the OM Key Table and dividing the keys into different buckets 
> based on the data size. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


[jira] [Work logged] (HDDS-1921) TestOzoneManagerDoubleBufferWithOMResponse is flaky

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1921?focusedWorklogId=290045&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290045
 ]

ASF GitHub Bot logged work on HDDS-1921:


Author: ASF GitHub Bot
Created on: 06/Aug/19 22:56
Start Date: 06/Aug/19 22:56
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1238: 
HDDS-1921. TestOzoneManagerDoubleBufferWithOMResponse is flaky
URL: https://github.com/apache/hadoop/pull/1238#discussion_r311309048
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerDoubleBufferWithOMResponse.java
 ##
 @@ -345,21 +345,23 @@ public void testDoubleBuffer(int iterations, int 
bucketCount)
   }
 
   // We are doing +1 for volume transaction.
-  GenericTestUtils.waitFor(() ->
-  doubleBuffer.getFlushedTransactionCount() ==
-  (bucketCount + 1) * iterations, 100,
-  12);
+  long expectedTransactions = (bucketCount + 1) * iterations;
+  GenericTestUtils.waitFor(() -> lastAppliedIndex == expectedTransactions,
+  100, 12);
 
-  Assert.assertTrue(omMetadataManager.countRowsInTable(
-  omMetadataManager.getVolumeTable()) == iterations);
+  Assert.assertEquals(expectedTransactions,
+  doubleBuffer.getFlushedTransactionCount()
+  );
 
-  Assert.assertTrue(omMetadataManager.countRowsInTable(
-  omMetadataManager.getBucketTable()) == (bucketCount) * iterations);
+  Assert.assertEquals(iterations,
+  
omMetadataManager.countRowsInTable(omMetadataManager.getVolumeTable())
+  );
 
-  Assert.assertTrue(doubleBuffer.getFlushIterations() > 0);
+  Assert.assertEquals(bucketCount * iterations,
+  
omMetadataManager.countRowsInTable(omMetadataManager.getBucketTable())
+  );
 
-  // Check lastAppliedIndex is updated correctly or not.
-  Assert.assertEquals((bucketCount + 1) * iterations, lastAppliedIndex);
 
 Review comment:
   Why was this lastAppliedIndex check removed?
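   For context, the polling idiom the patch leans on, sketched with assumed interval and timeout values (GenericTestUtils.waitFor re-checks the condition until it holds or times out, so the assertions only run after the double buffer has caught up):
   
   {code:java}
   import java.util.concurrent.atomic.AtomicLong;
   import org.apache.hadoop.test.GenericTestUtils;
   
   class WaitForDemo {
     static final AtomicLong lastAppliedIndex = new AtomicLong();
   
     // Illustrative sketch, not the actual test: block until the applied
     // index reaches the expected transaction count before asserting.
     static void awaitFlush(long expectedTransactions) throws Exception {
       GenericTestUtils.waitFor(
           () -> lastAppliedIndex.get() == expectedTransactions,
           100,       // check interval in ms (assumed)
           120_000);  // timeout in ms (assumed)
     }
   }
   {code}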
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290045)
Time Spent: 40m  (was: 0.5h)

> TestOzoneManagerDoubleBufferWithOMResponse is flaky
> ---
>
> Key: HDDS-1921
> URL: https://issues.apache.org/jira/browse/HDDS-1921
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {noformat:title=https://ci.anzix.net/job/ozone/17588/testReport/org.apache.hadoop.ozone.om.ratis/TestOzoneManagerDoubleBufferWithOMResponse/testDoubleBuffer/}
> java.lang.AssertionError: expected:<11> but was:<9>
> ...
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:362)
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:104)
> {noformat}
> {noformat:title=https://ci.anzix.net/job/ozone/17587/testReport/org.apache.hadoop.ozone.om.ratis/TestOzoneManagerDoubleBufferWithOMResponse/unit___testDoubleBuffer/}
> java.lang.AssertionError: expected:<11> but was:<3>
> ...
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:362)
>   at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:104)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=290044&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290044
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 06/Aug/19 22:55
Start Date: 06/Aug/19 22:55
Worklog Time Spent: 10m 
  Work Description: shwetayakkali commented on pull request #1146: 
HDDS-1366. Add ability in Recon to track the number of small files in an Ozone 
Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r311308884
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java
 ##
 @@ -155,70 +164,70 @@ private void fetchUpperBoundCount(String type) {
 LOG.error("Unexpected exception while updating key data : {} {}",
 updatedKey, e.getMessage());
 return new ImmutablePair<>(getTaskName(), false);
-  } finally {
-populateFileCountBySizeDB();
   }
+  populateFileCountBySizeDB();
 }
 LOG.info("Completed a 'process' run of FileSizeCountTask.");
 return new ImmutablePair<>(getTaskName(), true);
   }
 
   /**
* Calculate the bin index based on size of the Key.
+   * index is calculated as the number of right shifts
+   * needed until dataSize becomes zero.
*
* @param dataSize Size of the key.
* @return int bin index in upperBoundCount
*/
-  private int calcBinIndex(long dataSize) {
-if(dataSize >= maxFileSizeUpperBound) {
-  return Integer.MIN_VALUE;
-} else if (dataSize > SIZE_512_TB) {
-  //given the small difference in 512TB and 512TB + 1B, index for both 
would
-  //return same, to differentiate specific condition added.
-  return maxBinSize - 1;
-}
-int logValue = (int) Math.ceil(Math.log(dataSize)/Math.log(2));
-if(logValue < 10){
-  return 0;
-} else{
-  return (dataSize % ONE_KB == 0) ? logValue - 10 + 1: logValue - 10;
+  int calculateBinIndex(long dataSize) {
+int index = 0;
+while(dataSize != 0) {
+  dataSize >>= 1;
+  index += 1;
 }
+return index < 10 ? 0 : index - 10;
   }
 
-  private void countFileSize(OmKeyInfo omKeyInfo) throws IOException{
-int index = calcBinIndex(omKeyInfo.getDataSize());
-if(index == Integer.MIN_VALUE) {
-  throw new IOException("File Size larger than permissible file size "
-  + maxFileSizeUpperBound +" bytes");
+  void countFileSize(OmKeyInfo omKeyInfo) {
+int index;
+if (omKeyInfo.getDataSize() >= maxFileSizeUpperBound) {
+  index = maxBinSize - 1;
+} else {
+  index = calculateBinIndex(omKeyInfo.getDataSize());
 }
 upperBoundCount[index]++;
   }
 
-  private void populateFileCountBySizeDB() {
+  /**
+   * Populate DB with the counts of file sizes calculated
+   * using the dao.
+   *
+   */
+  void populateFileCountBySizeDB() {
 for (int i = 0; i < upperBoundCount.length; i++) {
   long fileSizeUpperBound = (long) Math.pow(2, (10 + i));
   FileCountBySize fileCountRecord =
   fileCountBySizeDao.findById(fileSizeUpperBound);
   FileCountBySize newRecord = new
   FileCountBySize(fileSizeUpperBound, upperBoundCount[i]);
-  if(fileCountRecord == null){
+  if (fileCountRecord == null) {
 
 Review comment:
   Sure, it is an extra bin for files > maxFileSizeUpperBound. 
   Also, did you mean Long.MAX_VALUE? 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290044)
Time Spent: 6h 50m  (was: 6h 40m)

> Add ability in Recon to track the number of small files in an Ozone cluster.
> 
>
> Key: HDDS-1366
> URL: https://issues.apache.org/jira/browse/HDDS-1366
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Shweta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 50m
>  Remaining Estimate: 0h
>
> Ozone users may want to track the number of small files they have in their 
> cluster and where they are present. Recon can help them with the information 
> by iterating the OM Key Table and dividing the keys into different buckets 
> based on the data size. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: 

[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=290043&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290043
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 06/Aug/19 22:52
Start Date: 06/Aug/19 22:52
Worklog Time Spent: 10m 
  Work Description: shwetayakkali commented on pull request #1146: 
HDDS-1366. Add ability in Recon to track the number of small files in an Ozone 
Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r311308096
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java
 ##
 @@ -155,70 +164,70 @@ private void fetchUpperBoundCount(String type) {
 LOG.error("Unexpected exception while updating key data : {} {}",
 updatedKey, e.getMessage());
 return new ImmutablePair<>(getTaskName(), false);
-  } finally {
-populateFileCountBySizeDB();
   }
+  populateFileCountBySizeDB();
 }
 LOG.info("Completed a 'process' run of FileSizeCountTask.");
 return new ImmutablePair<>(getTaskName(), true);
   }
 
   /**
* Calculate the bin index based on size of the Key.
+   * index is calculated as the number of right shifts
+   * needed until dataSize becomes zero.
*
* @param dataSize Size of the key.
* @return int bin index in upperBoundCount
*/
-  private int calcBinIndex(long dataSize) {
-if(dataSize >= maxFileSizeUpperBound) {
-  return Integer.MIN_VALUE;
-} else if (dataSize > SIZE_512_TB) {
-  //given the small difference in 512TB and 512TB + 1B, index for both 
would
-  //return same, to differentiate specific condition added.
-  return maxBinSize - 1;
-}
-int logValue = (int) Math.ceil(Math.log(dataSize)/Math.log(2));
-if(logValue < 10){
-  return 0;
-} else{
-  return (dataSize % ONE_KB == 0) ? logValue - 10 + 1: logValue - 10;
+  int calculateBinIndex(long dataSize) {
+int index = 0;
+while(dataSize != 0) {
+  dataSize >>= 1;
+  index += 1;
 }
+return index < 10 ? 0 : index - 10;
 
 Review comment:
   Sure.
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290043)
Time Spent: 6h 40m  (was: 6.5h)

> Add ability in Recon to track the number of small files in an Ozone cluster.
> 
>
> Key: HDDS-1366
> URL: https://issues.apache.org/jira/browse/HDDS-1366
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Shweta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> Ozone users may want to track the number of small files they have in their 
> cluster and where they are present. Recon can help them with the information 
> by iterating the OM Key Table and dividing the keys into different buckets 
> based on the data size. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1366) Add ability in Recon to track the number of small files in an Ozone cluster.

2019-08-06 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1366?focusedWorklogId=290042&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-290042
 ]

ASF GitHub Bot logged work on HDDS-1366:


Author: ASF GitHub Bot
Created on: 06/Aug/19 22:52
Start Date: 06/Aug/19 22:52
Worklog Time Spent: 10m 
  Work Description: shwetayakkali commented on pull request #1146: 
HDDS-1366. Add ability in Recon to track the number of small files in an Ozone 
Cluster
URL: https://github.com/apache/hadoop/pull/1146#discussion_r311308069
 
 

 ##
 File path: 
hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/FileSizeCountTask.java
 ##
 @@ -155,70 +164,70 @@ private void fetchUpperBoundCount(String type) {
 LOG.error("Unexpected exception while updating key data : {} {}",
 updatedKey, e.getMessage());
 return new ImmutablePair<>(getTaskName(), false);
-  } finally {
-populateFileCountBySizeDB();
   }
+  populateFileCountBySizeDB();
 }
 LOG.info("Completed a 'process' run of FileSizeCountTask.");
 return new ImmutablePair<>(getTaskName(), true);
   }
 
   /**
* Calculate the bin index based on size of the Key.
+   * index is calculated as the number of right shifts
+   * needed until dataSize becomes zero.
*
* @param dataSize Size of the key.
* @return int bin index in upperBoundCount
*/
-  private int calcBinIndex(long dataSize) {
-if(dataSize >= maxFileSizeUpperBound) {
-  return Integer.MIN_VALUE;
-} else if (dataSize > SIZE_512_TB) {
-  //given the small difference in 512TB and 512TB + 1B, index for both 
would
-  //return same, to differentiate specific condition added.
-  return maxBinSize - 1;
-}
-int logValue = (int) Math.ceil(Math.log(dataSize)/Math.log(2));
-if(logValue < 10){
-  return 0;
-} else{
-  return (dataSize % ONE_KB == 0) ? logValue - 10 + 1: logValue - 10;
+  int calculateBinIndex(long dataSize) {
+int index = 0;
+while(dataSize != 0) {
+  dataSize >>= 1;
+  index += 1;
 }
+return index < 10 ? 0 : index - 10;
   }
 
-  private void countFileSize(OmKeyInfo omKeyInfo) throws IOException{
-int index = calcBinIndex(omKeyInfo.getDataSize());
-if(index == Integer.MIN_VALUE) {
-  throw new IOException("File Size larger than permissible file size "
-  + maxFileSizeUpperBound +" bytes");
+  void countFileSize(OmKeyInfo omKeyInfo) {
+int index;
+if (omKeyInfo.getDataSize() >= maxFileSizeUpperBound) {
+  index = maxBinSize - 1;
+} else {
+  index = calculateBinIndex(omKeyInfo.getDataSize());
 }
 upperBoundCount[index]++;
   }
 
-  private void populateFileCountBySizeDB() {
+  /**
+   * Populate DB with the counts of file sizes calculated
+   * using the dao.
+   *
+   */
+  void populateFileCountBySizeDB() {
 for (int i = 0; i < upperBoundCount.length; i++) {
   long fileSizeUpperBound = (long) Math.pow(2, (10 + i));
   FileCountBySize fileCountRecord =
   fileCountBySizeDao.findById(fileSizeUpperBound);
   FileCountBySize newRecord = new
   FileCountBySize(fileSizeUpperBound, upperBoundCount[i]);
-  if(fileCountRecord == null){
+  if (fileCountRecord == null) {
 fileCountBySizeDao.insert(newRecord);
-  } else{
+  } else {
 fileCountBySizeDao.update(newRecord);
   }
 }
   }
 
   private void updateUpperBoundCount(OmKeyInfo value, String operation)
   throws IOException {
-int binIndex = calcBinIndex(value.getDataSize());
-if(binIndex == Integer.MIN_VALUE) {
+int binIndex = calculateBinIndex(value.getDataSize());
+if (binIndex == Integer.MIN_VALUE) {
 
 Review comment:
   Yes, it was from a previous check that threw an exception for a fileSize 
greater than the permitted value of 1 B. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 290042)
Time Spent: 6.5h  (was: 6h 20m)

> Add ability in Recon to track the number of small files in an Ozone cluster.
> 
>
> Key: HDDS-1366
> URL: https://issues.apache.org/jira/browse/HDDS-1366
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Shweta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6.5h
>  Remaining Estimate: 0h
>
> Ozone 

[jira] [Commented] (HDFS-14674) [SBN read] Got an unexpected txid when tail editlog

2019-08-06 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901537#comment-16901537
 ] 

Hadoop QA commented on HDFS-14674:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 44s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 175 unchanged - 0 fixed = 176 total (was 175) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 53s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}157m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestLargeBlockReport |
|   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-14674 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976793/HDFS-14674-004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d4e6f0d5a052 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b77761b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27420/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27420/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27420/testReport/ |
| 

[jira] [Commented] (HDFS-14692) Upload button should not encode complete url

2019-08-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901536#comment-16901536
 ] 

Hudson commented on HDFS-14692:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17051 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17051/])
HDFS-14692. Upload button should not encode complete url. Contributed by 
(weichiu: rev 954ff36360e083010f395cb9d3bd46701417e7b7)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js


> Upload button should not encode complete url
> 
>
> Key: HDFS-14692
> URL: https://issues.apache.org/jira/browse/HDFS-14692
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14692.001.patch
>
>
> explorer.js#modal-upload-file-button currently does not work with Knox. The 
> function encodes the complete URL and thus creates a malformed URL. This 
> leads to an error while uploading the file.
> Example of malformed url - 
> "https%3A//127.0.0.1%3A/gateway/default/webhdfs/v1/app-logs/BUILDING.txt?op=CREATE=true"



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14652) HealthMonitor connection retry times should be configurable

2019-08-06 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16901535#comment-16901535
 ] 

Hudson commented on HDFS-14652:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17051 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17051/])
HDFS-14652. Addendum: HealthMonitor connection retry times should be (weichiu: 
rev 8cef9f89f4218971199363f1809401c8305ede9b)
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml


> HealthMonitor connection retry times should be configurable
> ---
>
> Key: HDFS-14652
> URL: https://issues.apache.org/jira/browse/HDFS-14652
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14652-001.patch, HDFS-14652-002.patch, 
> HDFS-14652.003.patch
>
>
> On our production HDFS cluster, burst requests from some clients filled the 
> TCP kernel queue on the NameNode's host. Since the configuration value of 
> "net.ipv4.tcp_syn_retries" in our environment is 1, after 3 seconds the 
> ZooKeeper HealthMonitor got a connection error like this:
> {code:java}
> WARN org.apache.hadoop.ha.HealthMonitor: Transport-level exception trying to 
> monitor health of NameNode at nn_host_name/ip_address:port: Call From 
> zkfc_host_name/ip to nn_host_name:port failed on connection exception: 
> java.net.ConnectException: Connection timed out; For more details see: 
> http://wiki.apache.org/hadoop/ConnectionRefused
> {code}
> This error caused a failover and affected the availability of that cluster. We 
> fixed the issue by enlarging the kernel parameter net.ipv4.tcp_syn_retries to 6.
> While working on this issue, we found that the connection retry count 
> (ipc.client.connect.max.retries) of the health monitor is hard-coded as 1. I 
> think it should be configurable: if we don't want the health monitor to be so 
> sensitive, we can change its behavior through this configuration.
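For readers who want to apply the resulting knob, a minimal sketch (the key name is an assumption inferred from the addendum's core-default.xml edit; verify it against your release):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class HealthMonitorRetryDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Assumed key introduced by this change; the old behavior was a
    // hard-coded retry count of 1.
    conf.setInt("ha.health-monitor.rpc.connect.max.retries", 3);
    System.out.println(
        conf.getInt("ha.health-monitor.rpc.connect.max.retries", 1));
  }
}
{code}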



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


