[GitHub] [hbase] saintstack opened a new pull request #734: HBASE-23192 CatalogJanitor consistencyCheck does not log problematic …

2019-10-18 Thread GitBox
saintstack opened a new pull request #734: HBASE-23192 CatalogJanitor 
consistencyCheck does not log problematic …
URL: https://github.com/apache/hbase/pull/734
 
 
   …row on exception
   
   Adds logging of the row and a complaint if the consistency check fails
   during CJ checking. Adds a few more null checks. Edits the 'HBCK Report'
   top line.
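A minimal, self-contained sketch of the behavior the patch aims for (all names here are hypothetical, not the actual CatalogJanitor code): log a complaint identifying the bad row and keep scanning, instead of letting the exception kill the chore.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: a single bad hbase:meta row should be logged
// and skipped, not abort the whole consistency-check scan.
class ConsistencyCheckSketch {
  // Returns a complaint per problematic row instead of throwing.
  static List<String> checkRows(List<String> metaRows) {
    List<String> complaints = new ArrayList<>();
    for (String row : metaRows) {
      try {
        checkRow(row); // may throw, e.g. an NPE on a cleared info:server field
      } catch (RuntimeException e) {
        // Record the problematic row so the operator can locate it in hbase:meta.
        complaints.add("consistencyCheck failed on row=" + row + ": " + e.getMessage());
      }
    }
    return complaints;
  }

  static void checkRow(String row) {
    if (row == null || row.isEmpty()) {
      throw new NullPointerException("no info:server for row=" + row);
    }
  }
}
```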


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-23192) CatalogJanitor consistencyCheck does not log problematic row on exception

2019-10-18 Thread Michael Stack (Jira)


[ https://issues.apache.org/jira/browse/HBASE-23192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16955077#comment-16955077 ]

Michael Stack commented on HBASE-23192:
---

Here is the exception I ran into:

{code}
2019-10-18 18:38:54,790 ERROR org.apache.hadoop.hbase.ScheduledChore: Caught 
error
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.master.CatalogJanitor$ReportMakingVisitor.checkServer(CatalogJanitor.java:700)
at 
org.apache.hadoop.hbase.master.CatalogJanitor$ReportMakingVisitor.metaTableConsistencyCheck(CatalogJanitor.java:606)
at 
org.apache.hadoop.hbase.master.CatalogJanitor$ReportMakingVisitor.visit(CatalogJanitor.java:574)
at 
org.apache.hadoop.hbase.MetaTableAccessor.scanMeta(MetaTableAccessor.java:802)
at 
org.apache.hadoop.hbase.MetaTableAccessor.scanMeta(MetaTableAccessor.java:767)
at 
org.apache.hadoop.hbase.MetaTableAccessor.scanMeta(MetaTableAccessor.java:720)
at 
org.apache.hadoop.hbase.MetaTableAccessor.scanMetaForTableRegions(MetaTableAccessor.java:715)
at 
org.apache.hadoop.hbase.master.CatalogJanitor.scanForReport(CatalogJanitor.java:230)
at 
org.apache.hadoop.hbase.master.CatalogJanitor.scan(CatalogJanitor.java:172)
at 
org.apache.hadoop.hbase.master.CatalogJanitor.chore(CatalogJanitor.java:140)
at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:186)
at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at 
java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at 
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at 
org.apache.hadoop.hbase.JitterScheduledThreadPoolExecutorImpl$JitteredRunnableScheduledFuture.run(JitterScheduledThreadPoolExecutorImpl.java:111)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)

{code}



> CatalogJanitor consistencyCheck does not log problematic row on exception
> -
>
> Key: HBASE-23192
> URL: https://issues.apache.org/jira/browse/HBASE-23192
> Project: HBase
>  Issue Type: Bug
>  Components: hbck2
>Affects Versions: 2.1.7, 2.2.2
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Minor
> Fix For: 2.3.0, 2.1.8, 2.3.3
>
>
> Small stuff. Trying to fix a cluster, cleared an info:server field. Damaged 
> hbase:meta for CatalogJanitor; when it should have just logged and skipped 
> the bad entity when doing consistency check, instead CJ crashed. Also doesn't 
> log the bad row, which would help debugging.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-23192) CatalogJanitor consistencyCheck does not log problematic row on exception

2019-10-18 Thread Michael Stack (Jira)


 [ https://issues.apache.org/jira/browse/HBASE-23192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Stack updated HBASE-23192:
--
Description: Small stuff. Trying to fix a cluster, cleared an info:server 
field. Damaged hbase:meta for CatalogJanitor; when it should have just logged 
and skipped the bad entity when doing consistency check, instead CJ crashed. 
Also doesn't log the bad row, which would help debugging.  (was: Small stuff. 
Trying to fix a cluster, cleared an info:server field. Damaged CatalogJanitor 
when it should have just logged and skipped the bad entity. Also doesn't log 
the bad row, which would help debugging.)

> CatalogJanitor consistencyCheck does not log problematic row on exception
> -
>
> Key: HBASE-23192
> URL: https://issues.apache.org/jira/browse/HBASE-23192
> Project: HBase
>  Issue Type: Bug
>  Components: hbck2
>Affects Versions: 2.1.7, 2.2.2
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Minor
> Fix For: 2.3.0, 2.1.8, 2.3.3
>
>
> Small stuff. Trying to fix a cluster, cleared an info:server field. Damaged 
> hbase:meta for CatalogJanitor; when it should have just logged and skipped 
> the bad entity when doing consistency check, instead CJ crashed. Also doesn't 
> log the bad row, which would help debugging.





[jira] [Created] (HBASE-23192) CatalogJanitor consistencyCheck does not log problematic row on exception

2019-10-18 Thread Michael Stack (Jira)
Michael Stack created HBASE-23192:
-

 Summary: CatalogJanitor consistencyCheck does not log problematic 
row on exception
 Key: HBASE-23192
 URL: https://issues.apache.org/jira/browse/HBASE-23192
 Project: HBase
  Issue Type: Bug
  Components: hbck2
Affects Versions: 2.1.7, 2.2.2
Reporter: Michael Stack
Assignee: Michael Stack
 Fix For: 2.3.3, 2.3.0, 2.1.8


Small stuff. Trying to fix a cluster, cleared an info:server field. Damaged 
CatalogJanitor when it should have just logged and skipped the bad entity. Also 
doesn't log the bad row, which would help debugging.





[jira] [Commented] (HBASE-23170) Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME

2019-10-18 Thread Hudson (Jira)


[ https://issues.apache.org/jira/browse/HBASE-23170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16955033#comment-16955033 ]

Hudson commented on HBASE-23170:


Results for branch branch-2.2
[build #666 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/666/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/666//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/666//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/666//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME
> -
>
> Key: HBASE-23170
> URL: https://issues.apache.org/jira/browse/HBASE-23170
> Project: HBase
>  Issue Type: Improvement
>Reporter: Yi Mei
>Assignee: Yi Mei
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
>
> Admin#getRegionServers returns the server names.
> ClusterMetrics.Option.LIVE_SERVERS returns the map of server names and 
> metrics, while the metrics are not useful for Admin#getRegionServers method.
> Please see [HBASE-21938|https://issues.apache.org/jira/browse/HBASE-21938] 
> for more details.





[GitHub] [hbase] busbey commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions

2019-10-18 Thread GitBox
busbey commented on a change in pull request #623: HBASE-22749: Distributed MOB 
compactions
URL: https://github.com/apache/hbase/pull/623#discussion_r336715592
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobUtils.java
 ##
 @@ -907,6 +789,143 @@ public static boolean hasMobColumns(TableDescriptor htd) {
 return false;
   }
 
+  /**
+   * Get the list of MOB column families (if any exist)
+   * @param htd table descriptor
+   * @return list of Mob column families
+   */
+  public static List<ColumnFamilyDescriptor> getMobColumnFamilies(TableDescriptor htd) {
+
+List<ColumnFamilyDescriptor> fams = new ArrayList<ColumnFamilyDescriptor>();
+ColumnFamilyDescriptor[] hcds = htd.getColumnFamilies();
+for (ColumnFamilyDescriptor hcd : hcds) {
+  if (hcd.isMobEnabled()) {
+fams.add(hcd);
+  }
+}
+return fams;
+  }
+
+  /**
+   * Performs housekeeping file cleaning (called by MOB Cleaner chore)
+   * @param conf configuration
+   * @param table table name
+   * @throws IOException
+   */
+  public static void cleanupObsoleteMobFiles(Configuration conf, TableName 
table)
+  throws IOException {
+
+try (final Connection conn = ConnectionFactory.createConnection(conf);
+final Admin admin = conn.getAdmin();) {
+  TableDescriptor htd = admin.getDescriptor(table);
+  List<ColumnFamilyDescriptor> list = getMobColumnFamilies(htd);
+  if (list.size() == 0) {
+LOG.info("Skipping non-MOB table [" + table + "]");
+return;
+  }
+  Path rootDir = FSUtils.getRootDir(conf);
+  Path tableDir = FSUtils.getTableDir(rootDir, table);
+  // How safe is this call?
+  List<Path> regionDirs = FSUtils.getRegionDirs(FileSystem.get(conf), tableDir);
+
+  Set<String> allActiveMobFileName = new HashSet<String>();
+  FileSystem fs = FileSystem.get(conf);
+  for (Path regionPath: regionDirs) {
+for (ColumnFamilyDescriptor hcd: list) {
+  String family = hcd.getNameAsString();
+  Path storePath = new Path(regionPath, family);
+  boolean succeed = false;
+  Set<String> regionMobs = new HashSet<String>();
+  while(!succeed) {
+//TODO handle FNFE
+RemoteIterator<LocatedFileStatus> rit = fs.listLocatedStatus(storePath);
+List<Path> storeFiles = new ArrayList<Path>();
+// Load list of store files first
+while(rit.hasNext()) {
+  Path p = rit.next().getPath();
+  if (fs.isFile(p)) {
+storeFiles.add(p);
+  }
+}
+try {
+  for(Path pp: storeFiles) {
+HStoreFile sf = new HStoreFile(fs, pp, conf, 
CacheConfig.DISABLED,
+  BloomType.NONE, true);
+sf.initReader();
+byte[] mobRefData = 
sf.getMetadataValue(HStoreFile.MOB_FILE_REFS);
+byte[] mobCellCountData = 
sf.getMetadataValue(HStoreFile.MOB_CELLS_COUNT);
+byte[] bulkloadMarkerData = 
sf.getMetadataValue(HStoreFile.BULKLOAD_TASK_KEY);
+if (mobRefData == null && (mobCellCountData != null ||
+bulkloadMarkerData == null)) {
+  LOG.info("Found old store file with no MOB_FILE_REFS: " + pp
++" - can not proceed until all old files will be 
MOB-compacted");
+  return;
+} else if (mobRefData == null) {
+  LOG.info("Skipping file without MOB references (can be 
bulkloaded file):"+ pp);
+  continue;
+}
+String[] mobs = new String(mobRefData).split(",");
+regionMobs.addAll(Arrays.asList(mobs));
+  }
+} catch (FileNotFoundException e) {
+  //TODO
+  LOG.warn(e.getMessage());
+  continue;
+}
+succeed = true;
+  }
+  // Add MOB refs for current region/family
+  allActiveMobFileName.addAll(regionMobs);
+} // END column families
+  }//END regions
+
+  // Now scan MOB directories and find MOB files with no references to them
+  long now = System.currentTimeMillis();
+  long minAgeToArchive = 
conf.getLong(MobConstants.MOB_MINIMUM_FILE_AGE_TO_ARCHIVE_KEY,
+  
MobConstants.DEFAULT_MOB_MINIMUM_FILE_AGE_TO_ARCHIVE);
+  for (ColumnFamilyDescriptor hcd: list) {
+  List<Path> toArchive = new ArrayList<Path>();
+  String family = hcd.getNameAsString();
+  Path dir = getMobFamilyPath(conf, table, family);
+  RemoteIterator<LocatedFileStatus> rit = fs.listLocatedStatus(dir);
+  while(rit.hasNext()) {
+LocatedFileStatus lfs = rit.next();
+Path p = lfs.getPath();
+if (!allActiveMobFileName.contains(p.getName())) {
 
 Review comment:
   Right, but when we roll a version onto a cluster that used the original mob 
implementation it's very likely they will have _del marker files. And those _del 
marker files are necessary for that implementation to operate correctly. I 
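The cleanup rule in the code quoted earlier in this thread boils down to a set difference with an age guard: a MOB file may be archived only when no store file references it AND it is older than a configured minimum age (MOB_MINIMUM_FILE_AGE_TO_ARCHIVE in the diff). A hypothetical, self-contained sketch with plain Java collections, not the actual MobUtils code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Illustrative sketch of the obsolete-MOB-file selection under review:
// unreferenced AND old enough => candidate for archiving.
class MobCleanupSketch {
  static List<String> selectForArchive(Set<String> activeMobFileNames,
      Map<String, Long> mobFileModTimes, long now, long minAgeToArchive) {
    List<String> toArchive = new ArrayList<>();
    for (Map.Entry<String, Long> e : mobFileModTimes.entrySet()) {
      boolean unreferenced = !activeMobFileNames.contains(e.getKey());
      boolean oldEnough = now - e.getValue() > minAgeToArchive;
      if (unreferenced && oldEnough) {
        toArchive.add(e.getKey());
      }
    }
    return toArchive;
  }
}
```

The age guard is what protects files written by in-flight compactions from being archived before their referencing store files are committed.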

[GitHub] [hbase] busbey commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions

2019-10-18 Thread GitBox
busbey commented on a change in pull request #623: HBASE-22749: Distributed MOB 
compactions
URL: https://github.com/apache/hbase/pull/623#discussion_r336715464
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobConstants.java
 ##
 @@ -55,33 +55,23 @@
   public static final long DEFAULT_MOB_CACHE_EVICT_PERIOD = 3600L;
 
   public final static String TEMP_DIR_NAME = ".tmp";
-  public final static String BULKLOAD_DIR_NAME = ".bulkload";
 
 Review comment:
   We're not supposed to make binary incompatible changes to IA.Public classes 
except after a deprecation cycle. for the master branch we'll either need to 
mark it deprecated with expected removal in 4.0, or we'll need to call it out 
in the release note as removed before then and why that was necessary. I'd just 
deprecate it.
   
   in any case we'll need to make sure it doesn't get removed in any backports.
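The deprecation cycle suggested here can be sketched as follows. The constant name and value come from the diff; the @Deprecated markup and Javadoc are illustrative only, not the committed change:

```java
// Hedged sketch: keep the IA.Public constant through a deprecation cycle
// rather than deleting it outright, preserving binary compatibility.
final class MobConstantsSketch {
  /**
   * @deprecated Unused after the distributed MOB compaction rewrite
   *             (HBASE-22749); kept for binary compatibility, expected
   *             removal in 4.0.0.
   */
  @Deprecated
  public static final String BULKLOAD_DIR_NAME = ".bulkload";

  private MobConstantsSketch() { }
}
```

Existing bytecode referencing the constant keeps linking; compilers surface a deprecation warning so downstream users migrate before the removal release.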




[jira] [Commented] (HBASE-23170) Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME

2019-10-18 Thread Hudson (Jira)


[ https://issues.apache.org/jira/browse/HBASE-23170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16955031#comment-16955031 ]

Hudson commented on HBASE-23170:


Results for branch branch-2
[build #2327 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2327/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2327//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2327//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2327//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME
> -
>
> Key: HBASE-23170
> URL: https://issues.apache.org/jira/browse/HBASE-23170
> Project: HBase
>  Issue Type: Improvement
>Reporter: Yi Mei
>Assignee: Yi Mei
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
>
> Admin#getRegionServers returns the server names.
> ClusterMetrics.Option.LIVE_SERVERS returns the map of server names and 
> metrics, while the metrics are not useful for Admin#getRegionServers method.
> Please see [HBASE-21938|https://issues.apache.org/jira/browse/HBASE-21938] 
> for more details.





[GitHub] [hbase] busbey commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions

2019-10-18 Thread GitBox
busbey commented on a change in pull request #623: HBASE-22749: Distributed MOB 
compactions
URL: https://github.com/apache/hbase/pull/623#discussion_r336715339
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCompactionChore.java
 ##
 @@ -0,0 +1,179 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.ScheduledChore;
+import org.apache.hadoop.hbase.TableDescriptors;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
+import org.apache.hadoop.hbase.client.CompactionState;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.client.TableState;
+import org.apache.hadoop.hbase.mob.MobConstants;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+@InterfaceAudience.Private
+public class MobFileCompactionChore extends ScheduledChore {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MobFileCompactionChore.class);
+  private final Configuration conf;
+  private final HMaster master;
+  private volatile boolean running = false;
+  private int regionBatchSize = 0;// not set - compact all
+
+  public MobFileCompactionChore(HMaster master) {
+super(master.getServerName() + "-MobFileCompactionChore", master, 
master.getConfiguration()
+  .getInt(MobConstants.MOB_COMPACTION_CHORE_PERIOD,
+MobConstants.DEFAULT_MOB_COMPACTION_CHORE_PERIOD), master
+  .getConfiguration().getInt(MobConstants.MOB_COMPACTION_CHORE_PERIOD,
+MobConstants.DEFAULT_MOB_COMPACTION_CHORE_PERIOD), TimeUnit.SECONDS);
+this.master = master;
+this.conf = master.getConfiguration();
+this.regionBatchSize =
+
master.getConfiguration().getInt(MobConstants.MOB_MAJOR_COMPACTION_REGION_BATCH_SIZE,
+  MobConstants.DEFAULT_MOB_MAJOR_COMPACTION_REGION_BATCH_SIZE);
+
+  }
+
+  @Override
+  protected void chore() {
+
+boolean reported = false;
+
+try (Connection conn = ConnectionFactory.createConnection(conf);
+ Admin admin = conn.getAdmin(); ) {
+
+  if (running) {
+LOG.warn(getName() +" is running already, skipping this attempt.");
+return;
+  }
+  running = true;
+  TableDescriptors htds = master.getTableDescriptors();
+  Map<String, TableDescriptor> map = htds.getAll();
+  for (TableDescriptor htd : map.values()) {
+if (!master.getTableStateManager().isTableState(htd.getTableName(),
+  TableState.State.ENABLED)) {
+  continue;
+}
+for (ColumnFamilyDescriptor hcd : htd.getColumnFamilies()) {
+  if (hcd.isMobEnabled()) {
+if (!reported) {
+  master.reportMobCompactionStart(htd.getTableName());
+  reported = true;
+}
+LOG.info(" Major compacting "+ htd.getTableName() + " cf=" + 
hcd.getNameAsString());
+if (regionBatchSize == 
MobConstants.DEFAULT_MOB_MAJOR_COMPACTION_REGION_BATCH_SIZE) {
+  admin.majorCompact(htd.getTableName(), hcd.getName());
+} else {
+  performMajorCompactionInBatches(admin, htd, hcd);
+}
+  }
+}
+if (reported) {
+  master.reportMobCompactionEnd(htd.getTableName());
+  reported = false;
+}
+  }
+} catch (Exception e) {
+  LOG.error("Failed to compact", e);
+} finally {
+  running = false;
+}
+  }
+
+  private void performMajorCompactionInBatches(Admin admin, TableDescriptor 
htd,
+  ColumnFamilyDescriptor hcd) throws IOException {
+
+List<RegionInfo> regions = admin.getRegions(htd.getTableName());
+if (regions.size() <= this.regionBatchSize) {
+

[GitHub] [hbase] busbey commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions

2019-10-18 Thread GitBox
busbey commented on a change in pull request #623: HBASE-22749: Distributed MOB 
compactions
URL: https://github.com/apache/hbase/pull/623#discussion_r336715094
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCompactionChore.java
 ##
 @@ -0,0 +1,179 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.ScheduledChore;
+import org.apache.hadoop.hbase.TableDescriptors;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
+import org.apache.hadoop.hbase.client.CompactionState;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.client.TableState;
+import org.apache.hadoop.hbase.mob.MobConstants;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+@InterfaceAudience.Private
+public class MobFileCompactionChore extends ScheduledChore {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MobFileCompactionChore.class);
+  private final Configuration conf;
+  private final HMaster master;
+  private volatile boolean running = false;
+  private int regionBatchSize = 0;// not set - compact all
+
+  public MobFileCompactionChore(HMaster master) {
+super(master.getServerName() + "-MobFileCompactionChore", master, 
master.getConfiguration()
+  .getInt(MobConstants.MOB_COMPACTION_CHORE_PERIOD,
+MobConstants.DEFAULT_MOB_COMPACTION_CHORE_PERIOD), master
+  .getConfiguration().getInt(MobConstants.MOB_COMPACTION_CHORE_PERIOD,
+MobConstants.DEFAULT_MOB_COMPACTION_CHORE_PERIOD), TimeUnit.SECONDS);
+this.master = master;
+this.conf = master.getConfiguration();
+this.regionBatchSize =
+
master.getConfiguration().getInt(MobConstants.MOB_MAJOR_COMPACTION_REGION_BATCH_SIZE,
+  MobConstants.DEFAULT_MOB_MAJOR_COMPACTION_REGION_BATCH_SIZE);
+
+  }
+
+  @Override
+  protected void chore() {
+
+boolean reported = false;
+
+try (Connection conn = ConnectionFactory.createConnection(conf);
+ Admin admin = conn.getAdmin(); ) {
+
+  if (running) {
+LOG.warn(getName() +" is running already, skipping this attempt.");
+return;
+  }
+  running = true;
+  TableDescriptors htds = master.getTableDescriptors();
+  Map<String, TableDescriptor> map = htds.getAll();
+  for (TableDescriptor htd : map.values()) {
+if (!master.getTableStateManager().isTableState(htd.getTableName(),
+  TableState.State.ENABLED)) {
+  continue;
+}
+for (ColumnFamilyDescriptor hcd : htd.getColumnFamilies()) {
+  if (hcd.isMobEnabled()) {
+if (!reported) {
+  master.reportMobCompactionStart(htd.getTableName());
+  reported = true;
+}
+LOG.info(" Major compacting "+ htd.getTableName() + " cf=" + 
hcd.getNameAsString());
+if (regionBatchSize == 
MobConstants.DEFAULT_MOB_MAJOR_COMPACTION_REGION_BATCH_SIZE) {
+  admin.majorCompact(htd.getTableName(), hcd.getName());
+} else {
+  performMajorCompactionInBatches(admin, htd, hcd);
+}
+  }
+}
+if (reported) {
+  master.reportMobCompactionEnd(htd.getTableName());
+  reported = false;
+}
+  }
+} catch (Exception e) {
+  LOG.error("Failed to compact", e);
+} finally {
+  running = false;
+}
+  }
+
+  private void performMajorCompactionInBatches(Admin admin, TableDescriptor 
htd,
+  ColumnFamilyDescriptor hcd) throws IOException {
+
+List<RegionInfo> regions = admin.getRegions(htd.getTableName());
+if (regions.size() <= this.regionBatchSize) {
+

[GitHub] [hbase] busbey commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions

2019-10-18 Thread GitBox
busbey commented on a change in pull request #623: HBASE-22749: Distributed MOB 
compactions
URL: https://github.com/apache/hbase/pull/623#discussion_r336705967
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/mob/DefaultMobStoreCompactor.java
 ##
 @@ -362,11 +508,375 @@ protected boolean performCompaction(FileDetails fd, 
InternalScanner scanner, Cel
 abortWriter(mobFileWriter);
   }
 }
+// Commit or abort generational writers
+if (mobWriters != null) {
+  for (StoreFileWriter w: mobWriters.getOutputWriters()) {
+Long mobs = mobWriters.getMobCountForOutputWriter(w);
+if (mobs != null && mobs > 0) {
+  mobRefSet.get().add(w.getPath().getName());
+  w.appendMetadata(fd.maxSeqId, major, mobs);
+  w.close();
+  mobStore.commitFile(w.getPath(), path);
+} else {
+  abortWriter(w);
+}
+  }
+}
 mobStore.updateCellsCountCompactedFromMob(cellsCountCompactedFromMob);
 mobStore.updateCellsCountCompactedToMob(cellsCountCompactedToMob);
 mobStore.updateCellsSizeCompactedFromMob(cellsSizeCompactedFromMob);
 mobStore.updateCellsSizeCompactedToMob(cellsSizeCompactedToMob);
 progress.complete();
 return true;
   }
+
+  protected static String createKey(TableName tableName, String encodedName,
+  String columnFamilyName) {
+return tableName.getNameAsString()+ "_" + encodedName + "_"+ 
columnFamilyName;
+  }
+
+  @Override
+  protected List<Path> commitWriter(StoreFileWriter writer, FileDetails fd,
+  CompactionRequestImpl request) throws IOException {
+List<Path> newFiles = Lists.newArrayList(writer.getPath());
+writer.appendMetadata(fd.maxSeqId, request.isAllFiles(), 
request.getFiles());
+// Append MOB references
+Set<String> refSet = mobRefSet.get();
+writer.appendMobMetadata(refSet);
+writer.close();
+return newFiles;
+  }
+
+  private List<Path> getReferencedMobFiles(Collection<HStoreFile> storeFiles) {
+Path mobDir = MobUtils.getMobFamilyPath(conf, store.getTableName(), 
store.getColumnFamilyName());
+Set<String> mobSet = new HashSet<String>();
+for (HStoreFile sf: storeFiles) {
+  byte[] value = sf.getMetadataValue(HStoreFile.MOB_FILE_REFS);
+  if (value != null) {
+String s = new String(value);
+String[] all = s.split(",");
+Collections.addAll(mobSet, all);
+  }
+}
+List<Path> retList = new ArrayList<Path>();
+for(String name: mobSet) {
+  retList.add(new Path(mobDir, name));
+}
+return retList;
+  }
+}
+
+class FileSelection implements Comparable<FileSelection> {
+
+  public final static String NULL_REGION = "";
+  private Path path;
+  private long earliestTs;
+  private Configuration conf;
+
+  public FileSelection(Path path, Configuration conf) throws IOException {
+this.path = path;
+this.conf = conf;
+readEarliestTimestamp();
+  }
+
+  public  String getEncodedRegionName() {
+String fileName = path.getName();
+String[] parts = fileName.split("_");
+if (parts.length == 2) {
+  return parts[1];
+} else {
+  return NULL_REGION;
+}
+  }
+
+  public Path getPath() {
+return path;
+  }
+
+  public long getEarliestTimestamp() {
+return earliestTs;
+  }
+
+  private void readEarliestTimestamp() throws IOException {
+FileSystem fs = path.getFileSystem(conf);
+HStoreFile sf = new HStoreFile(fs, path, conf, CacheConfig.DISABLED,
+  BloomType.NONE, true);
+sf.initReader();
+byte[] tsData = sf.getMetadataValue(HStoreFile.EARLIEST_PUT_TS);
+if (tsData != null) {
+  this.earliestTs = Bytes.toLong(tsData);
+}
+sf.closeStoreFile(true);
+  }
+
+  @Override
+  public int compareTo(FileSelection o) {
+if (this.earliestTs > o.earliestTs) {
+  return +1;
+} else if (this.earliestTs == o.earliestTs) {
+  return 0;
+} else {
+  return -1;
+}
+  }
+
+}
+
+class Generations {
+
+  private List<Generation> generations;
+  private Configuration conf;
+
+  private Generations(List<Generation> gens, Configuration conf) {
+this.generations = gens;
+this.conf = conf;
+  }
+
+  List<CompactionSelection> getCompactionSelections() throws IOException {
+int maxTotalFiles = 
this.conf.getInt(MobConstants.MOB_COMPACTION_MAX_TOTAL_FILES_KEY,
+ 
MobConstants.DEFAULT_MOB_COMPACTION_MAX_TOTAL_FILES);
+int currentTotal = 0;
+List<CompactionSelection> list = new ArrayList<CompactionSelection>();
+
+for (Generation g: generations) {
+  List<CompactionSelection> sel = g.getCompactionSelections(conf);
+  int size = getSize(sel);
+  if ((currentTotal + size > maxTotalFiles) && currentTotal > 0) {
+break;
+  } else {
+currentTotal += size;
+list.addAll(sel);
+  }
+}
+return list;
+  }
+
+  private int getSize(List<CompactionSelection> sel) {
+int size = 0;
+for(CompactionSelection cs: sel) {
+  size += cs.size();
+}
+return size;
+  }
+
+  static Generations build(List files, Configuration conf) throws 
IOException {
+Map > map = new 

[GitHub] [hbase] VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions

2019-10-18 Thread GitBox
VladRodionov commented on a change in pull request #623: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/623#discussion_r336700677
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobUtils.java
 ##
 @@ -907,6 +789,143 @@ public static boolean hasMobColumns(TableDescriptor htd) {
 return false;
   }
 
+  /**
+   * Get the list of MOB column families (if any exist)
+   * @param htd table descriptor
+   * @return list of Mob column families
+   */
+  public static List<ColumnFamilyDescriptor> getMobColumnFamilies(TableDescriptor htd) {
+
+List<ColumnFamilyDescriptor> fams = new ArrayList<ColumnFamilyDescriptor>();
+ColumnFamilyDescriptor[] hcds = htd.getColumnFamilies();
+for (ColumnFamilyDescriptor hcd : hcds) {
+  if (hcd.isMobEnabled()) {
+fams.add(hcd);
+  }
+}
+return fams;
+  }
+
+  /**
+   * Performs housekeeping file cleaning (called by MOB Cleaner chore)
+   * @param conf configuration
+   * @param table table name
+   * @throws IOException
+   */
+  public static void cleanupObsoleteMobFiles(Configuration conf, TableName 
table)
+  throws IOException {
+
+try (final Connection conn = ConnectionFactory.createConnection(conf);
+final Admin admin = conn.getAdmin();) {
+  TableDescriptor htd = admin.getDescriptor(table);
+  List<ColumnFamilyDescriptor> list = getMobColumnFamilies(htd);
+  if (list.size() == 0) {
+LOG.info("Skipping non-MOB table [" + table + "]");
+return;
+  }
+  Path rootDir = FSUtils.getRootDir(conf);
+  Path tableDir = FSUtils.getTableDir(rootDir, table);
+  // How safe is this call?
+  List<Path> regionDirs = FSUtils.getRegionDirs(FileSystem.get(conf), tableDir);
+
+  Set<String> allActiveMobFileName = new HashSet<String>();
+  FileSystem fs = FileSystem.get(conf);
+  for (Path regionPath: regionDirs) {
+for (ColumnFamilyDescriptor hcd: list) {
+  String family = hcd.getNameAsString();
+  Path storePath = new Path(regionPath, family);
+  boolean succeed = false;
+  Set<String> regionMobs = new HashSet<String>();
+  while(!succeed) {
+//TODO handle FNFE
+RemoteIterator<LocatedFileStatus> rit = fs.listLocatedStatus(storePath);
+List<Path> storeFiles = new ArrayList<Path>();
+// Load list of store files first
+while(rit.hasNext()) {
+  Path p = rit.next().getPath();
+  if (fs.isFile(p)) {
+storeFiles.add(p);
+  }
+}
+try {
+  for(Path pp: storeFiles) {
+HStoreFile sf = new HStoreFile(fs, pp, conf, 
CacheConfig.DISABLED,
+  BloomType.NONE, true);
+sf.initReader();
+byte[] mobRefData = 
sf.getMetadataValue(HStoreFile.MOB_FILE_REFS);
+byte[] mobCellCountData = 
sf.getMetadataValue(HStoreFile.MOB_CELLS_COUNT);
+byte[] bulkloadMarkerData = 
sf.getMetadataValue(HStoreFile.BULKLOAD_TASK_KEY);
+if (mobRefData == null && (mobCellCountData != null ||
+bulkloadMarkerData == null)) {
+  LOG.info("Found old store file with no MOB_FILE_REFS: " + pp
 
 Review comment:
   Fixed.
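The `while(!succeed)` loop in the quoted cleaner code retries the store-file listing because files can be moved or archived while the listing is in progress (hence the `//TODO handle FNFE` marker). A minimal stand-alone sketch of that retry-until-stable pattern, using plain `java.nio` rather than the HBase `FileSystem` API (all names here are illustrative, not from the patch):

```java
import java.io.IOException;
import java.nio.file.DirectoryIteratorException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class RetryListDemo {

  // Retry the directory listing until it completes without a
  // file-not-found race (files may be moved/archived concurrently).
  static List<Path> listStable(Path dir) throws IOException {
    while (true) {
      List<Path> files = new ArrayList<>();
      try (DirectoryStream<Path> ds = Files.newDirectoryStream(dir)) {
        for (Path p : ds) {
          if (Files.isRegularFile(p)) {
            files.add(p);
          }
        }
        return files; // listing completed without a concurrent-delete race
      } catch (NoSuchFileException | DirectoryIteratorException retry) {
        // something disappeared mid-listing; start over
      }
    }
  }

  public static void main(String[] args) throws IOException {
    Path dir = Files.createTempDirectory("retry-list-demo");
    Files.createFile(dir.resolve("f1"));
    System.out.println(listStable(dir).size()); // prints 1
  }
}
```

Note this sketch loops forever if the directory itself never exists; the patch's open question ("How safe is this call?") is about exactly this kind of race.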


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions

2019-10-18 Thread GitBox
VladRodionov commented on a change in pull request #623: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/623#discussion_r336700044
 
 

 ##
 File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/mob/FaultyMobStoreCompactor.java
 ##
 @@ -0,0 +1,355 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mob;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.io.InterruptedIOException;
+import java.util.ArrayList;
+import java.util.Date;
+import java.util.List;
+import java.util.Random;
+import java.util.concurrent.atomic.AtomicLong;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.PrivateCellUtil;
+import org.apache.hadoop.hbase.io.hfile.CorruptHFileException;
+import org.apache.hadoop.hbase.regionserver.CellSink;
+import org.apache.hadoop.hbase.regionserver.HStore;
+import org.apache.hadoop.hbase.regionserver.InternalScanner;
+import org.apache.hadoop.hbase.regionserver.KeyValueScanner;
+import org.apache.hadoop.hbase.regionserver.ScannerContext;
+import org.apache.hadoop.hbase.regionserver.ShipperListener;
+import org.apache.hadoop.hbase.regionserver.StoreFileWriter;
+import org.apache.hadoop.hbase.regionserver.throttle.ThroughputControlUtil;
+import org.apache.hadoop.hbase.regionserver.throttle.ThroughputController;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.yetus.audience.InterfaceAudience;
+
+@InterfaceAudience.Private
 
 Review comment:
   Fixed.




[GitHub] [hbase] karthikhw opened a new pull request #733: HBASE-23191 EOFE log spam

2019-10-18 Thread GitBox
karthikhw opened a new pull request #733: HBASE-23191 EOFE log spam
URL: https://github.com/apache/hbase/pull/733
 
 
   If there are no new active writes in the WAL, then WALEntryStream#hasNext ->
ReaderBase -> ProtobufLogReader#readNext will reach the end of the file. It
would be a good idea to change the log level from INFO to DEBUG.
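The proposed change is just a severity downgrade at the call site that reports end-of-file. A self-contained sketch of the guarded-logging idiom, using `java.util.logging` (where `FINE` plays the role of SLF4J's `DEBUG`); the method name is illustrative, not the actual ProtobufLogReader code:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class EofLogDemo {
  private static final Logger LOG = Logger.getLogger(EofLogDemo.class.getName());

  // Report EOF at DEBUG (FINE) severity: tailing an open WAL hits EOF
  // constantly in steady state, so INFO-level messages spam the log.
  static String reportEof(long position) {
    String msg = "Reached the end of file at position " + position;
    if (LOG.isLoggable(Level.FINE)) {
      LOG.fine(msg);
    }
    return msg;
  }

  public static void main(String[] args) {
    System.out.println(reportEof(83));
  }
}
```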




[jira] [Updated] (HBASE-23191) Log spams on Replication

2019-10-18 Thread Karthik Palanisamy (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HBASE-23191:
---
Issue Type: Improvement  (was: Bug)

> Log spams on Replication
> 
>
> Key: HBASE-23191
> URL: https://issues.apache.org/jira/browse/HBASE-23191
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Minor
>
> If there are no new active writes in the WAL, then *WALEntryStream#hasNext ->
> ReaderBase -> ProtobufLogReader#readNext* will reach the end of the file. It
> would be a good idea to change the log level from INFO to DEBUG.
>  
> {code:java}
> 2019-10-18 22:25:03,572 INFO  
> [RS_REFRESH_PEER-regionserver/apache303:16020-0.replicationSource,p1hdp314.replicationSource.wal-reader.apache303.openstacklocal%2C16020%2C1571383146790,p1hdp314]
>  wal.ProtobufLogReader: Reached the end of file at position 83
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-23191) Log spams on Replication

2019-10-18 Thread Karthik Palanisamy (Jira)
Karthik Palanisamy created HBASE-23191:
--

 Summary: Log spams on Replication
 Key: HBASE-23191
 URL: https://issues.apache.org/jira/browse/HBASE-23191
 Project: HBase
  Issue Type: Bug
  Components: Replication
Affects Versions: 3.0.0
Reporter: Karthik Palanisamy
Assignee: Karthik Palanisamy


If there are no new active writes in the WAL, then *WALEntryStream#hasNext -> ReaderBase
-> ProtobufLogReader#readNext* will reach the end of the file. It would be a good idea
to change the log level from INFO to DEBUG.

 
{code:java}
2019-10-18 22:25:03,572 INFO  
[RS_REFRESH_PEER-regionserver/apache303:16020-0.replicationSource,p1hdp314.replicationSource.wal-reader.apache303.openstacklocal%2C16020%2C1571383146790,p1hdp314]
 wal.ProtobufLogReader: Reached the end of file at position 83
{code}





[jira] [Updated] (HBASE-23191) Log spams on Replication

2019-10-18 Thread Karthik Palanisamy (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HBASE-23191:
---
Priority: Trivial  (was: Minor)

> Log spams on Replication
> 
>
> Key: HBASE-23191
> URL: https://issues.apache.org/jira/browse/HBASE-23191
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Trivial
>
> If there are no new active writes in the WAL, then *WALEntryStream#hasNext ->
> ReaderBase -> ProtobufLogReader#readNext* will reach the end of the file. It
> would be a good idea to change the log level from INFO to DEBUG.
>  
> {code:java}
> 2019-10-18 22:25:03,572 INFO  
> [RS_REFRESH_PEER-regionserver/apache303:16020-0.replicationSource,p1hdp314.replicationSource.wal-reader.apache303.openstacklocal%2C16020%2C1571383146790,p1hdp314]
>  wal.ProtobufLogReader: Reached the end of file at position 83
> {code}





[GitHub] [hbase] VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions

2019-10-18 Thread GitBox
VladRodionov commented on a change in pull request #623: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/623#discussion_r336655778
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/mob/DefaultMobStoreCompactor.java
 ##
 @@ -362,11 +508,375 @@ protected boolean performCompaction(FileDetails fd, 
InternalScanner scanner, Cel
 abortWriter(mobFileWriter);
   }
 }
+// Commit or abort generational writers
+if (mobWriters != null) {
+  for (StoreFileWriter w: mobWriters.getOutputWriters()) {
+Long mobs = mobWriters.getMobCountForOutputWriter(w);
+if (mobs != null && mobs > 0) {
+  mobRefSet.get().add(w.getPath().getName());
+  w.appendMetadata(fd.maxSeqId, major, mobs);
+  w.close();
+  mobStore.commitFile(w.getPath(), path);
+} else {
+  abortWriter(w);
+}
+  }
+}
 mobStore.updateCellsCountCompactedFromMob(cellsCountCompactedFromMob);
 mobStore.updateCellsCountCompactedToMob(cellsCountCompactedToMob);
 mobStore.updateCellsSizeCompactedFromMob(cellsSizeCompactedFromMob);
 mobStore.updateCellsSizeCompactedToMob(cellsSizeCompactedToMob);
 progress.complete();
 return true;
   }
+
+  protected static String createKey(TableName tableName, String encodedName,
+  String columnFamilyName) {
+return tableName.getNameAsString()+ "_" + encodedName + "_"+ 
columnFamilyName;
+  }
+
+  @Override
+  protected List<Path> commitWriter(StoreFileWriter writer, FileDetails fd,
+  CompactionRequestImpl request) throws IOException {
+List<Path> newFiles = Lists.newArrayList(writer.getPath());
+writer.appendMetadata(fd.maxSeqId, request.isAllFiles(), request.getFiles());
+// Append MOB references
+Set<String> refSet = mobRefSet.get();
+writer.appendMobMetadata(refSet);
+writer.close();
+return newFiles;
+  }
+
+  private List<Path> getReferencedMobFiles(Collection<HStoreFile> storeFiles) {
+Path mobDir = MobUtils.getMobFamilyPath(conf, store.getTableName(), store.getColumnFamilyName());
+Set<String> mobSet = new HashSet<>();
+for (HStoreFile sf: storeFiles) {
+  byte[] value = sf.getMetadataValue(HStoreFile.MOB_FILE_REFS);
+  if (value != null) {
+String s = new String(value);
+String[] all = s.split(",");
+Collections.addAll(mobSet, all);
+  }
+}
+List<Path> retList = new ArrayList<>();
+for(String name: mobSet) {
+  retList.add(new Path(mobDir, name));
+}
+return retList;
+  }
+}
+
+class FileSelection implements Comparable<FileSelection> {
+
+  public final static String NULL_REGION = "";
+  private Path path;
+  private long earliestTs;
+  private Configuration conf;
+
+  public FileSelection(Path path, Configuration conf) throws IOException {
+this.path = path;
+this.conf = conf;
+readEarliestTimestamp();
+  }
+
+  public  String getEncodedRegionName() {
+String fileName = path.getName();
+String[] parts = fileName.split("_");
+if (parts.length == 2) {
+  return parts[1];
+} else {
+  return NULL_REGION;
+}
+  }
+
+  public Path getPath() {
+return path;
+  }
+
+  public long getEarliestTimestamp() {
+return earliestTs;
+  }
+
+  private void readEarliestTimestamp() throws IOException {
+FileSystem fs = path.getFileSystem(conf);
+HStoreFile sf = new HStoreFile(fs, path, conf, CacheConfig.DISABLED,
+  BloomType.NONE, true);
+sf.initReader();
+byte[] tsData = sf.getMetadataValue(HStoreFile.EARLIEST_PUT_TS);
+if (tsData != null) {
+  this.earliestTs = Bytes.toLong(tsData);
+}
+sf.closeStoreFile(true);
+  }
+
+  @Override
+  public int compareTo(FileSelection o) {
+if (this.earliestTs > o.earliestTs) {
+  return +1;
+} else if (this.earliestTs == o.earliestTs) {
+  return 0;
+} else {
+  return -1;
+}
+  }
+
+}
+
+class Generations {
+
+  private List<Generation> generations;
+  private Configuration conf;
+
+  private Generations(List<Generation> gens, Configuration conf) {
+this.generations = gens;
+this.conf = conf;
+  }
+
+  List<CompactionSelection> getCompactionSelections() throws IOException {
+int maxTotalFiles = this.conf.getInt(MobConstants.MOB_COMPACTION_MAX_TOTAL_FILES_KEY,
+ MobConstants.DEFAULT_MOB_COMPACTION_MAX_TOTAL_FILES);
+int currentTotal = 0;
+List<CompactionSelection> list = new ArrayList<>();
+
+for (Generation g: generations) {
+  List<CompactionSelection> sel = g.getCompactionSelections(conf);
+  int size = getSize(sel);
+  if ((currentTotal + size > maxTotalFiles) && currentTotal > 0) {
+break;
+  } else {
+currentTotal += size;
+list.addAll(sel);
+  }
+}
+return list;
+  }
+
+  private int getSize(List<CompactionSelection> sel) {
+int size = 0;
+for(CompactionSelection cs: sel) {
+  size += cs.size();
+}
+return size;
+  }
+
+  static Generations build(List<FileSelection> files, Configuration conf) throws IOException {
+Map<String, List<FileSelection>> map = 

[GitHub] [hbase] VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions

2019-10-18 Thread GitBox
VladRodionov commented on a change in pull request #623: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/623#discussion_r336651619
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/mob/DefaultMobStoreCompactor.java
 ##
 @@ -362,11 +508,375 @@ protected boolean performCompaction(FileDetails fd, 
InternalScanner scanner, Cel
 abortWriter(mobFileWriter);
   }
 }
+// Commit or abort generational writers
+if (mobWriters != null) {
+  for (StoreFileWriter w: mobWriters.getOutputWriters()) {
+Long mobs = mobWriters.getMobCountForOutputWriter(w);
+if (mobs != null && mobs > 0) {
+  mobRefSet.get().add(w.getPath().getName());
+  w.appendMetadata(fd.maxSeqId, major, mobs);
+  w.close();
+  mobStore.commitFile(w.getPath(), path);
+} else {
+  abortWriter(w);
+}
+  }
+}
 mobStore.updateCellsCountCompactedFromMob(cellsCountCompactedFromMob);
 mobStore.updateCellsCountCompactedToMob(cellsCountCompactedToMob);
 mobStore.updateCellsSizeCompactedFromMob(cellsSizeCompactedFromMob);
 mobStore.updateCellsSizeCompactedToMob(cellsSizeCompactedToMob);
 progress.complete();
 return true;
   }
+
+  protected static String createKey(TableName tableName, String encodedName,
+  String columnFamilyName) {
+return tableName.getNameAsString()+ "_" + encodedName + "_"+ 
columnFamilyName;
+  }
+
+  @Override
+  protected List<Path> commitWriter(StoreFileWriter writer, FileDetails fd,
+  CompactionRequestImpl request) throws IOException {
+List<Path> newFiles = Lists.newArrayList(writer.getPath());
+writer.appendMetadata(fd.maxSeqId, request.isAllFiles(), request.getFiles());
+// Append MOB references
+Set<String> refSet = mobRefSet.get();
+writer.appendMobMetadata(refSet);
+writer.close();
+return newFiles;
+  }
+
+  private List<Path> getReferencedMobFiles(Collection<HStoreFile> storeFiles) {
+Path mobDir = MobUtils.getMobFamilyPath(conf, store.getTableName(), store.getColumnFamilyName());
+Set<String> mobSet = new HashSet<>();
+for (HStoreFile sf: storeFiles) {
+  byte[] value = sf.getMetadataValue(HStoreFile.MOB_FILE_REFS);
+  if (value != null) {
+String s = new String(value);
+String[] all = s.split(",");
+Collections.addAll(mobSet, all);
+  }
+}
+List<Path> retList = new ArrayList<>();
+for(String name: mobSet) {
+  retList.add(new Path(mobDir, name));
+}
+return retList;
+  }
+}
+
+class FileSelection implements Comparable<FileSelection> {
+
+  public final static String NULL_REGION = "";
+  private Path path;
+  private long earliestTs;
+  private Configuration conf;
+
+  public FileSelection(Path path, Configuration conf) throws IOException {
+this.path = path;
+this.conf = conf;
+readEarliestTimestamp();
+  }
+
+  public  String getEncodedRegionName() {
+String fileName = path.getName();
+String[] parts = fileName.split("_");
+if (parts.length == 2) {
+  return parts[1];
+} else {
+  return NULL_REGION;
+}
+  }
+
+  public Path getPath() {
+return path;
+  }
+
+  public long getEarliestTimestamp() {
+return earliestTs;
+  }
+
+  private void readEarliestTimestamp() throws IOException {
+FileSystem fs = path.getFileSystem(conf);
+HStoreFile sf = new HStoreFile(fs, path, conf, CacheConfig.DISABLED,
+  BloomType.NONE, true);
+sf.initReader();
+byte[] tsData = sf.getMetadataValue(HStoreFile.EARLIEST_PUT_TS);
+if (tsData != null) {
+  this.earliestTs = Bytes.toLong(tsData);
+}
+sf.closeStoreFile(true);
+  }
+
+  @Override
+  public int compareTo(FileSelection o) {
+if (this.earliestTs > o.earliestTs) {
+  return +1;
+} else if (this.earliestTs == o.earliestTs) {
+  return 0;
+} else {
+  return -1;
+}
+  }
+
+}
+
+class Generations {
+
+  private List<Generation> generations;
+  private Configuration conf;
+
+  private Generations(List<Generation> gens, Configuration conf) {
+this.generations = gens;
+this.conf = conf;
+  }
+
+  List<CompactionSelection> getCompactionSelections() throws IOException {
+int maxTotalFiles = this.conf.getInt(MobConstants.MOB_COMPACTION_MAX_TOTAL_FILES_KEY,
+ MobConstants.DEFAULT_MOB_COMPACTION_MAX_TOTAL_FILES);
+int currentTotal = 0;
+List<CompactionSelection> list = new ArrayList<>();
+
+for (Generation g: generations) {
+  List<CompactionSelection> sel = g.getCompactionSelections(conf);
+  int size = getSize(sel);
+  if ((currentTotal + size > maxTotalFiles) && currentTotal > 0) {
+break;
+  } else {
+currentTotal += size;
+list.addAll(sel);
+  }
+}
+return list;
+  }
+
+  private int getSize(List<CompactionSelection> sel) {
+int size = 0;
+for(CompactionSelection cs: sel) {
+  size += cs.size();
+}
+return size;
+  }
+
+  static Generations build(List<FileSelection> files, Configuration conf) throws IOException {
+Map<String, List<FileSelection>> map = 

[GitHub] [hbase] VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions

2019-10-18 Thread GitBox
VladRodionov commented on a change in pull request #623: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/623#discussion_r336641661
 
 

 ##
 File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/mob/TestMobCompaction.java
 ##
 @@ -0,0 +1,344 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mob;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.util.Arrays;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.KeepDeletedCells;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.BufferedMutator;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner;
+import org.apache.hadoop.hbase.testclassification.IntegrationTests;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.ClassRule;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.rules.TestName;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+/**
+Reproduction for MOB data loss
+
+ 1. Settings: Region Size 200 MB,  Flush threshold 800 KB.
+ 2. Insert 10 Million records
+ 3. MOB Compaction and Archiver
+  a) Trigger MOB Compaction (every 2 minutes)
+  b) Trigger major compaction (every 2 minutes)
+  c) Trigger archive cleaner (every 3 minutes)
+ 4. Validate MOB data after complete data load.
+
+ */
+@Category(IntegrationTests.class)
+public class TestMobCompaction {
 
 Review comment:
   Created:
   https://issues.apache.org/jira/browse/HBASE-23190




[jira] [Assigned] (HBASE-23188) MobFileCleanerChore test case

2019-10-18 Thread Vladimir Rodionov (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov reassigned HBASE-23188:
-

Assignee: (was: Vladimir Rodionov)

> MobFileCleanerChore test case
> -
>
> Key: HBASE-23188
> URL: https://issues.apache.org/jira/browse/HBASE-23188
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Priority: Major
>
> The test should do the following:
> a) properly remove obsolete files as expected
> b) do not remove mob files from before the reference accounting added in
> this change.





[jira] [Created] (HBASE-23190) Convert MobCompactionTest into integration test

2019-10-18 Thread Vladimir Rodionov (Jira)
Vladimir Rodionov created HBASE-23190:
-

 Summary: Convert MobCompactionTest into integration test
 Key: HBASE-23190
 URL: https://issues.apache.org/jira/browse/HBASE-23190
 Project: HBase
  Issue Type: Sub-task
Reporter: Vladimir Rodionov








[GitHub] [hbase] VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions

2019-10-18 Thread GitBox
VladRodionov commented on a change in pull request #623: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/623#discussion_r336624502
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/mob/DefaultMobStoreCompactor.java
 ##
 @@ -362,11 +508,375 @@ protected boolean performCompaction(FileDetails fd, 
InternalScanner scanner, Cel
 abortWriter(mobFileWriter);
   }
 }
+// Commit or abort generational writers
+if (mobWriters != null) {
+  for (StoreFileWriter w: mobWriters.getOutputWriters()) {
+Long mobs = mobWriters.getMobCountForOutputWriter(w);
+if (mobs != null && mobs > 0) {
+  mobRefSet.get().add(w.getPath().getName());
+  w.appendMetadata(fd.maxSeqId, major, mobs);
+  w.close();
+  mobStore.commitFile(w.getPath(), path);
+} else {
+  abortWriter(w);
+}
+  }
+}
 mobStore.updateCellsCountCompactedFromMob(cellsCountCompactedFromMob);
 mobStore.updateCellsCountCompactedToMob(cellsCountCompactedToMob);
 mobStore.updateCellsSizeCompactedFromMob(cellsSizeCompactedFromMob);
 mobStore.updateCellsSizeCompactedToMob(cellsSizeCompactedToMob);
 progress.complete();
 return true;
   }
+
+  protected static String createKey(TableName tableName, String encodedName,
+  String columnFamilyName) {
+return tableName.getNameAsString()+ "_" + encodedName + "_"+ 
columnFamilyName;
+  }
+
+  @Override
+  protected List<Path> commitWriter(StoreFileWriter writer, FileDetails fd,
+  CompactionRequestImpl request) throws IOException {
+List<Path> newFiles = Lists.newArrayList(writer.getPath());
+writer.appendMetadata(fd.maxSeqId, request.isAllFiles(), request.getFiles());
+// Append MOB references
+Set<String> refSet = mobRefSet.get();
+writer.appendMobMetadata(refSet);
+writer.close();
+return newFiles;
+  }
+
+  private List<Path> getReferencedMobFiles(Collection<HStoreFile> storeFiles) {
+Path mobDir = MobUtils.getMobFamilyPath(conf, store.getTableName(), store.getColumnFamilyName());
+Set<String> mobSet = new HashSet<>();
+for (HStoreFile sf: storeFiles) {
+  byte[] value = sf.getMetadataValue(HStoreFile.MOB_FILE_REFS);
+  if (value != null) {
+String s = new String(value);
+String[] all = s.split(",");
+Collections.addAll(mobSet, all);
+  }
+}
+List<Path> retList = new ArrayList<>();
+for(String name: mobSet) {
+  retList.add(new Path(mobDir, name));
+}
+return retList;
+  }
+}
+
+class FileSelection implements Comparable<FileSelection> {
+
+  public final static String NULL_REGION = "";
+  private Path path;
+  private long earliestTs;
+  private Configuration conf;
+
+  public FileSelection(Path path, Configuration conf) throws IOException {
+this.path = path;
+this.conf = conf;
+readEarliestTimestamp();
+  }
+
+  public  String getEncodedRegionName() {
+String fileName = path.getName();
+String[] parts = fileName.split("_");
+if (parts.length == 2) {
+  return parts[1];
+} else {
+  return NULL_REGION;
+}
+  }
+
+  public Path getPath() {
+return path;
+  }
+
+  public long getEarliestTimestamp() {
+return earliestTs;
+  }
+
+  private void readEarliestTimestamp() throws IOException {
+FileSystem fs = path.getFileSystem(conf);
+HStoreFile sf = new HStoreFile(fs, path, conf, CacheConfig.DISABLED,
+  BloomType.NONE, true);
+sf.initReader();
+byte[] tsData = sf.getMetadataValue(HStoreFile.EARLIEST_PUT_TS);
+if (tsData != null) {
+  this.earliestTs = Bytes.toLong(tsData);
+}
+sf.closeStoreFile(true);
+  }
+
+  @Override
+  public int compareTo(FileSelection o) {
+if (this.earliestTs > o.earliestTs) {
+  return +1;
+} else if (this.earliestTs == o.earliestTs) {
+  return 0;
+} else {
+  return -1;
+}
+  }
+
+}
+
+class Generations {
 
 Review comment:
   Created:
https://issues.apache.org/jira/browse/HBASE-23189




[jira] [Created] (HBASE-23189) Finalize generational compaction

2019-10-18 Thread Vladimir Rodionov (Jira)
Vladimir Rodionov created HBASE-23189:
-

 Summary: Finalize generational compaction
 Key: HBASE-23189
 URL: https://issues.apache.org/jira/browse/HBASE-23189
 Project: HBase
  Issue Type: Sub-task
Reporter: Vladimir Rodionov
Assignee: Vladimir Rodionov


+corresponding test cases

The current code for generational compaction has not been tested and verified 
yet. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions

2019-10-18 Thread GitBox
VladRodionov commented on a change in pull request #623: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/623#discussion_r336622384
 
 

 ##
 File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/mob/TestMobFileName.java
 ##
 @@ -47,6 +47,7 @@
   private Date date;
   private String dateStr;
   private byte[] startKey;
+  private String regionName = "region";
 
 
 Review comment:
   I did not find a single usage of this class in the code which would have
benefited from this test case. Parsing a file name into an instance of
MobFileName is used only in two test cases.




[jira] [Commented] (HBASE-19663) javadoc creation needs jsr305

2019-10-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-19663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954885#comment-16954885
 ] 

Hudson commented on HBASE-19663:


Results for branch branch-1.4
[build #1060 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/1060/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/1060//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/1060//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/1060//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> javadoc creation needs jsr305
> -
>
> Key: HBASE-19663
> URL: https://issues.apache.org/jira/browse/HBASE-19663
> Project: HBase
>  Issue Type: Bug
>  Components: documentation, website
>Reporter: Michael Stack
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 1.4.11, 1.5.1
>
> Attachments: HBASE-19663-branch-1.4.v0.patch, script.sh
>
>
> Cryptic failure trying to build beta-1 RC. Fails like this:
> {code}
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 03:54 min
> [INFO] Finished at: 2017-12-29T01:13:15-08:00
> [INFO] Final Memory: 381M/9165M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project 
> hbase: Error generating maven-javadoc-plugin:2.10.3:aggregate:
> [ERROR] Exit code: 1 - warning: unknown enum constant When.ALWAYS
> [ERROR] reason: class file for javax.annotation.meta.When not found
> [ERROR] warning: unknown enum constant When.UNKNOWN
> [ERROR] warning: unknown enum constant When.MAYBE
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: malformed: "#matchingRows(Cell, byte[]))"
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: reference not found: #matchingRows(Cell, byte[]))
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: reference not found: #matchingRows(Cell, byte[]))
> [ERROR] javadoc: warning - Class javax.annotation.Nonnull not found.
> [ERROR] javadoc: error - class file for 
> javax.annotation.meta.TypeQualifierNickname not found
> [ERROR]
> [ERROR] Command line was: /home/stack/bin/jdk1.8.0_151/jre/../bin/javadoc 
> -J-Xmx2G @options @packages
> [ERROR]
> [ERROR] Refer to the generated Javadoc files in 
> '/home/stack/hbase.git/target/site/apidocs' dir.
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {code}
> javax.annotation.meta.TypeQualifierNickname is out of jsr305 but we don't 
> include this anywhere according to mvn dependency.
> Happens building the User API both test and main.
> Excluding these lines gets us passing again:
> {code}
>   3511   
>   3512 
> org.apache.yetus.audience.tools.IncludePublicAnnotationsStandardDoclet
>   3513   
>   3514   
>   3515 org.apache.yetus
>   3516 audience-annotations
>   3517 ${audience-annotations.version}
>   3518   
> + 3519   true
> {code}
> Tried upgrading to a newer mvn site plugin (ours is three years old) but that
> hit a different set of problems.
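For context, the `javax.annotation.meta.*` classes the javadoc run cannot resolve live in the findbugs jsr305 jar. One hypothetical way to make them resolvable (an illustration of the problem, not the route the patch above took) would be a provided-scope dependency:

```xml
<!-- Hypothetical workaround, not the committed fix: put jsr305 on the
     classpath so javax.annotation.meta.When/TypeQualifierNickname resolve. -->
<dependency>
  <groupId>com.google.code.findbugs</groupId>
  <artifactId>jsr305</artifactId>
  <version>3.0.2</version>
  <scope>provided</scope>
</dependency>
```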





[jira] [Commented] (HBASE-22991) Release 1.4.11

2019-10-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954886#comment-16954886
 ] 

Hudson commented on HBASE-22991:


Results for branch branch-1.4
[build #1060 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/1060/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/1060//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/1060//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/1060//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Release 1.4.11
> --
>
> Key: HBASE-22991
> URL: https://issues.apache.org/jira/browse/HBASE-22991
> Project: HBase
>  Issue Type: Task
>  Components: community
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 1.4.11
>
> Attachments: Flaky_20Test_20Report.zip
>
>






[jira] [Created] (HBASE-23188) MobFileCleanerChore test case

2019-10-18 Thread Vladimir Rodionov (Jira)
Vladimir Rodionov created HBASE-23188:
-

 Summary: MobFileCleanerChore test case
 Key: HBASE-23188
 URL: https://issues.apache.org/jira/browse/HBASE-23188
 Project: HBase
  Issue Type: Sub-task
Reporter: Vladimir Rodionov
Assignee: Vladimir Rodionov


The test should do the following:
a) properly remove obsolete files as expected
b) do not remove mob files created prior to the reference accounting added in 
this change.
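A minimal sketch of the delete decision such a test would exercise. All names here are invented for illustration; the real chore derives its reference set from the new accounting in meta rather than a simple in-memory set:

```java
import java.time.Instant;
import java.util.Set;

// Hypothetical sketch: a mob file may be removed only when it is both
// unreferenced AND was written after reference accounting began, so files
// from before this change are never touched.
public class MobCleanerDecision {
    private final Set<String> referencedFiles;  // stand-in for reference accounting
    private final Instant accountingStart;      // when accounting was enabled

    public MobCleanerDecision(Set<String> referencedFiles, Instant accountingStart) {
        this.referencedFiles = referencedFiles;
        this.accountingStart = accountingStart;
    }

    public boolean deletable(String file, Instant fileTime) {
        boolean unreferenced = !referencedFiles.contains(file);
        // A file created before accounting started has no reliable reference
        // information, so it is conservatively kept.
        boolean coveredByAccounting = !fileTime.isBefore(accountingStart);
        return unreferenced && coveredByAccounting;
    }
}
```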





[GitHub] [hbase] VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions

2019-10-18 Thread GitBox
VladRodionov commented on a change in pull request #623: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/623#discussion_r336617595
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCleanerChore.java
 ##
 @@ -39,27 +39,33 @@
  * mob files.
  */
 @InterfaceAudience.Private
-public class ExpiredMobFileCleanerChore extends ScheduledChore {
+public class MobFileCleanerChore extends ScheduledChore {
 
 Review comment:
   Created:
   https://issues.apache.org/jira/browse/HBASE-23188




[jira] [Updated] (HBASE-23172) HBase Canary region success count metrics reflect column family successes, not region successes

2019-10-18 Thread Caroline (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caroline updated HBASE-23172:
-
Attachment: HBASE-23172.master.000.patch

> HBase Canary region success count metrics reflect column family successes, 
> not region successes
> ---
>
> Key: HBASE-23172
> URL: https://issues.apache.org/jira/browse/HBASE-23172
> Project: HBase
>  Issue Type: Improvement
>  Components: canary
>Affects Versions: 3.0.0, 1.3.0, 1.4.0, 1.5.0, 2.0.0, 2.1.5, 2.2.1
>Reporter: Caroline
>Assignee: Caroline
>Priority: Minor
> Attachments: HBASE-23172.branch-1.000.patch, 
> HBASE-23172.branch-2.000.patch, HBASE-23172.master.000.patch, 
> HBASE-23172.master.000.patch
>
>
> HBase Canary reads once per column family per region. The current "region 
> success count" should actually be "column family success count," which means 
> we need another metric that actually reflects region success count. 
> Additionally, the region read and write latencies only store the latencies of 
> the last column family of the region read. Instead of a map of regions to a 
> single latency value and success value, we should map each region to a list 
> of such values.
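A minimal sketch of the proposed bookkeeping, with invented names: each region maps to a list of per-column-family samples instead of a single value that each read overwrites.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: accumulate one latency entry per column-family read
// so per-region success and latency can be derived afterwards, rather than
// keeping only the last column family's result.
public class RegionReadLatencies {
    private final Map<String, List<Long>> latenciesByRegion = new HashMap<>();

    // Record the latency of one column-family read for a region.
    public void record(String regionName, long latencyMs) {
        latenciesByRegion.computeIfAbsent(regionName, k -> new ArrayList<>())
                         .add(latencyMs);
    }

    // Number of per-column-family samples recorded for a region.
    public int sampleCount(String regionName) {
        return latenciesByRegion.getOrDefault(regionName, List.of()).size();
    }
}
```

A region read would then count as a success only if every one of its column-family reads succeeded.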





[jira] [Commented] (HBASE-23176) delete_all_snapshot does not work with regex

2019-10-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954839#comment-16954839
 ] 

Hudson commented on HBASE-23176:


Results for branch master
[build #1509 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1509/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1509//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1509//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1509//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> delete_all_snapshot does not work with regex
> 
>
> Key: HBASE-23176
> URL: https://issues.apache.org/jira/browse/HBASE-23176
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Major
> Fix For: 3.0.0
>
>
> delete_all_snapshot.rb uses the deprecated method 
> SnapshotDescription#getTable, but this method has already been removed in 3.0.x.
> {code:java}
> hbase(main):022:0>delete_all_snapshot("t10.*")
> SNAPSHOT TABLE + CREATION 
> TIME ERROR: undefined method `getTable' for 
> #
> {code}
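For illustration, the name-matching step that delete_all_snapshot performs can be sketched standalone. The class and method here are invented; the real shell code also prints the table name and creation time through the current SnapshotDescription accessors:

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

// Illustrative only: select the snapshot names whose full name matches the
// user-supplied regex; the shell would then delete each match.
public class SnapshotNameFilter {
    public static List<String> matching(List<String> snapshotNames, String regex) {
        Pattern p = Pattern.compile(regex);
        return snapshotNames.stream()
                .filter(n -> p.matcher(n).matches())  // whole-name match
                .collect(Collectors.toList());
    }
}
```

With snapshots {t10.snap1, t10.snap2, t2.snap1}, the pattern "t10.*" selects only the first two.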





[jira] [Commented] (HBASE-23170) Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME

2019-10-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954840#comment-16954840
 ] 

Hudson commented on HBASE-23170:


Results for branch master
[build #1509 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1509/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1509//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1509//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1509//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME
> -
>
> Key: HBASE-23170
> URL: https://issues.apache.org/jira/browse/HBASE-23170
> Project: HBase
>  Issue Type: Improvement
>Reporter: Yi Mei
>Assignee: Yi Mei
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
>
> Admin#getRegionServers returns the server names.
> ClusterMetrics.Option.LIVE_SERVERS returns the map of server names and 
> metrics, while the metrics are not useful for Admin#getRegionServers method.
> Please see [HBASE-21938|https://issues.apache.org/jira/browse/HBASE-21938] 
> for more details.





[GitHub] [hbase] Apache-HBase commented on issue #732: HBASE-23187 Update parent region state to SPLIT in meta

2019-10-18 Thread GitBox
Apache-HBase commented on issue #732: HBASE-23187 Update parent region state to 
SPLIT in meta
URL: https://github.com/apache/hbase/pull/732#issuecomment-543834891
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   3m 29s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 1 
new or modified test files.  |
   ||| _ master Compile Tests _ |
   | :blue_heart: |  mvndep  |   0m 33s |  Maven dependency ordering for branch 
 |
   | :green_heart: |  mvninstall  |   5m 50s |  master passed  |
   | :green_heart: |  compile  |   1m 24s |  master passed  |
   | :green_heart: |  checkstyle  |   2m  1s |  master passed  |
   | :green_heart: |  shadedjars  |   4m 58s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   0m 57s |  master passed  |
   | :blue_heart: |  spotbugs  |   4m 27s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   5m 35s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | :blue_heart: |  mvndep  |   0m 14s |  Maven dependency ordering for patch  
|
   | :green_heart: |  mvninstall  |   5m 28s |  the patch passed  |
   | :green_heart: |  compile  |   1m 23s |  the patch passed  |
   | :green_heart: |  javac  |   1m 23s |  the patch passed  |
   | :broken_heart: |  checkstyle  |   1m 28s |  hbase-server: The patch 
generated 20 new + 0 unchanged - 0 fixed = 20 total (was 0)  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  shadedjars  |   4m 59s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |  17m 25s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | :green_heart: |  javadoc  |   0m 57s |  the patch passed  |
   | :green_heart: |  findbugs  |   6m 12s |  the patch passed  |
   ||| _ Other Tests _ |
   | :green_heart: |  unit  |   1m 54s |  hbase-client in the patch passed.  |
   | :broken_heart: |  unit  | 247m 14s |  hbase-server in the patch failed.  |
   | :green_heart: |  asflicense  |   0m 50s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 320m 22s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hbase.security.access.TestSnapshotScannerHDFSAclController |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-732/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/732 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux a659bf23e03e 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-732/out/precommit/personality/provided.sh
 |
   | git revision | master / 946f1e9e25 |
   | Default Java | 1.8.0_181 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-732/1/artifact/out/diff-checkstyle-hbase-server.txt
 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-732/1/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-732/1/testReport/
 |
   | Max. process+thread count | 4975 (vs. ulimit of 1) |
   | modules | C: hbase-client hbase-server U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-732/1/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hbase] Apache-HBase commented on issue #730: HBASE-23184 The HeapAllocation in WebUI is not accurate

2019-10-18 Thread GitBox
Apache-HBase commented on issue #730: HBASE-23184 The HeapAllocation in WebUI 
is not accurate
URL: https://github.com/apache/hbase/pull/730#issuecomment-543786982
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   1m  5s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :yellow_heart: |  test4tests  |   0m  0s |  The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch.  |
   ||| _ master Compile Tests _ |
   | :blue_heart: |  mvndep  |   0m 36s |  Maven dependency ordering for branch 
 |
   | :green_heart: |  mvninstall  |   5m  8s |  master passed  |
   | :green_heart: |  compile  |   1m 20s |  master passed  |
   | :green_heart: |  checkstyle  |   1m 42s |  master passed  |
   | :green_heart: |  shadedjars  |   4m 33s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   1m  1s |  master passed  |
   | :blue_heart: |  spotbugs  |   4m  4s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   4m 53s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | :blue_heart: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  
|
   | :green_heart: |  mvninstall  |   4m 57s |  the patch passed  |
   | :green_heart: |  compile  |   1m 21s |  the patch passed  |
   | :green_heart: |  javac  |   1m 21s |  the patch passed  |
   | :green_heart: |  checkstyle  |   1m 42s |  the patch passed  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  shadedjars  |   4m 40s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |  15m 46s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | :green_heart: |  javadoc  |   0m 57s |  the patch passed  |
   | :green_heart: |  findbugs  |   5m  3s |  the patch passed  |
   ||| _ Other Tests _ |
   | :green_heart: |  unit  |   3m  6s |  hbase-common in the patch passed.  |
   | :broken_heart: |  unit  | 265m 32s |  hbase-server in the patch failed.  |
   | :green_heart: |  asflicense  |   1m  6s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 331m 26s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hbase.client.TestFromClientSide |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-730/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/730 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 048f3e0635c3 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-730/out/precommit/personality/provided.sh
 |
   | git revision | master / 946f1e9e25 |
   | Default Java | 1.8.0_181 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-730/2/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-730/2/testReport/
 |
   | Max. process+thread count | 5047 (vs. ulimit of 1) |
   | modules | C: hbase-common hbase-server U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-730/2/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hbase] joshelser commented on issue #661: HBASE-15519 Add per-user metrics with lossy counting

2019-10-18 Thread GitBox
joshelser commented on issue #661: HBASE-15519 Add per-user metrics with lossy 
counting
URL: https://github.com/apache/hbase/pull/661#issuecomment-543765757
 
 
   Just chatted with Busbey offline.
   
   @ankitsinghal is it possible to create a simple configuration property that 
can disable user-metrics? E.g. adding to `MetricsRegionServer`, wrapping the 
`MetricsUserAggregate` in an `Optional`.
   
   The thought is, we can leave this on by default, but if some user runs into 
performance issues, we can easily disable just the user metrics.
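One way to sketch that suggestion. All names here, including the property key, are invented for illustration; the real change would live in MetricsRegionServer and use HBase's Configuration:

```java
import java.util.Optional;
import java.util.Properties;

// Hypothetical sketch: gate the per-user aggregate behind a boolean config
// key (name invented) and hold it in an Optional so callers skip it cheaply
// when disabled. Enabled by default, matching the proposal.
public class MetricsRegionServerSketch {
    static final String USER_METRICS_ENABLED_KEY =
        "hbase.regionserver.user.metrics.enabled";  // invented property name

    interface UserAggregate { void update(String user, long value); }

    private final Optional<UserAggregate> userAggregate;

    MetricsRegionServerSketch(Properties conf) {
        boolean enabled = Boolean.parseBoolean(
            conf.getProperty(USER_METRICS_ENABLED_KEY, "true"));  // on by default
        this.userAggregate = enabled
            ? Optional.of((UserAggregate) (user, v) -> { /* record per-user metric */ })
            : Optional.empty();
    }

    void updateGet(String user, long latency) {
        // No per-user work at all when the feature is switched off.
        userAggregate.ifPresent(a -> a.update(user, latency));
    }

    boolean userMetricsEnabled() { return userAggregate.isPresent(); }
}
```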




[jira] [Comment Edited] (HBASE-23136) PartionedMobFileCompactor bulkloaded files shouldn't get replicated (addressing buklload replication related issue raised in HBASE-22380)

2019-10-18 Thread Wellington Chevreuil (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954643#comment-16954643
 ] 

Wellington Chevreuil edited comment on HBASE-23136 at 10/18/19 2:08 PM:


Merged latest PR into master. Working on branch-2 backport.


was (Author: wchevreuil):
Merged latest commit into master. Working on branch-2 backport.

> PartionedMobFileCompactor bulkloaded files shouldn't get replicated 
> (addressing buklload replication related issue raised in HBASE-22380)
> -
>
> Key: HBASE-23136
> URL: https://issues.apache.org/jira/browse/HBASE-23136
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 3.0.0, 2.2.2, 2.1.8
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Critical
>  Labels: bulkload, mob
> Fix For: 3.0.0
>
>
> Following the bulkload replication fixes started in HBASE-22380, this addresses 
> [~javaman_chen]'s observation regarding *PartitionedMobCompactor* and bulk 
> loads. As noted by [~javaman_chen], *PartitionedMobCompactor* uses the bulkload 
> feature to move the resulting hfile into the region hstore. This file, however, 
> shouldn't get replicated under any condition. This PR adds the required changes 
> and an extra test for this situation.





[jira] [Commented] (HBASE-23136) PartionedMobFileCompactor bulkloaded files shouldn't get replicated (addressing buklload replication related issue raised in HBASE-22380)

2019-10-18 Thread Wellington Chevreuil (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954643#comment-16954643
 ] 

Wellington Chevreuil commented on HBASE-23136:
--

Merged latest commit into master. Working on branch-2 backport.

> PartionedMobFileCompactor bulkloaded files shouldn't get replicated 
> (addressing buklload replication related issue raised in HBASE-22380)
> -
>
> Key: HBASE-23136
> URL: https://issues.apache.org/jira/browse/HBASE-23136
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 3.0.0, 2.2.2, 2.1.8
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Critical
>  Labels: bulkload, mob
> Fix For: 3.0.0
>
>
> Following the bulkload replication fixes started in HBASE-22380, this addresses 
> [~javaman_chen]'s observation regarding *PartitionedMobCompactor* and bulk 
> loads. As noted by [~javaman_chen], *PartitionedMobCompactor* uses the bulkload 
> feature to move the resulting hfile into the region hstore. This file, however, 
> shouldn't get replicated under any condition. This PR adds the required changes 
> and an extra test for this situation.





[GitHub] [hbase] wchevreuil merged pull request #712: HBASE-23136

2019-10-18 Thread GitBox
wchevreuil merged pull request #712: HBASE-23136
URL: https://github.com/apache/hbase/pull/712
 
 
   




[jira] [Updated] (HBASE-23175) Yarn unable to acquire delegation token for HBase Spark jobs

2019-10-18 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-23175:
---
Component/s: security

> Yarn unable to acquire delegation token for HBase Spark jobs
> 
>
> Key: HBASE-23175
> URL: https://issues.apache.org/jira/browse/HBASE-23175
> Project: HBase
>  Issue Type: Bug
>  Components: security, spark
>Affects Versions: 2.0.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: HBASE-23175.master.001.patch
>
>
> Spark relies on the TokenUtil.obtainToken(conf) API, which was removed in 
> HBase 2.0. This has been fixed in SPARK-26432 to use the new API, but that fix 
> is planned for Spark 3.0, hence we need the fix in HBase until they release it 
> and we upgrade.
> {code}
> 18/03/20 20:39:07 ERROR ApplicationMaster: User class threw exception: 
> org.apache.hadoop.hbase.HBaseIOException: 
> com.google.protobuf.ServiceException: Error calling method 
> hbase.pb.AuthenticationService.GetAuthenticationToken
> org.apache.hadoop.hbase.HBaseIOException: 
> com.google.protobuf.ServiceException: Error calling method 
> hbase.pb.AuthenticationService.GetAuthenticationToken
> at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:360)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:346)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:86)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil$1.run(TokenUtil.java:121)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil$1.run(TokenUtil.java:118)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:313)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:118)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil.addTokenForJob(TokenUtil.java:272)
> at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initCredentials(TableMapReduceUtil.java:533)
> at 
> org.apache.hadoop.hbase.spark.HBaseContext.<init>(HBaseContext.scala:73)
> at 
> org.apache.hadoop.hbase.spark.JavaHBaseContext.<init>(JavaHBaseContext.scala:46)
> at 
> org.apache.hadoop.hbase.spark.example.hbasecontext.JavaHBaseBulkDeleteExample.main(JavaHBaseBulkDeleteExample.java:64)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:706)
> Caused by: com.google.protobuf.ServiceException: Error calling method 
> hbase.pb.AuthenticationService.GetAuthenticationToken
> at 
> org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:71)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AuthenticationProtos$AuthenticationService$BlockingStub.getAuthenticationToken(AuthenticationProtos.java:4512)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:81)
> ... 17 more
> {code}
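To make the API gap concrete, here is a hedged, self-contained sketch of detecting at runtime which obtainToken signature a class exposes, one possible compatibility bridge. The stand-in classes below are invented and replace the real HBase types:

```java
import java.lang.reflect.Method;

// Hypothetical sketch: code compiled against the 1.x-era
// TokenUtil.obtainToken(Configuration) signature breaks on HBase 2.x, where
// only a connection-based variant remains. Probing via reflection shows
// which signature is actually present.
public class TokenApiProbe {
    // Stand-in mimicking the 2.x situation: no Configuration-based overload.
    static class TokenUtil {
        public static String obtainToken(Connection c) { return "token-for-" + c.name; }
    }
    static class Connection { final String name; Connection(String n) { name = n; } }

    // True if the class still offers a one-argument obtainToken(Configuration).
    static boolean hasOldApi(Class<?> tokenUtil) {
        for (Method m : tokenUtil.getMethods()) {
            if (m.getName().equals("obtainToken")
                && m.getParameterCount() == 1
                && m.getParameterTypes()[0].getSimpleName().equals("Configuration")) {
                return true;
            }
        }
        return false;
    }
}
```

Against the stand-in above, hasOldApi returns false, mirroring what a 1.x-compiled caller encounters on a 2.x classpath.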





[jira] [Updated] (HBASE-23175) Yarn unable to acquire delegation token for HBase Spark jobs

2019-10-18 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-23175:
---
Component/s: (was: hbase-connectors)
 spark

> Yarn unable to acquire delegation token for HBase Spark jobs
> 
>
> Key: HBASE-23175
> URL: https://issues.apache.org/jira/browse/HBASE-23175
> Project: HBase
>  Issue Type: Bug
>  Components: spark
>Affects Versions: 2.0.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: HBASE-23175.master.001.patch
>
>
> Spark relies on the TokenUtil.obtainToken(conf) API, which was removed in 
> HBase 2.0. This has been fixed in SPARK-26432 to use the new API, but that fix 
> is planned for Spark 3.0, hence we need the fix in HBase until they release it 
> and we upgrade.
> {code}
> 18/03/20 20:39:07 ERROR ApplicationMaster: User class threw exception: 
> org.apache.hadoop.hbase.HBaseIOException: 
> com.google.protobuf.ServiceException: Error calling method 
> hbase.pb.AuthenticationService.GetAuthenticationToken
> org.apache.hadoop.hbase.HBaseIOException: 
> com.google.protobuf.ServiceException: Error calling method 
> hbase.pb.AuthenticationService.GetAuthenticationToken
> at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:360)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:346)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:86)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil$1.run(TokenUtil.java:121)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil$1.run(TokenUtil.java:118)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:313)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:118)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil.addTokenForJob(TokenUtil.java:272)
> at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initCredentials(TableMapReduceUtil.java:533)
> at 
> org.apache.hadoop.hbase.spark.HBaseContext.<init>(HBaseContext.scala:73)
> at 
> org.apache.hadoop.hbase.spark.JavaHBaseContext.<init>(JavaHBaseContext.scala:46)
> at 
> org.apache.hadoop.hbase.spark.example.hbasecontext.JavaHBaseBulkDeleteExample.main(JavaHBaseBulkDeleteExample.java:64)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:706)
> Caused by: com.google.protobuf.ServiceException: Error calling method 
> hbase.pb.AuthenticationService.GetAuthenticationToken
> at 
> org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:71)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AuthenticationProtos$AuthenticationService$BlockingStub.getAuthenticationToken(AuthenticationProtos.java:4512)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:81)
> ... 17 more
> {code}





[jira] [Commented] (HBASE-23175) Yarn unable to acquire delegation token for HBase Spark jobs

2019-10-18 Thread Josh Elser (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954609#comment-16954609
 ] 

Josh Elser commented on HBASE-23175:


bq. Will raise an amendment ticket for SPARK-26432 separately as the new change 
is relying on the API which is deprecated recently.

The change looks straightforward enough and reasonable for us to make to 
improve our spark compatibility.

Thinking forward to release notes, do you have an idea of what versions of 
Spark 2 would be incompatible with HBase 2.x? Is it all Spark 2.x against all 
currently-released HBase 2.x?

> Yarn unable to acquire delegation token for HBase Spark jobs
> 
>
> Key: HBASE-23175
> URL: https://issues.apache.org/jira/browse/HBASE-23175
> Project: HBase
>  Issue Type: Bug
>  Components: hbase-connectors
>Affects Versions: 2.0.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: HBASE-23175.master.001.patch
>
>
> Spark relies on the TokenUtil.obtainToken(conf) API, which was removed in 
> HBase 2.0. This has been fixed in SPARK-26432 to use the new API, but that fix 
> is planned for Spark 3.0, hence we need the fix in HBase until they release it 
> and we upgrade.
> {code}
> 18/03/20 20:39:07 ERROR ApplicationMaster: User class threw exception: 
> org.apache.hadoop.hbase.HBaseIOException: 
> com.google.protobuf.ServiceException: Error calling method 
> hbase.pb.AuthenticationService.GetAuthenticationToken
> org.apache.hadoop.hbase.HBaseIOException: 
> com.google.protobuf.ServiceException: Error calling method 
> hbase.pb.AuthenticationService.GetAuthenticationToken
> at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:360)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:346)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:86)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil$1.run(TokenUtil.java:121)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil$1.run(TokenUtil.java:118)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:313)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:118)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil.addTokenForJob(TokenUtil.java:272)
> at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initCredentials(TableMapReduceUtil.java:533)
> at 
> org.apache.hadoop.hbase.spark.HBaseContext.<init>(HBaseContext.scala:73)
> at 
> org.apache.hadoop.hbase.spark.JavaHBaseContext.<init>(JavaHBaseContext.scala:46)
> at 
> org.apache.hadoop.hbase.spark.example.hbasecontext.JavaHBaseBulkDeleteExample.main(JavaHBaseBulkDeleteExample.java:64)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:706)
> Caused by: com.google.protobuf.ServiceException: Error calling method 
> hbase.pb.AuthenticationService.GetAuthenticationToken
> at 
> org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:71)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AuthenticationProtos$AuthenticationService$BlockingStub.getAuthenticationToken(AuthenticationProtos.java:4512)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:81)
> ... 17 more
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-23175) Yarn unable to acquire delegation token for HBase Spark jobs

2019-10-18 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-23175:
---
Status: Patch Available  (was: Open)

> Yarn unable to acquire delegation token for HBase Spark jobs
> 
>
> Key: HBASE-23175
> URL: https://issues.apache.org/jira/browse/HBASE-23175
> Project: HBase
>  Issue Type: Bug
>  Components: hbase-connectors
>Affects Versions: 2.0.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: HBASE-23175.master.001.patch
>
>
> Spark relies on the TokenUtil.obtainToken(conf) API, which was removed in 
> HBase 2.0. SPARK-26432 fixes Spark to use the new API, but that change is 
> planned for Spark 3.0, so we need a fix in HBase until they release it and 
> we upgrade.
> {code}
> 18/03/20 20:39:07 ERROR ApplicationMaster: User class threw exception: 
> org.apache.hadoop.hbase.HBaseIOException: 
> com.google.protobuf.ServiceException: Error calling method 
> hbase.pb.AuthenticationService.GetAuthenticationToken
> org.apache.hadoop.hbase.HBaseIOException: 
> com.google.protobuf.ServiceException: Error calling method 
> hbase.pb.AuthenticationService.GetAuthenticationToken
> at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:360)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:346)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:86)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil$1.run(TokenUtil.java:121)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil$1.run(TokenUtil.java:118)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:313)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:118)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil.addTokenForJob(TokenUtil.java:272)
> at 
> org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initCredentials(TableMapReduceUtil.java:533)
> at 
> org.apache.hadoop.hbase.spark.HBaseContext.<init>(HBaseContext.scala:73)
> at 
> org.apache.hadoop.hbase.spark.JavaHBaseContext.<init>(JavaHBaseContext.scala:46)
> at 
> org.apache.hadoop.hbase.spark.example.hbasecontext.JavaHBaseBulkDeleteExample.main(JavaHBaseBulkDeleteExample.java:64)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:706)
> Caused by: com.google.protobuf.ServiceException: Error calling method 
> hbase.pb.AuthenticationService.GetAuthenticationToken
> at 
> org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:71)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AuthenticationProtos$AuthenticationService$BlockingStub.getAuthenticationToken(AuthenticationProtos.java:4512)
> at 
> org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:81)
> ... 17 more
> {code}





[jira] [Commented] (HBASE-23136) PartionedMobFileCompactor bulkloaded files shouldn't get replicated (addressing bulkload replication related issue raised in HBASE-22380)

2019-10-18 Thread HBase QA (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954557#comment-16954557
 ] 

HBase QA commented on HBASE-23136:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} prototool {color} | {color:blue}  0m  
0s{color} | {color:blue} prototool was not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
34s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  4m  
6s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
40s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
43s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
16m 14s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.8.5 2.9.2 or 3.1.2. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green}  
2m  1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
53s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}174m  
8s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | 

[GitHub] [hbase] Apache-HBase commented on issue #712: HBASE-23136

2019-10-18 Thread GitBox
Apache-HBase commented on issue #712: HBASE-23136
URL: https://github.com/apache/hbase/pull/712#issuecomment-543719828
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   0m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :blue_heart: |  prototool  |   0m  0s |  prototool was not available.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 3 
new or modified test files.  |
   ||| _ master Compile Tests _ |
   | :blue_heart: |  mvndep  |   0m 36s |  Maven dependency ordering for branch 
 |
   | :green_heart: |  mvninstall  |   5m 13s |  master passed  |
   | :green_heart: |  compile  |   2m  1s |  master passed  |
   | :green_heart: |  checkstyle  |   2m 15s |  master passed  |
   | :green_heart: |  shadedjars  |   4m 34s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   1m 17s |  master passed  |
   | :blue_heart: |  spotbugs  |   4m  6s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   7m 40s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | :blue_heart: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  
|
   | :green_heart: |  mvninstall  |   4m 56s |  the patch passed  |
   | :green_heart: |  compile  |   2m  4s |  the patch passed  |
   | :green_heart: |  cc  |   2m  4s |  the patch passed  |
   | :green_heart: |  javac  |   2m  4s |  the patch passed  |
   | :green_heart: |  checkstyle  |   2m 26s |  the patch passed  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  shadedjars  |   4m 43s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |  16m 14s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | :green_heart: |  hbaseprotoc  |   2m  1s |  the patch passed  |
   | :green_heart: |  javadoc  |   1m 14s |  the patch passed  |
   | :green_heart: |  findbugs  |   8m 28s |  the patch passed  |
   ||| _ Other Tests _ |
   | :green_heart: |  unit  |   0m 42s |  hbase-protocol-shaded in the patch 
passed.  |
   | :green_heart: |  unit  |   1m 53s |  hbase-client in the patch passed.  |
   | :green_heart: |  unit  | 174m  8s |  hbase-server in the patch passed.  |
   | :green_heart: |  asflicense  |   1m 21s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 251m 55s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-712/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/712 |
   | JIRA Issue | HBASE-23136 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile cc hbaseprotoc prototool |
   | uname | Linux 7fff3e41f712 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-712/out/precommit/personality/provided.sh
 |
   | git revision | master / 946f1e9e25 |
   | Default Java | 1.8.0_181 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-712/3/testReport/
 |
   | Max. process+thread count | 4746 (vs. ulimit of 1) |
   | modules | C: hbase-protocol-shaded hbase-client hbase-server U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-712/3/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Comment Edited] (HBASE-22881) Fix non-daemon threads in hbase server implementation

2019-10-18 Thread Xiaolin Ha (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954551#comment-16954551
 ] 

Xiaolin Ha edited comment on HBASE-22881 at 10/18/19 12:27 PM:
---

Above is from a dump of our testing cluster.

And in a UT I worked on recently, I also found non-daemon threads in the 
master, but I could not find where the problem is yet.

I will dig more.


was (Author: xiaolin ha):
Above is from a dump of our test cluster.

And in a UT I worked on recently, I also found non-daemon threads in the 
master, but I could not find where the problem is yet.

I will dig more.

> Fix non-daemon threads in hbase server implementation
> -
>
> Key: HBASE-22881
> URL: https://issues.apache.org/jira/browse/HBASE-22881
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.1, 2.1.7
>
>
> "pool-8-thread-3" #7252 prio=5 os_prio=0 tid=0x7f91040044c0 nid=0xd71e 
> waiting on condition [0x7f8f4d209000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0005c0e49ed0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
>Locked ownable synchronizers:
> - None
> "pool-8-thread-2" #7251 prio=5 os_prio=0 tid=0x7f910c010be0 nid=0xd71d 
> waiting on condition [0x7f8f4daab000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0005c0e49ed0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
>Locked ownable synchronizers:
> - None
> "pool-8-thread-1" #7250 prio=5 os_prio=0 tid=0x7f9119d0 nid=0xd71c 
> waiting on condition [0x7f8f4da6a000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0005c0e49ed0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
>Locked ownable synchronizers:
> - None
> "pool-5-thread-3" #7248 prio=5 os_prio=0 tid=0x7f9238005ad0 nid=0xd71a 
> waiting on condition [0x7f8f4cb65000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0005c0ec51e0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> 

[jira] [Commented] (HBASE-22881) Fix non-daemon threads in hbase server implementation

2019-10-18 Thread Xiaolin Ha (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954551#comment-16954551
 ] 

Xiaolin Ha commented on HBASE-22881:


Above is from a dump of our test cluster.

And in a UT I worked on recently, I also found non-daemon threads in the 
master, but I could not find where the problem is yet.

I will dig more.
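A quick way to hunt for such leftover threads is to enumerate the live non-daemon threads in the JVM, the kind that keep a process alive after the master has aborted. A minimal JDK-only sketch; the class and method names here are hypothetical, not HBase code:

```java
import java.util.ArrayList;
import java.util.List;

public class NonDaemonAudit {
    // Collect the names of all live non-daemon threads. Any entry other
    // than the main thread is a candidate for keeping the JVM from exiting.
    public static List<String> nonDaemonThreadNames() {
        List<String> names = new ArrayList<>();
        for (Thread t : Thread.getAllStackTraces().keySet()) {
            if (t.isAlive() && !t.isDaemon()) {
                names.add(t.getName());
            }
        }
        return names;
    }

    public static void main(String[] args) {
        // The main thread itself is non-daemon, so the list is never empty here.
        System.out.println(nonDaemonThreadNames());
    }
}
```

Running this from a test (or a shutdown hook) makes stray `pool-N-thread-M` style workers easy to spot.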

> Fix non-daemon threads in hbase server implementation
> -
>
> Key: HBASE-22881
> URL: https://issues.apache.org/jira/browse/HBASE-22881
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.1, 2.1.7
>
>
> "pool-8-thread-3" #7252 prio=5 os_prio=0 tid=0x7f91040044c0 nid=0xd71e 
> waiting on condition [0x7f8f4d209000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0005c0e49ed0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
>Locked ownable synchronizers:
> - None
> "pool-8-thread-2" #7251 prio=5 os_prio=0 tid=0x7f910c010be0 nid=0xd71d 
> waiting on condition [0x7f8f4daab000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0005c0e49ed0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
>Locked ownable synchronizers:
> - None
> "pool-8-thread-1" #7250 prio=5 os_prio=0 tid=0x7f9119d0 nid=0xd71c 
> waiting on condition [0x7f8f4da6a000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0005c0e49ed0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
>Locked ownable synchronizers:
> - None
> "pool-5-thread-3" #7248 prio=5 os_prio=0 tid=0x7f9238005ad0 nid=0xd71a 
> waiting on condition [0x7f8f4cb65000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0005c0ec51e0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)




[jira] [Updated] (HBASE-23186) Close dfs output stream in fsck threads when master exit

2019-10-18 Thread Xiaolin Ha (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaolin Ha updated HBASE-23186:
---
Summary: Close dfs output stream in fsck threads when master exit  (was: 
Close dfs output stream in Fsck threads when master exit)

> Close dfs output stream in fsck threads when master exit
> 
>
> Key: HBASE-23186
> URL: https://issues.apache.org/jira/browse/HBASE-23186
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
>
> HBASE-21072 introduced writing the HBaseFsck lock file by default in hbase2.
> {code:java}
> if (this.conf.getBoolean("hbase.write.hbck1.lock.file", true)) {
>   HBaseFsck.checkAndMarkRunningHbck(this.conf,
>   HBaseFsck.createLockRetryCounterFactory(this.conf).create());
> }{code}
>  
> We should close the DFS output stream when the master aborts/stops.
>  
>  
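Closing the DFS output stream on master stop can be sketched as keeping a handle on the stream that holds the lock file open and closing it from the abort/stop path. A minimal sketch under that assumption; the class and method names are hypothetical, not the actual HBase patch:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class CloseOnStop {
    private OutputStream hbckLockStream;

    // Remember the stream that holds the lock file (e.g. hbase-hbck.lock) open.
    void markRunning(OutputStream stream) {
        this.hbckLockStream = stream;
    }

    // Called from the master's abort/stop path: close the stream so no
    // writer thread is left holding it on JVM shutdown. Safe to call twice.
    void stop() {
        if (hbckLockStream != null) {
            try {
                hbckLockStream.close();
            } catch (IOException e) {
                // best effort during shutdown
            }
            hbckLockStream = null;
        }
    }

    public static void main(String[] args) {
        CloseOnStop master = new CloseOnStop();
        master.markRunning(new ByteArrayOutputStream());
        master.stop(); // closes the stream; idempotent on repeat calls
    }
}
```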





[jira] [Comment Edited] (HBASE-23186) Close dfs output stream in Fsck threads when master exit

2019-10-18 Thread Xiaolin Ha (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954433#comment-16954433
 ] 

Xiaolin Ha edited comment on HBASE-23186 at 10/18/19 12:22 PM:
---

The ZK session expired, and the master aborted.
{quote}2019-10-16,23:49:41,611 INFO 
[main-SendThread(tj1-hadoop-staging-ct05.kscn:11000)] 
org.apache.zookeeper.ClientCnxn: Session establishment complete on server 
tj1-hadoop-staging-ct05.kscn/10.38.166.12:11000, sessionid = 0x46cfbd296b7e62b, 
negotiated timeout = 2
 2019-10-17,00:15:26,253 INFO 
[master/tj1-hadoop-staging-ct02:22500.splitLogManager..Chore.1] 
org.apache.hadoop.hbase.ScheduledChore: Chore: SplitLogManager Timeout Monitor 
missed its start time
 2019-10-17,00:15:37,357 INFO 
[master/tj1-hadoop-staging-ct02:22500.splitLogManager..Chore.1] 
org.apache.hadoop.hbase.ScheduledChore: Chore: SplitLogManager Timeout Monitor 
missed its start time
 2019-10-17,00:15:48,168 INFO 
[master/tj1-hadoop-staging-ct02:22500.splitLogManager..Chore.1] 
org.apache.hadoop.hbase.ScheduledChore: Chore: SplitLogManager Timeout Monitor 
missed its start time
 2019-10-17,00:15:50,285 INFO 
[master/tj1-hadoop-staging-ct02:22500.splitLogManager..Chore.1] 
org.apache.hadoop.hbase.ScheduledChore: Chore: SplitLogManager Timeout Monitor 
missed its start time
 2019-10-17,00:15:57,972 INFO 
[main-SendThread(tj1-hadoop-staging-ct05.kscn:11000)] 
org.apache.zookeeper.ClientCnxn: Client session timed out, have not heard from 
server in 24963ms for sessionid 0x46cfbd296b7e62b, closing socket connection 
and attempting reconnect
 2019-10-17,00:15:59,505 WARN [master/tj1-hadoop-staging-ct02:22500] 
org.apache.hadoop.hbase.util.Sleeper: We slept 24551ms instead of 3000ms, this 
is likely due to a long garbage collecting pause and it's usually bad, see 
[http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired]
 2019-10-17,00:16:01,733 INFO 
[master/tj1-hadoop-staging-ct02:22500:becomeActiveMaster-SendThread(tj1-hadoop-staging-ct02.kscn:11000)]
 org.apache.zookeeper.ClientCnxn: Client session timed out, have not heard from 
server in 25436ms for sessionid 0x26cfbd28d32ffc0, closing socket connection 
and attempting reconnect
 2019-10-17,00:16:21,558 ERROR [main-EventThread] 
org.apache.hadoop.hbase.master.HMaster: Master server abort: loaded 
coprocessors are: [org.apache.hadoop.hbase.security.access.AccessController, 
org.apache.hadoop.hbase.security.access.SnapshotScannerHDFSAclController, 
org.apache.hadoop.hbase.quotas.MasterQuotasObserver, 
org.apache.hadoop.hbase.master.ThemisMasterObserver]
 2019-10-17,00:16:21,595 INFO 
[ReadOnlyZKClient-tjwq02tst.zk.hadoop.srv:11000@0x62a10a8c-SendThread(tj1-hadoop-staging-ct04.kscn:11000)]
 org.apache.zookeeper.ClientCnxn: Session establishment complete on server 
tj1-hadoop-staging-ct04.kscn/10.38.162.36:11000, sessionid = 0x36cfbd28d810e26, 
negotiated timeout = 2
 2019-10-17,00:16:21,632 ERROR [main-EventThread] 
org.apache.hadoop.hbase.master.HMaster: * ABORTING master 
tj1-hadoop-staging-ct02.kscn,22500,1571009509049: 
master:22500-0x46cfbd296b7e62b, quorum=tjwq02tst.zk.hadoop.srv:11000, 
baseZNode=/hbase/tjwq02tst-staging master:22500-0x46cfbd296b7e62b received 
expired from ZooKeeper, aborting *
 org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode 
= Session expired
         at 
org.apache.hadoop.hbase.zookeeper.ZKWatcher.connectionEvent(ZKWatcher.java:563)
         at 
org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:493)
         at 
org.apache.hadoop.hbase.zookeeper.PendingWatcher.process(PendingWatcher.java:40)
         at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
         at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
{quote}
But the master process stayed there and didn't exit (some threads are not 
daemons, see HBASE-22881).
 When 'kill -9' was sent to it, two threads exited and threw exceptions as 
follows.
 We can see they are fsck threads from the "hbase-hbck.lock" in the error log:
{quote}{color:#de350b}2019-10-17,10:14:00,332 WARN [Thread-7099] 
org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception{color}
 {color:#de350b}java.io.FileNotFoundException: File does not exist: 
/hbase/tjwq02tst-staging/.tmp/hbase-hbck.lock (inode 34898756) Holder 
DFSClient_NONMAPREDUCE_405679236_1 does not have any open files.{color}
         at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2955)
         at 
org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.analyzeFileState(FSDirWriteFileOp.java:598)
         at 
org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.validateAddBlock(FSDirWriteFileOp.java:173)
         at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2834)
         at 

[jira] [Updated] (HBASE-23186) Close dfs output stream in Fsck threads when master exit

2019-10-18 Thread Xiaolin Ha (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaolin Ha updated HBASE-23186:
---
Summary: Close dfs output stream in Fsck threads when master exit  (was: 
Fsck threads should close )

> Close dfs output stream in Fsck threads when master exit
> 
>
> Key: HBASE-23186
> URL: https://issues.apache.org/jira/browse/HBASE-23186
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
>
> HBASE-21072 introduced writing the HBaseFsck lock file by default in hbase2.
> {code:java}
> if (this.conf.getBoolean("hbase.write.hbck1.lock.file", true)) {
>   HBaseFsck.checkAndMarkRunningHbck(this.conf,
>   HBaseFsck.createLockRetryCounterFactory(this.conf).create());
> }{code}
> But the fsck thread is not a daemon:
> {code:java}
> public static Pair 
> checkAndMarkRunningHbck(Configuration conf,
> RetryCounter retryCounter) throws IOException {
>   FileLockCallable callable = new FileLockCallable(conf, retryCounter);
>   ExecutorService executor = Executors.newFixedThreadPool(1);
> ...{code}
> This prevents the JVM from exiting.
> We should make it a daemon thread and close the DFS output stream when the 
> master aborts/stops.
>  
>  





[jira] [Updated] (HBASE-23186) Close dfs output stream in Fsck threads when master exit

2019-10-18 Thread Xiaolin Ha (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaolin Ha updated HBASE-23186:
---
Description: 
HBASE-21072 introduced writing the HBaseFsck lock file by default in hbase2.
{code:java}
if (this.conf.getBoolean("hbase.write.hbck1.lock.file", true)) {
  HBaseFsck.checkAndMarkRunningHbck(this.conf,
  HBaseFsck.createLockRetryCounterFactory(this.conf).create());
}{code}

We should close the DFS output stream when the master aborts/stops.

 

 

  was:
HBASE-21072 introduced writing the HBaseFsck lock file by default in hbase2.
{code:java}
if (this.conf.getBoolean("hbase.write.hbck1.lock.file", true)) {
  HBaseFsck.checkAndMarkRunningHbck(this.conf,
  HBaseFsck.createLockRetryCounterFactory(this.conf).create());
}{code}
But the fsck thread is not a daemon:
{code:java}
public static Pair 
checkAndMarkRunningHbck(Configuration conf,
RetryCounter retryCounter) throws IOException {
  FileLockCallable callable = new FileLockCallable(conf, retryCounter);
  ExecutorService executor = Executors.newFixedThreadPool(1);
...{code}
This prevents the JVM from exiting.

We should make it a daemon thread and close the DFS output stream when the 
master aborts/stops.

 

 


> Close dfs output stream in Fsck threads when master exit
> 
>
> Key: HBASE-23186
> URL: https://issues.apache.org/jira/browse/HBASE-23186
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
>
> HBASE-21072 introduced using HBaseFsck by default in hbase2.
> {code:java}
> if (this.conf.getBoolean("hbase.write.hbck1.lock.file", true)) {
>   HBaseFsck.checkAndMarkRunningHbck(this.conf,
>   HBaseFsck.createLockRetryCounterFactory(this.conf).create());
> }{code}
>  
> We should close the DFS output stream when the master aborts/stops.
>  
>  





[jira] [Updated] (HBASE-23186) Fsck threads should close

2019-10-18 Thread Xiaolin Ha (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaolin Ha updated HBASE-23186:
---
Summary: Fsck threads should close   (was: Fsck threads block master 
process exit)

> Fsck threads should close 
> --
>
> Key: HBASE-23186
> URL: https://issues.apache.org/jira/browse/HBASE-23186
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
>
> HBASE-21072 introduced using HBaseFsck by default in hbase2.
> {code:java}
> if (this.conf.getBoolean("hbase.write.hbck1.lock.file", true)) {
>   HBaseFsck.checkAndMarkRunningHbck(this.conf,
>   HBaseFsck.createLockRetryCounterFactory(this.conf).create());
> }{code}
> But the fsck thread is not a daemon thread:
> {code:java}
> public static Pair 
> checkAndMarkRunningHbck(Configuration conf,
> RetryCounter retryCounter) throws IOException {
>   FileLockCallable callable = new FileLockCallable(conf, retryCounter);
>   ExecutorService executor = Executors.newFixedThreadPool(1);
> ...{code}
> This prevents the JVM from exiting.
> We should make it a daemon thread and close the DFS output stream when the
> master aborts/stops.
>  
>  





[jira] [Commented] (HBASE-22881) Fix non-daemon threads in hbase server implementation

2019-10-18 Thread Xiaolin Ha (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954537#comment-16954537
 ] 

Xiaolin Ha commented on HBASE-22881:


"pool-8-thread-6" #86205 prio=5 os_prio=0 tid=0x7fc7e80ab6d0 nid=0x61c8 
waiting on condition [0x7fc6d22b1000]
"pool-8-thread-5" #86204 prio=5 os_prio=0 tid=0x7fc7ec2d7890 nid=0x61c7 
waiting on condition [0x7fc6d22f2000]
"pool-8-thread-4" #86202 prio=5 os_prio=0 tid=0x7fc7f0135620 nid=0x61c6 
waiting on condition [0x7fc6d2333000]
"pool-5-thread-6" #86201 prio=5 os_prio=0 tid=0x7fc7781ba860 nid=0x61c5 
waiting on condition [0x7fc6d4804000]
"pool-5-thread-5" #86200 prio=5 os_prio=0 tid=0x7fc7781b9fd0 nid=0x61c4 
waiting on condition [0x7fc6d35fd000]
"pool-5-thread-4" #86199 prio=5 os_prio=0 tid=0x7fc7781fbf80 nid=0x61c3 
waiting on condition [0x7fc6d51eb000]
"pool-8-thread-3" #51597 prio=5 os_prio=0 tid=0x7fc8480d6ba0 nid=0x23211 
waiting on condition [0x7fc6fc656000]
"pool-8-thread-1" #51598 prio=5 os_prio=0 tid=0x7fc84c0cf8e0 nid=0x23210 
waiting on condition [0x7fc6fbbed000]
"pool-8-thread-2" #51599 prio=5 os_prio=0 tid=0x7fc8401318c0 nid=0x2320f 
waiting on condition [0x7fc6fbcb]
"pool-5-thread-3" #51596 prio=5 os_prio=0 tid=0x7fc77805fd50 nid=0x2320e 
waiting on condition [0x7fc6fba67000]
"pool-5-thread-2" #51595 prio=5 os_prio=0 tid=0x7fc778075e90 nid=0x2320d 
waiting on condition [0x7fc6fb79c000]
"pool-5-thread-1" #51594 prio=5 os_prio=0 tid=0x7fc77809baa0 nid=0x2320c 
waiting on condition [0x7fc6fd245000]
"Scheduler-936931778" #597 prio=5 os_prio=0 tid=0x7fc8240180c0 nid=0x220e1 
waiting on condition [0x7fc75eaca000]

> Fix non-daemon threads in hbase server implementation
> -
>
> Key: HBASE-22881
> URL: https://issues.apache.org/jira/browse/HBASE-22881
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.1, 2.1.7
>
>
> "pool-8-thread-3" #7252 prio=5 os_prio=0 tid=0x7f91040044c0 nid=0xd71e 
> waiting on condition [0x7f8f4d209000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0005c0e49ed0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
>Locked ownable synchronizers:
> - None
> "pool-8-thread-2" #7251 prio=5 os_prio=0 tid=0x7f910c010be0 nid=0xd71d 
> waiting on condition [0x7f8f4daab000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0005c0e49ed0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
>Locked ownable synchronizers:
> - None
> "pool-8-thread-1" #7250 prio=5 os_prio=0 tid=0x7f9119d0 nid=0xd71c 
> waiting on condition [0x7f8f4da6a000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0005c0e49ed0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> 

[jira] [Reopened] (HBASE-22881) Fix non-daemon threads in hbase server implementation

2019-10-18 Thread Xiaolin Ha (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaolin Ha reopened HBASE-22881:


> Fix non-daemon threads in hbase server implementation
> -
>
> Key: HBASE-22881
> URL: https://issues.apache.org/jira/browse/HBASE-22881
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.1, 2.1.7
>
>
> "pool-8-thread-3" #7252 prio=5 os_prio=0 tid=0x7f91040044c0 nid=0xd71e 
> waiting on condition [0x7f8f4d209000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0005c0e49ed0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
>Locked ownable synchronizers:
> - None
> "pool-8-thread-2" #7251 prio=5 os_prio=0 tid=0x7f910c010be0 nid=0xd71d 
> waiting on condition [0x7f8f4daab000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0005c0e49ed0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
>Locked ownable synchronizers:
> - None
> "pool-8-thread-1" #7250 prio=5 os_prio=0 tid=0x7f9119d0 nid=0xd71c 
> waiting on condition [0x7f8f4da6a000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0005c0e49ed0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
>Locked ownable synchronizers:
> - None
> "pool-5-thread-3" #7248 prio=5 os_prio=0 tid=0x7f9238005ad0 nid=0xd71a 
> waiting on condition [0x7f8f4cb65000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0005c0ec51e0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)





[GitHub] [hbase] binlijin opened a new pull request #732: HBASE-23187 Update parent region state to SPLIT in meta

2019-10-18 Thread GitBox
binlijin opened a new pull request #732: HBASE-23187 Update parent region state 
to SPLIT in meta
URL: https://github.com/apache/hbase/pull/732
 
 
   When a region is split, the parent region is set to SPLIT in memory, but meta
is not updated, so in some circumstances the parent region can be brought back
online.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (HBASE-23187) Update parent region state to SPLIT in meta

2019-10-18 Thread Lijin Bin (Jira)
Lijin Bin created HBASE-23187:
-

 Summary: Update parent region state to SPLIT in meta
 Key: HBASE-23187
 URL: https://issues.apache.org/jira/browse/HBASE-23187
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 2.2.1
Reporter: Lijin Bin
Assignee: Lijin Bin


When a region is split, the parent region is set to SPLIT in memory, but meta is
not updated, so in some circumstances the parent region can be brought back
online.
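The failure mode above can be modeled in a few lines. This is a toy sketch (plain maps standing in for the master's in-memory region states and for the hbase:meta table), not HBase code; it only illustrates why updating the in-memory state without persisting it to meta lets a restart resurrect the parent:

```java
import java.util.HashMap;
import java.util.Map;

public class SplitStateSketch {
    // Toy model: region state lives both in the master's memory and in
    // the hbase:meta table; the bug is updating only the former.
    final Map<String, String> inMemory = new HashMap<>();
    final Map<String, String> meta = new HashMap<>();

    void splitParentBuggy(String parent) {
        inMemory.put(parent, "SPLIT"); // meta is never told
    }

    void splitParentFixed(String parent) {
        inMemory.put(parent, "SPLIT");
        meta.put(parent, "SPLIT");     // persisted state survives restart
    }

    // Simulate a master restart: in-memory state is lost and rebuilt
    // from meta, so an unrecorded SPLIT parent looks OPEN again.
    String stateAfterRestart(String parent) {
        return meta.getOrDefault(parent, "OPEN");
    }
}
```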





[jira] [Commented] (HBASE-19663) javadoc creation needs jsr305

2019-10-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-19663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954506#comment-16954506
 ] 

Hudson commented on HBASE-19663:


Results for branch branch-1
[build #1110 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/1110/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/1110//General_Nightly_Build_Report/]


(/) {color:green}+1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/1110//JDK7_Nightly_Build_Report/]


(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/1110//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> javadoc creation needs jsr305
> -
>
> Key: HBASE-19663
> URL: https://issues.apache.org/jira/browse/HBASE-19663
> Project: HBase
>  Issue Type: Bug
>  Components: documentation, website
>Reporter: Michael Stack
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 1.4.11, 1.5.1
>
> Attachments: HBASE-19663-branch-1.4.v0.patch, script.sh
>
>
> Cryptic failure trying to build beta-1 RC. Fails like this:
> {code}
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 03:54 min
> [INFO] Finished at: 2017-12-29T01:13:15-08:00
> [INFO] Final Memory: 381M/9165M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project 
> hbase: Error generating maven-javadoc-plugin:2.10.3:aggregate:
> [ERROR] Exit code: 1 - warning: unknown enum constant When.ALWAYS
> [ERROR] reason: class file for javax.annotation.meta.When not found
> [ERROR] warning: unknown enum constant When.UNKNOWN
> [ERROR] warning: unknown enum constant When.MAYBE
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: malformed: "#matchingRows(Cell, byte[]))"
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: reference not found: #matchingRows(Cell, byte[]))
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: reference not found: #matchingRows(Cell, byte[]))
> [ERROR] javadoc: warning - Class javax.annotation.Nonnull not found.
> [ERROR] javadoc: error - class file for 
> javax.annotation.meta.TypeQualifierNickname not found
> [ERROR]
> [ERROR] Command line was: /home/stack/bin/jdk1.8.0_151/jre/../bin/javadoc 
> -J-Xmx2G @options @packages
> [ERROR]
> [ERROR] Refer to the generated Javadoc files in 
> '/home/stack/hbase.git/target/site/apidocs' dir.
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {code}
> javax.annotation.meta.TypeQualifierNickname is out of jsr305 but we don't 
> include this anywhere according to mvn dependency.
> Happens building the User API both test and main.
> Excluding these lines gets us passing again:
> {code}
>   3511   
>   3512 
> org.apache.yetus.audience.tools.IncludePublicAnnotationsStandardDoclet
>   3513   
>   3514   
>   3515 org.apache.yetus
>   3516 audience-annotations
>   3517 ${audience-annotations.version}
>   3518   
> + 3519   true
> {code}
> Tried upgrading to a newer mvn site plugin (ours is three years old), but that
> raised a different set of problems.





[jira] [Commented] (HBASE-22514) Move rsgroup feature into core of HBase

2019-10-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954501#comment-16954501
 ] 

Hudson commented on HBASE-22514:


Results for branch HBASE-22514
[build #152 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/152/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/152//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/152//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/152//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/152//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> Move rsgroup feature into core of HBase
> ---
>
> Key: HBASE-22514
> URL: https://issues.apache.org/jira/browse/HBASE-22514
> Project: HBase
>  Issue Type: Umbrella
>  Components: Admin, Client, rsgroup
>Reporter: Yechao Chen
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-22514.master.001.patch, 
> image-2019-05-31-18-25-38-217.png
>
>
> The class RSGroupAdminClient is not public.
> We need to use the Java API RSGroupAdminClient to manage rsgroups,
> so RSGroupAdminClient should be public.
>  





[jira] [Commented] (HBASE-23169) Random region server aborts while clearing Old Wals

2019-10-18 Thread Wellington Chevreuil (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954492#comment-16954492
 ] 

Wellington Chevreuil commented on HBASE-23169:
--

Hi [~KarthickRam], I noticed you marked this as affecting 1.4.11, but I 
couldn't reproduce this issue in my previous tests:
{noformat}
Tried active-active replication, set a table for replication on both, verified 
that replication is working both ways, then ran ltt for a new, not targeted to 
replication table, then verified oldWALs are getting deleted and log position 
reported by DumpReplicationQueues is updated even when we have only edits not 
targeted for replication. {noformat}

You mentioned you had applied the HBASE-22784 patch on top of 1.4.10. Did it
apply cleanly, or were there conflicts? Have you also managed to reproduce
this on a 1.4.11 deployment?

> Random region server aborts while clearing Old Wals
> ---
>
> Key: HBASE-23169
> URL: https://issues.apache.org/jira/browse/HBASE-23169
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Replication, wal
>Affects Versions: 1.4.10, 1.4.11
>Reporter: Karthick
>Assignee: Wellington Chevreuil
>Priority: Blocker
>  Labels: patch
>
> After applying the patch given in 
> [HBASE-22784|https://jira.apache.org/jira/browse/HBASE-22784] random region 
> server aborts were noticed. This happens in ReplicationResourceShipper thread 
> while writing the replication wal position.
> {code:java}
> 2019-10-05 08:17:28,132 FATAL 
> [regionserver//172.20.20.20:16020.replicationSource.172.20.20.20%2C16020%2C1570193969775,2]
>  regionserver.HRegionServer: ABORTING region server 
> 172.20.20.20,16020,1570193969775: Failed to write replication wal position 
> (filename=172.20.20.20%2C16020%2C1570193969775.1570288637045, 
> position=127494739)2019-10-05 08:17:28,132 FATAL 
> [regionserver//172.20.20.20:16020.replicationSource.172.20.20.20%2C16020%2C1570193969775,2]
>  regionserver.HRegionServer: ABORTING region server 
> 172.20.20.20,16020,1570193969775: Failed to write replication wal position 
> (filename=172.20.20.20%2C16020%2C1570193969775.1570288637045, 
> position=127494739)org.apache.zookeeper.KeeperException$NoNodeException: 
> KeeperErrorCode = NoNode for 
> /hbase/replication/rs/172.20.20.20,16020,1570193969775/2/172.20.20.20%2C16020%2C1570193969775.1570288637045
>  at org.apache.zookeeper.KeeperException.create(KeeperException.java:111) at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at 
> org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:1327) at 
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.setData(RecoverableZooKeeper.java:422)
>  at org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:824) at 
> org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:874) at 
> org.apache.hadoop.hbase.zookeeper.ZKUtil.setData(ZKUtil.java:868) at 
> org.apache.hadoop.hbase.replication.ReplicationQueuesZKImpl.setLogPosition(ReplicationQueuesZKImpl.java:155)
>  at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.logPositionAndCleanOldLogs(ReplicationSourceManager.java:194)
>  at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceShipperThread.updateLogPosition(ReplicationSource.java:727)
>  at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceShipperThread.shipEdits(ReplicationSource.java:698)
>  at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceShipperThread.run(ReplicationSource.java:551)2019-10-05
>  08:17:28,133 FATAL 
> [regionserver//172.20.20.20:16020.replicationSource.172.20.20.20%2C16020%2C1570193969775,2]
>  regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: 
> [org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint
> {code}





[jira] [Updated] (HBASE-23186) Fsck threads block master process exit

2019-10-18 Thread Xiaolin Ha (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaolin Ha updated HBASE-23186:
---
Summary: Fsck threads block master process exit  (was: Fsck threads block 
master's exiting)

> Fsck threads block master process exit
> --
>
> Key: HBASE-23186
> URL: https://issues.apache.org/jira/browse/HBASE-23186
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
>
> HBASE-21072 introduced using HBaseFsck by default in hbase2.
> {code:java}
> if (this.conf.getBoolean("hbase.write.hbck1.lock.file", true)) {
>   HBaseFsck.checkAndMarkRunningHbck(this.conf,
>   HBaseFsck.createLockRetryCounterFactory(this.conf).create());
> }{code}
> But the fsck thread is not a daemon thread:
> {code:java}
> public static Pair 
> checkAndMarkRunningHbck(Configuration conf,
> RetryCounter retryCounter) throws IOException {
>   FileLockCallable callable = new FileLockCallable(conf, retryCounter);
>   ExecutorService executor = Executors.newFixedThreadPool(1);
> ...{code}
> This prevents the JVM from exiting.
> We should make it a daemon thread and close the DFS output stream when the
> master aborts/stops.
>  
>  





[jira] [Updated] (HBASE-23186) Fsck threads block master's exiting

2019-10-18 Thread Xiaolin Ha (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaolin Ha updated HBASE-23186:
---
Summary: Fsck threads block master's exiting  (was: Set Fsck thread be 
daemon and close its OutputStream when master abort)

> Fsck threads block master's exiting
> ---
>
> Key: HBASE-23186
> URL: https://issues.apache.org/jira/browse/HBASE-23186
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
>
> HBASE-21072 introduced using HBaseFsck by default in hbase2.
> {code:java}
> if (this.conf.getBoolean("hbase.write.hbck1.lock.file", true)) {
>   HBaseFsck.checkAndMarkRunningHbck(this.conf,
>   HBaseFsck.createLockRetryCounterFactory(this.conf).create());
> }{code}
> But the fsck thread is not a daemon thread:
> {code:java}
> public static Pair 
> checkAndMarkRunningHbck(Configuration conf,
> RetryCounter retryCounter) throws IOException {
>   FileLockCallable callable = new FileLockCallable(conf, retryCounter);
>   ExecutorService executor = Executors.newFixedThreadPool(1);
> ...{code}
> This prevents the JVM from exiting.
> We should make it a daemon thread and close the DFS output stream when the
> master aborts/stops.
>  
>  





[jira] [Commented] (HBASE-23186) Set Fsck thread be daemon and close its OutputStream when master abort

2019-10-18 Thread Xiaolin Ha (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954433#comment-16954433
 ] 

Xiaolin Ha commented on HBASE-23186:


The ZK session expired, and the master aborted.
{quote}2019-10-16,23:49:41,611 INFO 
[main-SendThread(tj1-hadoop-staging-ct05.kscn:11000)] 
org.apache.zookeeper.ClientCnxn: Session establishment complete on server 
tj1-hadoop-staging-ct05.kscn/10.38.166.12:11000, sessionid = 0x46cfbd296b7e62b, 
negotiated timeout = 2
2019-10-17,00:15:26,253 INFO 
[master/tj1-hadoop-staging-ct02:22500.splitLogManager..Chore.1] 
org.apache.hadoop.hbase.ScheduledChore: Chore: SplitLogManager Timeout Monitor 
missed its start time
2019-10-17,00:15:37,357 INFO 
[master/tj1-hadoop-staging-ct02:22500.splitLogManager..Chore.1] 
org.apache.hadoop.hbase.ScheduledChore: Chore: SplitLogManager Timeout Monitor 
missed its start time
2019-10-17,00:15:48,168 INFO 
[master/tj1-hadoop-staging-ct02:22500.splitLogManager..Chore.1] 
org.apache.hadoop.hbase.ScheduledChore: Chore: SplitLogManager Timeout Monitor 
missed its start time
2019-10-17,00:15:50,285 INFO 
[master/tj1-hadoop-staging-ct02:22500.splitLogManager..Chore.1] 
org.apache.hadoop.hbase.ScheduledChore: Chore: SplitLogManager Timeout Monitor 
missed its start time
2019-10-17,00:15:57,972 INFO 
[main-SendThread(tj1-hadoop-staging-ct05.kscn:11000)] 
org.apache.zookeeper.ClientCnxn: Client session timed out, have not heard from 
server in 24963ms for sessionid 0x46cfbd296b7e62b, closing socket connection 
and attempting reconnect
2019-10-17,00:15:59,505 WARN [master/tj1-hadoop-staging-ct02:22500] 
org.apache.hadoop.hbase.util.Sleeper: We slept 24551ms instead of 3000ms, this 
is likely due to a long garbage collecting pause and it's usually bad, see 
http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
2019-10-17,00:16:01,733 INFO 
[master/tj1-hadoop-staging-ct02:22500:becomeActiveMaster-SendThread(tj1-hadoop-staging-ct02.kscn:11000)]
 org.apache.zookeeper.ClientCnxn: Client session timed out, have not heard from 
server in 25436ms for sessionid 0x26cfbd28d32ffc0, closing socket connection 
and attempting reconnect
2019-10-17,00:16:21,558 ERROR [main-EventThread] 
org.apache.hadoop.hbase.master.HMaster: Master server abort: loaded 
coprocessors are: [org.apache.hadoop.hbase.security.access.AccessController, 
org.apache.hadoop.hbase.security.access.SnapshotScannerHDFSAclController, 
org.apache.hadoop.hbase.quotas.MasterQuotasObserver, 
org.apache.hadoop.hbase.master.ThemisMasterObserver]
2019-10-17,00:16:21,595 INFO 
[ReadOnlyZKClient-tjwq02tst.zk.hadoop.srv:11000@0x62a10a8c-SendThread(tj1-hadoop-staging-ct04.kscn:11000)]
 org.apache.zookeeper.ClientCnxn: Session establishment complete on server 
tj1-hadoop-staging-ct04.kscn/10.38.162.36:11000, sessionid = 0x36cfbd28d810e26, 
negotiated timeout = 2
2019-10-17,00:16:21,632 ERROR [main-EventThread] 
org.apache.hadoop.hbase.master.HMaster: * ABORTING master 
tj1-hadoop-staging-ct02.kscn,22500,1571009509049: 
master:22500-0x46cfbd296b7e62b, quorum=tjwq02tst.zk.hadoop.srv:11000, 
baseZNode=/hbase/tjwq02tst-staging master:22500-0x46cfbd296b7e62b received 
expired from ZooKeeper, aborting *
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = 
Session expired
        at 
org.apache.hadoop.hbase.zookeeper.ZKWatcher.connectionEvent(ZKWatcher.java:563)
        at 
org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:493)
        at 
org.apache.hadoop.hbase.zookeeper.PendingWatcher.process(PendingWatcher.java:40)
        at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
        at 
org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498){quote}
But the master process stayed there and didn't exit.
When 'kill -9' was sent to it, two threads exited and threw exceptions as follows.
We can see they are fsck threads from the "hbase-hbck.lock" in the error log.
{quote}{color:#de350b}2019-10-17,10:14:00,332 WARN [Thread-7099] 
org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception{color}
{color:#de350b}java.io.FileNotFoundException: File does not exist: 
/hbase/tjwq02tst-staging/.tmp/hbase-hbck.lock (inode 34898756) Holder 
DFSClient_NONMAPREDUCE_405679236_1 does not have any open files.{color}
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2955)
        at 
org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.analyzeFileState(FSDirWriteFileOp.java:598)
        at 
org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.validateAddBlock(FSDirWriteFileOp.java:173)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2834)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:979)
        at 

[jira] [Commented] (HBASE-23055) Alter hbase:meta

2019-10-18 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954432#comment-16954432
 ] 

Hudson commented on HBASE-23055:


Results for branch HBASE-23055
[build #18 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-23055/18/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-23055/18//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-23055/18//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-23055/18//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Alter hbase:meta
> 
>
> Key: HBASE-23055
> URL: https://issues.apache.org/jira/browse/HBASE-23055
> Project: HBase
>  Issue Type: Task
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0
>
>
> hbase:meta is currently hardcoded. Its schema cannot be changed.
> This issue is about allowing edits to the hbase:meta schema. It will let us
> set encodings such as block-with-indexes, which will help quell CPU usage on
> the host carrying hbase:meta. A dynamic hbase:meta is the first step on the
> road to being able to split meta.





[GitHub] [hbase] Apache-HBase commented on issue #730: HBASE-23184 The HeapAllocation in WebUI is not accurate

2019-10-18 Thread GitBox
Apache-HBase commented on issue #730: HBASE-23184 The HeapAllocation in WebUI 
is not accurate
URL: https://github.com/apache/hbase/pull/730#issuecomment-543629628
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   1m 29s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :yellow_heart: |  test4tests  |   0m  0s |  The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch.  |
   ||| _ master Compile Tests _ |
   | :green_heart: |  mvninstall  |   6m 40s |  master passed  |
   | :green_heart: |  compile  |   1m 13s |  master passed  |
   | :green_heart: |  checkstyle  |   2m  3s |  master passed  |
   | :green_heart: |  shadedjars  |   7m 45s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   1m 10s |  master passed  |
   | :blue_heart: |  spotbugs  |   9m 15s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   9m  9s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | :green_heart: |  mvninstall  |   8m 45s |  the patch passed  |
   | :green_heart: |  compile  |   2m  5s |  the patch passed  |
   | :green_heart: |  javac  |   2m  5s |  the patch passed  |
   | :green_heart: |  checkstyle  |   2m 42s |  the patch passed  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  shadedjars  |   8m  6s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |  27m 35s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | :green_heart: |  javadoc  |   0m 36s |  the patch passed  |
   | :green_heart: |  findbugs  |   4m 36s |  the patch passed  |
   ||| _ Other Tests _ |
   | :broken_heart: |  unit  | 271m 42s |  hbase-server in the patch failed.  |
   | :green_heart: |  asflicense  |   0m 31s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 364m 41s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hbase.regionserver.TestTags |
   |   | hadoop.hbase.replication.regionserver.TestWALEntryStream |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-730/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/730 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 8477055db98e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-730/out/precommit/personality/provided.sh
 |
   | git revision | master / 946f1e9e25 |
   | Default Java | 1.8.0_181 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-730/1/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-730/1/testReport/
 |
   | Max. process+thread count | 4272 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-730/1/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[jira] [Commented] (HBASE-23063) Add an option to enable multiget in parallel

2019-10-18 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954402#comment-16954402
 ] 

Xiaoqiao He commented on HBASE-23063:
-

Hi [~javaman_chen], [~stack], [~ram_krish], thanks for your work. Any progress 
here? I believe this improvement would be very useful in some multiget 
scenarios. Thanks again.

> Add an option to enable multiget in parallel
> 
>
> Key: HBASE-23063
> URL: https://issues.apache.org/jira/browse/HBASE-23063
> Project: HBase
>  Issue Type: Improvement
>Reporter: chenxu
>Assignee: chenxu
>Priority: Major
>
> Currently, a multiget operation is processed serially on the server side: 
> RSRpcServices#multi handles the Actions one by one. We can add an option 
> to handle them in parallel, just like what parallel seek does. In some 
> scenarios, this can improve multiget performance a lot.
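The serial-vs-parallel idea in the description above can be sketched outside HBase. Here `parallelGet` and its `fetch` function are hypothetical stand-ins for RSRpcServices handling each Action, not actual HBase code: the per-key work fans out to a thread pool, and results are collected back in request order.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Function;

public class ParallelMultiGetSketch {

    // Run one fetch per key in parallel; results come back in request order
    // because the futures are consumed in submission order.
    public static <K, V> List<V> parallelGet(List<K> keys, Function<K, V> fetch,
                                             int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<V>> futures = new ArrayList<>();
            for (K key : keys) {
                futures.add(pool.submit(() -> fetch.apply(key)));
            }
            List<V> results = new ArrayList<>(futures.size());
            for (Future<V> f : futures) {
                results.add(f.get());
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // Each "get" here is just a string-length lookup.
        System.out.println(parallelGet(List.of("a", "bb", "ccc"), String::length, 2));
        // prints [1, 2, 3]
    }
}
```

Whether this wins depends on per-get cost versus thread-handoff overhead, which is presumably why the issue proposes it as an option rather than the default.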





[jira] [Created] (HBASE-23186) Set Fsck thread be daemon and close its OutputStream when master abort

2019-10-18 Thread Xiaolin Ha (Jira)
Xiaolin Ha created HBASE-23186:
--

 Summary: Set Fsck thread be daemon and close its OutputStream when 
master abort
 Key: HBASE-23186
 URL: https://issues.apache.org/jira/browse/HBASE-23186
 Project: HBase
  Issue Type: Bug
  Components: master
Reporter: Xiaolin Ha
Assignee: Xiaolin Ha


HBASE-21072 made the master write the HBaseFsck (HBCK1) lock file by default in hbase2.
{code:java}
if (this.conf.getBoolean("hbase.write.hbck1.lock.file", true)) {
  HBaseFsck.checkAndMarkRunningHbck(this.conf,
  HBaseFsck.createLockRetryCounterFactory(this.conf).create());
}{code}
But the fsck thread is not a daemon thread,
{code:java}
public static Pair<Path, FSDataOutputStream> checkAndMarkRunningHbck(Configuration conf,
    RetryCounter retryCounter) throws IOException {
  FileLockCallable callable = new FileLockCallable(conf, retryCounter);
  ExecutorService executor = Executors.newFixedThreadPool(1);
...{code}
This prevents the JVM from exiting.

We should make it a daemon thread and close the DFS output stream when the 
master aborts/stops.
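A minimal sketch of the daemon-thread part of the fix (the helper name and setup are illustrative, not HBase's actual code): an executor built from a daemon ThreadFactory cannot keep the JVM alive on its own, unlike the default non-daemon worker created by Executors.newFixedThreadPool(1).

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;

public class DaemonExecutorSketch {

    // Build a single-thread executor whose worker is a daemon thread,
    // so an idle pool does not block JVM shutdown.
    public static ExecutorService newDaemonSingleThreadExecutor(String name) {
        ThreadFactory factory = runnable -> {
            Thread t = new Thread(runnable, name);
            t.setDaemon(true);
            return t;
        };
        return Executors.newFixedThreadPool(1, factory);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService executor = newDaemonSingleThreadExecutor("hbck-lock");
        boolean daemon = executor.submit(() -> Thread.currentThread().isDaemon()).get();
        System.out.println("daemon=" + daemon);  // prints daemon=true
        executor.shutdown();
    }
}
```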

 

 





[GitHub] [hbase] wchevreuil commented on issue #712: HBASE-23136

2019-10-18 Thread GitBox
wchevreuil commented on issue #712: HBASE-23136
URL: https://github.com/apache/hbase/pull/712#issuecomment-543585550
 
 
   Thanks @joshelser, the test failure seems unrelated. I've pushed another 
commit addressing the reported checkstyle issues. I will wait for the 
pre-commit results; if all goes well, I will go ahead and merge this PR.




[jira] [Commented] (HBASE-22749) Distributed MOB compactions

2019-10-18 Thread HBase QA (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954336#comment-16954336
 ] 

HBase QA commented on HBASE-22749:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
39s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  4m  
2s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
59s{color} | {color:green} master passed {color} |
| {color:orange}-0{color} | {color:orange} patch {color} | {color:orange}  4m  
9s{color} | {color:orange} Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
25s{color} | {color:red} hbase-server: The patch generated 17 new + 308 
unchanged - 47 fixed = 325 total (was 355) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 14 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
37s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
15m 40s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.8.5 2.9.2 or 3.1.2. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m 
21s{color} | {color:red} hbase-server generated 4 new + 0 unchanged - 0 fixed = 
4 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}334m  5s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}393m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hbase-server |
|  |  Possible null pointer dereference of mobRefData in 
org.apache.hadoop.hbase.master.MobFileCleanerChore.cleanupObsoleteMobFiles(Configuration,
 TableName)  Dereferenced at MobFileCleanerChore.java:mobRefData in 
org.apache.hadoop.hbase.master.MobFileCleanerChore.cleanupObsoleteMobFiles(Configuration,
 TableName)  Dereferenced at MobFileCleanerChore.java:[line 176] |
|  |  org.apache.hadoop.hbase.mob.FileSelection 
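The "possible null pointer dereference of mobRefData" finding above is the classic missing null check on a lookup result. A hedged sketch of the guard such a warning asks for; `decodeMobRefs` and its semantics are illustrative, not the actual MobFileCleanerChore code:

```java
import java.nio.charset.StandardCharsets;

public class NullGuardSketch {

    // A lookup result such as a table attribute can legitimately be null;
    // guard before dereferencing instead of letting an NPE escape.
    public static String decodeMobRefs(byte[] mobRefData) {
        if (mobRefData == null) {
            return "";  // treat a missing attribute as "no references"
        }
        return new String(mobRefData, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println("[" + decodeMobRefs(null) + "]");  // prints []
        System.out.println("[" + decodeMobRefs("a,b".getBytes(StandardCharsets.UTF_8)) + "]");
        // prints [a,b]
    }
}
```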

[GitHub] [hbase] Apache-HBase commented on issue #623: HBASE-22749: Distributed MOB compactions

2019-10-18 Thread GitBox
Apache-HBase commented on issue #623: HBASE-22749: Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/623#issuecomment-543563037
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   1m 21s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 10 
new or modified test files.  |
   ||| _ master Compile Tests _ |
   | :green_heart: |  mvninstall  |   6m 30s |  master passed  |
   | :green_heart: |  compile  |   0m 56s |  master passed  |
   | :green_heart: |  checkstyle  |   1m 28s |  master passed  |
   | :green_heart: |  shadedjars  |   4m 39s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   0m 39s |  master passed  |
   | :blue_heart: |  spotbugs  |   4m  2s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   3m 59s |  master passed  |
   | :yellow_heart: |  patch  |   4m  9s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | :green_heart: |  mvninstall  |   5m  2s |  the patch passed  |
   | :green_heart: |  compile  |   0m 58s |  the patch passed  |
   | :green_heart: |  javac  |   0m 58s |  the patch passed  |
   | :broken_heart: |  checkstyle  |   1m 25s |  hbase-server: The patch 
generated 17 new + 308 unchanged - 47 fixed = 325 total (was 355)  |
   | :broken_heart: |  whitespace  |   0m  0s |  The patch has 14 line(s) that 
end in whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | :green_heart: |  shadedjars  |   4m 37s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |  15m 40s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | :green_heart: |  javadoc  |   0m 34s |  the patch passed  |
   | :broken_heart: |  findbugs  |   4m 21s |  hbase-server generated 4 new + 0 
unchanged - 0 fixed = 4 total (was 0)  |
   ||| _ Other Tests _ |
   | :broken_heart: |  unit  | 334m  5s |  hbase-server in the patch failed.  |
   | :green_heart: |  asflicense  |   0m 35s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 393m 29s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hbase-server |
   |  |  Possible null pointer dereference of mobRefData in 
org.apache.hadoop.hbase.master.MobFileCleanerChore.cleanupObsoleteMobFiles(Configuration,
 TableName)  Dereferenced at MobFileCleanerChore.java:mobRefData in 
org.apache.hadoop.hbase.master.MobFileCleanerChore.cleanupObsoleteMobFiles(Configuration,
 TableName)  Dereferenced at MobFileCleanerChore.java:[line 176] |
   |  |  org.apache.hadoop.hbase.mob.FileSelection defines 
compareTo(FileSelection) and uses Object.equals()  At 
DefaultMobStoreCompactor.java:Object.equals()  At 
DefaultMobStoreCompactor.java:[lines 615-620] |
   |  |  org.apache.hadoop.hbase.mob.Generation defines compareTo(Generation) 
and uses Object.equals()  At DefaultMobStoreCompactor.java:Object.equals()  At 
DefaultMobStoreCompactor.java:[lines 793-798] |
   |  |  Unused field:DefaultMobStoreCompactor.java |
   | Failed junit tests | 
hadoop.hbase.client.TestSnapshotTemporaryDirectoryWithRegionReplicas |
   |   | hadoop.hbase.client.TestSnapshotTemporaryDirectory |
   |   | hadoop.hbase.client.TestSnapshotDFSTemporaryDirectory |
   |   | hadoop.hbase.util.TestFromClientSide3WoUnsafe |
   |   | hadoop.hbase.client.TestAsyncRegionAdminApi |
   |   | hadoop.hbase.replication.TestReplicationSmallTestsSync |
   |   | hadoop.hbase.client.TestAsyncTableAdminApi |
   |   | hadoop.hbase.client.TestFromClientSide |
   |   | hadoop.hbase.replication.TestReplicationSmallTests |
   |   | hadoop.hbase.client.TestCloneSnapshotFromClientNormal |
   |   | hadoop.hbase.client.TestAsyncSnapshotAdminApi |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-623/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/623 |
   | JIRA Issue | HBASE-22749 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 36570ebe90ee 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
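The FindBugs complaints above about FileSelection and Generation defining compareTo while inheriting Object.equals come down to the Comparable contract's recommendation that natural ordering be consistent with equals. A small illustration of the fixed pattern; the class below is a made-up stand-in, not HBase code:

```java
import java.util.Objects;

public class ComparableSketch implements Comparable<ComparableSketch> {
    private final int id;

    public ComparableSketch(int id) {
        this.id = id;
    }

    @Override
    public int compareTo(ComparableSketch other) {
        return Integer.compare(this.id, other.id);
    }

    // Without these overrides, compareTo returning 0 would not imply
    // equals() being true, which is the inconsistency FindBugs flags.
    @Override
    public boolean equals(Object o) {
        return o instanceof ComparableSketch && this.id == ((ComparableSketch) o).id;
    }

    @Override
    public int hashCode() {
        return Objects.hash(id);
    }

    public static void main(String[] args) {
        ComparableSketch a = new ComparableSketch(7);
        ComparableSketch b = new ComparableSketch(7);
        System.out.println(a.compareTo(b) == 0 && a.equals(b));  // prints true
    }
}
```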

[jira] [Commented] (HBASE-22991) Release 1.4.11

2019-10-18 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954322#comment-16954322
 ] 

Sean Busbey commented on HBASE-22991:
-

RC0 is up: https://s.apache.org/hbase-1.4.11-rc0-vote

> Release 1.4.11
> --
>
> Key: HBASE-22991
> URL: https://issues.apache.org/jira/browse/HBASE-22991
> Project: HBase
>  Issue Type: Task
>  Components: community
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 1.4.11
>
> Attachments: Flaky_20Test_20Report.zip
>
>



