[jira] [Updated] (HBASE-16561) Add metrics about read/write/scan queue length and active read/write/scan handler count
[ https://issues.apache.org/jira/browse/HBASE-16561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-16561: -- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Pushed to branch-1. Thanks [~zghaobac] for the contribution. Thanks [~mbertozzi] for the review. > Add metrics about read/write/scan queue length and active read/write/scan > handler count > --- > > Key: HBASE-16561 > URL: https://issues.apache.org/jira/browse/HBASE-16561 > Project: HBase > Issue Type: Improvement > Components: IPC/RPC, metrics >Affects Versions: 2.0.0, 1.4.0 >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang >Priority: Minor > Fix For: 2.0.0, 1.4.0 > > Attachments: HBASE-16561-branch-1.patch, HBASE-16561-v1.patch, > HBASE-16561.patch > > > Currently there are only metrics for the total queue length and the active rpc > handler count. But in the RWQueueRpcExecutor, there are separate queues and handlers > for read/write/scan requests. I think it is necessary to add more metrics > for RWQueueRpcExecutor. When using it in a production cluster, we can adjust the > config of queues and handlers according to the metrics. > Review url: https://reviews.apache.org/r/54072/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16561) Add metrics about read/write/scan queue length and active read/write/scan handler count
[ https://issues.apache.org/jira/browse/HBASE-16561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704619#comment-15704619 ] Guanghao Zhang commented on HBASE-16561: Thanks all for reviewing. > Add metrics about read/write/scan queue length and active read/write/scan > handler count > --- > > Key: HBASE-16561 > URL: https://issues.apache.org/jira/browse/HBASE-16561 > Project: HBase > Issue Type: Improvement > Components: IPC/RPC, metrics >Affects Versions: 2.0.0, 1.4.0 >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang >Priority: Minor > Fix For: 2.0.0, 1.4.0 > > Attachments: HBASE-16561-branch-1.patch, HBASE-16561-v1.patch, > HBASE-16561.patch > > > Currently there are only metrics for the total queue length and the active rpc > handler count. But in the RWQueueRpcExecutor, there are separate queues and handlers > for read/write/scan requests. I think it is necessary to add more metrics > for RWQueueRpcExecutor. When using it in a production cluster, we can adjust the > config of queues and handlers according to the metrics. > Review url: https://reviews.apache.org/r/54072/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
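The per-type counters HBASE-16561 describes can be sketched roughly as follows. This is a minimal illustration with assumed names, not the actual RWQueueRpcExecutor or metrics-source code from the patch:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of per-type RPC metrics; class and method names are
// illustrative, not HBase's. One such trio would exist for each of
// read, write, and scan.
public class RwQueueMetrics {
    private final AtomicInteger readQueueLength = new AtomicInteger();
    private final AtomicInteger activeReadHandlers = new AtomicInteger();

    // Called when a read request is put on the read queue.
    public void readEnqueued() { readQueueLength.incrementAndGet(); }

    // Called when a read handler picks a request off the queue.
    public void readDispatched() {
        readQueueLength.decrementAndGet();
        activeReadHandlers.incrementAndGet();
    }

    // Called when the handler finishes serving the request.
    public void readFinished() { activeReadHandlers.decrementAndGet(); }

    public int getReadQueueLength() { return readQueueLength.get(); }

    public int getActiveReadHandlerCount() { return activeReadHandlers.get(); }
}
```

Exposing these per-type values (rather than one aggregate) is what lets an operator see, say, a scan-queue backlog while the write queue stays empty, and retune `hbase.ipc.server.callqueue.*` accordingly.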
[jira] [Updated] (HBASE-16302) age of last shipped op and age of last applied op should be histograms
[ https://issues.apache.org/jira/browse/HBASE-16302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashish Singhi updated HBASE-16302: -- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: (was: 1.3.1) Status: Resolved (was: Patch Available) I have committed this to master and branch-1. Thanks for the patch, Ashu. Thanks for the review, Ted. > age of last shipped op and age of last applied op should be histograms > -- > > Key: HBASE-16302 > URL: https://issues.apache.org/jira/browse/HBASE-16302 > Project: HBase > Issue Type: Improvement > Components: Replication >Reporter: Ashu Pachauri >Assignee: Ashu Pachauri > Fix For: 2.0.0, 1.4.0 > > Attachments: HBASE-16302.patch.v0.patch > > > Replication exports the metric ageOfLastShippedOp as an indication of how much > replication is lagging. But, with multiwal enabled, it's not representative > because replication could be lagging for a long time for one wal group > (something wrong with a particular region) while being fine for others. The > ageOfLastShippedOp becomes a useless metric for alerting in such a case. > Also, since there is no mapping between individual replication sources and > replication sinks, the age of last applied op can be a highly spiky metric if > only certain replication sources are lagging. > We should use histograms for these metrics and use the maximum value of the > histogram to report replication lag when building stats. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
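The core idea in HBASE-16302 — replacing a single last-value gauge with a per-group record whose maximum reports the worst-lagging wal group — can be sketched like this (illustrative code with assumed names, not the patch itself):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: record ageOfLastShippedOp per wal group, report the max as lag.
// Names are illustrative; the real patch uses the HBase metrics histograms.
public class ReplicationLag {
    private final Map<String, Long> ageByWalGroup = new HashMap<>();

    // Record the age (ms) of the last shipped op for one wal group.
    public void recordAge(String walGroup, long ageMillis) {
        ageByWalGroup.put(walGroup, ageMillis);
    }

    // A single gauge overwritten by every source hides one lagging group
    // behind the others; taking the maximum across groups surfaces it.
    public long maxAge() {
        long max = 0L;
        for (long age : ageByWalGroup.values()) {
            max = Math.max(max, age);
        }
        return max;
    }
}
```

With multiwal, a healthy group reporting a small age right after a stuck group reported a huge one would reset a plain gauge; the max (or a histogram's upper bound) keeps the alertable signal.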
[jira] [Comment Edited] (HBASE-17151) New API to create HFile.Reader without instantiating block cache
[ https://issues.apache.org/jira/browse/HBASE-17151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15706684#comment-15706684 ] Vladimir Rodionov edited comment on HBASE-17151 at 11/29/16 10:06 PM: -- Patch v1. [~anoop.hbase] can you take a look? cc: [~enis] was (Author: vrodionov): Patch v1. [~anoop.hbase] can you take a look? > New API to create HFile.Reader without instantiating block cache > - > > Key: HBASE-17151 > URL: https://issues.apache.org/jira/browse/HBASE-17151 > Project: HBase > Issue Type: New Feature >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Attachments: HBASE-17151-v1.patch > > > Currently, to create HFile.Reader instance, the CacheConfig instance is > required (which instantiates block cache). We need API for reading HFile w/o > block cache being involved. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17151) New API to create HFile.Reader without instantiating block cache
[ https://issues.apache.org/jira/browse/HBASE-17151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Rodionov updated HBASE-17151: -- Status: Patch Available (was: Open) > New API to create HFile.Reader without instantiating block cache > - > > Key: HBASE-17151 > URL: https://issues.apache.org/jira/browse/HBASE-17151 > Project: HBase > Issue Type: New Feature >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Attachments: HBASE-17151-v1.patch > > > Currently, to create HFile.Reader instance, the CacheConfig instance is > required (which instantiates block cache). We need API for reading HFile w/o > block cache being involved. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
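The shape of the API HBASE-17151 asks for might look like the following sketch. All names and signatures here are assumed for illustration only; the committed patch defines the real API:

```java
// Illustrative only: a reader factory with one path that never constructs
// a block cache, mirroring the "read HFile w/o block cache" requirement.
public class HFileReaderFactory {

    // Stand-in for a block cache; a single-method interface so a lambda works.
    public interface BlockCache {
        void cacheBlock(String key, byte[] block);
    }

    public static final class Reader {
        private final BlockCache cache;  // null means "no caching at all"

        Reader(BlockCache cache) { this.cache = cache; }

        public boolean isCachingEnabled() { return cache != null; }
    }

    // Existing style: caller supplies cache configuration, which in turn
    // instantiates a block cache even for one-off reads.
    public static Reader createReader(BlockCache cache) {
        return new Reader(cache);
    }

    // New style: no cache object is ever instantiated, useful for tools
    // and backup/restore paths that scan a file once.
    public static Reader createReaderWithoutCache() {
        return new Reader(null);
    }
}
```

The design point is that the cache-free overload avoids paying the block-cache allocation cost for callers that will never reuse a block.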
[jira] [Updated] (HBASE-16941) FavoredNodes - Split/Merge code paths
[ https://issues.apache.org/jira/browse/HBASE-16941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thiruvel Thirumoolan updated HBASE-16941: - Status: Open (was: Patch Available) Looks like https://builds.apache.org/view/All/job/PreCommit-HBASE-Build/4685/console failed, unrelated to this patch. A couple of other builds also failed around this time. Will re-upload. > FavoredNodes - Split/Merge code paths > - > > Key: HBASE-16941 > URL: https://issues.apache.org/jira/browse/HBASE-16941 > Project: HBase > Issue Type: Sub-task >Reporter: Thiruvel Thirumoolan >Assignee: Thiruvel Thirumoolan > Fix For: 2.0.0 > > Attachments: HBASE-16941.master.001.patch, > HBASE-16941.master.002.patch, HBASE-16941.master.003.patch, > HBASE-16941.master.004.patch, HBASE-16941.master.005.patch, > HBASE-16941.master.006.patch, HBASE-16941.master.007.patch, > HBASE-16941.master.008.patch > > > This jira is to deal with the split/merge logic discussed as part of > HBASE-15532. The design document can be seen at HBASE-15531. The specific > changes are: > Split and merged regions should inherit favored node information from their > parent regions. For splits, we also include some randomness so that even with > subsequent splits the regions remain more or less distributed. For a split, > we include 2 FN from the parent and generate one random node. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
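The split rule described in HBASE-16941 — a daughter inherits 2 favored nodes from the parent and gets one randomly generated node — could be sketched like this (an illustrative helper with assumed names, not the patch code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Sketch of the favored-node split rule: daughters keep 2 of the parent's
// favored nodes and add one random node so repeated splits stay spread out.
public class FavoredNodeSplit {

    public static List<String> daughterFavoredNodes(
            List<String> parentFn, List<String> allServers, Random rng) {
        // Inherit the first two favored nodes from the parent region.
        List<String> fn = new ArrayList<>(parentFn.subList(0, 2));
        // Pick one random server that is not already a favored node,
        // so the daughter ends up with three distinct favored nodes.
        List<String> candidates = new ArrayList<>(allServers);
        candidates.removeAll(fn);
        fn.add(candidates.get(rng.nextInt(candidates.size())));
        return fn;
    }
}
```

The random third node is what keeps a lineage of splits from concentrating all daughters on the original three servers.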
[jira] [Updated] (HBASE-17199) Back-port HBASE-17151 to HBASE-7912 branch
[ https://issues.apache.org/jira/browse/HBASE-17199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Rodionov updated HBASE-17199: -- Status: Patch Available (was: Open) > Back-port HBASE-17151 to HBASE-7912 branch > -- > > Key: HBASE-17199 > URL: https://issues.apache.org/jira/browse/HBASE-17199 > Project: HBase > Issue Type: Task >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Attachments: HBASE-17199-v1.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17199) Back-port HBASE-17151 to HBASE-7912 branch
[ https://issues.apache.org/jira/browse/HBASE-17199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Rodionov updated HBASE-17199: -- Attachment: HBASE-17199-v1.patch Patch v1. cc: [~enis], [~tedyu] > Back-port HBASE-17151 to HBASE-7912 branch > -- > > Key: HBASE-17199 > URL: https://issues.apache.org/jira/browse/HBASE-17199 > Project: HBase > Issue Type: Task >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Attachments: HBASE-17199-v1.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16904) remove directory layout / fs references from snapshots
[ https://issues.apache.org/jira/browse/HBASE-16904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15706887#comment-15706887 ] Umesh Agashe commented on HBASE-16904: -- Thanks for reviewing the changes [~busbey]! The addendum changes look good to me. > remove directory layout / fs references from snapshots > -- > > Key: HBASE-16904 > URL: https://issues.apache.org/jira/browse/HBASE-16904 > Project: HBase > Issue Type: Sub-task > Components: Filesystem Integration >Reporter: Sean Busbey >Assignee: Umesh Agashe > Attachments: HBASE-16904-hbase-14439.v1.patch, > HBASE-16904-hbase-14439.v2.patch, HBASE-16904-hbase-14439.v3.patch, > HBASE-16904-hbase-14439.v4.patch > > > Ensure snapshot code works through the MasterStorage / RegionStorage APIs and > not directly on the backing filesystem. > {code} > hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java > hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/RestoreSnapshotHelper.java > hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotDescriptionUtils.java > hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotInfo.java > hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotManifest.java > hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotManifestV1.java > hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotManifestV2.java > hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/DisabledTableSnapshotHandler.java > hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/SnapshotFileCache.java > hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/SnapshotHFileCleaner.java > hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/SnapshotManager.java > hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/TakeSnapshotHandler.java > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17201) Edit of HFileBlock comments and javadoc
[ https://issues.apache.org/jira/browse/HBASE-17201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-17201: -- Attachment: HBASE-17201.master.001.patch > Edit of HFileBlock comments and javadoc > --- > > Key: HBASE-17201 > URL: https://issues.apache.org/jira/browse/HBASE-17201 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: stack >Assignee: stack > Fix For: 2.0.0 > > Attachments: HBASE-17201.master.001.patch > > > Spent time in HFileBlock trying to do the parent issue. Failed. But did a > bunch of edits of comments/javadoc. Let me get those in at least. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17185) Purge the seek of the next block reading HFileBlocks
[ https://issues.apache.org/jira/browse/HBASE-17185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-17185: -- Priority: Minor (was: Major) > Purge the seek of the next block reading HFileBlocks > > > Key: HBASE-17185 > URL: https://issues.apache.org/jira/browse/HBASE-17185 > Project: HBase > Issue Type: Improvement > Components: HFile >Affects Versions: 2.0.0 >Reporter: stack >Assignee: stack >Priority: Minor > Labels: beginner > Fix For: 2.0.0 > > Attachments: HBASE-17185.master.001.patch, HBASE-17185.patch > > > When we read HFileBlocks, we read the asked-for block AND the next block's > header which we add to a cache (see HBASE-17072). We do this extra read to > get the next block's length purportedly. This seek of the next block's header > complicates the HFileBlock construction (not to mind other consequences -- > again see HBASE-17072). > Study done in HBASE-17072 shows that we normally do not need this extra read > of the next block's header. In the usual case, the length of the block is > gotten from the hfile index. > A simplification of block reading can be done purging this extra header read. > We can also save some space in cache. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17185) Purge the seek of the next block reading HFileBlocks
[ https://issues.apache.org/jira/browse/HBASE-17185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-17185: -- Labels: beginner (was: ) > Purge the seek of the next block reading HFileBlocks > > > Key: HBASE-17185 > URL: https://issues.apache.org/jira/browse/HBASE-17185 > Project: HBase > Issue Type: Improvement > Components: HFile >Affects Versions: 2.0.0 >Reporter: stack >Assignee: stack >Priority: Minor > Labels: beginner > Fix For: 2.0.0 > > Attachments: HBASE-17185.master.001.patch, HBASE-17185.patch > > > When we read HFileBlocks, we read the asked-for block AND the next block's > header which we add to a cache (see HBASE-17072). We do this extra read to > get the next block's length purportedly. This seek of the next block's header > complicates the HFileBlock construction (not to mind other consequences -- > again see HBASE-17072). > Study done in HBASE-17072 shows that we normally do not need this extra read > of the next block's header. In the usual case, the length of the block is > gotten from the hfile index. > A simplification of block reading can be done purging this extra header read. > We can also save some space in cache. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
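The simplification HBASE-17185 describes — taking a block's on-disk size from the hfile index rather than reading the next block's header — can be sketched as follows (an illustrative helper under an assumed layout, not the patch code):

```java
// Sketch: when the index gives consecutive block offsets, a block's on-disk
// size is simply the gap to the next offset (or to the end of the data
// section for the last block), so no extra read of the next header is needed.
// Names and the flat long[] layout are assumptions for illustration.
public class BlockSizeFromIndex {

    public static long onDiskSize(long[] blockOffsets, int i, long dataEndOffset) {
        long end = (i + 1 < blockOffsets.length) ? blockOffsets[i + 1] : dataEndOffset;
        return end - blockOffsets[i];
    }
}
```

Knowing the length up front means the reader can issue one exact-size read per block, which is what lets the extra seek (and the cached next-block header from HBASE-17072) be purged.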
[jira] [Resolved] (HBASE-16295) InvalidFamilyOperationException while deleting a column family in shell
[ https://issues.apache.org/jira/browse/HBASE-16295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashu Pachauri resolved HBASE-16295. --- Resolution: Cannot Reproduce > InvalidFamilyOperationException while deleting a column family in shell > --- > > Key: HBASE-16295 > URL: https://issues.apache.org/jira/browse/HBASE-16295 > Project: HBase > Issue Type: Bug > Components: master, shell >Affects Versions: 1.2.0 >Reporter: Ashu Pachauri >Assignee: Ashu Pachauri >Priority: Minor > > The column family exists and is actually deleted, the regions are also > reopened. But, the following exception is thrown in the shell: > {code} > alter 't1', 'delete' => 'cf' > ERROR: org.apache.hadoop.hbase.InvalidFamilyOperationException: Family 'cf' > does not exist, so it cannot be deleted > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at > org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) > at > org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95) > at > org.apache.hadoop.hbase.util.ForeignExceptionUtil.toIOException(ForeignExceptionUtil.java:45) > at > org.apache.hadoop.hbase.procedure2.RemoteProcedureException.fromProto(RemoteProcedureException.java:114) > at > org.apache.hadoop.hbase.master.procedure.ProcedureSyncWait.waitForProcedureToComplete(ProcedureSyncWait.java:85) > at > org.apache.hadoop.hbase.master.HMaster.deleteColumn(HMaster.java:1916) > at > org.apache.hadoop.hbase.master.MasterRpcServices.deleteColumn(MasterRpcServices.java:474) > at > org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55658) > at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109) > at > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:137) > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:112) > at java.lang.Thread.run(Thread.java:745) > Caused by: > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hbase.InvalidFamilyOperationException): > Family 'cf' does not exist, so it cannot be deleted > at > org.apache.hadoop.hbase.master.procedure.DeleteColumnFamilyProcedure.prepareDelete(DeleteColumnFamilyProcedure.java:281) > at > org.apache.hadoop.hbase.master.procedure.DeleteColumnFamilyProcedure.executeFromState(DeleteColumnFamilyProcedure.java:93) > at > org.apache.hadoop.hbase.master.procedure.DeleteColumnFamilyProcedure.executeFromState(DeleteColumnFamilyProcedure.java:48) > at > org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119) > at > org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:465) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1061) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:856) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:809) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:495) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17197) hfile does not work in 2.0
[ https://issues.apache.org/jira/browse/HBASE-17197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15707033#comment-15707033 ] stack commented on HBASE-17197: --- Seems silly having to pass file w/ a -f > hfile does not work in 2.0 > -- > > Key: HBASE-17197 > URL: https://issues.apache.org/jira/browse/HBASE-17197 > Project: HBase > Issue Type: Bug > Components: HFile >Affects Versions: 2.0.0 >Reporter: huaxiang sun >Assignee: huaxiang sun > > I tried to use hfile on the master branch; it does not print out kv pairs or meta > as it is supposed to. > {code} > hsun-MBP:hbase-2.0.0-SNAPSHOT hsun$ hbase hfile > file:///Users/hsun/work/local-hbase-cluster/data/data/default/t1/755b5d7a44148492b7138c79c5e4f39f/f1/ > 53e9f9bc328f468b87831221de3a0c74 bdc6e1c4eea246a99e989e02d554cb03 > bf9275ac418d4d458904d81137e82683 > hsun-MBP:hbase-2.0.0-SNAPSHOT hsun$ hbase hfile > file:///Users/hsun/work/local-hbase-cluster/data/data/default/t1/755b5d7a44148492b7138c79c5e4f39f/f1/bf9275ac418d4d458904d81137e82683 > -m > 2016-11-29 12:25:22,019 WARN [main] util.NativeCodeLoader: Unable to load > native-hadoop library for your platform... using builtin-java classes where > applicable > hsun-MBP:hbase-2.0.0-SNAPSHOT hsun$ hbase hfile > file:///Users/hsun/work/local-hbase-cluster/data/data/default/t1/755b5d7a44148492b7138c79c5e4f39f/f1/bf9275ac418d4d458904d81137e82683 > -p > 2016-11-29 12:25:27,333 WARN [main] util.NativeCodeLoader: Unable to load > native-hadoop library for your platform... using builtin-java classes where > applicable > Scanned kv count -> 0 > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (HBASE-17151) New API to create HFile.Reader without instantiating block cache
[ https://issues.apache.org/jira/browse/HBASE-17151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15706684#comment-15706684 ] Vladimir Rodionov edited comment on HBASE-17151 at 11/29/16 10:05 PM: -- Patch v1. [~anoop.hbase] can you take a look? was (Author: vrodionov): Patch v1. > New API to create HFile.Reader without instantiating block cache > - > > Key: HBASE-17151 > URL: https://issues.apache.org/jira/browse/HBASE-17151 > Project: HBase > Issue Type: New Feature >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Attachments: HBASE-17151-v1.patch > > > Currently, to create HFile.Reader instance, the CacheConfig instance is > required (which instantiates block cache). We need API for reading HFile w/o > block cache being involved. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16904) remove directory layout / fs references from snapshots
[ https://issues.apache.org/jira/browse/HBASE-16904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15706705#comment-15706705 ] Sean Busbey commented on HBASE-16904: - I'm a bit stumped on the javadoc warnings. They don't happen locally with javadoc:javadoc. > remove directory layout / fs references from snapshots > -- > > Key: HBASE-16904 > URL: https://issues.apache.org/jira/browse/HBASE-16904 > Project: HBase > Issue Type: Sub-task > Components: Filesystem Integration >Reporter: Sean Busbey >Assignee: Umesh Agashe > Attachments: HBASE-16904-hbase-14439.v1.patch, > HBASE-16904-hbase-14439.v2.patch, HBASE-16904-hbase-14439.v3.patch, > HBASE-16904-hbase-14439.v4.patch > > > Ensure snapshot code works through the MasterStorage / RegionStorage APIs and > not directly on the backing filesystem. > {code} > hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java > hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/RestoreSnapshotHelper.java > hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotDescriptionUtils.java > hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotInfo.java > hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotManifest.java > hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotManifestV1.java > hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotManifestV2.java > hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/DisabledTableSnapshotHandler.java > hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/SnapshotFileCache.java > hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/SnapshotHFileCleaner.java > hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/SnapshotManager.java > hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/TakeSnapshotHandler.java > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17199) Back-port HBASE-17151 to HBASE-7912 branch
[ https://issues.apache.org/jira/browse/HBASE-17199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Rodionov updated HBASE-17199: -- Description: HBASE-17151 introduces new API to read HFile w/o instantiating block cache. > Back-port HBASE-17151 to HBASE-7912 branch > -- > > Key: HBASE-17199 > URL: https://issues.apache.org/jira/browse/HBASE-17199 > Project: HBase > Issue Type: Task >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Attachments: HBASE-17199-v1.patch > > > HBASE-17151 introduces new API to read HFile w/o instantiating block cache. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17151) New API to create HFile.Reader without instantiating block cache
[ https://issues.apache.org/jira/browse/HBASE-17151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15707038#comment-15707038 ] Hadoop QA commented on HBASE-17151: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 50s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 40s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 31s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 36s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} 
| {color:green} 0m 33s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 24m 13s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 42s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 92m 49s {color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 140m 27s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12840935/HBASE-17151-v2.patch | | JIRA Issue | HBASE-17151 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 0e1dee782360 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 7c43a23 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/4690/testReport/ | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/4690/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > New API to create HFile.Reader without instantiating block cache > - > > Key: HBASE-17151 > URL: https://issues.apache.org/jira/browse/HBASE-17151 > Project: HBase > Issue Type: New Feature >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Fix For: 2.0.0 > > Attachments: HBASE-17151-v1.patch, HBASE-17151-v2.patch >
[jira] [Commented] (HBASE-17151) New API to create HFile.Reader without instantiating block cache
[ https://issues.apache.org/jira/browse/HBASE-17151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15707071#comment-15707071 ] Hadoop QA commented on HBASE-17151: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 4s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 19s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 49s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 48s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} 
| {color:green} 0m 36s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 27m 11s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 47s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 92m 4s {color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 144m 52s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12840935/HBASE-17151-v2.patch | | JIRA Issue | HBASE-17151 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux adc3ba03defc 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 7c43a23 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/4691/testReport/ | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/4691/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > New API to create HFile.Reader without instantiating block cache > - > > Key: HBASE-17151 > URL: https://issues.apache.org/jira/browse/HBASE-17151 > Project: HBase > Issue Type: New Feature >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Fix For: 2.0.0 > > Attachments: HBASE-17151-v1.patch, HBASE-17151-v2.patch >
[jira] [Updated] (HBASE-17151) New API to create HFile.Reader without instantiating block cache
[ https://issues.apache.org/jira/browse/HBASE-17151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Rodionov updated HBASE-17151: -- Attachment: HBASE-17151-v1.patch Patch v1. > New API to create HFile.Reader without instantiating block cache > - > > Key: HBASE-17151 > URL: https://issues.apache.org/jira/browse/HBASE-17151 > Project: HBase > Issue Type: New Feature >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Attachments: HBASE-17151-v1.patch > > > Currently, to create HFile.Reader instance, the CacheConfig instance is > required (which instantiates block cache). We need API for reading HFile w/o > block cache being involved. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16941) FavoredNodes - Split/Merge code paths
[ https://issues.apache.org/jira/browse/HBASE-16941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thiruvel Thirumoolan updated HBASE-16941: - Status: Patch Available (was: Open) > FavoredNodes - Split/Merge code paths > - > > Key: HBASE-16941 > URL: https://issues.apache.org/jira/browse/HBASE-16941 > Project: HBase > Issue Type: Sub-task >Reporter: Thiruvel Thirumoolan >Assignee: Thiruvel Thirumoolan > Fix For: 2.0.0 > > Attachments: HBASE-16941.master.001.patch, > HBASE-16941.master.002.patch, HBASE-16941.master.003.patch, > HBASE-16941.master.004.patch, HBASE-16941.master.005.patch, > HBASE-16941.master.006.patch, HBASE-16941.master.007.patch, > HBASE-16941.master.008.patch, HBASE-16941.master.009.patch > > > This jira is to deal with the split/merge logic discussed as part of > HBASE-15532. The design document can be seen at HBASE-15531. The specific > changes are: > Split and merged regions should inherit favored node information from parent > regions. For splits, we also include some randomness so that even if there are > subsequent splits, the regions will stay more or less distributed. For a split, > we include 2 FN from the parent and generate one random node. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-17199) Back-port HBASE-17151 to HBASE-7912 branch
Vladimir Rodionov created HBASE-17199: - Summary: Back-port HBASE-17151 to HBASE-7912 branch Key: HBASE-17199 URL: https://issues.apache.org/jira/browse/HBASE-17199 Project: HBase Issue Type: Task Reporter: Vladimir Rodionov Assignee: Vladimir Rodionov -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17192) remove use of scala-tools.org from pom
[ https://issues.apache.org/jira/browse/HBASE-17192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15706896#comment-15706896 ] Andrew Purtell commented on HBASE-17192: +1 Yuck > remove use of scala-tools.org from pom > -- > > Key: HBASE-17192 > URL: https://issues.apache.org/jira/browse/HBASE-17192 > Project: HBase > Issue Type: Bug > Components: spark, website >Affects Versions: 2.0.0 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Blocker > Fix For: 2.0.0 > > Attachments: HBASE-17192.1.patch, HBASE-17192.1.test.patch > > > Our pom makes use of scala-tools.org as a repository. That domain currently > issues redirects for all URLs; for maven coordinates those redirects lead to > 'not found' and the 'permanently moved' HTML gets saved. This corrupts the > local maven repository in a way that causes the mvn:site goal to give an > opaque error: > {code} > [INFO] > > [INFO] BUILD FAILURE > [INFO] > > [INFO] Total time: 01:46 min > [INFO] Finished at: 2016-11-28T14:17:10+00:00 > [INFO] Final Memory: 292M/6583M > [INFO] > > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project > hbase: Execution default-site of goal > org.apache.maven.plugins:maven-site-plugin:3.4:site failed: For artifact > {null:null:null:jar}: The groupId cannot be empty. -> [Help 1] > [ERROR] > {code} > Rerunning in debug mode with {{mvn -X}} gives no additional useful > information. > All artifacts from scala-tools.org are now found in Maven Central. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16904) remove directory layout / fs references from snapshots
[ https://issues.apache.org/jira/browse/HBASE-16904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15706926#comment-15706926 ] Umesh Agashe commented on HBASE-16904: -- +1 > remove directory layout / fs references from snapshots > -- > > Key: HBASE-16904 > URL: https://issues.apache.org/jira/browse/HBASE-16904 > Project: HBase > Issue Type: Sub-task > Components: Filesystem Integration >Reporter: Sean Busbey >Assignee: Umesh Agashe > Attachments: HBASE-16904-hbase-14439.v1.patch, > HBASE-16904-hbase-14439.v2.patch, HBASE-16904-hbase-14439.v3.patch, > HBASE-16904-hbase-14439.v4.patch > > > ensure snapshot code works through the MasterStorage / RegionStorage APIs and > not directly on backing filesystem. > {code} > hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java > hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/RestoreSnapshotHelper.java > hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotDescriptionUtils.java > hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotInfo.java > hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotManifest.java > hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotManifestV1.java > hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotManifestV2.java > hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/DisabledTableSnapshotHandler.java > hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/SnapshotFileCache.java > hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/SnapshotHFileCleaner.java > hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/SnapshotManager.java > hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/TakeSnapshotHandler.java > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-17201) Edit of HFileBlock comments and javadoc
stack created HBASE-17201: - Summary: Edit of HFileBlock comments and javadoc Key: HBASE-17201 URL: https://issues.apache.org/jira/browse/HBASE-17201 Project: HBase Issue Type: Sub-task Components: documentation Reporter: stack Assignee: stack Spent time in HFileBlock trying to do the parent issue. Failed. But did a bunch of edits of comments/javadoc. Let me get those in at least. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16209) Provide an ExponentialBackOffPolicy sleep between failed region open requests
[ https://issues.apache.org/jira/browse/HBASE-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashu Pachauri updated HBASE-16209: -- Attachment: HBASE-16209-addendum.v6.branch-1.patch This has been open for a while now. I can't repro the findbugs warning locally, and can't really see why it's complaining! HBASE-16209-addendum.v6.branch-1: Rebasing the patch to current branch-1. > Provide an ExponentialBackOffPolicy sleep between failed region open requests > - > > Key: HBASE-16209 > URL: https://issues.apache.org/jira/browse/HBASE-16209 > Project: HBase > Issue Type: Bug >Reporter: Joseph >Assignee: Ashu Pachauri > Fix For: 2.0.0, 1.4.0 > > Attachments: HBASE-16209-addendum.patch, > HBASE-16209-addendum.v6.branch-1.patch, > HBASE-16209-branch-1-addendum-v2.patch, HBASE-16209-branch-1-addendum.patch, > HBASE-16209-branch-1-v3.patch, HBASE-16209-branch-1-v4.patch, > HBASE-16209-branch-1-v5.patch, HBASE-16209-branch-1.patch, > HBASE-16209-v2.patch, HBASE-16209.patch > > > Related to HBASE-16138. As of now we have no pause between retries of > failed region open requests. With a low maximumAttempt default, we can > quickly use up all our regionOpen retries if the server is in a bad state. I > added an ExponentialBackOffPolicy so that we spread out the timing of our > open region retries in AssignmentManager. Review board at > https://reviews.apache.org/r/50011/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
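The ExponentialBackOffPolicy described in HBASE-16209 is the classic exponential backoff pattern: each failed region open waits roughly twice as long as the previous attempt, up to a cap, so retries spread out instead of burning through the maximumAttempt budget immediately. A minimal illustrative sketch follows; it is not the actual AssignmentManager code, and the class and constant names (BackoffSketch, BASE_MS, MAX_MS) are invented for illustration.

```java
// Illustrative only: a hypothetical exponential backoff, not HBase's real
// ExponentialBackOffPolicy. Names and constants here are assumptions.
public class BackoffSketch {
    static final long BASE_MS = 100;    // delay before the first retry
    static final long MAX_MS = 60_000;  // cap so delays stop growing

    // Delay before the given (0-based) retry attempt: BASE_MS * 2^attempt, capped.
    static long delayMs(int attempt) {
        long d = BASE_MS << Math.min(attempt, 30); // clamp the shift to avoid overflow
        return Math.min(d, MAX_MS);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 8; i++) {
            System.out.println("attempt " + i + " -> " + delayMs(i) + " ms");
        }
    }
}
```

With a 100 ms base and a 60 s cap, consecutive failures back off 100 ms, 200 ms, 400 ms, and so on before flattening at the cap.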
[jira] [Commented] (HBASE-17185) Purge the seek of the next block reading HFileBlocks
[ https://issues.apache.org/jira/browse/HBASE-17185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15706992#comment-15706992 ] stack commented on HBASE-17185: --- Did some study. The read-of-the-next-block's header is used only in the rare case where we are loading metadata on file open. Metadata includes hfile indices themselves stored as blocks. In this opening case, we do not have an hfile index to get block lengths from (we do not have an index for the indices -- TODO). There are three or so metadata blocks in the normal case. We would double the seeks done in the file open case, doing a seek for the header to get the length and then one for the body of each metablock. I think it'd be fine, given these are not real 'seeks' but just read-forwards in an already-loaded hdfs stream, but I did not do the work to prove this assertion. Needs a bit of work comparing before and after. I looked at undoing the read-ahead into the next block for all but the startup case and it'd involve code duplication and would undo much of the simplification/benefit the attached patch brings. Putting aside for now until time to do the perf/resource compare (though in a subtask, I have updated the HFileBlock doc w/o changing functionality to incorporate the findings of my study). > Purge the seek of the next block reading HFileBlocks > > > Key: HBASE-17185 > URL: https://issues.apache.org/jira/browse/HBASE-17185 > Project: HBase > Issue Type: Improvement > Components: HFile >Affects Versions: 2.0.0 >Reporter: stack >Assignee: stack >Priority: Minor > Labels: beginner > Fix For: 2.0.0 > > Attachments: HBASE-17185.master.001.patch, HBASE-17185.patch > > > When we read HFileBlocks, we read the asked-for block AND the next block's > header which we add to a cache (see HBASE-17072). We do this extra read, purportedly, to > get the next block's length. This seek of the next block's header > complicates the HFileBlock construction (not to mind other consequences -- > again see HBASE-17072). 
> Study done in HBASE-17072 shows that we normally do not need this extra read > of the next block's header. In the usual case, the length of the block is > gotten from the hfile index. > A simplification of block reading can be done by purging this extra header read. > We can also save some space in the cache. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
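The trade-off stack weighs in HBASE-17185 (an extra header read at open time, versus a single read when the hfile index already supplies the block size) can be sketched roughly as follows. This is a simplified stand-in, not HFileBlock's real code: the 4-byte length "header" and the ByteBuffer stream are assumptions for illustration only.

```java
// Simplified illustration of the two read patterns; not HFileBlock's code.
import java.nio.ByteBuffer;

public class BlockReadSketch {
    // Open-time case: no index yet, so read a (stand-in) 4-byte length header
    // first, then the body: two forward reads on the stream instead of one.
    static byte[] readWithoutIndex(ByteBuffer in) {
        int bodyLen = in.getInt();     // first read: header carries the length
        byte[] body = new byte[bodyLen];
        in.get(body);                  // second read: the body itself
        return body;
    }

    // Normal case: the hfile index already knows the full on-disk size,
    // so a single read fetches header and body together.
    static byte[] readWithIndex(ByteBuffer in, int totalLen) {
        byte[] block = new byte[totalLen];
        in.get(block);
        return block;
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(7).putInt(3).put(new byte[]{1, 2, 3});
        buf.flip();
        System.out.println(readWithoutIndex(buf).length); // prints 3
    }
}
```

Since both reads in the open-time case are forward reads on an already-open stream, the extra "seek" is cheap, which is the assertion the comment above says still needs a before/after measurement.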
[jira] [Commented] (HBASE-16295) InvalidFamilyOperationException while deleting a column family in shell
[ https://issues.apache.org/jira/browse/HBASE-16295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15707012#comment-15707012 ] Ashu Pachauri commented on HBASE-16295: --- I also tried reproducing this but without any luck. Since it has happened only once and we don't have a repro, it seems super rare. I am going to close it. > InvalidFamilyOperationException while deleting a column family in shell > --- > > Key: HBASE-16295 > URL: https://issues.apache.org/jira/browse/HBASE-16295 > Project: HBase > Issue Type: Bug > Components: master, shell >Affects Versions: 1.2.0 >Reporter: Ashu Pachauri >Assignee: Ashu Pachauri >Priority: Minor > > The column family exists and is actually deleted, the regions are also > reopened. But, the following exception is thrown in the shell: > {code} > alter 't1', 'delete' => 'cf' > ERROR: org.apache.hadoop.hbase.InvalidFamilyOperationException: Family 'cf' > does not exist, so it cannot be deleted > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at > org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) > at > org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95) > at > org.apache.hadoop.hbase.util.ForeignExceptionUtil.toIOException(ForeignExceptionUtil.java:45) > at > org.apache.hadoop.hbase.procedure2.RemoteProcedureException.fromProto(RemoteProcedureException.java:114) > at > org.apache.hadoop.hbase.master.procedure.ProcedureSyncWait.waitForProcedureToComplete(ProcedureSyncWait.java:85) > at > org.apache.hadoop.hbase.master.HMaster.deleteColumn(HMaster.java:1916) > at > 
org.apache.hadoop.hbase.master.MasterRpcServices.deleteColumn(MasterRpcServices.java:474) > at > org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55658) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109) > at > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:137) > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:112) > at java.lang.Thread.run(Thread.java:745) > Caused by: > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hbase.InvalidFamilyOperationException): > Family 'cf' does not exist, so it cannot be deleted > at > org.apache.hadoop.hbase.master.procedure.DeleteColumnFamilyProcedure.prepareDelete(DeleteColumnFamilyProcedure.java:281) > at > org.apache.hadoop.hbase.master.procedure.DeleteColumnFamilyProcedure.executeFromState(DeleteColumnFamilyProcedure.java:93) > at > org.apache.hadoop.hbase.master.procedure.DeleteColumnFamilyProcedure.executeFromState(DeleteColumnFamilyProcedure.java:48) > at > org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119) > at > org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:465) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1061) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:856) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:809) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:495) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HBASE-16295) InvalidFamilyOperationException while deleting a column family in shell
[ https://issues.apache.org/jira/browse/HBASE-16295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashu Pachauri reassigned HBASE-16295: - Assignee: Ashu Pachauri > InvalidFamilyOperationException while deleting a column family in shell > --- > > Key: HBASE-16295 > URL: https://issues.apache.org/jira/browse/HBASE-16295 > Project: HBase > Issue Type: Bug > Components: master, shell >Affects Versions: 1.2.0 >Reporter: Ashu Pachauri >Assignee: Ashu Pachauri >Priority: Minor > > The column family exists and is actually deleted, the regions are also > reopened. But, the following exception is thrown in the shell: > {code} > alter 't1', 'delete' => 'cf' > ERROR: org.apache.hadoop.hbase.InvalidFamilyOperationException: Family 'cf' > does not exist, so it cannot be deleted > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at > org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) > at > org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95) > at > org.apache.hadoop.hbase.util.ForeignExceptionUtil.toIOException(ForeignExceptionUtil.java:45) > at > org.apache.hadoop.hbase.procedure2.RemoteProcedureException.fromProto(RemoteProcedureException.java:114) > at > org.apache.hadoop.hbase.master.procedure.ProcedureSyncWait.waitForProcedureToComplete(ProcedureSyncWait.java:85) > at > org.apache.hadoop.hbase.master.HMaster.deleteColumn(HMaster.java:1916) > at > org.apache.hadoop.hbase.master.MasterRpcServices.deleteColumn(MasterRpcServices.java:474) > at > org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55658) > at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109) > at > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:137) > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:112) > at java.lang.Thread.run(Thread.java:745) > Caused by: > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hbase.InvalidFamilyOperationException): > Family 'cf' does not exist, so it cannot be deleted > at > org.apache.hadoop.hbase.master.procedure.DeleteColumnFamilyProcedure.prepareDelete(DeleteColumnFamilyProcedure.java:281) > at > org.apache.hadoop.hbase.master.procedure.DeleteColumnFamilyProcedure.executeFromState(DeleteColumnFamilyProcedure.java:93) > at > org.apache.hadoop.hbase.master.procedure.DeleteColumnFamilyProcedure.executeFromState(DeleteColumnFamilyProcedure.java:48) > at > org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119) > at > org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:465) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1061) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:856) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:809) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:495) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17151) New API to create HFile.Reader without instantiating block cache
[ https://issues.apache.org/jira/browse/HBASE-17151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15707026#comment-15707026 ] Enis Soztutar commented on HBASE-17151: --- How about doing a static CacheConfig.DISABLED, and use it when creating a reader. Doing null checks everywhere unnecessarily complicates the code. > New API to create HFile.Reader without instantiating block cache > - > > Key: HBASE-17151 > URL: https://issues.apache.org/jira/browse/HBASE-17151 > Project: HBase > Issue Type: New Feature >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Fix For: 2.0.0 > > Attachments: HBASE-17151-v1.patch, HBASE-17151-v2.patch > > > Currently, to create HFile.Reader instance, the CacheConfig instance is > required (which instantiates block cache). We need API for reading HFile w/o > block cache being involved. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
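The CacheConfig.DISABLED suggestion above is the null-object pattern: hand callers a shared no-op instance instead of null, so reader code never needs null checks. A hedged sketch of the idea follows, using a simplified stand-in class rather than HBase's real CacheConfig API; the names CacheConfigSketch and shouldCacheOnRead are invented here.

```java
// Null-object sketch of the CacheConfig.DISABLED idea; stand-in class,
// not HBase's actual CacheConfig.
public class CacheConfigSketch {
    // Shared immutable instance that never caches anything.
    public static final CacheConfigSketch DISABLED = new CacheConfigSketch(false);

    private final boolean cacheOnRead;

    private CacheConfigSketch(boolean cacheOnRead) {
        this.cacheOnRead = cacheOnRead;
    }

    public boolean shouldCacheOnRead() {
        return cacheOnRead;
    }

    public static void main(String[] args) {
        // Callers pass DISABLED instead of null, so reader code stays branch-free.
        System.out.println(DISABLED.shouldCacheOnRead()); // prints false
    }
}
```

Compared with threading null through every call site, a dedicated disabled instance keeps the read path free of special cases, which is the simplification the review comment is after.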
[jira] [Commented] (HBASE-17128) Find Cause of a Write Perf Regression in branch-1.2
[ https://issues.apache.org/jira/browse/HBASE-17128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15706682#comment-15706682 ] Graham Baecher commented on HBASE-17128: I re-ran workload A today against clusters that were running CDH 5.8 and CDH 5.9 with the default garbage collector instead of G1GC. The regression disappeared and CDH 5.9 performed significantly better than 5.8 overall, matching what Appy found above. I can indeed patch our HBase, stack. I'll try applying the patches from HBASE-16616, HBASE-16146, and HBASE-17072, re-enabling G1GC on the RegionServers, and seeing if performance is still good. > Find Cause of a Write Perf Regression in branch-1.2 > --- > > Key: HBASE-17128 > URL: https://issues.apache.org/jira/browse/HBASE-17128 > Project: HBase > Issue Type: Task >Reporter: stack > > As reported by [~gbaecher] up on the mailing list, there is a regression in > 1.2. The regression is in a CDH version of 1.2 actually but the CDH hbase is > a near-pure 1.2. This is a working issue to figure out which of the below changes > brought on slower writes (The list comes from doing the following...git log > --oneline > remotes/origin/cdh5-1.2.0_5.8.0_dev..remotes/origin/cdh5-1.2.0_5.9.0_dev ... > I stripped the few CDH specific changes, packaging and tagging only, and then > made two groupings; candidates and the unlikelies): > {code} > 1 bbc6762 HBASE-16023 Fastpath for the FIFO rpcscheduler Adds an executor > that does balanced queue and fast path handing off requests directly to > waiting handlers if any present. Idea taken from Apace Kudu (incubating). See > https://gerr# > 2 a260917 HBASE-16288 HFile intermediate block level indexes might recurse > forever creating multi TB files > 3 5633281 HBASE-15811 Batch Get after batch Put does not fetch all Cells We > were not waiting on all executors in a batch to complete. 
The test for > no-more-executors was damaged by the 0.99/0.98.4 fix "HBASE-11403 Fix race > conditions aro# > 4 780f720 HBASE-11625 - Verifies data before building HFileBlock. - Adds > HFileBlock.Header class which contains information about location of fields. > Testing: Adds CorruptedFSReaderImpl to TestChecksum. (Apekshit) > 5 d735680 HBASE-12133 Add FastLongHistogram for metric computation (Yi Deng) > 6 c4ee832 HBASE-15222 Use less contended classes for metrics > 7 > 8 17320a4 HBASE-15683 Min latency in latency histograms are emitted as > Long.MAX_VALUE > 9 283b39f HBASE-15396 Enhance mapreduce.TableSplit to add encoded region > name > 10 39db592 HBASE-16195 Should not add chunk into chunkQueue if not using > chunk pool in HeapMemStoreLAB > 11 5ff28b7 HBASE-16194 Should count in MSLAB chunk allocation into heap size > change when adding duplicate cells > 12 5e3e0d2 HBASE-16318 fail build while rendering velocity template if > dependency license isn't in whitelist. > 13 3ed66e3 HBASE-16318 consistently use the correct name for 'Apache > License, Version 2.0' > 14 351832d HBASE-16340 exclude Xerces iplementation jars from coming in > transitively. > 15 b6aa4be HBASE-16321 ensure no findbugs-jsr305 > 16 4f9dde7 HBASE-16317 revert all ESAPI changes > 17 71b6a8a HBASE-16284 Unauthorized client can shutdown the cluster (Deokwoo > Han) > 18 523753f HBASE-16450 Shell tool to dump replication queues > 19 ca5f2ee HBASE-16379 [replication] Minor improvement to > replication/copy_tables_desc.rb > 20 effd105 HBASE-16135 PeerClusterZnode under rs of removed peer may never > be deleted > 21 a5c6610 HBASE-16319 Fix TestCacheOnWrite after HBASE-16288 > 22 1956bb0 HBASE-15808 Reduce potential bulk load intermediate space usage > and waste > 23 031c54e HBASE-16096 Backport. Cleanly remove replication peers from > ZooKeeper. 
> 24 60a3b12 HBASE-14963 Remove use of Guava Stopwatch from HBase client code > (Devaraj Das) > 25 c7724fc HBASE-16207 can't restore snapshot without "Admin" permission > 26 8322a0b HBASE-16227 [Shell] Column value formatter not working in scans. > Tested : manually using shell. > 27 8f86658 HBASE-14818 user_permission does not list namespace permissions > (li xiang) > 28 775cd21 HBASE-15465 userPermission returned by getUserPermission() for > the selected namespace does not have namespace set (li xiang) > 29 8d85aff HBASE-16093 Fix splits failed before creating daughter regions > leave meta inconsistent > 30 bc41317 HBASE-16140 bump owasp.esapi from 2.1.0 to 2.1.0.1 > 31 6fc70cd HBASE-16035 Nested AutoCloseables might not all get closed (Sean > Mackrory) > 32 fe28fe84 HBASE-15891. Closeable resources potentially not getting closed > if exception is thrown. > 33 1d2bf3c HBASE-14644 Region in transition metric is broken -- addendum > (Huaxiang Sun) > 34 fd5f56c HBASE-16056 Procedure v2 - fix master crash for
[jira] [Updated] (HBASE-16941) FavoredNodes - Split/Merge code paths
[ https://issues.apache.org/jira/browse/HBASE-16941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thiruvel Thirumoolan updated HBASE-16941: - Attachment: HBASE-16941.master.009.patch > FavoredNodes - Split/Merge code paths > - > > Key: HBASE-16941 > URL: https://issues.apache.org/jira/browse/HBASE-16941 > Project: HBase > Issue Type: Sub-task >Reporter: Thiruvel Thirumoolan >Assignee: Thiruvel Thirumoolan > Fix For: 2.0.0 > > Attachments: HBASE-16941.master.001.patch, > HBASE-16941.master.002.patch, HBASE-16941.master.003.patch, > HBASE-16941.master.004.patch, HBASE-16941.master.005.patch, > HBASE-16941.master.006.patch, HBASE-16941.master.007.patch, > HBASE-16941.master.008.patch, HBASE-16941.master.009.patch > > > This jira is to deal with the split/merge logic discussed as part of > HBASE-15532. The design document can be seen at HBASE-15531. The specific > changes are: > Split and merged regions should inherit favored node information from parent > regions. For splits, we also include some randomness so that even if there are > subsequent splits, the regions will stay more or less distributed. For a split, > we include 2 FN from the parent and generate one random node. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17201) Edit of HFileBlock comments and javadoc
[ https://issues.apache.org/jira/browse/HBASE-17201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-17201: -- Affects Version/s: 2.0.0 Status: Patch Available (was: Open) > Edit of HFileBlock comments and javadoc > --- > > Key: HBASE-17201 > URL: https://issues.apache.org/jira/browse/HBASE-17201 > Project: HBase > Issue Type: Sub-task > Components: documentation >Affects Versions: 2.0.0 >Reporter: stack >Assignee: stack > Fix For: 2.0.0 > > Attachments: HBASE-17201.master.001.patch > > > Spent time in HFileBlock trying to do the parent issue. Failed. But did a > bunch of edits of comments/javadoc. Let me get those in at least. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17151) New API to create HFile.Reader without instantiating block cache
[ https://issues.apache.org/jira/browse/HBASE-17151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Rodionov updated HBASE-17151: -- Attachment: HBASE-17151-v2.patch v2. Added new API call to HFile to create reader w/o cache configuration instance. > New API to create HFile.Reader without instantiating block cache > - > > Key: HBASE-17151 > URL: https://issues.apache.org/jira/browse/HBASE-17151 > Project: HBase > Issue Type: New Feature >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Attachments: HBASE-17151-v1.patch, HBASE-17151-v2.patch > > > Currently, to create HFile.Reader instance, the CacheConfig instance is > required (which instantiates block cache). We need API for reading HFile w/o > block cache being involved. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (HBASE-17151) New API to create HFile.Reader without instantiating block cache
[ https://issues.apache.org/jira/browse/HBASE-17151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15706700#comment-15706700 ] Vladimir Rodionov edited comment on HBASE-17151 at 11/29/16 10:11 PM: -- v2. Added new API call to HFile to create reader w/o cache configuration instance.
{code}
/**
 * Creates reader w/o cache being involved
 * @param fs filesystem
 * @param path Path to file to read
 * @return an active Reader instance
 * @throws IOException Will throw a CorruptHFileException (DoNotRetryIOException subtype) if hfile is corrupt/invalid.
 */
public static Reader createReader(FileSystem fs, Path path, Configuration conf) throws IOException {
  FSDataInputStreamWrapper stream = new FSDataInputStreamWrapper(fs, path);
  return pickReaderVersion(path, stream, fs.getFileStatus(path).getLen(), null, stream.getHfs(), conf);
}
{code}
was (Author: vrodionov): v2. Added new API call to HFile to create reader w/o cache configuration instance. > New API to create HFile.Reader without instantiating block cache > - > > Key: HBASE-17151 > URL: https://issues.apache.org/jira/browse/HBASE-17151 > Project: HBase > Issue Type: New Feature >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Attachments: HBASE-17151-v1.patch, HBASE-17151-v2.patch > > > Currently, to create HFile.Reader instance, the CacheConfig instance is > required (which instantiates block cache). We need API for reading HFile w/o > block cache being involved. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17151) New API to create HFile.Reader without instantiating block cache
[ https://issues.apache.org/jira/browse/HBASE-17151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Rodionov updated HBASE-17151: -- Fix Version/s: 2.0.0 > New API to create HFile.Reader without instantiating block cache > - > > Key: HBASE-17151 > URL: https://issues.apache.org/jira/browse/HBASE-17151 > Project: HBase > Issue Type: New Feature >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Fix For: 2.0.0 > > Attachments: HBASE-17151-v1.patch, HBASE-17151-v2.patch > > > Currently, to create HFile.Reader instance, the CacheConfig instance is > required (which instantiates block cache). We need API for reading HFile w/o > block cache being involved. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17199) Back-port HBASE-17151 to HBASE-7912 branch
[ https://issues.apache.org/jira/browse/HBASE-17199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15706840#comment-15706840 ] Hadoop QA commented on HBASE-17199: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 3s {color} | {color:red} HBASE-17199 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12840945/HBASE-17199-v1.patch | | JIRA Issue | HBASE-17199 | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/4693/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > Back-port HBASE-17151 to HBASE-7912 branch > -- > > Key: HBASE-17199 > URL: https://issues.apache.org/jira/browse/HBASE-17199 > Project: HBase > Issue Type: Task >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Attachments: HBASE-17199-v1.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-17200) Document an interesting implication of HBASE-15212
Andrew Purtell created HBASE-17200: -- Summary: Document an interesting implication of HBASE-15212 Key: HBASE-17200 URL: https://issues.apache.org/jira/browse/HBASE-17200 Project: HBase Issue Type: Bug Components: documentation, Operability, Replication Reporter: Andrew Purtell Priority: Minor We had a Phoenix client application unfortunately batching up 1000 rows at a time. Phoenix bundles mutations up considering only the row count, not the byte count (see PHOENIX-541), so this led to some *single WALEdits* in excess of 600 MB. A cluster without max RPC size enforcement accepted them. (That may be something we should fix - WALEdits that large are crazy.) Then replication workers attempting to ship the monster edits from this cluster to a remote cluster recently upgraded with RPC size enforcement active would see all their RPC attempts rejected, because the default limit is 256 MB. This is an edge case but I can see it happening in practice and taking users by surprise, most likely when replicating between mixed versions. We should document this in the troubleshooting section. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17151) New API to create HFile.Reader without instantiating block cache
[ https://issues.apache.org/jira/browse/HBASE-17151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15707051#comment-15707051 ] Vladimir Rodionov commented on HBASE-17151: --- {quote} How about doing a static CacheConfig.DISABLED, and use it when creating a reader. Doing null checks everywhere unnecessarily complicates the code. {quote} OK. > New API to create HFile.Reader without instantiating block cache > - > > Key: HBASE-17151 > URL: https://issues.apache.org/jira/browse/HBASE-17151 > Project: HBase > Issue Type: New Feature >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Fix For: 2.0.0 > > Attachments: HBASE-17151-v1.patch, HBASE-17151-v2.patch > > > Currently, to create HFile.Reader instance, the CacheConfig instance is > required (which instantiates block cache). We need API for reading HFile w/o > block cache being involved. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
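The suggestion quoted above, a shared static CacheConfig.DISABLED instead of null checks, is the classic null-object pattern. A minimal, self-contained sketch of the idea; the class and method names here are illustrative stand-ins, not the actual HBase API:

```java
// Null-object pattern sketch, loosely modeled on the proposed
// CacheConfig.DISABLED. None of these names are real HBase classes.
public class CacheConfigSketch {
    /** Shared instance that caches nothing; callers pass this instead of null. */
    public static final CacheConfigSketch DISABLED = new CacheConfigSketch(false);

    private final boolean cacheOnRead;

    private CacheConfigSketch(boolean cacheOnRead) {
        this.cacheOnRead = cacheOnRead;
    }

    public boolean shouldCacheOnRead() {
        return cacheOnRead;
    }

    // Reader code needs no null checks because DISABLED is a real object:
    static String describe(CacheConfigSketch conf) {
        return conf.shouldCacheOnRead() ? "caching" : "not caching";
    }
}
```

This keeps every call site uniform: a reader built for raw HFile access gets DISABLED, and the rest of the code path stays unchanged.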
[jira] [Commented] (HBASE-17128) Find Cause of a Write Perf Regression in branch-1.2
[ https://issues.apache.org/jira/browse/HBASE-17128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15706747#comment-15706747 ] Appy commented on HBASE-17128: -- Nice, that looks good [~gbaecher], keep us posted. Btw, here's a copy of doc mentioned above from my personal account so that it's publicly visible (https://docs.google.com/document/d/1DDWiA0ZVYRpvLk-uImDajXbG1d_XUN-xhC1CKBOcd8g/edit?usp=sharing). > Find Cause of a Write Perf Regression in branch-1.2 > --- > > Key: HBASE-17128 > URL: https://issues.apache.org/jira/browse/HBASE-17128 > Project: HBase > Issue Type: Task >Reporter: stack > > As reported by [~gbaecher] up on the mailing list, there is a regression in > 1.2. The regression is in a CDH version of 1.2 actually but the CDH hbase is > a near pure 1.2. This is a working issue to figure which of the below changes > brought on slower writes (The list comes from doing the following...git log > --oneline > remotes/origin/cdh5-1.2.0_5.8.0_dev..remotes/origin/cdh5-1.2.0_5.9.0_dev ... > I stripped the few CDH specific changes, packaging and tagging only, and then > made two groupings; candidates and the unlikelies): > {code} > 1 bbc6762 HBASE-16023 Fastpath for the FIFO rpcscheduler Adds an executor > that does balanced queue and fast path handing off requests directly to > waiting handlers if any present. Idea taken from Apace Kudu (incubating). See > https://gerr# > 2 a260917 HBASE-16288 HFile intermediate block level indexes might recurse > forever creating multi TB files > 3 5633281 HBASE-15811 Batch Get after batch Put does not fetch all Cells We > were not waiting on all executors in a batch to complete. The test for > no-more-executors was damaged by the 0.99/0.98.4 fix "HBASE-11403 Fix race > conditions aro# > 4 780f720 HBASE-11625 - Verifies data before building HFileBlock. - Adds > HFileBlock.Header class which contains information about location of fields. > Testing: Adds CorruptedFSReaderImpl to TestChecksum. 
(Apekshit) > 5 d735680 HBASE-12133 Add FastLongHistogram for metric computation (Yi Deng) > 6 c4ee832 HBASE-15222 Use less contended classes for metrics > 7 > 8 17320a4 HBASE-15683 Min latency in latency histograms are emitted as > Long.MAX_VALUE > 9 283b39f HBASE-15396 Enhance mapreduce.TableSplit to add encoded region > name > 10 39db592 HBASE-16195 Should not add chunk into chunkQueue if not using > chunk pool in HeapMemStoreLAB > 11 5ff28b7 HBASE-16194 Should count in MSLAB chunk allocation into heap size > change when adding duplicate cells > 12 5e3e0d2 HBASE-16318 fail build while rendering velocity template if > dependency license isn't in whitelist. > 13 3ed66e3 HBASE-16318 consistently use the correct name for 'Apache > License, Version 2.0' > 14 351832d HBASE-16340 exclude Xerces iplementation jars from coming in > transitively. > 15 b6aa4be HBASE-16321 ensure no findbugs-jsr305 > 16 4f9dde7 HBASE-16317 revert all ESAPI changes > 17 71b6a8a HBASE-16284 Unauthorized client can shutdown the cluster (Deokwoo > Han) > 18 523753f HBASE-16450 Shell tool to dump replication queues > 19 ca5f2ee HBASE-16379 [replication] Minor improvement to > replication/copy_tables_desc.rb > 20 effd105 HBASE-16135 PeerClusterZnode under rs of removed peer may never > be deleted > 21 a5c6610 HBASE-16319 Fix TestCacheOnWrite after HBASE-16288 > 22 1956bb0 HBASE-15808 Reduce potential bulk load intermediate space usage > and waste > 23 031c54e HBASE-16096 Backport. Cleanly remove replication peers from > ZooKeeper. > 24 60a3b12 HBASE-14963 Remove use of Guava Stopwatch from HBase client code > (Devaraj Das) > 25 c7724fc HBASE-16207 can't restore snapshot without "Admin" permission > 26 8322a0b HBASE-16227 [Shell] Column value formatter not working in scans. > Tested : manually using shell. 
> 27 8f86658 HBASE-14818 user_permission does not list namespace permissions > (li xiang) > 28 775cd21 HBASE-15465 userPermission returned by getUserPermission() for > the selected namespace does not have namespace set (li xiang) > 29 8d85aff HBASE-16093 Fix splits failed before creating daughter regions > leave meta inconsistent > 30 bc41317 HBASE-16140 bump owasp.esapi from 2.1.0 to 2.1.0.1 > 31 6fc70cd HBASE-16035 Nested AutoCloseables might not all get closed (Sean > Mackrory) > 32 fe28fe84 HBASE-15891. Closeable resources potentially not getting closed > if exception is thrown. > 33 1d2bf3c HBASE-14644 Region in transition metric is broken -- addendum > (Huaxiang Sun) > 34 fd5f56c HBASE-16056 Procedure v2 - fix master crash for FileNotFound > 35 10cd038 HBASE-16034 Fix ProcedureTestingUtility#LoadCounter.setMaxProcId() > 36 dae4db4 HBASE-15872 Split TestWALProcedureStore > 37 e638d86 HBASE-14644 Region in transition metric is broken (Huaxiang Sun) >
[jira] [Commented] (HBASE-17196) deleted mob cell can come back after major compaction and minor mob compaction
[ https://issues.apache.org/jira/browse/HBASE-17196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15706800#comment-15706800 ] huaxiang sun commented on HBASE-17196: -- I rechecked the code. My testing was wrong: I removed the _del files before running the minor mob compaction. The logic in the code always includes _del files when a file needs to be compacted, so the deleted cells can never come back. I am resolving the issue as invalid. > deleted mob cell can come back after major compaction and minor mob compaction > -- > > Key: HBASE-17196 > URL: https://issues.apache.org/jira/browse/HBASE-17196 > Project: HBase > Issue Type: Bug > Components: mob >Affects Versions: 2.0.0 >Reporter: huaxiang sun >Assignee: huaxiang sun > > In the following case, the deleted mob cell can come back. > {code} > 1) hbase(main):001:0> create 't1', {NAME => 'f1', IS_MOB => true, > MOB_THRESHOLD => 10} > 2) hbase(main):002:0> put 't1', 'r1', 'f1:q1', '' > 3) hbase(main):003:0> flush 't1' > 4) hbase(main):004:0> deleteall 't1', 'r1' > 5) hbase(main):005:0> scan 't1' > ROW COLUMN+CELL > 0 row(s) > 6) hbase(main):006:0> flush 't1' > 7) hbase(main):007:0> major_compact 't1' > After that, go to mobdir, remove the _del file, this is to simulate the case > that mob minor compaction does not include the _del file. Right now, the cell in > normal region is gone after the major compaction. > 8) hbase(main):008:0> put 't1', 'r2', 'f1:q1', '' > 9) hbase(main):009:0> flush 't1' > 10) hbase(main):010:0> scan 't1' > ROW COLUMN+CELL > r2 column=f1:q1, > timestamp=1480451201393, value= > 1 row(s) > 11) hbase(main):011:0> compact 't1', 'f1', 'MOB' > 12) hbase(main):012:0> scan 't1' > ROW COLUMN+CELL > r1 column=f1:q1, > timestamp=1480450987725, value= > r2 column=f1:q1, > timestamp=1480451201393, value= > 2 row(s) > The deleted "r1" comes back.
The reason is that mob minor compaction does not > include _del files so it generates references for the deleted cell. > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
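The invariant described above (any mob compaction that reads a file must also read every _del file, or deletes can resurrect) can be sketched as a file-selection step. This is a hypothetical helper; the real logic lives in HBase's mob compactor, not in this code:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the file-selection invariant for mob minor compaction:
// if any mob file is selected, all _del files are added as inputs so
// that deleted cells cannot come back. Illustrative only.
public class MobSelectionSketch {
    static List<String> selectInputs(List<String> candidates) {
        List<String> selected = new ArrayList<>();
        List<String> delFiles = new ArrayList<>();
        for (String f : candidates) {
            if (f.endsWith("_del")) {
                delFiles.add(f);   // delete markers, tracked separately
            } else {
                selected.add(f);   // ordinary mob file to compact
            }
        }
        // Invariant: compacting anything implies reading every _del file.
        if (!selected.isEmpty()) {
            selected.addAll(delFiles);
        }
        return selected;
    }
}
```

Removing the _del file by hand, as in the repro, is exactly the situation this invariant cannot defend against, which is why the issue was resolved as invalid rather than as a code bug.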
[jira] [Resolved] (HBASE-17196) deleted mob cell can come back after major compaction and minor mob compaction
[ https://issues.apache.org/jira/browse/HBASE-17196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] huaxiang sun resolved HBASE-17196. -- Resolution: Invalid As explained in the comments. > deleted mob cell can come back after major compaction and minor mob compaction > -- > > Key: HBASE-17196 > URL: https://issues.apache.org/jira/browse/HBASE-17196 > Project: HBase > Issue Type: Bug > Components: mob >Affects Versions: 2.0.0 >Reporter: huaxiang sun >Assignee: huaxiang sun > > In the following case, the deleted mob cell can come back. > {code} > 1) hbase(main):001:0> create 't1', {NAME => 'f1', IS_MOB => true, > MOB_THRESHOLD => 10} > 2) hbase(main):002:0> put 't1', 'r1', 'f1:q1', '' > 3) hbase(main):003:0> flush 't1' > 4) hbase(main):004:0> deleteall 't1', 'r1' > 5) hbase(main):005:0> scan 't1' > ROW COLUMN+CELL > > > 0 row(s) > 6) hbase(main):006:0> flush 't1' > 7) hbase(main):007:0> major_compact 't1' > After that, go to mobdir, remove the _del file, this is to simulate the case > that mob minor compaction does not the _del file. Right now, the cell in > normal region is gone after the major compaction. > 8) hbase(main):008:0> put 't1', 'r2', 'f1:q1', '' > > > 9) hbase(main):009:0> flush 't1' > 10) hbase(main):010:0> scan 't1' > ROW COLUMN+CELL > > > r2 column=f1:q1, > timestamp=1480451201393, value= > > 1 row(s) > 11) hbase(main):011:0> compact 't1', 'f1', 'MOB' > 12) hbase(main):012:0> scan 't1' > ROW COLUMN+CELL > > > r1 column=f1:q1, > timestamp=1480450987725, value= > > r2 column=f1:q1, > timestamp=1480451201393, value= > > 2 row(s) > The deleted "r1" comes back. The reason is that mob minor compaction does not > include _del files so it generates references for the deleted cell. > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15704) Refactoring: Move HFileArchiver from backup to its own package
[ https://issues.apache.org/jira/browse/HBASE-15704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Rodionov updated HBASE-15704: -- Attachment: HBASE-15704-v3.patch v3. Removes example package and renames (moves) HFileArchiver. cc: [~enis] > Refactoring: Move HFileArchiver from backup to its own package > -- > > Key: HBASE-15704 > URL: https://issues.apache.org/jira/browse/HBASE-15704 > Project: HBase > Issue Type: Task >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Fix For: 2.0.0 > > Attachments: HBASE-15704-v2.patch, HBASE-15704-v3.patch > > > This class is in backup package (as well as backup/examples classes) but is > not backup - related. Move examples classes to hbase-examples package. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-17196) deleted mob cell can come back after major compaction and minor mob compaction
huaxiang sun created HBASE-17196: Summary: deleted mob cell can come back after major compaction and minor mob compaction Key: HBASE-17196 URL: https://issues.apache.org/jira/browse/HBASE-17196 Project: HBase Issue Type: Bug Components: mob Affects Versions: 2.0.0 Reporter: huaxiang sun Assignee: huaxiang sun In the following case, the deleted mob cell can come back. {code} 1) hbase(main):001:0> create 't1', {NAME => 'f1', IS_MOB => true, MOB_THRESHOLD => 10} 2) hbase(main):002:0> put 't1', 'r1', 'f1:q1', '' 3) hbase(main):003:0> flush 't1' 4) hbase(main):004:0> deleteall 't1', 'r1' 5) hbase(main):005:0> scan 't1' ROW COLUMN+CELL 0 row(s) 6) hbase(main):006:0> flush 't1' 7) hbase(main):007:0> major_compact 't1' After that, go to mobdir and remove the _del file; this simulates the case where mob minor compaction does not include the _del file. Right now, the cell in the normal region is gone after the major compaction. 8) hbase(main):008:0> put 't1', 'r2', 'f1:q1', '' 9) hbase(main):009:0> flush 't1' 10) hbase(main):010:0> scan 't1' ROW COLUMN+CELL r2 column=f1:q1, timestamp=1480451201393, value= 1 row(s) 11) hbase(main):011:0> compact 't1', 'f1', 'MOB' 12) hbase(main):012:0> scan 't1' ROW COLUMN+CELL r1 column=f1:q1, timestamp=1480450987725, value= r2 column=f1:q1, timestamp=1480451201393, value= 2 row(s) The deleted "r1" comes back. The reason is that mob minor compaction does not include _del files, so it generates references for the deleted cell. {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16904) remove directory layout / fs references from snapshots
[ https://issues.apache.org/jira/browse/HBASE-16904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15706658#comment-15706658 ] Hadoop QA commented on HBASE-16904: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 58s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 28 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 44s {color} | {color:green} hbase-14439 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 51s {color} | {color:green} hbase-14439 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 27s {color} | {color:green} hbase-14439 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 20s {color} | {color:green} hbase-14439 passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s {color} | {color:blue} Skipped patched modules with no Java source: . {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 39s {color} | {color:red} hbase-server in hbase-14439 has 5 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 0s {color} | {color:green} hbase-14439 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 29s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 52s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 52s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 29s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 19s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 24m 31s {color} | {color:green} Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 1m 22s {color} | {color:green} the patch passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s {color} | {color:blue} Skipped patched modules with no Java source: . 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 49s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 26s {color} | {color:red} hbase-server generated 14 new + 8 unchanged - 0 fixed = 22 total (was 8) {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 36s {color} | {color:red} root generated 14 new + 27 unchanged - 0 fixed = 41 total (was 27) {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 7s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 47s {color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 97m 53s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.regionserver.TestSplitTransaction | | | hadoop.hbase.fs.legacy.snapshot.TestRestoreSnapshotHelper | | | hadoop.hbase.master.snapshot.TestSnapshotManager | | | hadoop.hbase.regionserver.TestDateTieredCompactionPolicyOverflow | | | hadoop.hbase.fs.legacy.snapshot.TestSnapshotHFileCleaner | | | hadoop.hbase.fs.legacy.snapshot.TestMobRestoreSnapshotHelper | | | hadoop.hbase.regionserver.TestDateTieredCompactionPolicy | | |
[jira] [Updated] (HBASE-17146) Reconsider rpc timeout calculation in backup restore operation
[ https://issues.apache.org/jira/browse/HBASE-17146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Rodionov updated HBASE-17146: -- Priority: Trivial (was: Major) > Reconsider rpc timeout calculation in backup restore operation > --- > > Key: HBASE-17146 > URL: https://issues.apache.org/jira/browse/HBASE-17146 > Project: HBase > Issue Type: Bug >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov >Priority: Trivial > Attachments: HBASE-17146-v1.patch > > > We calculate the rpc timeout by multiplying the number of regions by single_file_timeout > (60 sec). > For big tables this may become a very large timeout. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
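The concern in the description (regions × 60 s grows without bound) is the kind of thing a simple cap addresses. A hedged sketch with invented constant names and cap value; this is not the code under review:

```java
// Sketch of bounding a restore RPC timeout instead of letting it scale
// linearly with region count. All names and the 30-minute cap are
// illustrative choices, not values from the patch.
public class RestoreTimeoutSketch {
    static final long PER_FILE_TIMEOUT_MS = 60_000L;   // 60 s per file
    static final long MAX_TIMEOUT_MS = 30L * 60_000L;  // hard cap: 30 min

    static long rpcTimeoutMs(int regionCount) {
        long scaled = (long) regionCount * PER_FILE_TIMEOUT_MS;
        return Math.min(scaled, MAX_TIMEOUT_MS);       // never exceed the cap
    }
}
```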
[jira] [Updated] (HBASE-17147) Reduce logging in BackupLogCleaner
[ https://issues.apache.org/jira/browse/HBASE-17147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Rodionov updated HBASE-17147: -- Priority: Minor (was: Major) > Reduce logging in BackupLogCleaner > -- > > Key: HBASE-17147 > URL: https://issues.apache.org/jira/browse/HBASE-17147 > Project: HBase > Issue Type: Bug >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov >Priority: Minor > Attachments: HBASE-17147-v1.patch > > > Every minute log cleaner logs the following: > {quote} > 2016-11-21 11:23:09,565 DEBUG > [ve0524.halxg.cloudera.com,16000,1479750005565_ChoreService_1] > impl.BackupSystemTable: Has backup sessions from hbase:backup > 2016-11-21 11:23:09,567 DEBUG [hconnection-0x4e9c4023-shared-pool11-t85] > ipc.RpcConnection: Use SIMPLE authentication for service ClientService, > sasl=false > 2016-11-21 11:23:09,568 DEBUG [hconnection-0x4e9c4023-shared-pool11-t85] > ipc.NettyRpcConnection: Connecting to > ve0528.halxg.cloudera.com/10.17.240.22:16020 > 2016-11-21 11:23:09,575 DEBUG [hconnection-0x4e9c4023-shared-pool11-t86] > ipc.RpcConnection: Use SIMPLE authentication for service ClientService, > sasl=false > 2016-11-21 11:23:09,576 DEBUG [hconnection-0x4e9c4023-shared-pool11-t86] > ipc.NettyRpcConnection: Connecting to > ve0528.halxg.cloudera.com/10.17.240.22:16020 > 2016-11-21 11:23:09,579 DEBUG > [ve0524.halxg.cloudera.com,16000,1479750005565_ChoreService_1] > ipc.RpcConnection: Use SIMPLE authentication for service ClientService, > sasl=false > 2016-11-21 11:23:09,579 DEBUG > [ve0524.halxg.cloudera.com,16000,1479750005565_ChoreService_1] > ipc.NettyRpcConnection: Connecting to > ve0528.halxg.cloudera.com/10.17.240.22:16020 > 2016-11-21 11:23:09,581 DEBUG > [ve0524.halxg.cloudera.com,16000,1479750005565_ChoreService_1] > master.BackupLogCleaner: BackupLogCleaner has no backup sessions > 2016-11-21 11:23:09,760 DEBUG [Default-IPC-NioEventLoopGroup-1-51] > ipc.NettyRpcDuplexHandler: shutdown connection to > 
ve0528.halxg.cloudera.com/10.17.240.22:16020 because idle for a long time > 2016-11-21 11:23:09,775 DEBUG [Default-IPC-NioEventLoopGroup-1-52] > ipc.NettyRpcDuplexHandler: shutdown connection to > ve0528.halxg.cloudera.com/10.17.240.22:16020 because idle for a long time > 2016-11-21 11:23:09,778 DEBUG [Default-IPC-NioEventLoopGroup-1-53] > ipc.NettyRpcDuplexHandler: shutdown connection to > ve0528.halxg.cloudera.com/10.17.240.22:16020 because idle for a long time > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
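Besides lowering the level of the cleaner's own messages (what the attached patch does), the per-connection chatter shown above can also be silenced from configuration. A log4j 1.x fragment, assuming the stock HBase log4j.properties layout; the logger names are taken directly from the excerpt above:

```properties
# Quiet per-connection DEBUG output from the RPC layer.
log4j.logger.org.apache.hadoop.hbase.ipc.RpcConnection=INFO
log4j.logger.org.apache.hadoop.hbase.ipc.NettyRpcConnection=INFO
log4j.logger.org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler=INFO
```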
[jira] [Resolved] (HBASE-17197) hfile does not work in 2.0
[ https://issues.apache.org/jira/browse/HBASE-17197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] huaxiang sun resolved HBASE-17197. -- Resolution: Invalid Missing -f for the file, do not know why I forgot that. > hfile does not work in 2.0 > -- > > Key: HBASE-17197 > URL: https://issues.apache.org/jira/browse/HBASE-17197 > Project: HBase > Issue Type: Bug > Components: HFile >Affects Versions: 2.0.0 >Reporter: huaxiang sun >Assignee: huaxiang sun > > I tried to use hfile in master branch, it does not print out kv pairs or meta > as it is supposed to be. > {code} > hsun-MBP:hbase-2.0.0-SNAPSHOT hsun$ hbase hfile > file:///Users/hsun/work/local-hbase-cluster/data/data/default/t1/755b5d7a44148492b7138c79c5e4f39f/f1/ > 53e9f9bc328f468b87831221de3a0c74 bdc6e1c4eea246a99e989e02d554cb03 > bf9275ac418d4d458904d81137e82683 > hsun-MBP:hbase-2.0.0-SNAPSHOT hsun$ hbase hfile > file:///Users/hsun/work/local-hbase-cluster/data/data/default/t1/755b5d7a44148492b7138c79c5e4f39f/f1/bf9275ac418d4d458904d81137e82683 > -m > 2016-11-29 12:25:22,019 WARN [main] util.NativeCodeLoader: Unable to load > native-hadoop library for your platform... using builtin-java classes where > applicable > hsun-MBP:hbase-2.0.0-SNAPSHOT hsun$ hbase hfile > file:///Users/hsun/work/local-hbase-cluster/data/data/default/t1/755b5d7a44148492b7138c79c5e4f39f/f1/bf9275ac418d4d458904d81137e82683 > -p > 2016-11-29 12:25:27,333 WARN [main] util.NativeCodeLoader: Unable to load > native-hadoop library for your platform... using builtin-java classes where > applicable > Scanned kv count -> 0 > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17012) Handle Offheap cells in CompressedKvEncoder
[ https://issues.apache.org/jira/browse/HBASE-17012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15706396#comment-15706396 ] Hudson commented on HBASE-17012: SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #2042 (See [https://builds.apache.org/job/HBase-Trunk_matrix/2042/]) HBASE-17012 Handle Offheap cells in CompressedKvEncoder (Ram) (ramkrishna: rev 7c43a23c07d2af1c236b3153ba932234c3a80d13) * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestRpcServer.java * (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/io/TagCompressionContext.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/SecureWALCellCodec.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestWALCellCodecWithCompression.java * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestWALReaderOnSecureWAL.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java * (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java * (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/io/util/Dictionary.java > Handle Offheap cells in CompressedKvEncoder > --- > > Key: HBASE-17012 > URL: https://issues.apache.org/jira/browse/HBASE-17012 > Project: HBase > Issue Type: Sub-task > Components: regionserver >Affects Versions: 2.0.0 >Reporter: Anoop Sam John >Assignee: ramkrishna.s.vasudevan > Fix For: 2.0.0 > > Attachments: HBASE-17012_1.patch, HBASE-17012_2.patch, > HBASE-17012_3.patch, HBASE-17012_4.patch, HBASE-17012_5.patch, > HBASE-17012_6.patch > > > When we deal with off heap cells we will end up copying Cell components on > heap > {code} > public void write(Cell cell) throws IOException { > . 
> write(cell.getRowArray(), cell.getRowOffset(), cell.getRowLength(), > compression.rowDict); > write(cell.getFamilyArray(), cell.getFamilyOffset(), > cell.getFamilyLength(), > compression.familyDict); > write(cell.getQualifierArray(), cell.getQualifierOffset(), > cell.getQualifierLength(), > compression.qualifierDict); > .. > out.write(cell.getValueArray(), cell.getValueOffset(), > cell.getValueLength()); > ... > {code} > We need to avoid this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
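The copy the description points at comes from materializing each cell component as a heap byte[]. The usual way around it is to branch on whether the backing buffer is on-heap, as in this self-contained sketch using plain java.nio (the helper name is invented; the real fix works on HBase's Cell types, not raw ByteBuffers):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.WritableByteChannel;

// Sketch: stream a possibly off-heap ByteBuffer to an OutputStream
// without first copying the whole value into an on-heap byte[].
public class OffheapWriteSketch {
    static void write(ByteBuffer buf, OutputStream out) throws IOException {
        if (buf.hasArray()) {
            // On-heap buffer: write the backing array directly, zero copies.
            out.write(buf.array(), buf.arrayOffset() + buf.position(), buf.remaining());
            buf.position(buf.limit());
        } else {
            // Direct (off-heap) buffer: a channel drains it in bounded
            // chunks rather than materializing the full value on heap.
            WritableByteChannel ch = Channels.newChannel(out);
            while (buf.hasRemaining()) {
                ch.write(buf);
            }
        }
    }
}
```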
[jira] [Updated] (HBASE-16904) remove directory layout / fs references from snapshots
[ https://issues.apache.org/jira/browse/HBASE-16904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-16904: Attachment: HBASE-16904-hbase-14439.v4.patch -04 - fix new findbugs warning. > remove directory layout / fs references from snapshots > -- > > Key: HBASE-16904 > URL: https://issues.apache.org/jira/browse/HBASE-16904 > Project: HBase > Issue Type: Sub-task > Components: Filesystem Integration >Reporter: Sean Busbey >Assignee: Umesh Agashe > Attachments: HBASE-16904-hbase-14439.v1.patch, > HBASE-16904-hbase-14439.v2.patch, HBASE-16904-hbase-14439.v3.patch, > HBASE-16904-hbase-14439.v4.patch > > > ensure snapshot code works through the MasterStorage / RegionStorage APIs and > not directly on backing filesystem. > {code} > hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/ExportSnapshot.java > hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/RestoreSnapshotHelper.java > hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotDescriptionUtils.java > hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotInfo.java > hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotManifest.java > hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotManifestV1.java > hbase-server/src/main/java/org/apache/hadoop/hbase/snapshot/SnapshotManifestV2.java > hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/DisabledTableSnapshotHandler.java > hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/SnapshotFileCache.java > hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/SnapshotHFileCleaner.java > hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/SnapshotManager.java > hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/TakeSnapshotHandler.java > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-17197) hfile does not work in 2.0
huaxiang sun created HBASE-17197: Summary: hfile does not work in 2.0 Key: HBASE-17197 URL: https://issues.apache.org/jira/browse/HBASE-17197 Project: HBase Issue Type: Bug Components: HFile Affects Versions: 2.0.0 Reporter: huaxiang sun Assignee: huaxiang sun I tried to use hfile in master branch, it does not print out kv pairs or meta as it is supposed to be. {code} hsun-MBP:hbase-2.0.0-SNAPSHOT hsun$ hbase hfile file:///Users/hsun/work/local-hbase-cluster/data/data/default/t1/755b5d7a44148492b7138c79c5e4f39f/f1/ 53e9f9bc328f468b87831221de3a0c74 bdc6e1c4eea246a99e989e02d554cb03 bf9275ac418d4d458904d81137e82683 hsun-MBP:hbase-2.0.0-SNAPSHOT hsun$ hbase hfile file:///Users/hsun/work/local-hbase-cluster/data/data/default/t1/755b5d7a44148492b7138c79c5e4f39f/f1/bf9275ac418d4d458904d81137e82683 -m 2016-11-29 12:25:22,019 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable hsun-MBP:hbase-2.0.0-SNAPSHOT hsun$ hbase hfile file:///Users/hsun/work/local-hbase-cluster/data/data/default/t1/755b5d7a44148492b7138c79c5e4f39f/f1/bf9275ac418d4d458904d81137e82683 -p 2016-11-29 12:25:27,333 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable Scanned kv count -> 0 {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17155) Unify table list output across backup/ set commands
[ https://issues.apache.org/jira/browse/HBASE-17155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15706315#comment-15706315 ] Hadoop QA commented on HBASE-17155: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 6s {color} | {color:red} HBASE-17155 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12840912/HBASE-17155-v1.patch | | JIRA Issue | HBASE-17155 | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/4688/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > Unify table list output across backup/ set commands > --- > > Key: HBASE-17155 > URL: https://issues.apache.org/jira/browse/HBASE-17155 > Project: HBase > Issue Type: Bug >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov >Priority: Minor > Attachments: HBASE-17155-v1.patch > > > Would be good to unify the output format of table list: x={ycsb,x_1} across > all backup command line tools -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17155) Unify table list output across backup/ set commands
[ https://issues.apache.org/jira/browse/HBASE-17155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15706374#comment-15706374 ] Enis Soztutar commented on HBASE-17155: --- +1. Do you want me to commit to the branch? > Unify table list output across backup/ set commands > --- > > Key: HBASE-17155 > URL: https://issues.apache.org/jira/browse/HBASE-17155 > Project: HBase > Issue Type: Bug >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov >Priority: Minor > Attachments: HBASE-17155-v1.patch > > > Would be good to unify the output format of table list: x={ycsb,x_1} across > all backup command line tools -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17155) Unify table list output across backup/ set commands
[ https://issues.apache.org/jira/browse/HBASE-17155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Rodionov updated HBASE-17155: -- Description: Would be good to unify the output format of table list: x={ycsb,x_1} across all backup command line tools (was: Would be good to unify the output format of table list: x={ycsb,x_1} across all command line tools) > Unify table list output across backup/ set commands > --- > > Key: HBASE-17155 > URL: https://issues.apache.org/jira/browse/HBASE-17155 > Project: HBase > Issue Type: Bug >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov >Priority: Minor > > Would be good to unify the output format of table list: x={ycsb,x_1} across > all backup command line tools -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17155) Unify table list output across backup/ set commands
[ https://issues.apache.org/jira/browse/HBASE-17155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Rodionov updated HBASE-17155: -- Status: Patch Available (was: Open) > Unify table list output across backup/ set commands > --- > > Key: HBASE-17155 > URL: https://issues.apache.org/jira/browse/HBASE-17155 > Project: HBase > Issue Type: Bug >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov >Priority: Minor > Attachments: HBASE-17155-v1.patch > > > Would be good to unify the output format of table list: x={ycsb,x_1} across > all backup command line tools -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17155) Unify table list output across backup/ set commands
[ https://issues.apache.org/jira/browse/HBASE-17155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Rodionov updated HBASE-17155: -- Attachment: HBASE-17155-v1.patch Small fix in BackupInfo. v1 cc: [~enis], [~tedyu] > Unify table list output across backup/ set commands > --- > > Key: HBASE-17155 > URL: https://issues.apache.org/jira/browse/HBASE-17155 > Project: HBase > Issue Type: Bug >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov >Priority: Minor > Attachments: HBASE-17155-v1.patch > > > Would be good to unify the output format of table list: x={ycsb,x_1} across > all backup command line tools -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17187) DoNotRetryExceptions from coprocessors should bubble up to the application
[ https://issues.apache.org/jira/browse/HBASE-17187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15706368#comment-15706368 ] Enis Soztutar commented on HBASE-17187: --- bq. So addition of this ... means we will not throw back ScannerResetException now? We will throw the exception as it is if it is already a DNRIOE. Otherwise we throw the exception as ScannerReset or UnknownScanner below. bq. So which all IO exception will come under and converted to ScannerResetException now? All of the IOExceptions coming from deeper layers are converted to ScannerResetExceptions. The previous behavior, before HBASE-16604, was that we would throw it back to the Scanner, and the scan RPC (callable) would retry when getting this exception. The scanner was not closed, leaving the scan state on the server possibly dirty. With HBASE-16604 and this patch, we are making it so that upon getting an IOE, we throw ScannerReset, which closes the scanner, and the RPC is not retried (Callable); however, the client scanner will still re-open another scanner and continue from wherever we left off. So end-to-end behavior is not changed, I think; however, we are retrying at a different layer (RPC retry versus re-opening the region scanner). > DoNotRetryExceptions from coprocessors should bubble up to the application > -- > > Key: HBASE-17187 > URL: https://issues.apache.org/jira/browse/HBASE-17187 > Project: HBase > Issue Type: Bug >Reporter: Enis Soztutar >Assignee: Enis Soztutar > Attachments: hbase-17187_v1.patch > > > In HBASE-16604, we fixed a case where scanner retries were causing the scan to > miss some data in case the scanner is left with a dirty state (like a > half-seeked KVHeap). > The patch introduced a minor compatibility issue, because now if a > coprocessor throws DNRIOE, we still retry the ClientScanner indefinitely. > The test {{ServerExceptionIT}} in Phoenix is failing because of this with > HBASE-16604. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
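The rethrow-if-DNRIOE-otherwise-wrap rule described in the comment can be sketched roughly as follows. This is an illustrative stand-in, not the actual HBase server code: the class and method names (ScannerExceptionPolicy, translate) are hypothetical, and the stubbed exception types stand in for HBase's real DoNotRetryIOException and ScannerResetException.

```java
import java.io.IOException;

// Hypothetical sketch of the exception-translation rule discussed above:
// a DoNotRetryIOException (stubbed here) passes through untouched, while
// any other IOException from deeper layers is wrapped so the scanner is
// closed and the same RPC is not retried.
class ScannerExceptionPolicy {
  static class DoNotRetryIOException extends IOException {
    DoNotRetryIOException(String msg) { super(msg); }
  }
  static class ScannerResetException extends DoNotRetryIOException {
    ScannerResetException(String msg, Throwable cause) {
      super(msg);
      initCause(cause);
    }
  }

  static IOException translate(IOException e) {
    if (e instanceof DoNotRetryIOException) {
      return e; // e.g. an exception thrown by a coprocessor bubbles up as-is
    }
    // Otherwise close the scanner on the server side and make the client
    // re-open a fresh scanner instead of retrying the same RPC.
    return new ScannerResetException("scanner state may be dirty", e);
  }
}
```

Under this rule a coprocessor's DNRIOE reaches the application unchanged, while any other IOException becomes a ScannerResetException, so the retry happens by re-opening the region scanner rather than at the RPC layer.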
[jira] [Commented] (HBASE-17194) Assign the new region to the idle server after splitting
[ https://issues.apache.org/jira/browse/HBASE-17194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15706392#comment-15706392 ] Hadoop QA commented on HBASE-17194: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 9s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 25s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 42s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 37s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 32s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 22s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | 
{color:green} 1m 22s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 26s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 28m 19s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 53s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 91m 43s {color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 139m 39s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12840897/HBASE-17194.v0.patch | | JIRA Issue | HBASE-17194 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux fab4470f1f60 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 7c43a23 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/4686/testReport/ | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/4686/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > Assign the new region to the idle server after splitting > > > Key: HBASE-17194 > URL: https://issues.apache.org/jira/browse/HBASE-17194 > Project: HBase > Issue Type: Improvement >Affects Versions: 2.0.0 >Reporter: ChiaPing Tsai >Assignee: ChiaPing Tsai >Priority: Minor > Fix For: 2.0.0 > > Attachments:
[jira] [Commented] (HBASE-17192) remove use of scala-tools.org from pom
[ https://issues.apache.org/jira/browse/HBASE-17192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15706397#comment-15706397 ] Hudson commented on HBASE-17192: SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #2042 (See [https://builds.apache.org/job/HBase-Trunk_matrix/2042/]) HBASE-17192 remove use of scala-tools.org as repo. (busbey: rev e5dad24a9cb35d831a0ee1cf0eeb14b3719ab7ef) * (edit) pom.xml > remove use of scala-tools.org from pom > -- > > Key: HBASE-17192 > URL: https://issues.apache.org/jira/browse/HBASE-17192 > Project: HBase > Issue Type: Bug > Components: spark, website >Affects Versions: 2.0.0 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Blocker > Fix For: 2.0.0 > > Attachments: HBASE-17192.1.patch, HBASE-17192.1.test.patch > > > our pom makes use of scala-tools.org for a repository. That domain currently > issues redirects for all URLs; for maven coordinates those redirects lead to > 'not found' and the 'permantenly moved' HTML gets saved. this corrupts the > local maven repository in a way that cause the mvn:site goal to give an > opaque error: > {code} > [INFO] > > [INFO] BUILD FAILURE > [INFO] > > [INFO] Total time: 01:46 min > [INFO] Finished at: 2016-11-28T14:17:10+00:00 > [INFO] Final Memory: 292M/6583M > [INFO] > > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project > hbase: Execution default-site of goal > org.apache.maven.plugins:maven-site-plugin:3.4:site failed: For artifact > {null:null:null:jar}: The groupId cannot be empty. -> [Help 1] > [ERROR] > {code} > Rerunning in debug mode with {{mvn -X}} gives no additional useful > information. > All artifacts from scala-tools.org are now found in maven central. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-17198) FN updates during region merge (follow up to Procedure v2 merge)
Thiruvel Thirumoolan created HBASE-17198: Summary: FN updates during region merge (follow up to Procedure v2 merge) Key: HBASE-17198 URL: https://issues.apache.org/jira/browse/HBASE-17198 Project: HBase Issue Type: Sub-task Reporter: Thiruvel Thirumoolan Assignee: Thiruvel Thirumoolan As mentioned in https://reviews.apache.org/r/53242/ (HBASE-16941), since the procedure v2 merge changes are in development, there is a follow up optimization/cleanup that can be done for favored nodes during merge. This jira will be taken up once HBASE-16119 is complete. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-17192) remove use of scala-tools.org from pom
Sean Busbey created HBASE-17192: --- Summary: remove use of scala-tools.org from pom Key: HBASE-17192 URL: https://issues.apache.org/jira/browse/HBASE-17192 Project: HBase Issue Type: Bug Components: spark, website Affects Versions: 2.0.0 Reporter: Sean Busbey Assignee: Sean Busbey Priority: Blocker Fix For: 2.0.0 Our pom makes use of scala-tools.org for a repository. That domain currently issues redirects for all URLs; for maven coordinates those redirects lead to 'not found' and the 'permanently moved' HTML gets saved. This corrupts the local maven repository in a way that causes the mvn:site goal to give an opaque error: {code} [INFO] [INFO] BUILD FAILURE [INFO] [INFO] Total time: 01:46 min [INFO] Finished at: 2016-11-28T14:17:10+00:00 [INFO] Final Memory: 292M/6583M [INFO] [ERROR] Failed to execute goal org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project hbase: Execution default-site of goal org.apache.maven.plugins:maven-site-plugin:3.4:site failed: For artifact {null:null:null:jar}: The groupId cannot be empty. -> [Help 1] [ERROR] {code} Rerunning in debug mode with {{mvn -X}} gives no additional useful information. All artifacts from scala-tools.org are now found in maven central. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17192) remove use of scala-tools.org from pom
[ https://issues.apache.org/jira/browse/HBASE-17192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704860#comment-15704860 ] Sean Busbey commented on HBASE-17192: - For the curious on how I found this error: # Searching for the odd {{null:null:null:jar}} string led to [a stack overflow question that suggested clearing the maven repository would fix|http://stackoverflow.com/questions/21183418/i-cant-solve-maven-building-error-failure]. It didn't in this case. It also led to another [stack overflow question that suggested a corrupt pom in the maven repository would give the same error|http://stackoverflow.com/questions/13648472/failure-in-maven-site-plugin-version-3] # After clearing my local repository so that only things downloaded in the {{mvn install && mvn site}} would be present, I looked at first lines that weren't xml declarations and one line stood out. {code} $ find ~/.m2/repository/ -name '*.pom' -exec head -n 1 {} \; | grep -v "https://blog.goodstuff.im/repo-releases/org/apache/maven/plugins/maven-assembly-plugin/2.4/maven-assembly-plugin-2.4.pom;>Moved Permanently. … {code} # Grepping for that moved string showed the impacted pom {code} $ find ~/.m2/repository/ -name '*.pom' -exec grep -l "Moved Permanently" {} \; /Users/busbey/.m2/repository//org/apache/maven/plugins/maven-assembly-plugin/2.4/maven-assembly-plugin-2.4.pom {code} # That's a core maven plugin, so it's an odd failure. Maven tracks what repository things came from: {code} $ cat /Users/busbey/.m2/repository//org/apache/maven/plugins/maven-assembly-plugin/2.4/_remote.repositories #NOTE: This is an Aether internal implementation file, its format can be changed without prior notice. #Tue Nov 29 03:23:40 CST 2016 maven-assembly-plugin-2.4.pom>scala-tools.org= {code} # That's not the right place to get the assembly plugin. 
Checking the domain confirms: {code} $ curl http://scala-tools.org/repo-releases/org/apache/maven/plugins/maven-assembly-plugin/2.4/maven-assembly-plugin-2.4.pom 302 Found Found The document has moved http://blog.goodstuff.im/repo-releases/org/apache/maven/plugins/maven-assembly-plugin/2.4/maven-assembly-plugin-2.4.pom;>here. Apache Server at scala-tools.org Port 8080 $ curl --location http://scala-tools.org/repo-releases/org/apache/maven/plugins/maven-assembly-plugin/2.4/maven-assembly-plugin-2.4.pom ... (html webpage that indicated 404 not found)... {code} # From what I can tell through old blog posts, scala-tools.org transitioned everything to sonatype in 2012. The domain was expected to stop working in that year, but I guess it must have just happened now. testing fix locally now. > remove use of scala-tools.org from pom > -- > > Key: HBASE-17192 > URL: https://issues.apache.org/jira/browse/HBASE-17192 > Project: HBase > Issue Type: Bug > Components: spark, website >Affects Versions: 2.0.0 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Blocker > Fix For: 2.0.0 > > > our pom makes use of scala-tools.org for a repository. That domain currently > issues redirects for all URLs; for maven coordinates those redirects lead to > 'not found' and the 'permantenly moved' HTML gets saved. this corrupts the > local maven repository in a way that cause the mvn:site goal to give an > opaque error: > {code} > [INFO] > > [INFO] BUILD FAILURE > [INFO] > > [INFO] Total time: 01:46 min > [INFO] Finished at: 2016-11-28T14:17:10+00:00 > [INFO] Final Memory: 292M/6583M > [INFO] > > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project > hbase: Execution default-site of goal > org.apache.maven.plugins:maven-site-plugin:3.4:site failed: For artifact > {null:null:null:jar}: The groupId cannot be empty. -> [Help 1] > [ERROR] > {code} > Rerunning in debug mode with {{mvn -X}} gives no additional useful > information. 
> All artifacts from scala-tools.org are now found in maven central. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
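The manual detection step in the walkthrough above (find .pom files whose first line is not an XML declaration, i.e. a saved "Moved Permanently" HTML page) could be automated with a small scan of the local repository. A minimal sketch, assuming a plain ~/.m2/repository layout; the class name CorruptPomFinder is made up for illustration and is not part of any Maven tooling:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.*;
import java.util.*;
import java.util.stream.*;

// Walk a local maven repository and report .pom files that do not start
// with XML, which is the symptom the comment above diagnoses by hand.
class CorruptPomFinder {
  static List<Path> find(Path repoRoot) throws IOException {
    try (Stream<Path> files = Files.walk(repoRoot)) {
      return files
          .filter(p -> p.toString().endsWith(".pom"))
          .filter(CorruptPomFinder::looksCorrupt)
          .collect(Collectors.toList());
    }
  }

  static boolean looksCorrupt(Path pom) {
    try {
      String first = Files.lines(pom).findFirst().orElse("");
      // Healthy poms start with an XML declaration or a <project> element;
      // a cached HTML redirect page does not.
      return !(first.startsWith("<?xml") || first.trim().startsWith("<project"));
    } catch (IOException | UncheckedIOException e) {
      return true; // an unreadable pom counts as suspect
    }
  }
}
```

Running `find` on `~/.m2/repository` and deleting the reported files (plus their `_remote.repositories` siblings) would force Maven to re-download them from a healthy repository.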
[jira] [Updated] (HBASE-17192) remove use of scala-tools.org from pom
[ https://issues.apache.org/jira/browse/HBASE-17192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-17192: Attachment: HBASE-17192.1.test.patch -01.test.patch - includes a trivial change to a scala file (as done in HBASE-15644) to get the scaladoc bit to run. > remove use of scala-tools.org from pom > -- > > Key: HBASE-17192 > URL: https://issues.apache.org/jira/browse/HBASE-17192 > Project: HBase > Issue Type: Bug > Components: spark, website >Affects Versions: 2.0.0 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Blocker > Fix For: 2.0.0 > > Attachments: HBASE-17192.1.patch, HBASE-17192.1.test.patch > > > our pom makes use of scala-tools.org for a repository. That domain currently > issues redirects for all URLs; for maven coordinates those redirects lead to > 'not found' and the 'permantenly moved' HTML gets saved. this corrupts the > local maven repository in a way that cause the mvn:site goal to give an > opaque error: > {code} > [INFO] > > [INFO] BUILD FAILURE > [INFO] > > [INFO] Total time: 01:46 min > [INFO] Finished at: 2016-11-28T14:17:10+00:00 > [INFO] Final Memory: 292M/6583M > [INFO] > > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project > hbase: Execution default-site of goal > org.apache.maven.plugins:maven-site-plugin:3.4:site failed: For artifact > {null:null:null:jar}: The groupId cannot be empty. -> [Help 1] > [ERROR] > {code} > Rerunning in debug mode with {{mvn -X}} gives no additional useful > information. > All artifacts from scala-tools.org are now found in maven central. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (HBASE-17110) Add an "Overall Strategy" option(balanced both on table level and server level) to SimpleLoadBalancer
[ https://issues.apache.org/jira/browse/HBASE-17110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704794#comment-15704794 ] Charlie Qiangeng Xu edited comment on HBASE-17110 at 11/29/16 11:07 AM: Checked the failed test "TestHRegionWithInMemoryFlush", it's unrelated to the patch, passed on my local build, should be fine was (Author: xharlie): Checked the failed test "TestHRegionWithInMemoryFlush", it's unrelated to the patch and on my local build, should be fine > Add an "Overall Strategy" option(balanced both on table level and server > level) to SimpleLoadBalancer > - > > Key: HBASE-17110 > URL: https://issues.apache.org/jira/browse/HBASE-17110 > Project: HBase > Issue Type: Improvement > Components: Balancer >Affects Versions: 2.0.0, 1.2.4 >Reporter: Charlie Qiangeng Xu >Assignee: Charlie Qiangeng Xu > Attachments: HBASE-17110-V2.patch, HBASE-17110-V3.patch, > HBASE-17110-V4.patch, HBASE-17110-V5.patch, HBASE-17110-V6.patch, > HBASE-17110-V7.patch, HBASE-17110-V8.patch, HBASE-17110.patch > > > This jira is about an enhancement of SimpleLoadBalancer. Here we introduce a > new strategy: "bytableOverall" which could be controlled by adding: > {noformat} > <property> > <name>hbase.master.loadbalance.bytableOverall</name> > <value>true</value> > </property> > {noformat} > We have been using the strategy on our largest cluster for several months. > It's proven to be very helpful and stable; especially, the result is quite > visible to the users. > Here is the reason why it's helpful: > When operating large scale clusters (our case), some companies still prefer to > use {{SimpleLoadBalancer}} due to its simplicity, quick balance plan > generation, etc. The current SimpleLoadBalancer has two modes: > 1. byTable, which only guarantees that the regions of one table could be > uniformly distributed. > 2. byCluster, which ignores the distribution within tables and balances the > regions all together. 
> If the pressures on different tables are different, the first byTable option > is the preferable one in most cases. Yet, this choice sacrifices the cluster-level > balance and would cause some servers to have significantly higher load, > e.g. 242 regions on server A but 417 regions on server B (real-world stats). > Consider this case: a cluster has 3 tables and 4 servers: > {noformat} > server A has 3 regions: table1:1, table2:1, table3:1 > server B has 3 regions: table1:2, table2:2, table3:2 > server C has 3 regions: table1:3, table2:3, table3:3 > server D has 0 regions. > {noformat} > From the byTable strategy's perspective, the cluster has already been > perfectly balanced on the table level. But a perfect status should be like: > {noformat} > server A has 2 regions: table2:1, table3:1 > server B has 2 regions: table1:2, table3:2 > server C has 3 regions: table1:3, table2:3, table3:3 > server D has 2 regions: table1:1, table2:2 > {noformat} > We can see the server loads change from 3,3,3,0 to 2,2,3,2, while table1, > table2 and table3 all stay balanced. > And this is what the new mode "byTableOverall" can achieve. > Two UTs have been added as well, and the last one demonstrates the advantage > of the new strategy. > Also, an onConfigurationChange method has been implemented to allow hot control > of the "slop" variable. > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
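The 3,3,3,0 versus 2,2,3,2 comparison in the description can be made concrete with a trivial helper that measures per-server skew (max minus min total regions per server). This is only an illustration of the metric the example appeals to, not the balancer's actual cost function; the class name LoadSpread is made up:

```java
import java.util.*;

// Measure cluster-level imbalance as the gap between the most-loaded and
// least-loaded server. A cluster can be perfectly balanced per table and
// still show a large spread here, which is the gap byTableOverall closes.
class LoadSpread {
  static int spread(Map<String, Integer> regionsPerServer) {
    int max = Collections.max(regionsPerServer.values());
    int min = Collections.min(regionsPerServer.values());
    return max - min;
  }
}
```

For the worked example above, the byTable assignment {A=3, B=3, C=3, D=0} has spread 3, while the byTableOverall-style assignment {A=2, B=2, C=3, D=2} has spread 1, with every table still evenly distributed.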
[jira] [Updated] (HBASE-17170) HBase is also retrying DoNotRetryIOException because of class loader differences.
[ https://issues.apache.org/jira/browse/HBASE-17170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ankit Singhal updated HBASE-17170: -- Status: Patch Available (was: Open) > HBase is also retrying DoNotRetryIOException because of class loader > differences. > - > > Key: HBASE-17170 > URL: https://issues.apache.org/jira/browse/HBASE-17170 > Project: HBase > Issue Type: Bug >Reporter: Ankit Singhal >Assignee: Ankit Singhal > Attachments: HBASE-17170.master.001.patch > > > The class loader used by the API exposed by Hadoop and the context class loader > used by RunJar (bin/hadoop jar phoenix-client.jar …. ) are different, so > classes loaded from the jar are not visible to the class loader used by the > API. > {code} > 16/04/26 21:18:00 INFO client.RpcRetryingCaller: Call exception, tries=32, > retries=35, started=491541 ms ago, cancelled=false, msg= > 16/04/26 21:18:21 INFO client.RpcRetryingCaller: Call exception, tries=33, > retries=35, started=511747 ms ago, cancelled=false, msg= > 16/04/26 21:18:41 INFO client.RpcRetryingCaller: Call exception, tries=34, > retries=35, started=531820 ms ago, cancelled=false, msg= > Exception in thread "main" org.apache.phoenix.exception.PhoenixIOException: > Failed after attempts=35, exceptions: > Tue Apr 26 21:09:49 UTC 2016, > RpcRetryingCaller{globalStartTime=1461704989282, pause=100, retries=35}, > org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.NamespaceExistException): > org.apache.hadoop.hbase.NamespaceExistException: SYSTEM > at > org.apache.hadoop.hbase.master.TableNamespaceManager.create(TableNamespaceManager.java:156) > at > org.apache.hadoop.hbase.master.TableNamespaceManager.create(TableNamespaceManager.java:131) > at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:2553) > at > org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:447) > at > 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:58043) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2115) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:102) > {code} > The actual problem is stated in the comment below > https://issues.apache.org/jira/browse/PHOENIX-3495?focusedCommentId=15677081=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15677081 > If we are not loading hbase classes from Hadoop classpath(from where hadoop > jars are getting loaded), then the RemoteException will not get unwrapped > because of ClassNotFoundException and the client will keep on retrying even > if the cause of exception is DoNotRetryIOException. > RunJar#main() context class loader. > {code} > ClassLoader loader = createClassLoader(file, workDir); > Thread.currentThread().setContextClassLoader(loader); > Class mainClass = Class.forName(mainClassName, true, loader); > Method main = mainClass.getMethod("main", new Class[] { > Array.newInstance(String.class, 0).getClass() > }); > HBase classes can be loaded from jar(phoenix-client.jar):- > hadoop --config /etc/hbase/conf/ jar > ~/git/apache/phoenix/phoenix-client/target/phoenix-4.9.0-HBase-1.2-client.jar > org.apache.phoenix.mapreduce.CsvBulkLoadTool --table GIGANTIC_TABLE --input > /tmp/b.csv --zookeeper localhost:2181 > {code} > API(using current class loader). 
> {code} > public class RpcRetryingCaller { > public IOException unwrapRemoteException() { > try { > Class realClass = Class.forName(getClassName()); > return instantiateException(realClass.asSubclass(IOException.class)); > } catch(Exception e) { > // cannot instantiate the original exception, just return this > } > return this; > } > {code} > *Possible solution:-* > We can create our own HBaseRemoteWithExtrasException (an extension of > RemoteWithExtrasException) so that the default class loader will be the one from > which the HBase classes are loaded, and extend unwrapRemoteException() to > throw an exception if the unwrapping doesn't take place because of a CNF > exception? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
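The proposed fix, resolving the remote exception class with an explicitly chosen class loader and surfacing the failure rather than silently returning a retriable exception, might look roughly like the sketch below. All names here (RemoteExceptionUnwrapper, unwrap) are hypothetical stand-ins, not the actual patch code:

```java
import java.io.IOException;

// Sketch of the "possible solution" above: instead of Class.forName with
// the caller's default class loader (which fails under RunJar's context
// class loader), resolve the class against a loader passed in explicitly,
// e.g. the one that loaded the HBase classes. When resolution still fails,
// report it instead of quietly falling back to a generic retriable IOE.
class RemoteExceptionUnwrapper {
  static IOException unwrap(String className, String message, ClassLoader hbaseLoader) {
    try {
      Class<?> realClass = Class.forName(className, false, hbaseLoader);
      return realClass.asSubclass(IOException.class)
          .getConstructor(String.class)
          .newInstance(message);
    } catch (ReflectiveOperationException | ClassCastException e) {
      // Could not reconstruct the original type (e.g. CNF): make the
      // failure visible so a DoNotRetryIOException is not retried blindly.
      return new IOException("could not unwrap " + className + ": " + message, e);
    }
  }
}
```

With the class resolved against the right loader, a server-side DoNotRetryIOException is reconstructed as its real type, so the client's retry logic can recognize it and stop retrying.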
[jira] [Commented] (HBASE-17167) Pass mvcc to client when scan
[ https://issues.apache.org/jira/browse/HBASE-17167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704856#comment-15704856 ] Hadoop QA commented on HBASE-17167: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 4 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 31s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 9s {color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s {color} | {color:green} branch-1 passed with JDK v1.8.0_111 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 19s {color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 7m 4s {color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 41s {color} | {color:green} branch-1 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 18s {color} | {color:red} hbase-server in branch-1 has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s {color} | {color:green} branch-1 passed with JDK v1.8.0_111 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s {color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 39s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 17s {color} | {color:green} the patch passed with JDK v1.8.0_111 {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 17s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 17s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 38s {color} | {color:red} hbase-server in the patch failed with JDK v1.7.0_80. {color} | | {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 38s {color} | {color:red} hbase-server in the patch failed with JDK v1.7.0_80. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 38s {color} | {color:red} hbase-server in the patch failed with JDK v1.7.0_80. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 7m 23s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 35s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 0m 56s {color} | {color:red} The patch causes 16 errors with Hadoop v2.4.0. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 1m 52s {color} | {color:red} The patch causes 16 errors with Hadoop v2.4.1. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 2m 48s {color} | {color:red} The patch causes 16 errors with Hadoop v2.5.0. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 3m 44s {color} | {color:red} The patch causes 16 errors with Hadoop v2.5.1. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 4m 38s {color} | {color:red} The patch causes 16 errors with Hadoop v2.5.2. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 5m 36s {color} | {color:red} The patch causes 16 errors with Hadoop v2.6.1. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 6m 33s {color} | {color:red} The patch causes 16 errors with Hadoop v2.6.2. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 7m 28s {color} | {color:red} The patch causes 16 errors with Hadoop v2.6.3. {color} | | {color:red}-1{color} |
[jira] [Updated] (HBASE-17167) Pass mvcc to client when scan
[ https://issues.apache.org/jira/browse/HBASE-17167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-17167: -- Attachment: HBASE-17167-branch-1-v1.patch Remove the usage of java8 API... > Pass mvcc to client when scan > - > > Key: HBASE-17167 > URL: https://issues.apache.org/jira/browse/HBASE-17167 > Project: HBase > Issue Type: Sub-task > Components: Client, scan >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 2.0.0, 1.4.0 > > Attachments: HBASE-17167-branch-1-v1.patch, > HBASE-17167-branch-1.patch, HBASE-17167-v1.patch, HBASE-17167-v2.patch, > HBASE-17167-v3.patch, HBASE-17167-v4.patch, HBASE-17167-v5.patch, > HBASE-17167.patch > > > In the current implementation, if we use batch or allowPartial when scanning, > row-level atomicity cannot be guaranteed if we need to restart a scan > in the middle of a record due to a region move or something else. > We can return the mvcc used to open the scanner to the client, and the client > could use this mvcc to restart the scan to get row-level atomicity. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17181) Let HBase thrift2 support TThreadedSelectorServer
[ https://issues.apache.org/jira/browse/HBASE-17181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian Yi updated HBASE-17181: Attachment: HBASE-17181-V4.patch Create patch by dev-support/submit-patch.py > Let HBase thrift2 support TThreadedSelectorServer > - > > Key: HBASE-17181 > URL: https://issues.apache.org/jira/browse/HBASE-17181 > Project: HBase > Issue Type: New Feature > Components: Thrift >Affects Versions: 1.2.3 >Reporter: Jian Yi >Priority: Minor > Labels: features > Fix For: 1.2.3 > > Attachments: HBASE-17181-V1.patch, HBASE-17181-V2.patch, > HBASE-17181-V3.patch, HBASE-17181-V4.patch, ThriftServer.java > > Original Estimate: 2h > Remaining Estimate: 2h > > Add TThreadedSelectorServer for HBase Thrift2 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17190) eclipse compile master error
[ https://issues.apache.org/jira/browse/HBASE-17190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian Yi updated HBASE-17190: Description: [INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ hbase-thrift --- [WARNING] The POM for org.apache.maven.shared:maven-shared-utils:jar:0.1 is invalid, transitive dependencies (if any) will not be available, enable debug logging for more details [WARNING] The POM for org.apache.maven.shared:maven-shared-incremental:jar:1.1 is invalid, transitive dependencies (if any) will not be available, enable debug logging for more details [WARNING] The POM for org.codehaus.plexus:plexus-compiler-api:jar:2.4 is invalid, transitive dependencies (if any) will not be available, enable debug logging for more details [WARNING] The POM for org.codehaus.plexus:plexus-compiler-manager:jar:2.4 is invalid, transitive dependencies (if any) will not be available, enable debug logging for more details [WARNING] The POM for org.codehaus.plexus:plexus-compiler-javac:jar:2.4 is invalid, transitive dependencies (if any) will not be available, enable debug logging for more details [WARNING] Error injecting: org.apache.maven.plugin.compiler.CompilerMojo java.lang.NoClassDefFoundError: org/codehaus/plexus/compiler/util/scan/mapping/SuffixMapping at java.lang.Class.getDeclaredConstructors0(Native Method) at java.lang.Class.privateGetDeclaredConstructors(Unknown Source) at java.lang.Class.getDeclaredConstructors(Unknown Source) at com.google.inject.spi.InjectionPoint.forConstructorOf(InjectionPoint.java:245) at com.google.inject.internal.ConstructorBindingImpl.create(ConstructorBindingImpl.java:99) at com.google.inject.internal.InjectorImpl.createUninitializedBinding(InjectorImpl.java:658) at com.google.inject.internal.InjectorImpl.createJustInTimeBinding(InjectorImpl.java:882) at com.google.inject.internal.InjectorImpl.createJustInTimeBindingRecursive(InjectorImpl.java:805) at 
com.google.inject.internal.InjectorImpl.getJustInTimeBinding(InjectorImpl.java:282) at com.google.inject.internal.InjectorImpl.getBindingOrThrow(InjectorImpl.java:214) at com.google.inject.internal.InjectorImpl.getProviderOrThrow(InjectorImpl.java:1006) at com.google.inject.internal.InjectorImpl.getProvider(InjectorImpl.java:1038) at com.google.inject.internal.InjectorImpl.getProvider(InjectorImpl.java:1001) at com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1051) at org.eclipse.sisu.space.AbstractDeferredClass.get(AbstractDeferredClass.java:48) at com.google.inject.internal.ProviderInternalFactory.provision(ProviderInternalFactory.java:81) at com.google.inject.internal.InternalFactoryToInitializableAdapter.provision(InternalFactoryToInitializableAdapter.java:53) at com.google.inject.internal.ProviderInternalFactory$1.call(ProviderInternalFactory.java:65) at com.google.inject.internal.ProvisionListenerStackCallback$Provision.provision(ProvisionListenerStackCallback.java:115) at org.eclipse.sisu.bean.BeanScheduler$Activator.onProvision(BeanScheduler.java:176) at com.google.inject.internal.ProvisionListenerStackCallback$Provision.provision(ProvisionListenerStackCallback.java:126) at com.google.inject.internal.ProvisionListenerStackCallback.provision(ProvisionListenerStackCallback.java:68) at com.google.inject.internal.ProviderInternalFactory.circularGet(ProviderInternalFactory.java:63) at com.google.inject.internal.InternalFactoryToInitializableAdapter.get(InternalFactoryToInitializableAdapter.java:45) at com.google.inject.internal.InjectorImpl$2$1.call(InjectorImpl.java:1016) at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1092) at com.google.inject.internal.InjectorImpl$2.get(InjectorImpl.java:1012) at org.eclipse.sisu.inject.Guice4$1.get(Guice4.java:162) at org.eclipse.sisu.inject.LazyBeanEntry.getValue(LazyBeanEntry.java:81) at org.eclipse.sisu.plexus.LazyPlexusBean.getValue(LazyPlexusBean.java:51) at 
org.codehaus.plexus.DefaultPlexusContainer.lookup(DefaultPlexusContainer.java:263) at org.codehaus.plexus.DefaultPlexusContainer.lookup(DefaultPlexusContainer.java:255) at org.apache.maven.plugin.internal.DefaultMavenPluginManager.getConfiguredMojo(DefaultMavenPluginManager.java:517) at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:121) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:207) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145) at
[jira] [Created] (HBASE-17193) Add Metrics for offheap memstore flushes
ramkrishna.s.vasudevan created HBASE-17193: -- Summary: Add Metrics for offheap memstore flushes Key: HBASE-17193 URL: https://issues.apache.org/jira/browse/HBASE-17193 Project: HBase Issue Type: Sub-task Affects Versions: 2.0.0 Reporter: ramkrishna.s.vasudevan Assignee: ramkrishna.s.vasudevan Priority: Minor Like we have MetricsHeapMemoryManager, we need to add a MetricsOffheapMemoryManager if we need metrics for offheap flushes. I think this may even warrant an OffheapMemoryManager. Will raise a task for that later after discussion. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17167) Pass mvcc to client when scan
[ https://issues.apache.org/jira/browse/HBASE-17167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704987#comment-15704987 ] Hadoop QA commented on HBASE-17167: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 50s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 45s {color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s {color} | {color:green} branch-1 passed with JDK v1.8.0_111 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s {color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 6m 46s {color} | {color:green} branch-1 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 36s {color} | {color:green} branch-1 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 55s {color} | {color:red} hbase-server in branch-1 has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s {color} | {color:green} branch-1 passed with JDK v1.8.0_111 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s {color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 15s {color} | {color:red} hbase-client in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 24s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 12s {color} | {color:red} hbase-client in the patch failed with JDK v1.8.0_111. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 20s {color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_111. {color} | | {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 12s {color} | {color:red} hbase-client in the patch failed with JDK v1.8.0_111. {color} | | {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 20s {color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_111. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 12s {color} | {color:red} hbase-client in the patch failed with JDK v1.8.0_111. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 20s {color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_111. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 15s {color} | {color:red} hbase-client in the patch failed with JDK v1.7.0_80. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 24s {color} | {color:red} hbase-server in the patch failed with JDK v1.7.0_80. 
{color} | | {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 15s {color} | {color:red} hbase-client in the patch failed with JDK v1.7.0_80. {color} | | {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 24s {color} | {color:red} hbase-server in the patch failed with JDK v1.7.0_80. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 15s {color} | {color:red} hbase-client in the patch failed with JDK v1.7.0_80. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 24s {color} | {color:red} hbase-server in the patch failed with JDK v1.7.0_80. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 6m 51s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 35s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 0m 37s {color} | {color:red} The patch causes 24 errors with
[jira] [Commented] (HBASE-17189) TestMasterObserver#wasModifyTableActionCalled uses wrong variables
[ https://issues.apache.org/jira/browse/HBASE-17189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15705027#comment-15705027 ] Hudson commented on HBASE-17189: SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #2040 (See [https://builds.apache.org/job/HBase-Trunk_matrix/2040/]) HBASE-17189 TestMasterObserver#wasModifyTableActionCalled uses wrong (syuanjiangdev: rev d87df9209a5d0ec035b8f6ddbf437193dcbb8515) * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestMasterObserver.java > TestMasterObserver#wasModifyTableActionCalled uses wrong variables > -- > > Key: HBASE-17189 > URL: https://issues.apache.org/jira/browse/HBASE-17189 > Project: HBase > Issue Type: Test > Components: test >Reporter: Stephen Yuan Jiang >Assignee: Stephen Yuan Jiang >Priority: Minor > Fix For: 2.0.0, 1.4.0 > > Attachments: HBASE-17189.v1-master.patch > > > TestMasterObserver#wasModifyTableActionCalled() and > TestMasterObserver#wasModifyTableActionCalledOnly() uses > {{preModifyColumnFamilyActionCalled}} and > {{postCompletedModifyColumnFamilyActionCalled}} members, which are wrong. > Instead it should use {{preModifyTableActionCalled}} and > {{postCompletedModifyTableActionCalled}}. This probably was caused by > copy-and-paste mistake. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16302) age of last shipped op and age of last applied op should be histograms
[ https://issues.apache.org/jira/browse/HBASE-16302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15705026#comment-15705026 ] Hudson commented on HBASE-16302: SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #2040 (See [https://builds.apache.org/job/HBase-Trunk_matrix/2040/]) HBASE-16302 age of last shipped op and age of last applied op should be (ashishsinghi: rev 7bcbac91a2385cd3009bcc277bb0f4d94084c926) * (edit) hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationSourceSourceImpl.java * (edit) hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/MutableHistogram.java * (edit) hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationGlobalSourceSource.java * (edit) hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationSinkSourceImpl.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsSource.java > age of last shipped op and age of last applied op should be histograms > -- > > Key: HBASE-16302 > URL: https://issues.apache.org/jira/browse/HBASE-16302 > Project: HBase > Issue Type: Improvement > Components: Replication >Reporter: Ashu Pachauri >Assignee: Ashu Pachauri > Fix For: 2.0.0, 1.4.0 > > Attachments: HBASE-16302.patch.v0.patch > > > Replication exports metric ageOfLastShippedOp as an indication of how much > replication is lagging. But, with multiwal enabled, it's not representative > because replication could be lagging for a long time for one wal group > (something wrong with a particular region) while being fine for others. The > ageOfLastShippedOp becomes a useless metric for alerting in such a case. > Also, since there is no mapping between individual replication sources and > replication sinks, the age of last applied op can be a highly spiky metric if > only certain replication sources are lagging. 
> We should use histograms for these metrics and use maximum value of this > histogram to report replication lag when building stats. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
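The proposal in HBASE-16302 — record per-source ages in a histogram and report the maximum as the replication lag — can be illustrated with a toy sketch. This is not the MutableHistogram implementation the patch edits; it only demonstrates why the maximum, unlike a single last-shipped value, surfaces the worst-lagging wal group.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Toy illustration of "use histograms and report the max as replication lag".
public class LagHistogram {
    private final List<Long> samples = new ArrayList<>();

    // Each replication source records its ageOfLastShippedOp here.
    public synchronized void update(long ageMillis) {
        samples.add(ageMillis);
    }

    // The maximum of the histogram: one badly lagging wal group dominates the
    // reported lag, instead of being hidden by healthy groups.
    public synchronized long maxLag() {
        return samples.isEmpty() ? 0L : Collections.max(samples);
    }

    public static void main(String[] args) {
        LagHistogram h = new LagHistogram();
        h.update(120L);    // healthy wal group
        h.update(90000L);  // one lagging wal group
        System.out.println(h.maxLag()); // 90000
    }
}
```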
[jira] [Updated] (HBASE-17167) Pass mvcc to client when scan
[ https://issues.apache.org/jira/browse/HBASE-17167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-17167: -- Attachment: HBASE-17167-branch-1.patch Patch for branch-1. > Pass mvcc to client when scan > - > > Key: HBASE-17167 > URL: https://issues.apache.org/jira/browse/HBASE-17167 > Project: HBase > Issue Type: Sub-task > Components: Client, scan >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 2.0.0, 1.4.0 > > Attachments: HBASE-17167-branch-1.patch, HBASE-17167-v1.patch, > HBASE-17167-v2.patch, HBASE-17167-v3.patch, HBASE-17167-v4.patch, > HBASE-17167.patch > > > For the current implementation, if we use batch or allowPartial when scanning, > then row-level atomicity cannot be guaranteed if we need to restart a scan > in the middle of a record due to a region move or something else. > We can return the mvcc used to open the scanner to the client, and the client could use > this mvcc to restart a scan and preserve row-level atomicity. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17178) Add region balance throttling
[ https://issues.apache.org/jira/browse/HBASE-17178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704809#comment-15704809 ] Phil Yang commented on HBASE-17178: --- And I am not sure if 0.01 is a good default value, because currently we have no throttling, which means maxRitPercent = 1.0. If we don't want to change anything by default, we should set it to 1.0? > Add region balance throttling > - > > Key: HBASE-17178 > URL: https://issues.apache.org/jira/browse/HBASE-17178 > Project: HBase > Issue Type: Improvement > Components: Balancer >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang > Attachments: HBASE-17178-v1.patch, HBASE-17178-v2.patch > > > Our online cluster serves dozens of tables and different tables serve > different services. If the balancer moves too many regions at the same time, > it will decrease the availability of some tables or services. So we added > region balance throttling on our online serving cluster. > We introduce a new config hbase.balancer.max.balancing.regions, which means > the max number of regions in transition when balancing. > If we config this to 1 and a table has 100 regions, then the table will have > 99 regions available at any time. It helps a lot for our use case and it has > been running for a long time on > our production cluster. > But for some use cases, we need the balancer to run faster. Say a cluster has 100 > regionservers, and we add 50 new regionservers for peak requests. Then we need > the balancer to run as soon as > possible and let the cluster reach a balanced state soon. Our idea is to compute the > max number of regions in transition from the max balancing time and the average > time of region in transition. > The balancer then uses the computed value for throttling. > Examples for understanding: > A cluster has 100 regionservers, each regionserver has 200 regions, and the > average time of region in transition is 1 second; we config the max > balancing time as 10 * 60 seconds. > Case 1.
One regionserver crashes, so the cluster at most needs to balance 200 regions. > Then 200 / (10 * 60s / 1s) < 1, which means the max number of regions in > transition is 1 when balancing. The balancer can move regions one by one > and the cluster will keep high availability while balancing. > Case 2. Add another 100 regionservers; the cluster at most needs to balance 10000 > regions. Then 10000 / (10 * 60s / 1s) = 16.7, which means the max number of > regions in transition is 17 when balancing. The cluster can then reach a > balanced state within the max balancing time. > Any suggestions are welcomed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
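The computation the issue describes — regions to balance divided by the number of transition "slots" in the balancing budget — can be sketched as follows. The class and method names are illustrative, not the actual HBase balancer API.

```java
// Hypothetical helper illustrating the max-regions-in-transition computation
// proposed in HBASE-17178; names are assumptions, not real HBase code.
public class MaxRitCalculator {
    /**
     * @param regionsToBalance number of regions the balancer must move
     * @param maxBalancingSecs configured max balancing time, in seconds
     * @param avgRitSecs       observed average time a region spends in transition
     * @return max number of regions allowed in transition at once (at least 1)
     */
    public static int maxRegionsInTransition(int regionsToBalance,
                                             int maxBalancingSecs,
                                             int avgRitSecs) {
        // regionsToBalance / (maxBalancingTime / avgRitTime), rounded up.
        double slots = (double) maxBalancingSecs / avgRitSecs;
        return Math.max(1, (int) Math.ceil(regionsToBalance / slots));
    }

    public static void main(String[] args) {
        // Case 1 from the issue: one regionserver crashes, 200 regions to move,
        // 10 * 60s budget, 1s average time in transition -> move one at a time.
        System.out.println(maxRegionsInTransition(200, 600, 1));   // 1
        // Case 2: 10000 regions to rebalance after adding 100 regionservers.
        System.out.println(maxRegionsInTransition(10000, 600, 1)); // 17
    }
}
```

Both example values match the arithmetic in the issue description (200 / 600 < 1, and 10000 / 600 = 16.7 rounded up to 17).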
[jira] [Commented] (HBASE-17178) Add region balance throttling
[ https://issues.apache.org/jira/browse/HBASE-17178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704849#comment-15704849 ] Phil Yang commented on HBASE-17178: --- Discussed with [~zghaobac] offline. Throttling logic should be changed to a fixed rate rather than a fixed interval. :) > Add region balance throttling > - > > Key: HBASE-17178 > URL: https://issues.apache.org/jira/browse/HBASE-17178 > Project: HBase > Issue Type: Improvement > Components: Balancer >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang > Attachments: HBASE-17178-v1.patch, HBASE-17178-v2.patch > > > Our online cluster serves dozens of tables and different tables serve for > different services. If the balancer moves too many regions in the same time, > it will decrease the availability for some table or some services. So we add > region balance throttling on our online serve cluster. > We introduce a new config hbase.balancer.max.balancing.regions, which means > the max number of regions in transition when balancing. > If we config this to 1 and a table have 100 regions, then the table will have > 99 regions available at any time. It helps a lot for our use case and it has > been running a long time > our production cluster. > But for some use case, we need the balancer run faster. If a cluster has 100 > regionservers, then it add 50 new regionservers for peak requests. Then it > need balancer run as soon as > possible and let the cluster reach a balance state soon. Our idea is compute > max number of regions in transition by the max balancing time and the average > time of region in transition. > Then the balancer use the computed value to throttling. > Examples for understanding. > A cluster has 100 regionservers, each regionserver has 200 regions and the > average time of region in transition is 1 seconds, we config the max > balancing time is 10 * 60 seconds. > Case 1. One regionserver crash, the cluster at most need balance 200 regions.
> Then 200 / (10 * 60s / 1s) < 1, it means the max number of regions in > transition is 1 when balancing. Then the balancer can move region one by one > and the cluster will have high availability when balancing. > Case 2. Add other 100 regionservers, the cluster at most need balance 10000 > regions. Then 10000 / (10 * 60s / 1s) = 16.7, it means the max number of > regions in transition is 17 when balancing. Then the cluster can reach a > balance state within the max balancing time. > Any suggestions are welcomed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17189) TestMasterObserver#wasModifyTableActionCalled uses wrong variables
[ https://issues.apache.org/jira/browse/HBASE-17189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704864#comment-15704864 ] Hudson commented on HBASE-17189: SUCCESS: Integrated in Jenkins build HBase-1.4 #549 (See [https://builds.apache.org/job/HBase-1.4/549/]) HBASE-17189 TestMasterObserver#wasModifyTableActionCalled uses wrong (syuanjiangdev: rev cdf539a8e0eac7f16401df96169e103005fb) * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestMasterObserver.java > TestMasterObserver#wasModifyTableActionCalled uses wrong variables > -- > > Key: HBASE-17189 > URL: https://issues.apache.org/jira/browse/HBASE-17189 > Project: HBase > Issue Type: Test > Components: test >Reporter: Stephen Yuan Jiang >Assignee: Stephen Yuan Jiang >Priority: Minor > Fix For: 2.0.0, 1.4.0 > > Attachments: HBASE-17189.v1-master.patch > > > TestMasterObserver#wasModifyTableActionCalled() and > TestMasterObserver#wasModifyTableActionCalledOnly() uses > {{preModifyColumnFamilyActionCalled}} and > {{postCompletedModifyColumnFamilyActionCalled}} members, which are wrong. > Instead it should use {{preModifyTableActionCalled}} and > {{postCompletedModifyTableActionCalled}}. This probably was caused by > copy-and-paste mistake. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17170) HBase is also retrying DoNotRetryIOException because of class loader differences.
[ https://issues.apache.org/jira/browse/HBASE-17170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ankit Singhal updated HBASE-17170: -- Attachment: HBASE-17170.master.001.patch > HBase is also retrying DoNotRetryIOException because of class loader > differences. > - > > Key: HBASE-17170 > URL: https://issues.apache.org/jira/browse/HBASE-17170 > Project: HBase > Issue Type: Bug >Reporter: Ankit Singhal >Assignee: Ankit Singhal > Attachments: HBASE-17170.master.001.patch > > > The class loader used by API exposed by hadoop and the context class loader > used by RunJar(bin/hadoop jar phoenix-client.jar …. ) are different resulting > in classes loaded from jar not visible to other current class loader used by > API. > {code} > 16/04/26 21:18:00 INFO client.RpcRetryingCaller: Call exception, tries=32, > retries=35, started=491541 ms ago, cancelled=false, msg= > 16/04/26 21:18:21 INFO client.RpcRetryingCaller: Call exception, tries=33, > retries=35, started=511747 ms ago, cancelled=false, msg= > 16/04/26 21:18:41 INFO client.RpcRetryingCaller: Call exception, tries=34, > retries=35, started=531820 ms ago, cancelled=false, msg= > Exception in thread "main" org.apache.phoenix.exception.PhoenixIOException: > Failed after attempts=35, exceptions: > Tue Apr 26 21:09:49 UTC 2016, > RpcRetryingCaller{globalStartTime=1461704989282, pause=100, retries=35}, > org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.NamespaceExistException): > org.apache.hadoop.hbase.NamespaceExistException: SYSTEM > at > org.apache.hadoop.hbase.master.TableNamespaceManager.create(TableNamespaceManager.java:156) > at > org.apache.hadoop.hbase.master.TableNamespaceManager.create(TableNamespaceManager.java:131) > at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:2553) > at > org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:447) > at > 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:58043) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2115) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:102)
> {code}
> The actual problem is stated in the comment below
> https://issues.apache.org/jira/browse/PHOENIX-3495?focusedCommentId=15677081=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15677081
> If we are not loading hbase classes from Hadoop classpath (from where hadoop
> jars are getting loaded), then the RemoteException will not get unwrapped
> because of ClassNotFoundException and the client will keep on retrying even
> if the cause of exception is DoNotRetryIOException.
> RunJar#main() context class loader:
> {code}
> ClassLoader loader = createClassLoader(file, workDir);
> Thread.currentThread().setContextClassLoader(loader);
> Class mainClass = Class.forName(mainClassName, true, loader);
> Method main = mainClass.getMethod("main", new Class[] {
>     Array.newInstance(String.class, 0).getClass()
> });
>
> HBase classes can be loaded from jar(phoenix-client.jar):-
> hadoop --config /etc/hbase/conf/ jar ~/git/apache/phoenix/phoenix-client/target/phoenix-4.9.0-HBase-1.2-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool --table GIGANTIC_TABLE --input /tmp/b.csv --zookeeper localhost:2181
> {code}
> API (using current class loader).
> {code}
> public class RpcRetryingCaller {
>   public IOException unwrapRemoteException() {
>     try {
>       Class realClass = Class.forName(getClassName());
>       return instantiateException(realClass.asSubclass(IOException.class));
>     } catch(Exception e) {
>       // cannot instantiate the original exception, just return this
>     }
>     return this;
>   }
> {code}
> *Possible solution:-* We can create our own HBaseRemoteWithExtrasException (extension of
> RemoteWithExtrasException) so that default class loader will be the one from
> where the hbase classes are loaded and extend unwrapRemoteException() to
> throw exception if the unwrapping doesn’t take place because of CNF
> exception? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
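The class-loader mismatch described in HBASE-17170 can be demonstrated in isolation. The snippet below is a standalone illustration, not the proposed HBase patch: it shows why the one-argument Class.forName (which uses the caller's defining loader) can miss classes that only the thread context class loader installed by RunJar can see, and sketches the retry-with-context-loader fallback as one possible remedy.

```java
// Standalone illustration (not HBase code) of resolving a class first with the
// caller's loader and then falling back to the thread context class loader.
public class LoaderLookup {
    static Class<?> resolve(String name) {
        try {
            // One-arg Class.forName: consults only the caller's class loader.
            return Class.forName(name);
        } catch (ClassNotFoundException e) {
            try {
                // Fallback: retry with the context class loader, where classes
                // loaded by a RunJar-style child loader would be visible.
                return Class.forName(name, true,
                        Thread.currentThread().getContextClassLoader());
            } catch (ClassNotFoundException e2) {
                // Unresolvable: caller keeps the wrapped RemoteException and,
                // without special handling, keeps retrying.
                return null;
            }
        }
    }

    public static void main(String[] args) {
        // java.lang.String is visible to every loader; a made-up name is not.
        System.out.println(resolve("java.lang.String") != null); // true
        System.out.println(resolve("com.example.NoSuchClass"));  // null
    }
}
```

In the real bug, returning null corresponds to the unwrap failing silently, so the client never learns the cause was a DoNotRetryIOException.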
[jira] [Updated] (HBASE-17192) remove use of scala-tools.org from pom
[ https://issues.apache.org/jira/browse/HBASE-17192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-17192: Attachment: HBASE-17192.1.patch -01 - removes scala-tools-org - manually tested the maven invocation used in the website generation job. > remove use of scala-tools.org from pom > -- > > Key: HBASE-17192 > URL: https://issues.apache.org/jira/browse/HBASE-17192 > Project: HBase > Issue Type: Bug > Components: spark, website >Affects Versions: 2.0.0 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Blocker > Fix For: 2.0.0 > > Attachments: HBASE-17192.1.patch > > > Our pom makes use of scala-tools.org for a repository. That domain currently > issues redirects for all URLs; for maven coordinates those redirects lead to > 'not found' and the 'permanently moved' HTML gets saved. This corrupts the > local maven repository in a way that causes the mvn:site goal to give an > opaque error: > {code} > [INFO] > > [INFO] BUILD FAILURE > [INFO] > > [INFO] Total time: 01:46 min > [INFO] Finished at: 2016-11-28T14:17:10+00:00 > [INFO] Final Memory: 292M/6583M > [INFO] > > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project > hbase: Execution default-site of goal > org.apache.maven.plugins:maven-site-plugin:3.4:site failed: For artifact > {null:null:null:jar}: The groupId cannot be empty. -> [Help 1] > [ERROR] > {code} > Rerunning in debug mode with {{mvn -X}} gives no additional useful > information. > All artifacts from scala-tools.org are now found in Maven Central. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17192) remove use of scala-tools.org from pom
[ https://issues.apache.org/jira/browse/HBASE-17192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-17192: Status: Patch Available (was: In Progress) > remove use of scala-tools.org from pom > -- > > Key: HBASE-17192 > URL: https://issues.apache.org/jira/browse/HBASE-17192 > Project: HBase > Issue Type: Bug > Components: spark, website >Affects Versions: 2.0.0 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Blocker > Fix For: 2.0.0 > > Attachments: HBASE-17192.1.patch > > > our pom makes use of scala-tools.org for a repository. That domain currently > issues redirects for all URLs; for maven coordinates those redirects lead to > 'not found' and the 'permantenly moved' HTML gets saved. this corrupts the > local maven repository in a way that cause the mvn:site goal to give an > opaque error: > {code} > [INFO] > > [INFO] BUILD FAILURE > [INFO] > > [INFO] Total time: 01:46 min > [INFO] Finished at: 2016-11-28T14:17:10+00:00 > [INFO] Final Memory: 292M/6583M > [INFO] > > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project > hbase: Execution default-site of goal > org.apache.maven.plugins:maven-site-plugin:3.4:site failed: For artifact > {null:null:null:jar}: The groupId cannot be empty. -> [Help 1] > [ERROR] > {code} > Rerunning in debug mode with {{mvn -X}} gives no additional useful > information. > All artifacts from scala-tools.org are now found in maven central. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16859) Use Bytebuffer pool for non java clients specifically for scans/gets
[ https://issues.apache.org/jira/browse/HBASE-16859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-16859: --- Attachment: HBASE-16859_V1.patch Patch for trunk. For the non-Java client case we now try to make use of the ByteBufferPool and create a ByteBufOutput based on our ByteBuff. There are some TODOs added and we can discuss over here. Added test cases also. Will put this in RB. > Use Bytebuffer pool for non java clients specifically for scans/gets > > > Key: HBASE-16859 > URL: https://issues.apache.org/jira/browse/HBASE-16859 > Project: HBase > Issue Type: Sub-task >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan > Fix For: 2.0.0 > > Attachments: HBASE-16859_V1.patch > > > In case of non-Java clients we still write the results and header into an > on-demand byte[]. This can be changed to use the BBPool (onheap or offheap > buffer?). > But the basic problem is to identify if the response is for scans/gets. > - One easy way to do it is to use the MethodDescriptor per Call and use the > name of the MethodDescriptor to identify that it is a scan/get. But this will > pollute RpcServer by checking for scan/get type responses. > - Another way is to always set the result to cellScanner, but we know that > isClientCellBlockSupported is going to be false for non-PB clients. So ignore > the cellScanner and go ahead with the results in PB. But this is not clean. > - The third is that we already have an RpcCallContext being passed to the RS. > In case of scans/gets/multiGets we already set an RpcCallback for the shipped call. > So here on response we can check if the callback is not null and check > isClientCellBlockSupported. In this case we can get the BB from the pool and > write the result and header to that BB. Maybe this looks cleaner? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
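The buffer-reuse idea behind HBASE-16859 can be sketched with a toy pool. The real hbase ByteBufferPool has a different API and does capacity/count accounting, so treat this as an assumption-laden illustration of the principle only: take a buffer from the pool instead of allocating a fresh byte[] per response, and return it when the response is shipped.

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentLinkedQueue;

// Toy buffer pool illustrating the reuse idea; not the real hbase
// ByteBufferPool, whose API and accounting differ.
public class SimpleBufferPool {
    private final ConcurrentLinkedQueue<ByteBuffer> pool = new ConcurrentLinkedQueue<>();
    private final int bufferSize;

    public SimpleBufferPool(int bufferSize) {
        this.bufferSize = bufferSize;
    }

    // Hand out a pooled buffer, allocating only when the pool is empty.
    public ByteBuffer getBuffer() {
        ByteBuffer b = pool.poll();
        if (b == null) {
            // Could be allocateDirect(...) for an offheap variant.
            b = ByteBuffer.allocate(bufferSize);
        }
        b.clear();
        return b;
    }

    // Return a buffer after the response is shipped; reject foreign sizes.
    public void putBack(ByteBuffer b) {
        if (b.capacity() == bufferSize) {
            pool.offer(b);
        }
    }

    public static void main(String[] args) {
        SimpleBufferPool p = new SimpleBufferPool(1024);
        ByteBuffer b1 = p.getBuffer();
        p.putBack(b1);
        // Reuse: the same buffer instance comes back from the pool.
        System.out.println(p.getBuffer() == b1); // true
    }
}
```

The shipped-callback hook mentioned in the third option above is the natural place for the putBack call, since it fires once the response bytes are no longer needed.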
[jira] [Commented] (HBASE-17192) remove use of scala-tools.org from pom
[ https://issues.apache.org/jira/browse/HBASE-17192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704914#comment-15704914 ] Duo Zhang commented on HBASE-17192: --- +1. Nice work. > remove use of scala-tools.org from pom > -- > > Key: HBASE-17192 > URL: https://issues.apache.org/jira/browse/HBASE-17192 > Project: HBase > Issue Type: Bug > Components: spark, website >Affects Versions: 2.0.0 >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Blocker > Fix For: 2.0.0 > > Attachments: HBASE-17192.1.patch, HBASE-17192.1.test.patch > > > our pom makes use of scala-tools.org for a repository. That domain currently > issues redirects for all URLs; for maven coordinates those redirects lead to > 'not found' and the 'permantenly moved' HTML gets saved. this corrupts the > local maven repository in a way that cause the mvn:site goal to give an > opaque error: > {code} > [INFO] > > [INFO] BUILD FAILURE > [INFO] > > [INFO] Total time: 01:46 min > [INFO] Finished at: 2016-11-28T14:17:10+00:00 > [INFO] Final Memory: 292M/6583M > [INFO] > > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project > hbase: Execution default-site of goal > org.apache.maven.plugins:maven-site-plugin:3.4:site failed: For artifact > {null:null:null:jar}: The groupId cannot be empty. -> [Help 1] > [ERROR] > {code} > Rerunning in debug mode with {{mvn -X}} gives no additional useful > information. > All artifacts from scala-tools.org are now found in maven central. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17086) Add comments to explain why Cell#getTagsLength() returns an int, rather than a short
[ https://issues.apache.org/jira/browse/HBASE-17086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15705025#comment-15705025 ] Hudson commented on HBASE-17086: SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #2040 (See [https://builds.apache.org/job/HBase-Trunk_matrix/2040/]) HBASE-17086: Add comments to explain why Cell#getTagsLength() returns an (anoopsamjohn: rev 346e904a210540ffe11863547319c51a233b43f7) * (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/Cell.java > Add comments to explain why Cell#getTagsLength() returns an int, rather than > a short > > > Key: HBASE-17086 > URL: https://issues.apache.org/jira/browse/HBASE-17086 > Project: HBase > Issue Type: Improvement > Components: Interface >Affects Versions: 2.0.0 >Reporter: Xiang Li >Assignee: Xiang Li >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-17086.master.000.patch, > HBASE-17086.master.001.patch > > > In the Cell interface, getTagsLength() returns an int. > But in the KeyValue implementation, the tags length is 2 bytes. Also in > ExtendedCell, when explaining the KeyValue format, the tags length is stated to > be 2 bytes. > Any plan to update the Cell interface to make getTagsLength() return a short? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
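The rationale can be shown with a small self-contained sketch (the helper below is illustrative, not HBase's actual Bytes utility): the serialized 2-byte length is *unsigned*, so reading it into an int avoids negative values for lengths above Short.MAX_VALUE.

```java
final class TagsLengthDemo {
    // Read a 2-byte big-endian unsigned length into an int. Returning a
    // short instead would turn lengths above 32767 into negative numbers.
    static int readTagsLength(byte[] buf, int offset) {
        return ((buf[offset] & 0xFF) << 8) | (buf[offset + 1] & 0xFF);
    }
}
```

For example, the two bytes 0xFF 0xFF decode to 65535 as an int, but would decode to -1 if narrowed to a short.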
[jira] [Commented] (HBASE-17186) MasterProcedureTestingUtility#testRecoveryAndDoubleExecution displays stale procedure state info
[ https://issues.apache.org/jira/browse/HBASE-17186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15705024#comment-15705024 ] Hudson commented on HBASE-17186: SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #2040 (See [https://builds.apache.org/job/HBase-Trunk_matrix/2040/]) HBASE-17186 MasterProcedureTestingUtility#testRecoveryAndDoubleExecution (syuanjiangdev: rev b2d3fa1a8a8910f34970791e6a615699f2211f67) * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/MasterProcedureTestingUtility.java > MasterProcedureTestingUtility#testRecoveryAndDoubleExecution displays stale > procedure state info > > > Key: HBASE-17186 > URL: https://issues.apache.org/jira/browse/HBASE-17186 > Project: HBase > Issue Type: Bug > Components: proc-v2, test >Affects Versions: 2.0.0 >Reporter: Stephen Yuan Jiang >Assignee: Stephen Yuan Jiang >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-17186.v1-master.patch, HBASE-17186.v2-master.patch > > > MasterProcedureTestingUtility#testRecoveryAndDoubleExecution gets the > procedure information at the beginning of the function but never updates it. > As the procedure executes and moves to new states, it still logs the stale > state information. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17081) Flush the entire CompactingMemStore content to disk
[ https://issues.apache.org/jira/browse/HBASE-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704728#comment-15704728 ] Anastasia Braginsky commented on HBASE-17081: - Good day everybody! :-) I think the way it is implemented now is quite OK. However, I am going to publish a new patch today where CompositeImmutableSegment works (and has interfaces) even closer to Segment. Let's take a look at it soon and you can tell me your opinion then. I think we are quite close to an optimal solution and it is going to be good! :) > Flush the entire CompactingMemStore content to disk > --- > > Key: HBASE-17081 > URL: https://issues.apache.org/jira/browse/HBASE-17081 > Project: HBase > Issue Type: Sub-task >Reporter: Anastasia Braginsky >Assignee: Anastasia Braginsky > Attachments: HBASE-17081-V01.patch, HBASE-17081-V02.patch, > HBASE-17081-V03.patch, Pipelinememstore_fortrunk_3.patch > > > Part of CompactingMemStore's memory is held by an active segment, and another > part is divided between immutable segments in the compacting pipeline. Upon a > flush-to-disk request we want to flush all of it to disk, in contrast to > flushing only the tail of the compacting pipeline. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
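A toy model of the memory layout described in the issue (names are hypothetical, not the patch's classes): memory is split between one active segment and a pipeline of immutable segments, and a full flush must snapshot all of them rather than only the pipeline tail.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

final class FlushDemo {
    // Full-flush snapshot: the active segment plus every pipeline segment,
    // in contrast to flushing only the tail of the compacting pipeline.
    static List<String> snapshotForFullFlush(String active, Deque<String> pipeline) {
        List<String> out = new ArrayList<>();
        out.add(active);
        out.addAll(pipeline);
        return out;
    }
}
```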
[jira] [Commented] (HBASE-17110) Add an "Overall Strategy" option(balanced both on table level and server level) to SimpleLoadBalancer
[ https://issues.apache.org/jira/browse/HBASE-17110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704794#comment-15704794 ] Charlie Qiangeng Xu commented on HBASE-17110: - Checked the failed test "TestHRegionWithInMemoryFlush"; it's unrelated to the patch and fine on my local build. > Add an "Overall Strategy" option(balanced both on table level and server > level) to SimpleLoadBalancer > - > > Key: HBASE-17110 > URL: https://issues.apache.org/jira/browse/HBASE-17110 > Project: HBase > Issue Type: Improvement > Components: Balancer >Affects Versions: 2.0.0, 1.2.4 >Reporter: Charlie Qiangeng Xu >Assignee: Charlie Qiangeng Xu > Attachments: HBASE-17110-V2.patch, HBASE-17110-V3.patch, > HBASE-17110-V4.patch, HBASE-17110-V5.patch, HBASE-17110-V6.patch, > HBASE-17110-V7.patch, HBASE-17110-V8.patch, HBASE-17110.patch > > > This jira is about an enhancement of SimpleLoadBalancer. Here we introduce a > new strategy, "bytableOverall", which can be enabled by adding:
> {noformat}
> <property>
>   <name>hbase.master.loadbalance.bytableOverall</name>
>   <value>true</value>
> </property>
> {noformat}
> We have been using the strategy on our largest cluster for several months. > It's proven to be very helpful and stable; notably, the result is quite > visible to the users. > Here is the reason why it's helpful: > When operating large scale clusters (our case), some companies still prefer to > use {{SimpleLoadBalancer}} due to its simplicity, quick balance plan > generation, etc. The current SimpleLoadBalancer has two modes: > 1. byTable, which only guarantees that the regions of one table are > uniformly distributed. > 2. byCluster, which ignores the distribution within tables and balances the > regions all together. > If the pressures on different tables differ, the byTable option > is the preferable one in most cases. Yet this choice sacrifices cluster-level > balance and causes some servers to carry significantly higher load, > e.g. 242 regions on server A but 417 regions on server B (real-world stats). > Consider this case: a cluster has 3 tables and 4 servers: > {noformat} > server A has 3 regions: table1:1, table2:1, table3:1 > server B has 3 regions: table1:2, table2:2, table3:2 > server C has 3 regions: table1:3, table2:3, table3:3 > server D has 0 regions. > {noformat} > From the byTable strategy's perspective, the cluster has already been > perfectly balanced on the table level. But a perfect status should be like: > {noformat} > server A has 2 regions: table2:1, table3:1 > server B has 2 regions: table1:2, table3:2 > server C has 3 regions: table1:3, table2:3, table3:3 > server D has 2 regions: table1:1, table2:2 > {noformat} > We can see the server loads change from 3,3,3,0 to 2,2,3,2, while table1, > table2 and table3 still stay balanced. > And this is what the new mode "byTableOverall" can achieve. > Two UTs have been added as well, and the last one demonstrates the advantage > of the new strategy. > Also, an onConfigurationChange method has been implemented to hot-reload the > "slop" variable. > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
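The "overall" constraint in the example above can be stated as a small helper (illustrative only, not the patch's code): with R regions and S servers, every server should end up holding floor(R/S) or ceil(R/S) regions, on top of per-table balance.

```java
final class OverallBalance {
    // Per-server load bounds for cluster-level balance:
    // every server should hold between min and max regions.
    static int[] overallBounds(int regions, int servers) {
        int min = regions / servers;
        int max = (regions % servers == 0) ? min : min + 1;
        return new int[] { min, max };
    }
}
```

For the 9-region, 4-server example this gives bounds {2, 3}, matching the 2,2,3,2 distribution the new strategy produces.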
[jira] [Updated] (HBASE-16859) Use Bytebuffer pool for non java clients specifically for scans/gets
[ https://issues.apache.org/jira/browse/HBASE-16859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-16859: --- Status: Patch Available (was: Open) > Use Bytebuffer pool for non java clients specifically for scans/gets > > > Key: HBASE-16859 > URL: https://issues.apache.org/jira/browse/HBASE-16859 > Project: HBase > Issue Type: Sub-task >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan > Fix For: 2.0.0 > > Attachments: HBASE-16859_V1.patch > > > In the case of non-Java clients we still write the results and header into an on-demand byte[]. This can be changed to use the BBPool (onheap or offheap buffer?). > But the basic problem is to identify whether the response is for scans/gets. > - One easy way is to use the MethodDescriptor per Call and use the name of the MethodDescriptor to identify a scan/get. But this will pollute RpcServer with checks for scan/get type responses. > - Another way is to always set the result to cellScanner, knowing that isClientCellBlockSupported is going to be false for non-PB clients, so we ignore the cellScanner and go ahead with the results in PB. But this is not clean. > - The third is that we already have an RpcCallContext being passed to the RS. In the case of scans/gets/multiGets we already set an RpcCallback for the shipped call. So here, on response, we can check whether the callback is not null and check isClientCellBlockSupported. In this case we can get the BB from the pool and write the result and header to that BB. Maybe this looks clean? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16904) remove directory layout / fs references from snapshots
[ https://issues.apache.org/jira/browse/HBASE-16904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704692#comment-15704692 ] Hadoop QA commented on HBASE-16904: --- (x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 23s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 28 new or modified test files. |
| 0 | mvndep | 3m 57s | Maven dependency ordering for branch |
| +1 | mvninstall | 9m 14s | hbase-14439 passed |
| +1 | compile | 3m 26s | hbase-14439 passed |
| +1 | checkstyle | 2m 7s | hbase-14439 passed |
| +1 | mvneclipse | 2m 5s | hbase-14439 passed |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: . |
| -1 | findbugs | 1m 52s | hbase-server in hbase-14439 has 5 extant Findbugs warnings. |
| +1 | javadoc | 2m 27s | hbase-14439 passed |
| 0 | mvndep | 0m 8s | Maven dependency ordering for patch |
| +1 | mvninstall | 3m 47s | the patch passed |
| +1 | compile | 3m 11s | the patch passed |
| +1 | javac | 3m 11s | the patch passed |
| +1 | checkstyle | 1m 32s | the patch passed |
| +1 | mvneclipse | 1m 23s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | hadoopcheck | 28m 16s | Patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. |
| +1 | hbaseprotoc | 1m 38s | the patch passed |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: . |
| -1 | findbugs | 2m 6s | hbase-server generated 1 new + 5 unchanged - 0 fixed = 6 total (was 5) |
| -1 | javadoc | 0m 28s | hbase-server generated 14 new + 11 unchanged - 0 fixed = 25 total (was 11) |
| -1 | javadoc | 1m 57s | root generated 14 new + 30 unchanged - 0 fixed = 44 total (was 30) |
| -1 | unit | 14m 2s | hbase-server in the patch failed. |
| -1 | unit | 20m 15s | root in the patch failed. |
| +1 | asflicense | 0m 30s | The patch does not generate ASF License warnings. |
| | | 105m 21s | |

|| Reason || Tests ||
| FindBugs | module:hbase-server |
| | Dead store to conf in org.apache.hadoop.hbase.master.procedure.CloneSnapshotProcedure$1.createRegionsOnStorage(MasterProcedureEnv, TableName, List) At CloneSnapshotProcedure.java:[line 348] |
| Failed junit tests | hadoop.hbase.regionserver.TestKeepDeletes |
| | |
[jira] [Updated] (HBASE-17178) Add region balance throttling
[ https://issues.apache.org/jira/browse/HBASE-17178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guanghao Zhang updated HBASE-17178: --- Attachment: HBASE-17178-v2.patch > Add region balance throttling > - > > Key: HBASE-17178 > URL: https://issues.apache.org/jira/browse/HBASE-17178 > Project: HBase > Issue Type: Improvement > Components: Balancer >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang > Attachments: HBASE-17178-v1.patch, HBASE-17178-v2.patch > > > Our online cluster serves dozens of tables and different tables serve different services. If the balancer moves too many regions at the same time, it will decrease the availability of some tables or services. So we added region balance throttling on our online cluster. > We introduce a new config, hbase.balancer.max.balancing.regions, which means the max number of regions in transition when balancing. > If we set this to 1 and a table has 100 regions, then the table will have 99 regions available at any time. It helps a lot for our use case and has been running for a long time on our production cluster. > But for some use cases we need the balancer to run faster. If a cluster of 100 regionservers adds 50 new regionservers for peak requests, it needs the balancer to run as fast as possible and let the cluster reach a balanced state soon. Our idea is to compute the max number of regions in transition from the max balancing time and the average time a region spends in transition. > The balancer then uses the computed value for throttling. > Examples for understanding: > A cluster has 100 regionservers, each regionserver has 200 regions, the average time of a region in transition is 1 second, and we configure the max balancing time as 10 * 60 seconds. > Case 1. One regionserver crashes; the cluster needs to balance at most 200 regions. > Then 200 / (10 * 60s / 1s) < 1, which means the max number of regions in > transition when balancing is 1. Then the balancer can move regions one by one > and the cluster will have high availability while balancing. > Case 2. Add another 100 regionservers; the cluster needs to balance at most 10,000 > regions. Then 10000 / (10 * 60s / 1s) = 16.7, which means the max number of > regions in transition when balancing is 17. Then the cluster can reach a > balanced state within the max balancing time. > Any suggestions are welcomed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
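The computation described in the two cases above can be sketched as follows (a hedged illustration; the method and class names are hypothetical, not the actual patch):

```java
final class BalanceThrottle {
    // maxRegionsInTransition =
    //   ceil(regionsToBalance / (maxBalancingTime / avgTimeInTransition)),
    // with a floor of 1 so the balancer always makes progress.
    static int maxRegionsInTransition(int regionsToBalance, long maxBalancingMs, long avgRitMs) {
        double slots = (double) maxBalancingMs / avgRitMs; // moves that fit in the window
        return Math.max(1, (int) Math.ceil(regionsToBalance / slots));
    }
}
```

With the numbers from the examples: 200 regions in a 600-second window at 1 second per move yields 1 region in transition; 10,000 regions yields 17.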
[jira] [Commented] (HBASE-17178) Add region balance throttling
[ https://issues.apache.org/jira/browse/HBASE-17178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704792#comment-15704792 ] Phil Yang commented on HBASE-17178: --- {code} double maxRitPercent = getConfiguration().getDouble("hbase.master.balancer.maxRitPercent", 0.01); {code} Add the key and the default value in HConstants? And I think we can add this to hbase-default.xml with a description so that we can see it in the HBase book. > Add region balance throttling > - > > Key: HBASE-17178 > URL: https://issues.apache.org/jira/browse/HBASE-17178 > Project: HBase > Issue Type: Improvement > Components: Balancer >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang > Attachments: HBASE-17178-v1.patch, HBASE-17178-v2.patch > > > Our online cluster serves dozens of tables and different tables serve different services. If the balancer moves too many regions at the same time, it will decrease the availability of some tables or services. So we added region balance throttling on our online cluster. > We introduce a new config, hbase.balancer.max.balancing.regions, which means the max number of regions in transition when balancing. > If we set this to 1 and a table has 100 regions, then the table will have 99 regions available at any time. It helps a lot for our use case and has been running for a long time on our production cluster. > But for some use cases we need the balancer to run faster. If a cluster of 100 regionservers adds 50 new regionservers for peak requests, it needs the balancer to run as fast as possible and let the cluster reach a balanced state soon. Our idea is to compute the max number of regions in transition from the max balancing time and the average time a region spends in transition. > The balancer then uses the computed value for throttling. > Examples for understanding: > A cluster has 100 regionservers, each regionserver has 200 regions, the average time of a region in transition is 1 second, and we configure the max balancing time as 10 * 60 seconds. > Case 1. One regionserver crashes; the cluster needs to balance at most 200 regions. Then 200 / (10 * 60s / 1s) < 1, which means the max number of regions in > transition when balancing is 1. Then the balancer can move regions one by one > and the cluster will have high availability while balancing. > Case 2. Add another 100 regionservers; the cluster needs to balance at most 10,000 > regions. Then 10000 / (10 * 60s / 1s) = 16.7, which means the max number of > regions in transition when balancing is 17. Then the cluster can reach a > balanced state within the max balancing time. > Any suggestions are welcomed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
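What the review comment above asks for might look like the following (the constant names are guesses that follow the usual HConstants key/default pattern, not the committed code):

```java
// Hypothetical constants mirroring the config key quoted in the review comment.
final class BalancerConstants {
    static final String HBASE_MASTER_BALANCER_MAX_RIT_PERCENT =
        "hbase.master.balancer.maxRitPercent";
    static final double DEFAULT_HBASE_MASTER_BALANCER_MAX_RIT_PERCENT = 0.01;
}
```

Keeping the key and default in one place lets the balancer code and hbase-default.xml stay in sync.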
[jira] [Commented] (HBASE-17086) Add comments to explain why Cell#getTagsLength() returns an int, rather than a short
[ https://issues.apache.org/jira/browse/HBASE-17086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704671#comment-15704671 ] Xiang Li commented on HBASE-17086: -- Anoop, thanks for the review and guidance! > Add comments to explain why Cell#getTagsLength() returns an int, rather than > a short > > > Key: HBASE-17086 > URL: https://issues.apache.org/jira/browse/HBASE-17086 > Project: HBase > Issue Type: Improvement > Components: Interface >Affects Versions: 2.0.0 >Reporter: Xiang Li >Assignee: Xiang Li >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-17086.master.000.patch, > HBASE-17086.master.001.patch > > > In the Cell interface, getTagsLength() returns an int. > But in the KeyValue implementation, the tags length is 2 bytes. Also in > ExtendedCell, when explaining the KeyValue format, the tags length is stated to > be 2 bytes. > Any plan to update the Cell interface to make getTagsLength() return a short? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17167) Pass mvcc to client when scan
[ https://issues.apache.org/jira/browse/HBASE-17167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-17167: -- Attachment: HBASE-17167-v5.patch Use scan.resetMvccReadPoint() instead of scan.setMvccReadPoint(0). > Pass mvcc to client when scan > - > > Key: HBASE-17167 > URL: https://issues.apache.org/jira/browse/HBASE-17167 > Project: HBase > Issue Type: Sub-task > Components: Client, scan >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 2.0.0, 1.4.0 > > Attachments: HBASE-17167-branch-1.patch, HBASE-17167-v1.patch, > HBASE-17167-v2.patch, HBASE-17167-v3.patch, HBASE-17167-v4.patch, > HBASE-17167-v5.patch, HBASE-17167.patch > > > In the current implementation, if we use batch or allowPartial when scanning, > row-level atomicity cannot be guaranteed if we need to restart a scan in the > middle of a row due to a region move or something else. > We can return the mvcc used to open the scanner to the client, and the client > can use this mvcc to restart the scan and preserve row-level atomicity. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
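A toy model of the idea above (not HBase's scanner code; names are illustrative): cells carry a sequence id, and a restarted scan that reuses the original read point sees exactly the cells the first attempt could see, so a row is never returned half old, half new.

```java
import java.util.ArrayList;
import java.util.List;

final class MvccDemo {
    static final class Cell {
        final String row;
        final long seqId;
        Cell(String row, long seqId) { this.row = row; this.seqId = seqId; }
    }

    // Only cells written at or before the scanner's read point are visible,
    // regardless of how many times the scan is restarted.
    static List<Cell> visible(List<Cell> cells, long readPoint) {
        List<Cell> out = new ArrayList<>();
        for (Cell c : cells) {
            if (c.seqId <= readPoint) out.add(c);
        }
        return out;
    }
}
```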
[jira] [Commented] (HBASE-17178) Add region balance throttling
[ https://issues.apache.org/jira/browse/HBASE-17178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704791#comment-15704791 ] Guanghao Zhang commented on HBASE-17178: Agreed. A max ratio is better than the absolute value given by the config hbase.balancer.max.balancing.regions. > Add region balance throttling > - > > Key: HBASE-17178 > URL: https://issues.apache.org/jira/browse/HBASE-17178 > Project: HBase > Issue Type: Improvement > Components: Balancer >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang > Attachments: HBASE-17178-v1.patch, HBASE-17178-v2.patch > > > Our online cluster serves dozens of tables and different tables serve different services. If the balancer moves too many regions at the same time, it will decrease the availability of some tables or services. So we added region balance throttling on our online cluster. > We introduce a new config, hbase.balancer.max.balancing.regions, which means the max number of regions in transition when balancing. > If we set this to 1 and a table has 100 regions, then the table will have 99 regions available at any time. It helps a lot for our use case and has been running for a long time on our production cluster. > But for some use cases we need the balancer to run faster. If a cluster of 100 regionservers adds 50 new regionservers for peak requests, it needs the balancer to run as fast as possible and let the cluster reach a balanced state soon. Our idea is to compute the max number of regions in transition from the max balancing time and the average time a region spends in transition. > The balancer then uses the computed value for throttling. > Examples for understanding: > A cluster has 100 regionservers, each regionserver has 200 regions, the average time of a region in transition is 1 second, and we configure the max balancing time as 10 * 60 seconds. > Case 1. One regionserver crashes; the cluster needs to balance at most 200 regions. > Then 200 / (10 * 60s / 1s) < 1, which means the max number of regions in > transition when balancing is 1. Then the balancer can move regions one by one > and the cluster will have high availability while balancing. > Case 2. Add another 100 regionservers; the cluster needs to balance at most 10,000 > regions. Then 10000 / (10 * 60s / 1s) = 16.7, which means the max number of > regions in transition when balancing is 17. Then the cluster can reach a > balanced state within the max balancing time. > Any suggestions are welcomed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17170) HBase is also retrying DoNotRetryIOException because of class loader differences.
[ https://issues.apache.org/jira/browse/HBASE-17170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704887#comment-15704887 ] Ankit Singhal commented on HBASE-17170: --- bq. Want to attach a patch ? Uploaded a patch to use the classloader that loaded the HBase classes, falling back to the Hadoop classloader for Hadoop exceptions. bq. This naming is a tautology because RemoteWithExtrasException is an HBase API class. If you need to change the behavior of RemoteWithExtrasException then add a new method Yes [~apurtell], I didn't realize earlier that it is an HBase class, so I have added a method in this class only. bq. Although, wouldn't a better solution be to try and find and use the context classloader where we are looking up the class? Not sure whether we need the context classloader, as we only need to unwrap remote exceptions; IMO we need the classloader from which the HBase classes or dynamic classes are loaded, to ensure the server exception is captured. > HBase is also retrying DoNotRetryIOException because of class loader > differences. > - > > Key: HBASE-17170 > URL: https://issues.apache.org/jira/browse/HBASE-17170 > Project: HBase > Issue Type: Bug >Reporter: Ankit Singhal >Assignee: Ankit Singhal > Attachments: HBASE-17170.master.001.patch > > > The classloader used by the API exposed by Hadoop and the context classloader > used by RunJar (bin/hadoop jar phoenix-client.jar ...) are different, resulting > in classes loaded from the jar not being visible to the current classloader > used by the API.
> {code} > 16/04/26 21:18:00 INFO client.RpcRetryingCaller: Call exception, tries=32, > retries=35, started=491541 ms ago, cancelled=false, msg= > 16/04/26 21:18:21 INFO client.RpcRetryingCaller: Call exception, tries=33, > retries=35, started=511747 ms ago, cancelled=false, msg= > 16/04/26 21:18:41 INFO client.RpcRetryingCaller: Call exception, tries=34, > retries=35, started=531820 ms ago, cancelled=false, msg= > Exception in thread "main" org.apache.phoenix.exception.PhoenixIOException: > Failed after attempts=35, exceptions: > Tue Apr 26 21:09:49 UTC 2016, > RpcRetryingCaller{globalStartTime=1461704989282, pause=100, retries=35}, > org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.NamespaceExistException): > org.apache.hadoop.hbase.NamespaceExistException: SYSTEM > at > org.apache.hadoop.hbase.master.TableNamespaceManager.create(TableNamespaceManager.java:156) > at > org.apache.hadoop.hbase.master.TableNamespaceManager.create(TableNamespaceManager.java:131) > at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:2553) > at > org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:447) > at > org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:58043) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2115) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:102) > {code} > The actual problem is stated in the comment below > https://issues.apache.org/jira/browse/PHOENIX-3495?focusedCommentId=15677081=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15677081 > If we are not loading HBase classes from the Hadoop classpath (from where the Hadoop > jars are loaded), then the RemoteException will not get unwrapped > because of ClassNotFoundException, and the client will keep on retrying even > if the cause of the exception is DoNotRetryIOException. > RunJar#main() context classloader:
> {code} > ClassLoader loader = createClassLoader(file, workDir); > Thread.currentThread().setContextClassLoader(loader); > Class<?> mainClass = Class.forName(mainClassName, true, loader); > Method main = mainClass.getMethod("main", new Class[] { > Array.newInstance(String.class, 0).getClass() > }); > HBase classes can be loaded from jar(phoenix-client.jar):- > hadoop --config /etc/hbase/conf/ jar > ~/git/apache/phoenix/phoenix-client/target/phoenix-4.9.0-HBase-1.2-client.jar > org.apache.phoenix.mapreduce.CsvBulkLoadTool --table GIGANTIC_TABLE --input > /tmp/b.csv --zookeeper localhost:2181 > {code} > API (using the current classloader): > {code} > public class RpcRetryingCaller { > public IOException unwrapRemoteException() { > try { > Class<?> realClass = Class.forName(getClassName()); > return instantiateException(realClass.asSubclass(IOException.class)); > } catch(Exception e) { > // cannot instantiate the original exception, just return this > } > return this; > } > {code} > *Possible solution:-* > We can create our own HBaseRemoteWithExtrasException(extension of > RemoteWithExtrasException) so that default class loader will be the one from >
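The proposed direction can be sketched as follows (illustrative only; the class and method names here are hypothetical, not the patch's actual code): resolve the remote exception class with the classloader that loaded our own classes first, and fall back to the thread context classloader.

```java
final class ExceptionClassResolver {
    // Try the loader that loaded our own classes first (standing in for the
    // loader of the HBase classes), then fall back to the context classloader
    // set up by RunJar. Either way, JVM delegation still finds JDK classes.
    static Class<?> resolve(String className) throws ClassNotFoundException {
        ClassLoader own = ExceptionClassResolver.class.getClassLoader();
        try {
            return Class.forName(className, false, own);
        } catch (ClassNotFoundException e) {
            return Class.forName(className, false,
                Thread.currentThread().getContextClassLoader());
        }
    }
}
```

With this lookup order, an exception class shipped inside the fat client jar resolves successfully, so unwrapping can recognize DoNotRetryIOException subclasses and stop the retry loop.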
[jira] [Commented] (HBASE-17181) Let HBase thrift2 support TThreadedSelectorServer
[ https://issues.apache.org/jira/browse/HBASE-17181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704908#comment-15704908 ] Jian Yi commented on HBASE-17181: - Thanks > Let HBase thrift2 support TThreadedSelectorServer > - > > Key: HBASE-17181 > URL: https://issues.apache.org/jira/browse/HBASE-17181 > Project: HBase > Issue Type: New Feature > Components: Thrift >Affects Versions: 1.2.3 >Reporter: Jian Yi >Priority: Minor > Labels: features > Fix For: 1.2.3 > > Attachments: HBASE-17181-V1.patch, HBASE-17181-V2.patch, > HBASE-17181-V3.patch, HBASE-17181-V4.patch, ThriftServer.java > > Original Estimate: 2h > Remaining Estimate: 2h > > Add TThreadedSelectorServer for HBase Thrift2 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17178) Add region balance throttling
[ https://issues.apache.org/jira/browse/HBASE-17178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guanghao Zhang updated HBASE-17178: --- Attachment: HBASE-17178-v3.patch Attached a v3 patch which addresses the review comments. > Add region balance throttling > - > > Key: HBASE-17178 > URL: https://issues.apache.org/jira/browse/HBASE-17178 > Project: HBase > Issue Type: Improvement > Components: Balancer >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang > Attachments: HBASE-17178-v1.patch, HBASE-17178-v2.patch, > HBASE-17178-v3.patch > > > Our online cluster serves dozens of tables and different tables serve different services. If the balancer moves too many regions at the same time, it will decrease the availability of some tables or services. So we added region balance throttling on our online cluster. > We introduce a new config, hbase.balancer.max.balancing.regions, which means the max number of regions in transition when balancing. > If we set this to 1 and a table has 100 regions, then the table will have 99 regions available at any time. It helps a lot for our use case and has been running for a long time on our production cluster. > But for some use cases we need the balancer to run faster. If a cluster of 100 regionservers adds 50 new regionservers for peak requests, it needs the balancer to run as fast as possible and let the cluster reach a balanced state soon. Our idea is to compute the max number of regions in transition from the max balancing time and the average time a region spends in transition. > The balancer then uses the computed value for throttling. > Examples for understanding: > A cluster has 100 regionservers, each regionserver has 200 regions, the average time of a region in transition is 1 second, and we configure the max balancing time as 10 * 60 seconds. > Case 1. One regionserver crashes; the cluster needs to balance at most 200 regions. > Then 200 / (10 * 60s / 1s) < 1, which means the max number of regions in > transition when balancing is 1. Then the balancer can move regions one by one > and the cluster will have high availability while balancing. > Case 2. Add another 100 regionservers; the cluster needs to balance at most 10,000 > regions. Then 10000 / (10 * 60s / 1s) = 16.7, which means the max number of > regions in transition when balancing is 17. Then the cluster can reach a > balanced state within the max balancing time. > Any suggestions are welcomed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17178) Add region balance throttling
[ https://issues.apache.org/jira/browse/HBASE-17178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15705012#comment-15705012 ] Phil Yang commented on HBASE-17178: --- +1 on v3 patch. Let's see QA result. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Work started] (HBASE-17192) remove use of scala-tools.org from pom
[ https://issues.apache.org/jira/browse/HBASE-17192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-17192 started by Sean Busbey. --- > remove use of scala-tools.org from pom > -- > > Key: HBASE-17192 > URL: https://issues.apache.org/jira/browse/HBASE-17192 > Project: HBase > Issue Type: Bug > Components: spark, website > Affects Versions: 2.0.0 > Reporter: Sean Busbey > Assignee: Sean Busbey > Priority: Blocker > Fix For: 2.0.0 > > > Our pom makes use of scala-tools.org as a repository. That domain currently issues redirects for all URLs; for maven coordinates those redirects lead to 'not found' and the 'permanently moved' HTML gets saved. This corrupts the local maven repository in a way that causes the mvn:site goal to give an opaque error: > {code} > [INFO] > > [INFO] BUILD FAILURE > [INFO] > > [INFO] Total time: 01:46 min > [INFO] Finished at: 2016-11-28T14:17:10+00:00 > [INFO] Final Memory: 292M/6583M > [INFO] > > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project > hbase: Execution default-site of goal > org.apache.maven.plugins:maven-site-plugin:3.4:site failed: For artifact > {null:null:null:jar}: The groupId cannot be empty. -> [Help 1] > [ERROR] > {code} > Rerunning in debug mode with {{mvn -X}} gives no additional useful information. > All artifacts from scala-tools.org are now found in Maven Central. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
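Since all the artifacts are now in Maven Central, the fix described here amounts to deleting the scala-tools.org repository entries from the pom. A hypothetical sketch of the shape of entry being removed (the exact id, name, and URL in HBase's actual pom may differ):

```xml
<!-- Entries of this shape would be removed from the pom; the artifacts
     now resolve from Maven Central without any extra repository. -->
<repositories>
  <repository>
    <id>scala-tools.org</id>
    <name>Scala-Tools Maven2 Repository</name>
    <url>http://scala-tools.org/repo-releases</url>
  </repository>
</repositories>
```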
[jira] [Commented] (HBASE-17181) Let HBase thrift2 support TThreadedSelectorServer
[ https://issues.apache.org/jira/browse/HBASE-17181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15704951#comment-15704951 ] Hadoop QA commented on HBASE-17181: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. 
{color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 46s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 55s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 18s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 19s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch 3 line(s) with tabs. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 24m 32s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 2s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s {color} | {color:green} hbase-thrift in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 7s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 32m 42s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12840843/HBASE-17181-V4.patch | | JIRA Issue | HBASE-17181 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 5fc1bcee57ae 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh | | git revision | master / 7bcbac9 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | whitespace | https://builds.apache.org/job/PreCommit-HBASE-Build/4676/artifact/patchprocess/whitespace-eol.txt | | whitespace | https://builds.apache.org/job/PreCommit-HBASE-Build/4676/artifact/patchprocess/whitespace-tabs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/4676/testReport/ | | modules | C: hbase-thrift U: hbase-thrift | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/4676/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org |
[jira] [Updated] (HBASE-16941) FavoredNodes - Split/Merge code paths
[ https://issues.apache.org/jira/browse/HBASE-16941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thiruvel Thirumoolan updated HBASE-16941: - Attachment: HBASE-16941.master.007.patch > FavoredNodes - Split/Merge code paths > - > > Key: HBASE-16941 > URL: https://issues.apache.org/jira/browse/HBASE-16941 > Project: HBase > Issue Type: Sub-task > Reporter: Thiruvel Thirumoolan > Assignee: Thiruvel Thirumoolan > Fix For: 2.0.0 > > Attachments: HBASE-16941.master.001.patch, HBASE-16941.master.002.patch, HBASE-16941.master.003.patch, HBASE-16941.master.004.patch, HBASE-16941.master.005.patch, HBASE-16941.master.006.patch, HBASE-16941.master.007.patch > > > This jira deals with the split/merge logic discussed as part of HBASE-15532. The design document can be seen at HBASE-15531. The specific changes are: > Split and merged regions should inherit favored node information from their parent regions. For splits, we also include some randomness, so that even after subsequent splits the regions stay more or less distributed. For a split, we include 2 FN (favored nodes) from the parent and generate one random node. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
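The selection rule in the last sentence ("include 2 FN from the parent and generate one random node") can be sketched as follows. This is a hypothetical illustration, using plain strings for server names instead of HBase's ServerName and a made-up method name, not the patch's actual code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class FavoredNodesSplitSketch {
    /**
     * Hypothetical sketch: a daughter region keeps the parent's first two
     * favored nodes and gets a third node picked at random from the servers
     * not already chosen, so repeated splits spread the third replica around.
     */
    static List<String> daughterFavoredNodes(List<String> parentFavoredNodes,
                                             List<String> allServers,
                                             Random rand) {
        // Inherit 2 favored nodes from the parent region.
        List<String> result = new ArrayList<>(parentFavoredNodes.subList(0, 2));
        // Pick the third node at random, avoiding duplicates.
        List<String> candidates = new ArrayList<>(allServers);
        candidates.removeAll(result);
        result.add(candidates.get(rand.nextInt(candidates.size())));
        return result;
    }
}
```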