[jira] [Commented] (HBASE-15787) Change the flush related heuristics to work with offheap size configured
[ https://issues.apache.org/jira/browse/HBASE-15787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15653345#comment-15653345 ] ramkrishna.s.vasudevan commented on HBASE-15787: I am not able to add this to RB unless HBASE-15786 is completed, I believe. Will work on adding test cases. > Change the flush related heuristics to work with offheap size configured > > > Key: HBASE-15787 > URL: https://issues.apache.org/jira/browse/HBASE-15787 > Project: HBase > Issue Type: Sub-task > Components: regionserver >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan > Fix For: 2.0.0 > > Attachments: HBASE-15787.patch > > > With offheap MSLAB in place we may have to change the flush related > heuristics to work with the offheap size configured rather than the java heap > size. > Since we now have a clear separation of the memstore data size and memstore > heap size, for the offheap memstore: > -> Decide if the global.offheap.memstore.size is breached for blocking > updates and force flushes. > -> If the onheap global.memstore.size is breached (due to heap overhead), even > then block updates and force flushes. > -> The global.memstore.size.lower.limit is now by default 95% of the > global.memstore.size. So now we apply this 95% on the > global.offheap.memstore.size and also on global.memstore.size (as it was done > for the onheap case). > -> We will have new FlushTypes introduced: > {code} > ABOVE_ONHEAP_LOWER_MARK, /* happens due to lower mark breach of onheap > memstore settings. > An offheap memstore can even breach the > onheap_lower_mark */ > ABOVE_ONHEAP_HIGHER_MARK, /* happens due to higher mark breach of onheap > memstore settings. > An offheap memstore can even breach the > onheap_higher_mark */ > ABOVE_OFFHEAP_LOWER_MARK, /* happens due to lower mark breach of offheap > memstore settings */ > ABOVE_OFFHEAP_HIGHER_MARK; > {code} > -> regionServerAccounting does all the accounting. > -> HeapMemoryTuner is what is a little tricky here. 
First thing to note is that > at no point will it try to increase or decrease the > global.offheap.memstore.size. If there is heap pressure then it will try to > increase the memstore heap limit. > In case of an offheap memstore there is always a chance that the heap pressure > does not increase. In that case we could ideally decrease the heap limit for > the memstore. The current logic of the heap memory tuner is such that things will > naturally settle down. But on discussion, what we thought is: let us include > the flush count that happens due to offheap pressure but give that a lesser > weightage, and thus ensure that the initial decrease of the memstore heap limit > does not happen. Currently that fraction is set as 0.5. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
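The heuristics in the comment above (a 95% lower-limit factor applied to both the onheap and offheap higher marks, and a 0.5 weightage for flushes caused by offheap pressure in the tuner) can be sketched as plain arithmetic. All variable names and configured sizes below are hypothetical illustrations, not HBase's actual fields or defaults:

```java
public class FlushHeuristicsSketch {
    public static void main(String[] args) {
        // Hypothetical configured sizes (illustrative, not actual HBase defaults).
        long globalOffheapMemstoreSize = 4L * 1024 * 1024 * 1024; // 4 GB offheap limit
        long globalOnheapMemstoreSize  = 1L * 1024 * 1024 * 1024; // 1 GB onheap limit

        // global.memstore.size.lower.limit defaults to 95% and is applied to
        // BOTH the offheap and the onheap higher marks.
        double lowerLimitFraction = 0.95;
        long offheapLowerMark = (long) (globalOffheapMemstoreSize * lowerLimitFraction);
        long onheapLowerMark  = (long) (globalOnheapMemstoreSize * lowerLimitFraction);

        // HeapMemoryTuner: flushes caused by offheap pressure are counted with a
        // lesser weightage (0.5) so they do not trigger an early decrease of the
        // memstore heap limit.
        double offheapFlushWeight = 0.5;
        long onheapPressureFlushes = 10;  // example counts
        long offheapPressureFlushes = 6;
        double effectiveFlushCount =
            onheapPressureFlushes + offheapFlushWeight * offheapPressureFlushes;

        System.out.println(offheapLowerMark);    // 4080218931
        System.out.println(onheapLowerMark);     // 1020054732
        System.out.println(effectiveFlushCount); // 13.0
    }
}
```

The point of the 0.5 weight is that six offheap-pressure flushes only count as three toward the tuner's decision, so the heap limit is not shrunk merely because the offheap side was busy.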
[jira] [Updated] (HBASE-15787) Change the flush related heuristics to work with offheap size configured
[ https://issues.apache.org/jira/browse/HBASE-15787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-15787: --- Description: With offheap MSLAB in place we may have to change the flush related heuristics to work with the offheap size configured rather than the java heap size. Since we now have a clear separation of the memstore data size and memstore heap size, for the offheap memstore: -> Decide if the global.offheap.memstore.size is breached for blocking updates and force flushes. -> If the onheap global.memstore.size is breached (due to heap overhead), even then block updates and force flushes. -> The global.memstore.size.lower.limit is now by default 95% of the global.memstore.size. So now we apply this 95% on the global.offheap.memstore.size and also on global.memstore.size (as it was done for the onheap case). -> We will have new FlushTypes introduced: {code} ABOVE_ONHEAP_LOWER_MARK, /* happens due to lower mark breach of onheap memstore settings. An offheap memstore can even breach the onheap_lower_mark */ ABOVE_ONHEAP_HIGHER_MARK, /* happens due to higher mark breach of onheap memstore settings. An offheap memstore can even breach the onheap_higher_mark */ ABOVE_OFFHEAP_LOWER_MARK, /* happens due to lower mark breach of offheap memstore settings */ ABOVE_OFFHEAP_HIGHER_MARK; {code} -> regionServerAccounting does all the accounting. -> HeapMemoryTuner is what is a little tricky here. First thing to note is that at no point will it try to increase or decrease the global.offheap.memstore.size. If there is heap pressure then it will try to increase the memstore heap limit. In case of an offheap memstore there is always a chance that the heap pressure does not increase. In that case we could ideally decrease the heap limit for the memstore. The current logic of the heap memory tuner is such that things will naturally settle down. 
But on discussion, what we thought is: let us include the flush count that happens due to offheap pressure but give that a lesser weightage, and thus ensure that the initial decrease of the memstore heap limit does not happen. Currently that fraction is set as 0.5. was: With offheap MSLAB in place we may have to change the flush related heuristics to work with the offheap size configured rather than the java heap size. > Change the flush related heuristics to work with offheap size configured > > > Key: HBASE-15787 > URL: https://issues.apache.org/jira/browse/HBASE-15787 > Project: HBase > Issue Type: Sub-task > Components: regionserver >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan > Fix For: 2.0.0 > > Attachments: HBASE-15787.patch
[jira] [Updated] (HBASE-15787) Change the flush related heuristics to work with offheap size configured
[ https://issues.apache.org/jira/browse/HBASE-15787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ramkrishna.s.vasudevan updated HBASE-15787: --- Attachment: HBASE-15787.patch This patch is built on top of HBASE-15786. There are no tests added for now. Will add in subsequent patches. Will update the description. > Change the flush related heuristics to work with offheap size configured > > > Key: HBASE-15787 > URL: https://issues.apache.org/jira/browse/HBASE-15787 > Project: HBase > Issue Type: Sub-task > Components: regionserver >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan > Fix For: 2.0.0 > > Attachments: HBASE-15787.patch > > > With offheap MSLAB in place we may have to change the flush related > heuristics to work with offheap size configured rather than the java heap > size. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16985) TestClusterId failed due to wrong hbase rootdir
[ https://issues.apache.org/jira/browse/HBASE-16985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653308#comment-15653308 ] Hadoop QA commented on HBASE-16985: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 5s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 14s {color} | {color:green} branch-1.1 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s {color} | {color:green} branch-1.1 passed with JDK v1.8.0_111 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s {color} | {color:green} branch-1.1 passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 32s {color} | {color:green} branch-1.1 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 22s {color} | {color:green} branch-1.1 passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 50s {color} | {color:red} hbase-server in branch-1.1 has 80 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 30s {color} | {color:red} hbase-server in branch-1.1 failed with JDK v1.8.0_111. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s {color} | {color:green} branch-1.1 passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 41s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s {color} | {color:green} the patch passed with JDK v1.8.0_111 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 16s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 12m 26s {color} | {color:green} The patch does not cause any errors with Hadoop 2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} | | {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 23s {color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_111. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s {color} | {color:green} the patch passed with JDK v1.7.0_80 {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 12s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 28s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 126m 16s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Timed out junit tests | org.apache.hadoop.hbase.master.TestGetLastFlushedSequenceId | | | org.apache.hadoop.hbase.master.TestMasterFailover | | | org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancer | | | org.apache.hadoop.hbase.master.TestDistributedLogSplitting | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:35e2245 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838293/HBASE-16985-branch-1.1.patch | | JIRA
[jira] [Resolved] (HBASE-17061) HBase shell split command may be unusable if region name includes binary-encoded data
[ https://issues.apache.org/jira/browse/HBASE-17061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeongdae Kim resolved HBASE-17061. -- Resolution: Duplicate Already fixed by HBASE-14767. > HBase shell split command may be unusable if region name includes > binary-encoded data > - > > Key: HBASE-17061 > URL: https://issues.apache.org/jira/browse/HBASE-17061 > Project: HBase > Issue Type: Bug > Components: shell >Affects Versions: 1.2.3 >Reporter: Jeongdae Kim >Priority: Minor > > Similar to HBASE-15032, HBASE-4160, and HBASE-4115, if the split point has some > binary characters, the table will be split at a wrong split point like below. > hbase(main):001:0> split "test1", "\xFF\x01\x12" > Then, the table will be split at "\xEF\xBF\xBD\x00\x12". -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17061) HBase shell split command may be unusable if region name includes binary-encoded data
[ https://issues.apache.org/jira/browse/HBASE-17061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeongdae Kim updated HBASE-17061: - Affects Version/s: 1.2.3 > HBase shell split command may be unusable if region name includes > binary-encoded data > - > > Key: HBASE-17061 > URL: https://issues.apache.org/jira/browse/HBASE-17061 > Project: HBase > Issue Type: Bug > Components: shell >Affects Versions: 1.2.3 >Reporter: Jeongdae Kim >Priority: Minor > > Similar to HBASE-15032, HBASE-4160, and HBASE-4115, if the split point has some > binary characters, the table will be split at a wrong split point like below. > hbase(main):001:0> split "test1", "\xFF\x01\x12" > Then, the table will be split at "\xEF\xBF\xBD\x00\x12". -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-17061) HBase shell split command may be unusable if region name includes binary-encoded data
Jeongdae Kim created HBASE-17061: Summary: HBase shell split command may be unusable if region name includes binary-encoded data Key: HBASE-17061 URL: https://issues.apache.org/jira/browse/HBASE-17061 Project: HBase Issue Type: Bug Components: shell Reporter: Jeongdae Kim Priority: Minor Similar to HBASE-15032, HBASE-4160, and HBASE-4115, if the split point has some binary characters, the table will be split at a wrong split point like below. hbase(main):001:0> split "test1", "\xFF\x01\x12" Then, the table will be split at "\xEF\xBF\xBD\x00\x12". -- This message was sent by Atlassian JIRA (v6.3.4#6332)
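The corruption reported above is a UTF-8 round-trip: the invalid byte 0xFF is decoded to the replacement character U+FFFD, which re-encodes as the three bytes EF BF BD, so the server receives a different (and longer) split key. A minimal pure-Java sketch of that round-trip follows (illustrative only, not the shell's actual code path; note the JIRA text reports a \x00\x12 tail while this round-trip preserves \x01\x12, but the EF BF BD mangling of 0xFF is the same):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class SplitKeyMangling {
    public static void main(String[] args) {
        // The intended binary split point: "\xFF\x01\x12".
        byte[] splitPoint = {(byte) 0xFF, 0x01, 0x12};

        // Decoding as UTF-8 replaces the invalid byte 0xFF with U+FFFD...
        String decoded = new String(splitPoint, StandardCharsets.UTF_8);

        // ...and re-encoding turns that replacement char into EF BF BD.
        byte[] mangled = decoded.getBytes(StandardCharsets.UTF_8);

        System.out.println(Arrays.toString(mangled));
        // [-17, -65, -67, 1, 18]  i.e. \xEF\xBF\xBD\x01\x12
    }
}
```

This is why fixes in this area (e.g. HBASE-14767) keep the split key as raw bytes instead of letting it pass through a String decode/encode cycle.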
[jira] [Commented] (HBASE-16985) TestClusterId failed due to wrong hbase rootdir
[ https://issues.apache.org/jira/browse/HBASE-16985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15653159#comment-15653159 ] stack commented on HBASE-16985: --- @busbey You ok w/ backport to branch-1.2? Test failure fix. Problem since hbase 1.0.0. Thanks. > TestClusterId failed due to wrong hbase rootdir > --- > > Key: HBASE-16985 > URL: https://issues.apache.org/jira/browse/HBASE-16985 > Project: HBase > Issue Type: Bug > Components: test >Affects Versions: 1.0.0 >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang >Priority: Minor > Fix For: 2.0.0, 1.4.0, 1.3.1, 1.2.5, 1.1.8 > > Attachments: HBASE-16985-branch-1.1.patch, > HBASE-16985-branch-1.2.patch, HBASE-16985-branch-1.patch, > HBASE-16985-branch-1.patch, HBASE-16985-v1.patch, HBASE-16985-v1.patch, > HBASE-16985.patch > > > https://builds.apache.org/job/PreCommit-HBASE-Build/4253/testReport/org.apache.hadoop.hbase.regionserver/TestClusterId/testClusterId/ > {code} > java.io.IOException: Shutting down > at > org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:230) > at > org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:409) > at > org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:227) > at > org.apache.hadoop.hbase.MiniHBaseCluster.(MiniHBaseCluster.java:96) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1071) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1037) > at > org.apache.hadoop.hbase.regionserver.TestClusterId.testClusterId(TestClusterId.java:85) > {code} > The cluster can not start up because there is no active master. The active > master can not finish initializing because the hbase:namespace region can not > be assigned. > In the TestClusterId unit test, TEST_UTIL.startMiniHBaseCluster sets a new hbase > root dir. But the regionserver thread which started first used a different > hbase root dir. 
If the hbase:namespace region is assigned to this regionserver, the > region can not be assigned because there is no tableinfo under the wrong hbase root > dir. > When the regionserver reports to the master, it will get back some new config. But the > FSTableDescriptors has been initialized, so its root dir didn't change. > {code} > if (LOG.isDebugEnabled()) { > LOG.info("Config from master: " + key + "=" + value); > } > {code} > I thought FSTableDescriptors needs to update the rootdir when the regionserver gets the > report from the master. > The master branch has the same problem, too. But the balancer always assigns the > hbase:namespace region to the master. So this unit test can pass on the master > branch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
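The root cause described above, an object capturing the root dir it was constructed with and never seeing the later config update from the master, can be sketched in plain Java. The classes below are hypothetical stand-ins, not HBase's actual Configuration or FSTableDescriptors:

```java
import java.util.HashMap;
import java.util.Map;

public class StaleRootDirSketch {
    // Stand-in for a mutable Configuration: a key/value map.
    static class Conf {
        final Map<String, String> kv = new HashMap<>();
        void set(String k, String v) { kv.put(k, v); }
        String get(String k) { return kv.get(k); }
    }

    // Stand-in for a descriptors class: caches the rootdir at construction
    // and never refreshes it, which is the bug pattern described above.
    static class Descriptors {
        private final String rootDir; // captured once
        Descriptors(Conf conf) { this.rootDir = conf.get("hbase.rootdir"); }
        String getRootDir() { return rootDir; }
    }

    public static void main(String[] args) {
        Conf conf = new Conf();
        conf.set("hbase.rootdir", "/old/rootdir");
        Descriptors fstd = new Descriptors(conf); // regionserver starts first

        // The master later hands back new config ("Config from master: ...").
        conf.set("hbase.rootdir", "/new/rootdir");

        // The descriptors object still points at the stale dir,
        // so no tableinfo is found there.
        System.out.println(fstd.getRootDir()); // /old/rootdir
    }
}
```

The fix direction suggested in the comment is to refresh the cached rootdir when the regionserver processes the master's report, rather than relying on the construction-time value.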
[jira] [Commented] (HBASE-15786) Create Offheap Memstore
[ https://issues.apache.org/jira/browse/HBASE-15786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15653155#comment-15653155 ] stack commented on HBASE-15786: --- Ok. Yeah, update the subject. Ok on MSLABImpl given it implements MemStoreLAB. Make sure you add to the class comment that this is going on (that mostly from pool but if not, can be offheap and onheap references...) bq. this is for the Cells NOT of type ExtendedCell.. Actually speaking, all cells on the server side are supposed to be of ExtendedCell type anyway. Still, for the completion of the impl, we consider the others also. Say this in code comments. This helps. bq. This is an existing ref. I did not add it in this patch. Makes sense. Came in for in-memory compaction. bq. No. This tuner tunes only heap memory. Ok. Make clear in comments (if not already) > Create Offheap Memstore > --- > > Key: HBASE-15786 > URL: https://issues.apache.org/jira/browse/HBASE-15786 > Project: HBase > Issue Type: Sub-task > Components: regionserver >Reporter: ramkrishna.s.vasudevan >Assignee: Anoop Sam John > Fix For: 2.0.0 > > Attachments: HBASE-15786.patch > > > We can make use of the MSLAB pool for this off heap memstore. > Right now one can specify the global memstore size (heap size) as a % of max > memory using a config. We will add another config with which one can specify > the global off heap memstore size. This will be an exact size, not a %. When the off > heap memstore is in use, we will give this entire area to the MSLAB pool and > that will create off heap chunks. So when cells are added to the memstore, the > cell data gets copied into the off heap MSLAB chunk spaces. Note that when > the pool size is not really enough and we need additional chunk creation, we > won't use the off heap area for that. We don't want to create so many on-demand > DBBs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
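A hedged sketch of how the two limits described in this issue might sit together in hbase-site.xml, assuming the property names this work introduces (the offheap limit is an exact size, the onheap one stays a fraction of the java heap; treat both property names and values as illustrative assumptions, not confirmed configuration):

```xml
<!-- Hypothetical hbase-site.xml fragment; names/values assumed from this JIRA. -->
<configuration>
  <!-- Onheap global memstore limit, as a fraction of the java heap (existing). -->
  <property>
    <name>hbase.regionserver.global.memstore.size</name>
    <value>0.4</value>
  </property>
  <!-- Offheap global memstore limit: an exact size (here, in MB), not a %. -->
  <property>
    <name>hbase.regionserver.offheap.global.memstore.size</name>
    <value>4096</value>
  </property>
</configuration>
```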
[jira] [Commented] (HBASE-17055) Disabling table not getting enabled after clean cluster restart.
[ https://issues.apache.org/jira/browse/HBASE-17055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15653153#comment-15653153 ] Stephen Yuan Jiang commented on HBASE-17055: [~sreenivasulureddy], you showed a log where the EnableTableProcedure skips this region ({{procedure.EnableTableProcedure: Skipping assign for the region...}}). Could you search the log and see in what other places this {{1890fa9c085dcc2ee0602f4bab069d10}} region is logged? I'd like to have more information on why AM/SSH skips this region during restart. > Disabling table not getting enabled after clean cluster restart. > > > Key: HBASE-17055 > URL: https://issues.apache.org/jira/browse/HBASE-17055 > Project: HBase > Issue Type: Bug > Components: Region Assignment >Affects Versions: 1.3.0 >Reporter: Y. SREENIVASULU REDDY >Assignee: Stephen Yuan Jiang > Fix For: 1.3.0 > > > Scenario: > 1. Disable the table; while the disable is in progress, > 2. restart the whole HBase service. > 3. Then enable the table. > The above operations lead to RIT continuously. > Please find the below logs for understanding. > While disabling the table, the whole hbase service went down. > The following are the master logs > {noformat} > 2016-11-09 19:32:55,102 INFO > [RpcServer.FifoWFPBQ.default.handler=49,queue=4,port=16000] master.HMaster: > Client=seenu//host-1 disable testTable > 2016-11-09 19:32:55,257 DEBUG > [RpcServer.FifoWFPBQ.default.handler=49,queue=4,port=16000] > procedure2.ProcedureExecutor: Procedure DisableTableProcedure > (table=testTable) id=8 owner=seenu state=RUNNABLE:DISABLE_TABLE_PREPARE added > to the store. 
> 2016-11-09 19:32:55,264 DEBUG [ProcedureExecutor-5] > lock.ZKInterProcessLockBase: Acquired a lock for > /hbase/table-lock/testTable/write-master:165 > 2016-11-09 19:32:55,285 DEBUG > [RpcServer.FifoWFPBQ.default.handler=49,queue=4,port=16000] > master.MasterRpcServices: Checking to see if procedure is done procId=8 > 2016-11-09 19:32:55,386 DEBUG > [RpcServer.FifoWFPBQ.default.handler=49,queue=4,port=16000] > master.MasterRpcServices: Checking to see if procedure is done procId=8 > 2016-11-09 19:32:55,513 INFO [ProcedureExecutor-5] > zookeeper.ZKTableStateManager: Moving table testTable state from DISABLING to > DISABLING > 2016-11-09 19:32:55,587 DEBUG > [RpcServer.FifoWFPBQ.default.handler=49,queue=4,port=16000] > master.MasterRpcServices: Checking to see if procedure is done procId=8 > 2016-11-09 19:32:55,628 INFO [ProcedureExecutor-5] > procedure.DisableTableProcedure: Offlining 1 regions. > . > . > . > . > . > . > . > . > 2016-11-09 19:33:02,871 INFO [AM.ZK.Worker-pool2-t7] master.RegionStates: > Offlined 1890fa9c085dcc2ee0602f4bab069d10 from host-1,16040,1478690163056 > Wed Nov 9 19:33:02 CST 2016 Terminating master > {noformat} > Here we need to observe > {color:red} Offlined 1890fa9c085dcc2ee0602f4bab069d10 from > host-1,16040,1478690163056 {color} > Then the hmaster went down, and all regionservers were also brought down. > After the hmaster and regionservers were restarted, > the enable table operation was executed on the table. > {panel:title=HMaster > Logs|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1|bgColor=#CE} > {noformat} > 2016-11-09 19:49:57,059 INFO > [RpcServer.FifoWFPBQ.default.handler=49,queue=4,port=16000] master.HMaster: > Client=seenu//host-1 enable testTable > 2016-11-09 19:49:57,325 DEBUG > [RpcServer.FifoWFPBQ.default.handler=49,queue=4,port=16000] > procedure2.ProcedureExecutor: Procedure EnableTableProcedure > (table=testTable) id=9 owner=seenu state=RUNNABLE:ENABLE_TABLE_PREPARE added > to the store. 
> 2016-11-09 19:49:57,333 DEBUG [ProcedureExecutor-2] > lock.ZKInterProcessLockBase: Acquired a lock for > /hbase/table-lock/testTable/write-master:168 > 2016-11-09 19:49:57,335 DEBUG [hconnection-0x745317ee-shared--pool3-t11] > ipc.RpcClientImpl: Use SIMPLE authentication for service ClientService, > sasl=false > 2016-11-09 19:49:57,335 DEBUG [hconnection-0x745317ee-shared--pool3-t11] > ipc.RpcClientImpl: Connecting to host-1:16040 > 2016-11-09 19:49:57,347 DEBUG > [RpcServer.FifoWFPBQ.default.handler=49,queue=4,port=16000] > master.MasterRpcServices: Checking to see if procedure is done procId=9 > 2016-11-09 19:49:57,449 DEBUG > [RpcServer.FifoWFPBQ.default.handler=49,queue=4,port=16000] > master.MasterRpcServices: Checking to see if procedure is done procId=9 > 2016-11-09 19:49:57,579 INFO [ProcedureExecutor-2] > procedure.EnableTableProcedure: Attempting to enable the table testTable > 2016-11-09 19:49:57,580 INFO [ProcedureExecutor-2] > zookeeper.ZKTableStateManager: Moving table testTable state from DISABLED to > ENABLING > 2016-11-09 19:49:57,655 DEBUG >
[jira] [Commented] (HBASE-17055) Disabling table not getting enabled after clean cluster restart.
[ https://issues.apache.org/jira/browse/HBASE-17055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15653133#comment-15653133 ] Y. SREENIVASULU REDDY commented on HBASE-17055: --- Yes. This problem exists in all 1.x releases. > Disabling table not getting enabled after clean cluster restart. > > > Key: HBASE-17055 > URL: https://issues.apache.org/jira/browse/HBASE-17055 > Project: HBase > Issue Type: Bug > Components: Region Assignment >Affects Versions: 1.3.0 >Reporter: Y. SREENIVASULU REDDY >Assignee: Stephen Yuan Jiang > Fix For: 1.3.0
[jira] [Commented] (HBASE-17055) Disabling table not getting enabled after clean cluster restart.
[ https://issues.apache.org/jira/browse/HBASE-17055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15653131#comment-15653131 ] Y. SREENIVASULU REDDY commented on HBASE-17055: --- [~syuanjiang] Thanks for verifying this issue. {quote} does the ServerShutdownHandler procedure ever run to move the regions of host-1 to other RS? {quote} Here the case is a clean restart of the whole HBase cluster. Regions disabled smoothly on {{host-1}} are enabled successfully on the same RS or on another RS. But the issue happens for a {{disabling}} region that exited in the middle of the operation; only such a region is not able to be enabled in the restarted cluster. [~tedyu] Yes. This problem exists in all 1.x releases. > Disabling table not getting enabled after clean cluster restart. > > > Key: HBASE-17055 > URL: https://issues.apache.org/jira/browse/HBASE-17055 > Project: HBase > Issue Type: Bug > Components: Region Assignment >Affects Versions: 1.3.0 >Reporter: Y. SREENIVASULU REDDY >Assignee: Stephen Yuan Jiang > Fix For: 1.3.0
[jira] [Commented] (HBASE-15786) Create Offheap Memstore
[ https://issues.apache.org/jira/browse/HBASE-15786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653107#comment-15653107 ] Anoop Sam John commented on HBASE-15786: Oh ya.. I added a high-level description but forgot to change the title of the jira. It is not purely an off-heap memstore; the memstore as such is the same. It is a way to make an off-heap MSLAB pool, and you can see that is the major change. The patch is big because I made two renamings. HeapMemorySizeUtil -> MemorySizeUtil: slight changes in its methods plus some new methods; the difference is that this now also calculates the off-heap memory related settings, such as the max off-heap memory for the global memstore. HeapMSLAB -> MSLABImpl: this is because it deals with MSLAB on heap or off heap now. I thought of making a new class OffheapMSLAB, but the point is that when one says off-heap MSLAB, we expect the chunks it allocates to always be off heap. We do not want to create on-demand DBBs which will NOT get pooled; only chunks from the pool are DBBs. When the pool has no free chunk, we make a new one from MSLAB, but that will be on heap in any case. Is it good to call that MSLAB OffheapMSLAB? I think not. A cell is copied into the chunk area while being added to the memstore. We have a write() API in ExtendedCell, so the cell is responsible for writing itself into the MSLAB chunk area. The only difference is that this used to take a byte[], since the chunk backing was a byte[]; now I changed the chunk backing to a BB. So the methods in CellUtil/KVUtil are copies of the byte[]-based methods, which is why the names are confusing. I did not do any renaming or refactoring. Instead of the cell doing the write, we pull fragments of the cell and write them in KVUtil; this is for cells NOT of type ExtendedCell. Strictly speaking, all cells on the server side are supposed to be of type ExtendedCell anyway, but for completeness of the impl we consider the other case too. bq. An HRegion needs reference to RegionServerServices memory management? This is an existing ref. I did not add it in this patch. 
bq.Should the DefaultHeapMemoryTuner be renamed now you've renamed the class as MemorySizeUtil? No. This tuner tunes only heap memory. When the MSLAB pool is off heap we should not be tuning that area; that may be another tuner. As of now we will keep it simple, though we need some changes. Ram is working on a patch for that; it will come soon. > Create Offheap Memstore > --- > > Key: HBASE-15786 > URL: https://issues.apache.org/jira/browse/HBASE-15786 > Project: HBase > Issue Type: Sub-task > Components: regionserver >Reporter: ramkrishna.s.vasudevan >Assignee: Anoop Sam John > Fix For: 2.0.0 > > Attachments: HBASE-15786.patch > > > We can make use of the MSLAB pool for this off heap memstore. > Right now one can specify the global memstore size (heap size) as a % of max > memory using a config. We will add another config with which one can specify > the global off heap memstore size. This will be an exact size, not a %. When the off > heap memstore is in use, we will give this entire area to the MSLAB pool and > that will create off heap chunks. So when cells are added to the memstore, the > cell data gets copied into the off heap MSLAB chunk spaces. Note that when > the pool size is not really enough and we need additional chunk creation, we > won't use the off heap area for that. We don't want to create so many on-demand > DBBs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
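The allocation policy discussed above — chunks handed out from the pool are off-heap direct buffers, while overflow chunks are created on heap rather than as on-demand DBBs — can be sketched in isolation. This is an illustrative sketch only; the class and method names are hypothetical, not HBase's actual MSLAB implementation:

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of the pooling behavior described in the comment above:
// chunks taken from the pool are off-heap (direct) buffers; when the pool has
// no free chunk, fall back to a fresh on-heap buffer instead of allocating
// more direct memory on demand.
public class ChunkPoolDemo {
    static final int CHUNK_SIZE = 1024;
    static final Deque<ByteBuffer> pool = new ArrayDeque<>();

    // Pre-allocate the pool's off-heap chunks up front.
    static void fillPool(int chunks) {
        for (int i = 0; i < chunks; i++) {
            pool.push(ByteBuffer.allocateDirect(CHUNK_SIZE));
        }
    }

    // Hand out a pooled off-heap chunk if one is free, else an on-heap one.
    static ByteBuffer getChunk() {
        ByteBuffer pooled = pool.poll();
        return (pooled != null) ? pooled : ByteBuffer.allocate(CHUNK_SIZE);
    }

    public static void main(String[] args) {
        fillPool(1);
        System.out.println(getChunk().isDirect()); // from pool: true
        System.out.println(getChunk().isDirect()); // pool empty, on-heap: false
    }
}
```

Keeping overflow chunks on heap lets short allocation bursts be absorbed by the GC instead of growing the direct-memory footprint, which matches the rationale in the comment.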
[jira] [Updated] (HBASE-16985) TestClusterId failed due to wrong hbase rootdir
[ https://issues.apache.org/jira/browse/HBASE-16985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guanghao Zhang updated HBASE-16985: --- Attachment: HBASE-16985-branch-1.1.patch > TestClusterId failed due to wrong hbase rootdir > --- > > Key: HBASE-16985 > URL: https://issues.apache.org/jira/browse/HBASE-16985 > Project: HBase > Issue Type: Bug > Components: test >Affects Versions: 1.0.0 >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang >Priority: Minor > Fix For: 2.0.0, 1.4.0, 1.3.1, 1.2.5, 1.1.8 > > Attachments: HBASE-16985-branch-1.1.patch, > HBASE-16985-branch-1.2.patch, HBASE-16985-branch-1.patch, > HBASE-16985-branch-1.patch, HBASE-16985-v1.patch, HBASE-16985-v1.patch, > HBASE-16985.patch > > > https://builds.apache.org/job/PreCommit-HBASE-Build/4253/testReport/org.apache.hadoop.hbase.regionserver/TestClusterId/testClusterId/ > {code} > java.io.IOException: Shutting down > at > org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:230) > at > org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:409) > at > org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:227) > at > org.apache.hadoop.hbase.MiniHBaseCluster.<init>(MiniHBaseCluster.java:96) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1071) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1037) > at > org.apache.hadoop.hbase.regionserver.TestClusterId.testClusterId(TestClusterId.java:85) > {code} > The cluster can not start up because there is no active master. The active > master can not finish initializing because the hbase:namespace region can not > be assigned. > In the TestClusterId unit test, TEST_UTIL.startMiniHBaseCluster sets a new hbase > root dir, but the regionserver thread which started first used a different > hbase root dir. If the hbase:namespace region is assigned to this regionserver, the > region can not be assigned because there is no tableinfo under the wrong hbase root > dir. > When the regionserver reports to the master, it gets back some new config. But > FSTableDescriptors has already been initialized, so its root dir is not changed. > {code} > if (LOG.isDebugEnabled()) { > LOG.info("Config from master: " + key + "=" + value); > } > {code} > I thought FSTableDescriptors needs to update the rootdir when the regionserver gets the > report from the master. > The master branch has the same problem, too. But the balancer always assigns the > hbase:namespace region to the master, so this unit test can pass on the master > branch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
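The fix sketched in the description — refreshing a cached root dir when the master reports updated configuration back to the regionserver — could look roughly like the following. This is a minimal illustration; the class, field, and handler names are hypothetical, not the actual FSTableDescriptors API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch (not HBase's actual code): a cached root dir that is
// refreshed when updated configuration arrives from the master, instead of
// keeping the value it was initialized with at startup.
public class RootDirRefreshDemo {
    static String cachedRootDir = "/old/hbase"; // value from startup-time config

    // Stand-in for the handler of config keys reported back by the master.
    static void onConfigFromMaster(Map<String, String> config) {
        String newRoot = config.get("hbase.rootdir");
        if (newRoot != null && !newRoot.equals(cachedRootDir)) {
            cachedRootDir = newRoot; // the update missing in the report path
        }
    }

    public static void main(String[] args) {
        Map<String, String> fromMaster = new HashMap<>();
        fromMaster.put("hbase.rootdir", "/new/hbase");
        onConfigFromMaster(fromMaster);
        System.out.println(cachedRootDir); // prints /new/hbase
    }
}
```

Here the cache follows the reported config; the bug described above is that the real FSTableDescriptors keeps its startup value, so it looks for tableinfo under the wrong root dir.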
[jira] [Updated] (HBASE-16985) TestClusterId failed due to wrong hbase rootdir
[ https://issues.apache.org/jira/browse/HBASE-16985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guanghao Zhang updated HBASE-16985: --- Status: Patch Available (was: Reopened) > TestClusterId failed due to wrong hbase rootdir > --- > > Key: HBASE-16985 > URL: https://issues.apache.org/jira/browse/HBASE-16985 > Project: HBase > Issue Type: Bug > Components: test >Affects Versions: 1.0.0 >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang >Priority: Minor > Fix For: 2.0.0, 1.4.0, 1.3.1, 1.2.5, 1.1.8 > > Attachments: HBASE-16985-branch-1.2.patch, > HBASE-16985-branch-1.patch, HBASE-16985-branch-1.patch, HBASE-16985-v1.patch, > HBASE-16985-v1.patch, HBASE-16985.patch > > > https://builds.apache.org/job/PreCommit-HBASE-Build/4253/testReport/org.apache.hadoop.hbase.regionserver/TestClusterId/testClusterId/ > {code} > java.io.IOException: Shutting down > at > org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:230) > at > org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:409) > at > org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:227) > at > org.apache.hadoop.hbase.MiniHBaseCluster.<init>(MiniHBaseCluster.java:96) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1071) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1037) > at > org.apache.hadoop.hbase.regionserver.TestClusterId.testClusterId(TestClusterId.java:85) > {code} > The cluster can not start up because there is no active master. The active > master can not finish initializing because the hbase:namespace region can not > be assigned. > In the TestClusterId unit test, TEST_UTIL.startMiniHBaseCluster sets a new hbase > root dir, but the regionserver thread which started first used a different > hbase root dir. If the hbase:namespace region is assigned to this regionserver, the > region can not be assigned because there is no tableinfo under the wrong hbase root > dir. > When the regionserver reports to the master, it gets back some new config. But > FSTableDescriptors has already been initialized, so its root dir is not changed. > {code} > if (LOG.isDebugEnabled()) { > LOG.info("Config from master: " + key + "=" + value); > } > {code} > I thought FSTableDescriptors needs to update the rootdir when the regionserver gets the > report from the master. > The master branch has the same problem, too. But the balancer always assigns the > hbase:namespace region to the master, so this unit test can pass on the master > branch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16985) TestClusterId failed due to wrong hbase rootdir
[ https://issues.apache.org/jira/browse/HBASE-16985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guanghao Zhang updated HBASE-16985: --- Attachment: HBASE-16985-branch-1.2.patch > TestClusterId failed due to wrong hbase rootdir > --- > > Key: HBASE-16985 > URL: https://issues.apache.org/jira/browse/HBASE-16985 > Project: HBase > Issue Type: Bug > Components: test >Affects Versions: 1.0.0 >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang >Priority: Minor > Fix For: 2.0.0, 1.4.0, 1.3.1, 1.2.5, 1.1.8 > > Attachments: HBASE-16985-branch-1.2.patch, > HBASE-16985-branch-1.patch, HBASE-16985-branch-1.patch, HBASE-16985-v1.patch, > HBASE-16985-v1.patch, HBASE-16985.patch > > > https://builds.apache.org/job/PreCommit-HBASE-Build/4253/testReport/org.apache.hadoop.hbase.regionserver/TestClusterId/testClusterId/ > {code} > java.io.IOException: Shutting down > at > org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:230) > at > org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:409) > at > org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:227) > at > org.apache.hadoop.hbase.MiniHBaseCluster.<init>(MiniHBaseCluster.java:96) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1071) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1037) > at > org.apache.hadoop.hbase.regionserver.TestClusterId.testClusterId(TestClusterId.java:85) > {code} > The cluster can not start up because there is no active master. The active > master can not finish initializing because the hbase:namespace region can not > be assigned. > In the TestClusterId unit test, TEST_UTIL.startMiniHBaseCluster sets a new hbase > root dir, but the regionserver thread which started first used a different > hbase root dir. If the hbase:namespace region is assigned to this regionserver, the > region can not be assigned because there is no tableinfo under the wrong hbase root > dir. > When the regionserver reports to the master, it gets back some new config. But > FSTableDescriptors has already been initialized, so its root dir is not changed. > {code} > if (LOG.isDebugEnabled()) { > LOG.info("Config from master: " + key + "=" + value); > } > {code} > I thought FSTableDescriptors needs to update the rootdir when the regionserver gets the > report from the master. > The master branch has the same problem, too. But the balancer always assigns the > hbase:namespace region to the master, so this unit test can pass on the master > branch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15786) Create Offheap Memstore
[ https://issues.apache.org/jira/browse/HBASE-15786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15653032#comment-15653032 ] stack commented on HBASE-15786: --- Is the subject correct? Maybe say more about what the patch is doing. I did a first pass. When do I need this piecemeal copying? copyRowTo ... copying from a Cell to a ByteBuffer? Can I not use the original Cell instead of the new ByteBuffer? We've talked about this before; we don't want the Cell doing its own serialization? In appendKeyTo over in KVUtil, there is an external dependency on a particular structure. It makes it hard to ever change the Cell layout when it is in more than just one place. We have appendKeyTo which appends to a ByteBuffer? And then appendToByteBuffer which appends to a ByteBuffer. Why do we have to say ByteBuffer in the latter case but not in the first? Later we have copyCellTo... Sometimes we say ByteBuffered as in ByteBufferedCell, and now we are starting to say ByteBuffer instead, as in ByteBufferWriter or ByteBufferUtil. MemorySizeUtil needs a class comment explaining it. Don't you have a nice write-up on how this thing operates that you could copy/paste in here? We need to work on moving mslab and chunk etc. out to a memory module? Move things like MemStoreLABImpl out of the regionserver package? Shouldn't we scream when mslab is not on now? 1468 if (conf.getBoolean(MemStoreLAB.USEMSLAB_KEY, MemStoreLAB.USEMSLAB_DEFAULT)) { It means you are missing out on a bunch of stuff? Should the DefaultHeapMemoryTuner be renamed now you've renamed the class as MemorySizeUtil? Say more on what this patch is doing? An HRegion needs reference to RegionServerServices memory management? 
> Create Offheap Memstore > --- > > Key: HBASE-15786 > URL: https://issues.apache.org/jira/browse/HBASE-15786 > Project: HBase > Issue Type: Sub-task > Components: regionserver >Reporter: ramkrishna.s.vasudevan >Assignee: Anoop Sam John > Fix For: 2.0.0 > > Attachments: HBASE-15786.patch > > > We can make use of the MSLAB pool for this off heap memstore. > Right now one can specify the global memstore size (heap size) as a % of max > memory using a config. We will add another config with which one can specify > the global off heap memstore size. This will be an exact size, not a %. When the off > heap memstore is in use, we will give this entire area to the MSLAB pool and > that will create off heap chunks. So when cells are added to the memstore, the > cell data gets copied into the off heap MSLAB chunk spaces. Note that when > the pool size is not really enough and we need additional chunk creation, we > won't use the off heap area for that. We don't want to create so many on-demand > DBBs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17060) backport HBASE-16570 to 1.3.1
[ https://issues.apache.org/jira/browse/HBASE-17060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yu Li updated HBASE-17060: -- Assignee: binlijin (was: Yu Li) > backport HBASE-16570 to 1.3.1 > - > > Key: HBASE-17060 > URL: https://issues.apache.org/jira/browse/HBASE-17060 > Project: HBase > Issue Type: Sub-task >Affects Versions: 1.3.0 >Reporter: Yu Li >Assignee: binlijin > > Need some backport after 1.3.0 got released -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HBASE-17060) backport HBASE-16570 to 1.3.1
[ https://issues.apache.org/jira/browse/HBASE-17060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yu Li reassigned HBASE-17060: - Assignee: Yu Li (was: binlijin) > backport HBASE-16570 to 1.3.1 > - > > Key: HBASE-17060 > URL: https://issues.apache.org/jira/browse/HBASE-17060 > Project: HBase > Issue Type: Sub-task >Affects Versions: 1.3.0 >Reporter: Yu Li >Assignee: Yu Li > > Need some backport after 1.3.0 got released -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16985) TestClusterId failed due to wrong hbase rootdir
[ https://issues.apache.org/jira/browse/HBASE-16985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-16985: -- Fix Version/s: 1.1.8 1.2.5 1.3.1 > TestClusterId failed due to wrong hbase rootdir > --- > > Key: HBASE-16985 > URL: https://issues.apache.org/jira/browse/HBASE-16985 > Project: HBase > Issue Type: Bug > Components: test >Affects Versions: 1.0.0 >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang >Priority: Minor > Fix For: 2.0.0, 1.4.0, 1.3.1, 1.2.5, 1.1.8 > > Attachments: HBASE-16985-branch-1.patch, HBASE-16985-branch-1.patch, > HBASE-16985-v1.patch, HBASE-16985-v1.patch, HBASE-16985.patch > > > https://builds.apache.org/job/PreCommit-HBASE-Build/4253/testReport/org.apache.hadoop.hbase.regionserver/TestClusterId/testClusterId/ > {code} > java.io.IOException: Shutting down > at > org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:230) > at > org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:409) > at > org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:227) > at > org.apache.hadoop.hbase.MiniHBaseCluster.<init>(MiniHBaseCluster.java:96) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1071) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1037) > at > org.apache.hadoop.hbase.regionserver.TestClusterId.testClusterId(TestClusterId.java:85) > {code} > The cluster can not start up because there is no active master. The active > master can not finish initializing because the hbase:namespace region can not > be assigned. > In the TestClusterId unit test, TEST_UTIL.startMiniHBaseCluster sets a new hbase > root dir, but the regionserver thread which started first used a different > hbase root dir. If the hbase:namespace region is assigned to this regionserver, the > region can not be assigned because there is no tableinfo under the wrong hbase root > dir. > When the regionserver reports to the master, it gets back some new config. But > FSTableDescriptors has already been initialized, so its root dir is not changed. > {code} > if (LOG.isDebugEnabled()) { > LOG.info("Config from master: " + key + "=" + value); > } > {code} > I thought FSTableDescriptors needs to update the rootdir when the regionserver gets the > report from the master. > The master branch has the same problem, too. But the balancer always assigns the > hbase:namespace region to the master, so this unit test can pass on the master > branch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16985) TestClusterId failed due to wrong hbase rootdir
[ https://issues.apache.org/jira/browse/HBASE-16985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-16985: -- Affects Version/s: 1.0.0 Component/s: test > TestClusterId failed due to wrong hbase rootdir > --- > > Key: HBASE-16985 > URL: https://issues.apache.org/jira/browse/HBASE-16985 > Project: HBase > Issue Type: Bug > Components: test >Affects Versions: 1.0.0 >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang >Priority: Minor > Fix For: 2.0.0, 1.4.0, 1.3.1, 1.2.5, 1.1.8 > > Attachments: HBASE-16985-branch-1.patch, HBASE-16985-branch-1.patch, > HBASE-16985-v1.patch, HBASE-16985-v1.patch, HBASE-16985.patch > > > https://builds.apache.org/job/PreCommit-HBASE-Build/4253/testReport/org.apache.hadoop.hbase.regionserver/TestClusterId/testClusterId/ > {code} > java.io.IOException: Shutting down > at > org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:230) > at > org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:409) > at > org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:227) > at > org.apache.hadoop.hbase.MiniHBaseCluster.<init>(MiniHBaseCluster.java:96) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1071) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1037) > at > org.apache.hadoop.hbase.regionserver.TestClusterId.testClusterId(TestClusterId.java:85) > {code} > The cluster can not start up because there is no active master. The active > master can not finish initializing because the hbase:namespace region can not > be assigned. > In the TestClusterId unit test, TEST_UTIL.startMiniHBaseCluster sets a new hbase > root dir, but the regionserver thread which started first used a different > hbase root dir. If the hbase:namespace region is assigned to this regionserver, the > region can not be assigned because there is no tableinfo under the wrong hbase root > dir. > When the regionserver reports to the master, it gets back some new config. But > FSTableDescriptors has already been initialized, so its root dir is not changed. > {code} > if (LOG.isDebugEnabled()) { > LOG.info("Config from master: " + key + "=" + value); > } > {code} > I thought FSTableDescriptors needs to update the rootdir when the regionserver gets the > report from the master. > The master branch has the same problem, too. But the balancer always assigns the > hbase:namespace region to the master, so this unit test can pass on the master > branch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16985) TestClusterId failed due to wrong hbase rootdir
[ https://issues.apache.org/jira/browse/HBASE-16985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652976#comment-15652976 ] stack commented on HBASE-16985: --- I tried backporting to 1.2 and 1.1 but weird init latching stuff is in the way; the backport fails. Will wait for the all clear on 1.3.1 before putting it there. > TestClusterId failed due to wrong hbase rootdir > --- > > Key: HBASE-16985 > URL: https://issues.apache.org/jira/browse/HBASE-16985 > Project: HBase > Issue Type: Bug > Components: test >Affects Versions: 1.0.0 >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang >Priority: Minor > Fix For: 2.0.0, 1.4.0, 1.3.1, 1.2.5, 1.1.8 > > Attachments: HBASE-16985-branch-1.patch, HBASE-16985-branch-1.patch, > HBASE-16985-v1.patch, HBASE-16985-v1.patch, HBASE-16985.patch > > > https://builds.apache.org/job/PreCommit-HBASE-Build/4253/testReport/org.apache.hadoop.hbase.regionserver/TestClusterId/testClusterId/ > {code} > java.io.IOException: Shutting down > at > org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:230) > at > org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:409) > at > org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:227) > at > org.apache.hadoop.hbase.MiniHBaseCluster.<init>(MiniHBaseCluster.java:96) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1071) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1037) > at > org.apache.hadoop.hbase.regionserver.TestClusterId.testClusterId(TestClusterId.java:85) > {code} > The cluster can not start up because there is no active master. The active > master can not finish initializing because the hbase:namespace region can not > be assigned. > In the TestClusterId unit test, TEST_UTIL.startMiniHBaseCluster sets a new hbase > root dir, but the regionserver thread which started first used a different > hbase root dir. If the hbase:namespace region is assigned to this regionserver, the > region can not be assigned because there is no tableinfo under the wrong hbase root > dir. > When the regionserver reports to the master, it gets back some new config. But > FSTableDescriptors has already been initialized, so its root dir is not changed. > {code} > if (LOG.isDebugEnabled()) { > LOG.info("Config from master: " + key + "=" + value); > } > {code} > I thought FSTableDescriptors needs to update the rootdir when the regionserver gets the > report from the master. > The master branch has the same problem, too. But the balancer always assigns the > hbase:namespace region to the master, so this unit test can pass on the master > branch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16570) Compute region locality in parallel at startup
[ https://issues.apache.org/jira/browse/HBASE-16570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652964#comment-15652964 ] Yu Li commented on HBASE-16570: --- See HBASE-17060 for 1.3.1 > Compute region locality in parallel at startup > -- > > Key: HBASE-16570 > URL: https://issues.apache.org/jira/browse/HBASE-16570 > Project: HBase > Issue Type: Sub-task >Reporter: binlijin >Assignee: binlijin > Fix For: 2.0.0, 1.4.0, 1.3.1 > > Attachments: HBASE-16570-master_V1.patch, > HBASE-16570-master_V2.patch, HBASE-16570-master_V3.patch, > HBASE-16570-master_V4.patch, HBASE-16570.branch-1.3-addendum.patch, > HBASE-16570_addnum.patch, HBASE-16570_addnum_v2.patch, > HBASE-16570_addnum_v3.patch, HBASE-16570_addnum_v4.patch, > HBASE-16570_addnum_v5.patch, HBASE-16570_addnum_v6.patch, > HBASE-16570_addnum_v7.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Reopened] (HBASE-16985) TestClusterId failed due to wrong hbase rootdir
[ https://issues.apache.org/jira/browse/HBASE-16985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack reopened HBASE-16985: --- Reopen to backport. > TestClusterId failed due to wrong hbase rootdir > --- > > Key: HBASE-16985 > URL: https://issues.apache.org/jira/browse/HBASE-16985 > Project: HBase > Issue Type: Bug >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang >Priority: Minor > Fix For: 2.0.0, 1.4.0 > > Attachments: HBASE-16985-branch-1.patch, HBASE-16985-branch-1.patch, > HBASE-16985-v1.patch, HBASE-16985-v1.patch, HBASE-16985.patch > > > https://builds.apache.org/job/PreCommit-HBASE-Build/4253/testReport/org.apache.hadoop.hbase.regionserver/TestClusterId/testClusterId/ > {code} > java.io.IOException: Shutting down > at > org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:230) > at > org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:409) > at > org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:227) > at > org.apache.hadoop.hbase.MiniHBaseCluster.<init>(MiniHBaseCluster.java:96) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1071) > at > org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1037) > at > org.apache.hadoop.hbase.regionserver.TestClusterId.testClusterId(TestClusterId.java:85) > {code} > The cluster can not start up because there is no active master. The active > master can not finish initializing because the hbase:namespace region can not > be assigned. > In the TestClusterId unit test, TEST_UTIL.startMiniHBaseCluster sets a new hbase > root dir, but the regionserver thread which started first used a different > hbase root dir. If the hbase:namespace region is assigned to this regionserver, the > region can not be assigned because there is no tableinfo under the wrong hbase root > dir. > When the regionserver reports to the master, it gets back some new config. But > FSTableDescriptors has already been initialized, so its root dir is not changed. > {code} > if (LOG.isDebugEnabled()) { > LOG.info("Config from master: " + key + "=" + value); > } > {code} > I thought FSTableDescriptors needs to update the rootdir when the regionserver gets the > report from the master. > The master branch has the same problem, too. But the balancer always assigns the > hbase:namespace region to the master, so this unit test can pass on the master > branch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-17060) backport HBASE-16570 to 1.3.1
Yu Li created HBASE-17060: - Summary: backport HBASE-16570 to 1.3.1 Key: HBASE-17060 URL: https://issues.apache.org/jira/browse/HBASE-17060 Project: HBase Issue Type: Sub-task Affects Versions: 1.3.0 Reporter: Yu Li Assignee: binlijin Need some backport after 1.3.0 got released -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16570) Compute region locality in parallel at startup
[ https://issues.apache.org/jira/browse/HBASE-16570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652957#comment-15652957 ] Yu Li commented on HBASE-16570: --- Thanks for the review [~ghelmling]; will commit in one day if there are no objections. And I think this should be an addendum for the master branch and a completely new patch for 1.3.1, since it was already reverted for branch-1.3, right? Will create a sub-task targeting 1.3.1 later; please upload the new patch there [~aoxiang] > Compute region locality in parallel at startup > -- > > Key: HBASE-16570 > URL: https://issues.apache.org/jira/browse/HBASE-16570 > Project: HBase > Issue Type: Sub-task >Reporter: binlijin >Assignee: binlijin > Fix For: 2.0.0, 1.4.0, 1.3.1 > > Attachments: HBASE-16570-master_V1.patch, > HBASE-16570-master_V2.patch, HBASE-16570-master_V3.patch, > HBASE-16570-master_V4.patch, HBASE-16570.branch-1.3-addendum.patch, > HBASE-16570_addnum.patch, HBASE-16570_addnum_v2.patch, > HBASE-16570_addnum_v3.patch, HBASE-16570_addnum_v4.patch, > HBASE-16570_addnum_v5.patch, HBASE-16570_addnum_v6.patch, > HBASE-16570_addnum_v7.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15324) Jitter may cause desiredMaxFileSize overflow in ConstantSizeRegionSplitPolicy and trigger unexpected split
[ https://issues.apache.org/jira/browse/HBASE-15324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652933#comment-15652933 ] Yu Li commented on HBASE-15324: --- Yep, both the question and the answer here are reasonable, and maybe we could simply use {{jitterRate > 0}} and leave the check to the JDK. Below is a simple test confirming the JDK handles such tiny values correctly: {code} double x = 1e-200; double y = -1e-200; System.out.println(x>0 && y<0); {code} Thanks for committing this to branch-1.1/1.2 and opening the new issue [~esteban]. [~huaxiang] feel free to take the new JIRA if you'd like to, or I could take it if you prefer; just let me know (Smile). > Jitter may cause desiredMaxFileSize overflow in ConstantSizeRegionSplitPolicy > and trigger unexpected split > -- > > Key: HBASE-15324 > URL: https://issues.apache.org/jira/browse/HBASE-15324 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0, 1.1.3 >Reporter: Yu Li >Assignee: Yu Li > Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.5, 1.1.8 > > Attachments: HBASE-15324.patch, HBASE-15324_v2.patch, > HBASE-15324_v3.patch, HBASE-15324_v3.patch > > > We introduced jitter for the region split decision in HBASE-13412, but the > following line in {{ConstantSizeRegionSplitPolicy}} may cause long value > overflow if MAX_FILESIZE is set to Long.MAX_VALUE: > {code} > this.desiredMaxFileSize += (long)(desiredMaxFileSize * (RANDOM.nextFloat() - > 0.5D) * jitter); > {code} > In our case we set MAX_FILESIZE to Long.MAX_VALUE to prevent the target > region from splitting. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
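The overflow in the quoted line can be reproduced in isolation. A minimal sketch follows; the helper applyJitter stands in for the += line in ConstantSizeRegionSplitPolicy and is not HBase code:

```java
public class JitterOverflowDemo {
    // Reproduce the overflow from the JIRA: adding a positive jitter term to a
    // desiredMaxFileSize of Long.MAX_VALUE wraps around to a negative value,
    // which makes every store file look "too big" and triggers a split.
    static long applyJitter(long desiredMaxFileSize, double jitterRate) {
        return desiredMaxFileSize + (long) (desiredMaxFileSize * jitterRate);
    }

    public static void main(String[] args) {
        double positiveJitter = 0.125D; // e.g. (nextFloat() - 0.5) * jitter, sampled positive
        System.out.println(applyJitter(Long.MAX_VALUE, positiveJitter) < 0); // overflowed: true

        // The proposed guard: the JDK compares even tiny doubles correctly,
        // so checking jitterRate > 0 before adding is safe to leave to it.
        System.out.println(1e-200 > 0 && -1e-200 < 0); // true
    }
}
```

For a normal-sized desiredMaxFileSize the same helper behaves as intended; only values near Long.MAX_VALUE wrap, which is why the guard is needed rather than a wholesale change to the jitter formula.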
[jira] [Commented] (HBASE-16700) Allow for coprocessor whitelisting
[ https://issues.apache.org/jira/browse/HBASE-16700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652928#comment-15652928 ] Hadoop QA commented on HBASE-16700: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 47s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 48s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 49s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | 
{color:green} 0m 38s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 30m 30s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 105m 37s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 149m 50s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Timed out junit tests | org.apache.hadoop.hbase.namespace.TestNamespaceAuditor | | | org.apache.hadoop.hbase.wal.TestWALSplitCompressed | | | org.apache.hadoop.hbase.master.TestTableLockManager | | | org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancer | | | org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelsWithDeletes | | | org.apache.hadoop.hbase.wal.TestWALSplit | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.2 Server=1.12.2 Image:yetus/hbase:7bda515 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838267/HBASE-16700.001.patch | | JIRA Issue | HBASE-16700 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux d8639b3a72a2 3.13.0-100-generic #147-Ubuntu SMP Tue Oct 18 16:48:51 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 8192a6b | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/4414/artifact/patchprocess/patch-unit-hbase-server.txt | | unit test logs | https://builds.apache.org/job/PreCommit-HBASE-Build/4414/artifact/patchprocess/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/4414/testReport/ | | modules | C: hbase-server U: hbase-server | |
[jira] [Updated] (HBASE-17039) SimpleLoadBalancer schedules large amount of invalid region moves
[ https://issues.apache.org/jira/browse/HBASE-17039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yu Li updated HBASE-17039: -- Fix Version/s: 1.3.1 Created a sub-task for backporting to 1.3.1 and updated the fix version accordingly. Leaving this JIRA open until the sub-task is done. > SimpleLoadBalancer schedules large amount of invalid region moves > - > > Key: HBASE-17039 > URL: https://issues.apache.org/jira/browse/HBASE-17039 > Project: HBase > Issue Type: Bug > Components: Balancer >Affects Versions: 2.0.0, 1.3.0, 1.1.7, 1.2.4 >Reporter: Charlie Qiangeng Xu >Assignee: Charlie Qiangeng Xu > Fix For: 2.0.0, 1.4.0, 1.3.1, 1.2.5, 1.1.8 > > Attachments: HBASE-17039.patch > > > After increasing one of our clusters to 1600 nodes, we observed a large > amount of invalid region moves(more than 30k moves) fired by the balance > chore. Thus we simulated the problem and printed out the balance plan, only > to find out many servers that had two regions for a certain table(we use by > table strategy), sent out both regions to other two servers that have zero > region. > In the SimpleLoadBalancer's balanceCluster function, > the code block that determines the underLoadedServers might have a problem: > {code} > if (load >= min && load > 0) { > continue; // look for other servers which haven't reached min > } > int regionsToPut = min - load; > if (regionsToPut == 0) > { > regionsToPut = 1; > } > {code} > if min is zero, some server that has load of zero, which equals to min would > be marked as underloaded, which would cause the phenomenon mentioned above. > Since we increased the cluster's size to 1600+, many tables that only have > 1000 regions, now would encounter such issue. > By fixing it up, the balance plan went back to normal. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
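To make the failure mode in the quoted fragment concrete, here is a self-contained simplification (not the real SimpleLoadBalancer code, and the `guardZeroMin` flag is an invented illustration of one possible fix): when `min` is 0, a server with zero load slips past the first check and still gets `regionsToPut` bumped from 0 to 1, which is the spurious move.

```java
public class UnderloadCheckDemo {
    // Simplified version of the quoted balanceCluster fragment.
    // Returns how many regions the balancer would try to push to this server.
    static int regionsToPut(int load, int min, boolean guardZeroMin) {
        if (load >= min && load > 0) {
            return 0; // server at or above min and non-empty: not underloaded
        }
        if (guardZeroMin && min == 0) {
            return 0; // fix sketch: with min == 0, an empty server is NOT underloaded
        }
        int regionsToPut = min - load;
        if (regionsToPut == 0) {
            regionsToPut = 1; // original code forces at least one move
        }
        return regionsToPut;
    }

    public static void main(String[] args) {
        // Table with fewer regions than servers: min == 0, server load == 0
        System.out.println(regionsToPut(0, 0, false)); // 1 -> invalid move scheduled
        System.out.println(regionsToPut(0, 0, true));  // 0 -> no move
    }
}
```

On a 1600-node cluster with a 1000-region table, `min` is 0 for that table on every server, so the unguarded version marks hundreds of empty servers as underloaded at once, matching the tens of thousands of moves observed.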
[jira] [Created] (HBASE-17059) backport HBASE-17039 to 1.3.1
Yu Li created HBASE-17059: - Summary: backport HBASE-17039 to 1.3.1 Key: HBASE-17059 URL: https://issues.apache.org/jira/browse/HBASE-17059 Project: HBase Issue Type: Sub-task Reporter: Yu Li Assignee: Yu Li Currently the branch-1.3 code is frozen for the 1.3.0 release, so HBASE-17039 needs to be backported to 1.3.1 afterwards. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16838) Implement basic scan
[ https://issues.apache.org/jira/browse/HBASE-16838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-16838: -- Description: Implement a scan works like the grpc streaming call that all returned results will be passed to a ScanConsumer. The methods of the consumer will be called directly in the rpc framework threads so it is not allowed to do time consuming work in the methods. So in general only experts or the implementation of other methods in AsyncTable can call this method directly, that's why I call it 'basic scan'. (was: Implement a scan works like the grpc streaming call that all returned results will be passed to a ScanObserver. The methods of the observer will be called directly in the rpc framework threads so it is not allowed to do time consuming work in the methods. So in general only experts or the implementation of other methods in AsyncTable can call this method directly, that's why I call it 'basic scan'.) > Implement basic scan > > > Key: HBASE-16838 > URL: https://issues.apache.org/jira/browse/HBASE-16838 > Project: HBase > Issue Type: Sub-task >Affects Versions: 2.0.0 >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 2.0.0 > > Attachments: HBASE-16838-v1.patch, HBASE-16838-v2.patch, > HBASE-16838-v3.patch, HBASE-16838.patch > > > Implement a scan works like the grpc streaming call that all returned results > will be passed to a ScanConsumer. The methods of the consumer will be called > directly in the rpc framework threads so it is not allowed to do time > consuming work in the methods. So in general only experts or the > implementation of other methods in AsyncTable can call this method directly, > that's why I call it 'basic scan'. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
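The contract described above — consumer callbacks invoked directly on the rpc framework threads, so they must not do time-consuming work — can be illustrated with a toy consumer that immediately hands results off to its own executor. Everything below is a hypothetical sketch; the real ScanConsumer/AsyncTable signatures in HBase 2.0 may differ:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BasicScanSketch {
    // Hypothetical consumer shape; only the threading contract is the point here.
    interface ScanConsumer<R> {
        void onNext(R result);  // invoked directly on the rpc framework thread
        void onComplete();
    }

    // Drives a fake "scan" through a consumer that immediately hands each
    // result off to its own executor, so the scanning thread never blocks.
    static List<String> scanAndCollect(String... rowKeys) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        List<String> sink = new CopyOnWriteArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        ScanConsumer<String> consumer = new ScanConsumer<String>() {
            public void onNext(String row) { pool.execute(() -> sink.add(row)); }
            public void onComplete()       { pool.execute(done::countDown); }
        };
        for (String row : rowKeys) {
            consumer.onNext(row); // cheap hand-off only; no heavy work here
        }
        consumer.onComplete();
        try {
            done.await(); // single-threaded FIFO executor: all adds have run
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        pool.shutdown();
        return sink;
    }

    public static void main(String[] args) {
        System.out.println(scanAndCollect("row1", "row2", "row3"));
    }
}
```

This is why the comment calls it a 'basic scan': an ordinary caller would use a higher-level wrapper, while only experts (or the other AsyncTable methods) implement the consumer directly.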
[jira] [Commented] (HBASE-17039) SimpleLoadBalancer schedules large amount of invalid region moves
[ https://issues.apache.org/jira/browse/HBASE-17039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652912#comment-15652912 ] Yu Li commented on HBASE-17039: --- Sure, thanks for checking and confirming, boss. > SimpleLoadBalancer schedules large amount of invalid region moves > - > > Key: HBASE-17039 > URL: https://issues.apache.org/jira/browse/HBASE-17039 > Project: HBase > Issue Type: Bug > Components: Balancer >Affects Versions: 2.0.0, 1.3.0, 1.1.7, 1.2.4 >Reporter: Charlie Qiangeng Xu >Assignee: Charlie Qiangeng Xu > Fix For: 2.0.0, 1.4.0, 1.2.5, 1.1.8 > > Attachments: HBASE-17039.patch > > > After increasing one of our clusters to 1600 nodes, we observed a large > amount of invalid region moves(more than 30k moves) fired by the balance > chore. Thus we simulated the problem and printed out the balance plan, only > to find out many servers that had two regions for a certain table(we use by > table strategy), sent out both regions to other two servers that have zero > region. > In the SimpleLoadBalancer's balanceCluster function, > the code block that determines the underLoadedServers might have a problem: > {code} > if (load >= min && load > 0) { > continue; // look for other servers which haven't reached min > } > int regionsToPut = min - load; > if (regionsToPut == 0) > { > regionsToPut = 1; > } > {code} > if min is zero, some server that has load of zero, which equals to min would > be marked as underloaded, which would cause the phenomenon mentioned above. > Since we increased the cluster's size to 1600+, many tables that only have > 1000 regions, now would encounter such issue. > By fixing it up, the balance plan went back to normal. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16838) Implement basic scan
[ https://issues.apache.org/jira/browse/HBASE-16838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652911#comment-15652911 ] Yu Li commented on HBASE-16838: --- +1, patch v3 lgtm. Thanks. > Implement basic scan > > > Key: HBASE-16838 > URL: https://issues.apache.org/jira/browse/HBASE-16838 > Project: HBase > Issue Type: Sub-task >Affects Versions: 2.0.0 >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 2.0.0 > > Attachments: HBASE-16838-v1.patch, HBASE-16838-v2.patch, > HBASE-16838-v3.patch, HBASE-16838.patch > > > Implement a scan works like the grpc streaming call that all returned results > will be passed to a ScanObserver. The methods of the observer will be called > directly in the rpc framework threads so it is not allowed to do time > consuming work in the methods. So in general only experts or the > implementation of other methods in AsyncTable can call this method directly, > that's why I call it 'basic scan'. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16993) BucketCache throw java.io.IOException: Invalid HFile block magic when DATA_BLOCK_ENCODING set to DIFF
[ https://issues.apache.org/jira/browse/HBASE-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652884#comment-15652884 ] stack commented on HBASE-16993: --- Findbugs above are from the com.google.protobuf that we have checked in. Need to exclude. Here is whitespace complaint, from pb: ./hbase-protocol-shaded/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/generated/BucketCacheProtos.java:527: private static final The unit test fail is compile dependency on hbase-protocol-shaded. Ditto javadoc. Let me commit. Thanks reviews and +1s > BucketCache throw java.io.IOException: Invalid HFile block magic when > DATA_BLOCK_ENCODING set to DIFF > - > > Key: HBASE-16993 > URL: https://issues.apache.org/jira/browse/HBASE-16993 > Project: HBase > Issue Type: Bug > Components: BucketCache, io >Affects Versions: 1.1.3 > Environment: hbase version 1.1.3 >Reporter: liubangchen >Assignee: liubangchen > Fix For: 2.0.0 > > Attachments: HBASE-16993.000.patch, HBASE-16993.001.patch, > HBASE-16993.master.001.patch, HBASE-16993.master.002.patch, > HBASE-16993.master.003.patch > > Original Estimate: 336h > Remaining Estimate: 336h > > hbase-site.xml setting > > hbase.bucketcache.bucket.sizes > 16384,32768,40960, > 46000,49152,51200,65536,131072,524288 > > > hbase.bucketcache.size > 16384 > > > hbase.bucketcache.ioengine > offheap > > > hfile.block.cache.size > 0.3 > > > hfile.block.bloom.cacheonwrite > true > > > hbase.rs.cacheblocksonwrite > true > > > hfile.block.index.cacheonwrite > true > n_splits = 200 > create 'usertable',{NAME =>'family', COMPRESSION => 'snappy', VERSIONS => > 1,DATA_BLOCK_ENCODING => 'DIFF',CONFIGURATION => > {'hbase.hregion.memstore.block.multiplier' => 5}},{DURABILITY => > 'SKIP_WAL'},{SPLITS => (1..n_splits).map {|i| > "user#{1000+i*(-1000)/n_splits}"}} > load data > bin/ycsb load hbase10 -P workloads/workloada -p table=usertable -p > columnfamily=family -p fieldcount=10 -p fieldlength=100 -p > recordcount=2 -p 
insertorder=hashed -p insertstart=0 -p > clientbuffering=true -p durability=SKIP_WAL -threads 20 -s > run > bin/ycsb run hbase10 -P workloads/workloadb -p table=usertable -p > columnfamily=family -p fieldcount=10 -p fieldlength=100 -p > operationcount=2000 -p readallfields=true -p clientbuffering=true -p > requestdistribution=zipfian -threads 10 -s > log info > 2016-11-02 20:20:20,261 ERROR > [RW.default.readRpcServer.handler=36,queue=21,port=6020] bucket.BucketCache: > Failed reading block fdcc7ed6f3b2498b9ef316cc8206c233_44819759 from bucket > cache > java.io.IOException: Invalid HFile block magic: > \x00\x00\x00\x00\x00\x00\x00\x00 > at > org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:154) > at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:167) > at > org.apache.hadoop.hbase.io.hfile.HFileBlock.(HFileBlock.java:273) > at > org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:134) > at > org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:121) > at > org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getBlock(BucketCache.java:427) > at > org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.getBlock(CombinedBlockCache.java:85) > at > org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getCachedBlock(HFileReaderV2.java:266) > at > org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:403) > at > org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:269) > at > org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:634) > at > org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:584) > at > org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:247) > at > org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:156) > at > 
org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:363) > at > org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:217) > at > org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2071) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:5369) > at > org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2546) > at >
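The hbase-site.xml settings quoted in the HBASE-16993 issue body above lost their XML markup when the mail was rendered to plain text. Restoring the standard `<property>` structure (names and values taken directly from the quoted text; this is a best-effort reconstruction), the configuration reads approximately:

```xml
<property>
  <name>hbase.bucketcache.bucket.sizes</name>
  <value>16384,32768,40960,46000,49152,51200,65536,131072,524288</value>
</property>
<property>
  <name>hbase.bucketcache.size</name>
  <value>16384</value>
</property>
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>offheap</value>
</property>
<property>
  <name>hfile.block.cache.size</name>
  <value>0.3</value>
</property>
<property>
  <name>hfile.block.bloom.cacheonwrite</name>
  <value>true</value>
</property>
<property>
  <name>hbase.rs.cacheblocksonwrite</name>
  <value>true</value>
</property>
<property>
  <name>hfile.block.index.cacheonwrite</name>
  <value>true</value>
</property>
```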
[jira] [Commented] (HBASE-17052) compile-protobuf profile does not compile protobufs in some modules anymore
[ https://issues.apache.org/jira/browse/HBASE-17052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652848#comment-15652848 ] Hadoop QA commented on HBASE-17052: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 46s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 17s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 15m 16s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 41s {color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s {color} | {color:blue} Skipped patched modules with no Java source: . 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 46s {color} | {color:red} hbase-protocol-shaded in master has 24 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 8s {color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 35s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 15m 25s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 39s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 5s {color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 25m 6s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s {color} | {color:blue} Skipped patched modules with no Java source: . 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 18s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 29s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 25s {color} | {color:green} hbase-protocol-shaded in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 15s {color} | {color:green} hbase-rsgroup in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 18s {color} | {color:green} hbase-endpoint in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 13s {color} | {color:green} hbase-examples in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 53s {color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 3s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 179m 13s {color} | {color:black} {color} | \\ \\ || Reason || Tests
[jira] [Updated] (HBASE-17053) Remove LogRollerExitedChecker
[ https://issues.apache.org/jira/browse/HBASE-17053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-17053: -- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Pushed to master. Thanks [~ram_krish] for reviewing. > Remove LogRollerExitedChecker > - > > Key: HBASE-17053 > URL: https://issues.apache.org/jira/browse/HBASE-17053 > Project: HBase > Issue Type: Sub-task > Components: wal >Affects Versions: 2.0.0 >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 2.0.0 > > Attachments: HBASE-17053-v1.patch, HBASE-17053-v2.patch, > HBASE-17053.patch > > > Now the LogRoll may exit before WAL exit but we will still write some close > event to WAL when shutting down RS. And for AsyncFSWAL, we will open a new > wal writer when error occur. If LogRoller has already exited then AsyncFSWAL > will wait for ever and cause RS to hang. > It does not make sense to quit LogRoller ahead of WAL, this jira aims to > change the shutdown order. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
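The shutdown-order problem described in the issue body is a generic one: if the component that services roll requests (the LogRoller) exits before the component that issues them (the WAL), a roll requested during WAL close waits forever. A toy model of the corrected ordering, not HBase code (all names here are illustrative):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class ShutdownOrderDemo {
    // A "roller" thread services roll requests from a queue; the "wal" posts a
    // final roll while closing. If the roller were stopped first, that last
    // request would never complete and close would hang; closing the wal
    // first (and only then stopping the roller) is safe.
    static boolean closeWalThenRoller() {
        BlockingQueue<CompletableFuture<Void>> rollRequests = new LinkedBlockingQueue<>();
        Thread roller = new Thread(() -> {
            try {
                while (true) {
                    rollRequests.take().complete(null); // service each roll
                }
            } catch (InterruptedException e) {
                // roller told to exit
            }
        });
        roller.start();
        try {
            // WAL close: post the final roll and wait while the roller is alive.
            CompletableFuture<Void> lastRoll = new CompletableFuture<>();
            rollRequests.put(lastRoll);
            lastRoll.get(5, TimeUnit.SECONDS); // succeeds: roller still running
            return true;
        } catch (Exception e) {
            return false; // the timeout here models the hang in the bad ordering
        } finally {
            roller.interrupt(); // only stop the roller after the WAL is closed
        }
    }

    public static void main(String[] args) {
        System.out.println(closeWalThenRoller()); // true
    }
}
```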
[jira] [Commented] (HBASE-17056) Remove checked in PB generated files
[ https://issues.apache.org/jira/browse/HBASE-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652826#comment-15652826 ] Duo Zhang commented on HBASE-17056: --- +1. > Remove checked in PB generated files > - > > Key: HBASE-17056 > URL: https://issues.apache.org/jira/browse/HBASE-17056 > Project: HBase > Issue Type: Bug >Reporter: Enis Soztutar > Fix For: 2.0.0 > > > Now that we have the new PB maven plugin, there is no need to have the PB > files checked in to the repo. The reason we did that was to ease up developer > env setup. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17053) Remove LogRollerExitedChecker
[ https://issues.apache.org/jira/browse/HBASE-17053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652823#comment-15652823 ] Duo Zhang commented on HBASE-17053: --- The failure is because the surefire plugin OOMed. I do not think it is related to the patch here, as other precommit builds could also OOM... https://builds.apache.org/job/PreCommit-HBASE-Build/4412/artifact/patchprocess/patch-unit-root.txt Will commit shortly. > Remove LogRollerExitedChecker > - > > Key: HBASE-17053 > URL: https://issues.apache.org/jira/browse/HBASE-17053 > Project: HBase > Issue Type: Sub-task > Components: wal >Affects Versions: 2.0.0 >Reporter: Duo Zhang >Assignee: Duo Zhang > Fix For: 2.0.0 > > Attachments: HBASE-17053-v1.patch, HBASE-17053-v2.patch, > HBASE-17053.patch > > > Now the LogRoll may exit before WAL exit but we will still write some close > event to WAL when shutting down RS. And for AsyncFSWAL, we will open a new > wal writer when error occur. If LogRoller has already exited then AsyncFSWAL > will wait for ever and cause RS to hang. > It does not make sense to quit LogRoller ahead of WAL, this jira aims to > change the shutdown order. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-15324) Jitter may cause desiredMaxFileSize overflow in ConstantSizeRegionSplitPolicy and trigger unexpected split
[ https://issues.apache.org/jira/browse/HBASE-15324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652810#comment-15652810 ] Esteban Gutierrez commented on HBASE-15324: --- [~huaxiang] I think the problem is the value of the epsilon used for the precision of the types involved (float x double). I think it should be at least 2.22e-16 (2^-52) or even 1.11e-16 (2^-53). Created HBASE-17058 for follow up. Thanks. > Jitter may cause desiredMaxFileSize overflow in ConstantSizeRegionSplitPolicy > and trigger unexpected split > -- > > Key: HBASE-15324 > URL: https://issues.apache.org/jira/browse/HBASE-15324 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0, 1.1.3 >Reporter: Yu Li >Assignee: Yu Li > Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.5, 1.1.8 > > Attachments: HBASE-15324.patch, HBASE-15324_v2.patch, > HBASE-15324_v3.patch, HBASE-15324_v3.patch > > > We introduce jitter for region split decision in HBASE-13412, but the > following line in {{ConstantSizeRegionSplitPolicy}} may cause long value > overflow if MAX_FILESIZE is specified to Long.MAX_VALUE: > {code} > this.desiredMaxFileSize += (long)(desiredMaxFileSize * (RANDOM.nextFloat() - > 0.5D) * jitter); > {code} > In our case we specify MAX_FILESIZE to Long.MAX_VALUE to prevent target > region to split. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-17058) Lower epsilon used for jitter verification from HBASE-15324
Esteban Gutierrez created HBASE-17058: - Summary: Lower epsilon used for jitter verification from HBASE-15324 Key: HBASE-17058 URL: https://issues.apache.org/jira/browse/HBASE-17058 Project: HBase Issue Type: Bug Components: Compaction Affects Versions: 1.2.4, 1.1.7, 2.0.0, 1.3.0, 1.4.0 Reporter: Esteban Gutierrez The current epsilon used is 1E-6, and it's too big: it might let desiredMaxFileSize overflow. A trivial fix is to lower the epsilon to 2^-52 or even 2^-53. Another option to consider is to shift the jitter to always decrement hbase.hregion.max.filesize (MAX_FILESIZE) instead of increasing the size of the region, avoiding the need to deal with the round-off. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
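Concretely, why 1E-6 is too big: with an epsilon-gated overflow guard of the general shape discussed in HBASE-15324 (sketched below with illustrative names, not the exact committed code), any positive jitterRate at or below the epsilon bypasses the check, yet still yields a large positive delta when multiplied by Long.MAX_VALUE. A machine-epsilon-sized bound (2^-52) closes that window:

```java
public class EpsilonOverflowDemo {
    // Only apply the clamp when jitterRate is "meaningfully" positive
    // according to the chosen epsilon (illustrative guard, not HBase code).
    static long applyJitter(long desiredMaxFileSize, double jitterRate, double epsilon) {
        long jitterValue = (long) (desiredMaxFileSize * jitterRate);
        if (jitterRate > epsilon && jitterValue > Long.MAX_VALUE - desiredMaxFileSize) {
            return Long.MAX_VALUE; // clamp instead of overflowing
        }
        return desiredMaxFileSize + jitterValue;
    }

    public static void main(String[] args) {
        double tinyJitter = 1e-7; // positive, but below the loose 1e-6 epsilon
        long loose = applyJitter(Long.MAX_VALUE, tinyJitter, 1e-6);
        long tight = applyJitter(Long.MAX_VALUE, tinyJitter, Math.pow(2, -52));
        System.out.println(loose < 0);               // true: guard skipped, overflow
        System.out.println(tight == Long.MAX_VALUE); // true: guard clamps
    }
}
```

Even a jitterRate of 1e-7 produces a delta of roughly 9.2e11 bytes against Long.MAX_VALUE, so the loose epsilon leaves a real overflow window.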
[jira] [Commented] (HBASE-17052) compile-protobuf profile does not compile protobufs in some modules anymore
[ https://issues.apache.org/jira/browse/HBASE-17052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652807#comment-15652807 ] Hadoop QA commented on HBASE-17052: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 47s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 22s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 15m 50s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 41s {color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s {color} | {color:blue} Skipped patched modules with no Java source: . 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 41s {color} | {color:red} hbase-protocol-shaded in master has 24 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 7s {color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 6s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 8s {color} | {color:red} hbase-examples in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 2m 43s {color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 9s {color} | {color:red} hbase-examples in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 1m 20s {color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 9s {color} | {color:red} hbase-examples in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 20s {color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 15m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 40s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 4s {color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 1m 37s {color} | {color:red} The patch causes 81 errors with Hadoop v2.6.1. 
{color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 3m 9s {color} | {color:red} The patch causes 81 errors with Hadoop v2.6.2. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 4m 43s {color} | {color:red} The patch causes 81 errors with Hadoop v2.6.3. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 6m 15s {color} | {color:red} The patch causes 81 errors with Hadoop v2.6.4. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 7m 48s {color} | {color:red} The patch causes 81 errors with Hadoop v2.6.5. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 9m 20s {color} | {color:red} The patch causes 81 errors with Hadoop v2.7.1. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 10m 56s {color} | {color:red} The patch causes 81 errors with Hadoop v2.7.2. {color} | | {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 12m 33s {color} | {color:red} The patch causes 81 errors with Hadoop
[jira] [Commented] (HBASE-16993) BucketCache throw java.io.IOException: Invalid HFile block magic when DATA_BLOCK_ENCODING set to DIFF
[ https://issues.apache.org/jira/browse/HBASE-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652801#comment-15652801 ] liubangchen commented on HBASE-16993: I'm OK with this, thank you, sir.
> BucketCache throw java.io.IOException: Invalid HFile block magic when
> DATA_BLOCK_ENCODING set to DIFF
> ---
> Key: HBASE-16993
> URL: https://issues.apache.org/jira/browse/HBASE-16993
> Project: HBase
> Issue Type: Bug
> Components: BucketCache, io
> Affects Versions: 1.1.3
> Environment: hbase version 1.1.3
> Reporter: liubangchen
> Assignee: liubangchen
> Fix For: 2.0.0
> Attachments: HBASE-16993.000.patch, HBASE-16993.001.patch, HBASE-16993.master.001.patch, HBASE-16993.master.002.patch, HBASE-16993.master.003.patch
> Original Estimate: 336h
> Remaining Estimate: 336h
>
> hbase-site.xml settings:
> hbase.bucketcache.bucket.sizes = 16384,32768,40960,46000,49152,51200,65536,131072,524288
> hbase.bucketcache.size = 16384
> hbase.bucketcache.ioengine = offheap
> hfile.block.cache.size = 0.3
> hfile.block.bloom.cacheonwrite = true
> hbase.rs.cacheblocksonwrite = true
> hfile.block.index.cacheonwrite = true
>
> n_splits = 200
> create 'usertable',{NAME =>'family', COMPRESSION => 'snappy', VERSIONS => 1,
>   DATA_BLOCK_ENCODING => 'DIFF',
>   CONFIGURATION => {'hbase.hregion.memstore.block.multiplier' => 5}},
>   {DURABILITY => 'SKIP_WAL'},
>   {SPLITS => (1..n_splits).map {|i| "user#{1000+i*(-1000)/n_splits}"}}
> load data:
> bin/ycsb load hbase10 -P workloads/workloada -p table=usertable -p columnfamily=family -p fieldcount=10 -p fieldlength=100 -p recordcount=2 -p insertorder=hashed -p insertstart=0 -p clientbuffering=true -p durability=SKIP_WAL -threads 20 -s
> run:
> bin/ycsb run hbase10 -P workloads/workloadb -p table=usertable -p columnfamily=family -p fieldcount=10 -p fieldlength=100 -p operationcount=2000 -p readallfields=true -p clientbuffering=true -p requestdistribution=zipfian -threads 10 -s
> log info:
> 2016-11-02 20:20:20,261 ERROR [RW.default.readRpcServer.handler=36,queue=21,port=6020] bucket.BucketCache: Failed reading block fdcc7ed6f3b2498b9ef316cc8206c233_44819759 from bucket cache
> java.io.IOException: Invalid HFile block magic: \x00\x00\x00\x00\x00\x00\x00\x00
>         at org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:154)
>         at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:167)
>         at org.apache.hadoop.hbase.io.hfile.HFileBlock.<init>(HFileBlock.java:273)
>         at org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:134)
>         at org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:121)
>         at org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getBlock(BucketCache.java:427)
>         at org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.getBlock(CombinedBlockCache.java:85)
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getCachedBlock(HFileReaderV2.java:266)
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:403)
>         at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:269)
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:634)
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:584)
>         at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:247)
>         at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:156)
>         at org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:363)
>         at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:217)
>         at org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2071)
>         at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5369)
>         at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2546)
>         at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2532)
>         at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2514)
>         at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6558)
>         at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6537)
>         at
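The hbase-site.xml property list quoted in the report above lost its XML markup in extraction; a reconstruction (property names and values exactly as quoted, layout assumed) would look roughly like:

```xml
<configuration>
  <!-- Bucket sizes offered by the BucketCache allocator, in bytes -->
  <property>
    <name>hbase.bucketcache.bucket.sizes</name>
    <value>16384,32768,40960,46000,49152,51200,65536,131072,524288</value>
  </property>
  <!-- BucketCache capacity (MB when given as an integer like this) -->
  <property>
    <name>hbase.bucketcache.size</name>
    <value>16384</value>
  </property>
  <property>
    <name>hbase.bucketcache.ioengine</name>
    <value>offheap</value>
  </property>
  <!-- Fraction of heap for the on-heap (L1) block cache -->
  <property>
    <name>hfile.block.cache.size</name>
    <value>0.3</value>
  </property>
  <property>
    <name>hfile.block.bloom.cacheonwrite</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.rs.cacheblocksonwrite</name>
    <value>true</value>
  </property>
  <property>
    <name>hfile.block.index.cacheonwrite</name>
    <value>true</value>
  </property>
</configuration>
```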
[jira] [Commented] (HBASE-16993) BucketCache throw java.io.IOException: Invalid HFile block magic when DATA_BLOCK_ENCODING set to DIFF
[ https://issues.apache.org/jira/browse/HBASE-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652774#comment-15652774 ] Hadoop QA commented on HBASE-16993:
---
| (x) *-1 overall* |
|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 20s | Docker mode activated. |
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| 0 | mvndep | 0m 24s | Maven dependency ordering for branch |
| +1 | mvninstall | 3m 42s | master passed |
| +1 | compile | 1m 7s | master passed |
| +1 | checkstyle | 7m 21s | master passed |
| +1 | mvneclipse | 0m 24s | master passed |
| -1 | findbugs | 1m 59s | hbase-protocol-shaded in master has 24 extant Findbugs warnings. |
| +1 | javadoc | 0m 38s | master passed |
| 0 | mvndep | 0m 8s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 19s | the patch passed |
| +1 | compile | 1m 13s | the patch passed |
| +1 | cc | 1m 13s | the patch passed |
| +1 | javac | 1m 13s | the patch passed |
| +1 | checkstyle | 7m 20s | the patch passed |
| +1 | mvneclipse | 0m 21s | the patch passed |
| -1 | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 | hadoopcheck | 29m 56s | Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. |
| -1 | hbaseprotoc | 0m 30s | Patch generated 1 new protoc errors in hbase-server. |
| +1 | findbugs | 4m 46s | the patch passed |
| -1 | javadoc | 0m 33s | hbase-server generated 2 new + 1 unchanged - 0 fixed = 3 total (was 1) |
| +1 | unit | 0m 36s | hbase-protocol-shaded in the patch passed. |
| -1 | unit | 0m 25s | hbase-server in the patch failed. |
| +1 | asflicense | 0m 18s | The patch does not generate ASF License warnings. |
| | | 66m 55s | |

|| Subsystem || Report/Notes ||
| Docker | Client=1.12.2 Server=1.12.2 Image:yetus/hbase:7bda515 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838270/HBASE-16993.master.003.patch |
| JIRA Issue | HBASE-16993 |
| Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile cc hbaseprotoc xml |
| uname | Linux 9f107f0bc340 3.13.0-100-generic #147-Ubuntu SMP Tue Oct 18 16:48:51 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
[jira] [Commented] (HBASE-14123) HBase Backup/Restore Phase 2
[ https://issues.apache.org/jira/browse/HBASE-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652733#comment-15652733 ] Hadoop QA commented on HBASE-14123:
---
| (x) *-1 overall* |
|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 19s | Docker mode activated. |
| 0 | patch | 0m 1s | The patch file was not named according to hbase's naming conventions. Please see https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for instructions. |
| 0 | shelldocs | 0m 4s | Shelldocs was not available. |
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 47 new or modified test files. |
| 0 | mvndep | 0m 8s | Maven dependency ordering for branch |
| +1 | mvninstall | 3m 19s | master passed |
| +1 | compile | 5m 15s | master passed |
| +1 | checkstyle | 2m 38s | master passed |
| +1 | mvneclipse | 2m 28s | master passed |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: . |
| -1 | findbugs | 1m 58s | hbase-protocol-shaded in master has 24 extant Findbugs warnings. |
| +1 | javadoc | 3m 20s | master passed |
| 0 | mvndep | 0m 8s | Maven dependency ordering for patch |
| +1 | mvninstall | 5m 25s | the patch passed |
| +1 | compile | 4m 43s | the patch passed |
| +1 | cc | 4m 43s | the patch passed |
| +1 | javac | 4m 43s | the patch passed |
| +1 | checkstyle | 2m 36s | the patch passed |
| +1 | mvneclipse | 2m 35s | the patch passed |
| +1 | shellcheck | 0m 5s | There were no new shellcheck issues. |
| -1 | whitespace | 0m 0s | The patch has 6 line(s) that end in whitespace. Use git apply --whitespace=fix. |
| -1 | whitespace | 0m 0s | The patch has 1 line(s) with tabs. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 | hadoopcheck | 28m 54s | Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. |
| -1 | hbaseprotoc | 0m 27s | Patch generated 1 new protoc errors in hbase-server. |
| -1 | hbaseprotoc | 2m 18s | Patch generated 1 new protoc errors in .. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: . |
| -1 | findbugs | 1m 55s | hbase-server generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| -1 | javadoc
[jira] [Commented] (HBASE-16169) Make RegionSizeCalculator scalable
[ https://issues.apache.org/jira/browse/HBASE-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652711#comment-15652711 ] Thiruvel Thirumoolan commented on HBASE-16169: -- Fixed whitespace issues in protobuf generated code. The rest of the issues don't seem related to this patch. > Make RegionSizeCalculator scalable > -- > > Key: HBASE-16169 > URL: https://issues.apache.org/jira/browse/HBASE-16169 > Project: HBase > Issue Type: Sub-task > Components: mapreduce, scaling >Reporter: Thiruvel Thirumoolan >Assignee: Thiruvel Thirumoolan > Fix For: 2.0.0, 1.4.0 > > Attachments: HBASE-16169.master.000.patch, > HBASE-16169.master.001.patch, HBASE-16169.master.002.patch, > HBASE-16169.master.003.patch, HBASE-16169.master.004.patch, > HBASE-16169.master.005.patch, HBASE-16169.master.006.patch > > > RegionSizeCalculator is needed for better split generation of MR jobs. This > requires RegionLoad which can be obtained via ClusterStatus, i.e. accessing > Master. We don't want master to be in this path. > The proposal is to add an API to the RegionServer that gets RegionLoad of all > regions hosted on it or those of a table if specified. RegionSizeCalculator > can use the latter. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16985) TestClusterId failed due to wrong hbase rootdir
[ https://issues.apache.org/jira/browse/HBASE-16985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652690#comment-15652690 ] Guanghao Zhang commented on HBASE-16985: This problem has existed since HBASE-10569.
> TestClusterId failed due to wrong hbase rootdir
> ---
> Key: HBASE-16985
> URL: https://issues.apache.org/jira/browse/HBASE-16985
> Project: HBase
> Issue Type: Bug
> Reporter: Guanghao Zhang
> Assignee: Guanghao Zhang
> Priority: Minor
> Fix For: 2.0.0, 1.4.0
> Attachments: HBASE-16985-branch-1.patch, HBASE-16985-branch-1.patch, HBASE-16985-v1.patch, HBASE-16985-v1.patch, HBASE-16985.patch
>
> https://builds.apache.org/job/PreCommit-HBASE-Build/4253/testReport/org.apache.hadoop.hbase.regionserver/TestClusterId/testClusterId/
> {code}
> java.io.IOException: Shutting down
>         at org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:230)
>         at org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:409)
>         at org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:227)
>         at org.apache.hadoop.hbase.MiniHBaseCluster.<init>(MiniHBaseCluster.java:96)
>         at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1071)
>         at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1037)
>         at org.apache.hadoop.hbase.regionserver.TestClusterId.testClusterId(TestClusterId.java:85)
> {code}
> The cluster cannot start up because there is no active master. The active master cannot finish initializing because the hbase:namespace region cannot be assigned.
> In the TestClusterId unit test, TEST_UTIL.startMiniHBaseCluster sets a new hbase root dir, but the regionserver thread that started first used a different hbase root dir. If the hbase:namespace region is assigned to this regionserver, it cannot be opened because there is no tableinfo under the wrong hbase root dir.
> When the regionserver reports to the master, it gets back some new config, but FSTableDescriptors has already been initialized, so its root dir is not updated.
> {code}
> if (LOG.isDebugEnabled()) {
>   LOG.info("Config from master: " + key + "=" + value);
> }
> {code}
> I think FSTableDescriptors needs to update its root dir when the regionserver gets the report back from the master.
> The master branch has the same problem, but there the balancer always assigns the hbase:namespace region to the master, so this unit test passes on the master branch.
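The fix the reporter suggests — refreshing the cached root dir when the regionserver receives updated config from the master — can be sketched in isolation. All names below (TableDescriptorCache, onConfigFromMaster, RootDirSketch) are stand-ins for illustration, not the actual HBase classes:

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in sketch (not HBase code): a descriptor cache that re-reads its
// root dir when the master's config report changes hbase.rootdir. Without
// the refresh, tableinfo lookups would keep hitting the stale directory.
class TableDescriptorCache {
    private String rootDir;

    TableDescriptorCache(String rootDir) { this.rootDir = rootDir; }

    String getRootDir() { return rootDir; }

    // Called when the regionserver receives updated config from the master.
    void onConfigFromMaster(Map<String, String> config) {
        String newRoot = config.get("hbase.rootdir");
        if (newRoot != null && !newRoot.equals(rootDir)) {
            rootDir = newRoot;  // adopt the master's root dir
        }
    }
}

public class RootDirSketch {
    public static void main(String[] args) {
        TableDescriptorCache cache = new TableDescriptorCache("/hbase-old");
        Map<String, String> fromMaster = new HashMap<>();
        fromMaster.put("hbase.rootdir", "/hbase-new");
        cache.onConfigFromMaster(fromMaster);
        System.out.println(cache.getRootDir());  // prints /hbase-new
    }
}
```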
[jira] [Commented] (HBASE-16985) TestClusterId failed due to wrong hbase rootdir
[ https://issues.apache.org/jira/browse/HBASE-16985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652687#comment-15652687 ] Guanghao Zhang commented on HBASE-16985: Thanks [~stack]. Does this need to be fixed on branch-1.3, 1.2, and 1.1 as well?
> TestClusterId failed due to wrong hbase rootdir
[jira] [Commented] (HBASE-14141) HBase Backup/Restore Phase 3: Filter WALs on backup to include only edits from backup tables
[ https://issues.apache.org/jira/browse/HBASE-14141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652672#comment-15652672 ] Vladimir Rodionov commented on HBASE-14141:
{quote}
We also have to think about the failure case where a WAL will be left un-closed in case of RS dead. We cannot rely on a mechanism to write data in WAL close because it will never be reliable. Even if we do a solution where we keep track of Tables/Regions in the WAL and retroactively write this info to the backup metadata, we have to design the system so that WALs from RS failures are handled.
{quote}
We already depend on WALs for incremental backup. If the WAL is unreliable, so is HBase. Backup can't be more reliable than WAL/HBase.
{quote}
Let's say I have a single huge table in the cluster, and a single backup set. This means that we cannot use multi-wal at all, making the design decision a non-starter.
{quote}
No. In this case, the default mode is the way to go (what we have now).
> HBase Backup/Restore Phase 3: Filter WALs on backup to include only edits from backup tables
> ---
> Key: HBASE-14141
> URL: https://issues.apache.org/jira/browse/HBASE-14141
> Project: HBase
> Issue Type: New Feature
> Affects Versions: 2.0.0
> Reporter: Vladimir Rodionov
> Assignee: Vladimir Rodionov
> Labels: backup
> Fix For: 2.0.0
[jira] [Updated] (HBASE-16169) Make RegionSizeCalculator scalable
[ https://issues.apache.org/jira/browse/HBASE-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thiruvel Thirumoolan updated HBASE-16169:
Attachment: HBASE-16169.master.006.patch
> Make RegionSizeCalculator scalable
[jira] [Commented] (HBASE-16570) Compute region locality in parallel at startup
[ https://issues.apache.org/jira/browse/HBASE-16570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652652#comment-15652652 ] Gary Helmling commented on HBASE-16570: +1 on addendum v7.
> Compute region locality in parallel at startup
> ---
> Key: HBASE-16570
> URL: https://issues.apache.org/jira/browse/HBASE-16570
> Project: HBase
> Issue Type: Sub-task
> Reporter: binlijin
> Assignee: binlijin
> Fix For: 2.0.0, 1.4.0, 1.3.1
> Attachments: HBASE-16570-master_V1.patch, HBASE-16570-master_V2.patch, HBASE-16570-master_V3.patch, HBASE-16570-master_V4.patch, HBASE-16570.branch-1.3-addendum.patch, HBASE-16570_addnum.patch, HBASE-16570_addnum_v2.patch, HBASE-16570_addnum_v3.patch, HBASE-16570_addnum_v4.patch, HBASE-16570_addnum_v5.patch, HBASE-16570_addnum_v6.patch, HBASE-16570_addnum_v7.patch
[jira] [Updated] (HBASE-16985) TestClusterId failed due to wrong hbase rootdir
[ https://issues.apache.org/jira/browse/HBASE-16985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-16985:
Resolution: Fixed
Hadoop Flags: Reviewed
Fix Version/s: 1.4.0
               2.0.0
Status: Resolved (was: Patch Available)
Pushed to branch-1 and to master. Added the optional setting of the root dir (only if different from the master's) to branch-1. Thanks for the patch [~zghaobac]
> TestClusterId failed due to wrong hbase rootdir
[jira] [Updated] (HBASE-16993) BucketCache throw java.io.IOException: Invalid HFile block magic when DATA_BLOCK_ENCODING set to DIFF
[ https://issues.apache.org/jira/browse/HBASE-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-16993:
Attachment: HBASE-16993.master.003.patch
> BucketCache throw java.io.IOException: Invalid HFile block magic when
> DATA_BLOCK_ENCODING set to DIFF
[jira] [Commented] (HBASE-16993) BucketCache throw java.io.IOException: Invalid HFile block magic when DATA_BLOCK_ENCODING set to DIFF
[ https://issues.apache.org/jira/browse/HBASE-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652568#comment-15652568 ] stack commented on HBASE-16993: --- The junit and javadoc failures are because hbase-server doesn't explicitly depend on hbase-protocol-shaded. Fixed the findbugs and white-space warnings (though one white-space warning is about a generated file... not for fixing).
> BucketCache throw java.io.IOException: Invalid HFile block magic when
> DATA_BLOCK_ENCODING set to DIFF
> -
>
> Key: HBASE-16993
> URL: https://issues.apache.org/jira/browse/HBASE-16993
> Project: HBase
> Issue Type: Bug
> Components: BucketCache, io
> Affects Versions: 1.1.3
> Environment: hbase version 1.1.3
> Reporter: liubangchen
> Assignee: liubangchen
> Fix For: 2.0.0
>
> Attachments: HBASE-16993.000.patch, HBASE-16993.001.patch,
> HBASE-16993.master.001.patch, HBASE-16993.master.002.patch,
> HBASE-16993.master.003.patch
>
> Original Estimate: 336h
> Remaining Estimate: 336h
>
> hbase-site.xml settings:
> <property>
>   <name>hbase.bucketcache.bucket.sizes</name>
>   <value>16384,32768,40960,46000,49152,51200,65536,131072,524288</value>
> </property>
> <property>
>   <name>hbase.bucketcache.size</name>
>   <value>16384</value>
> </property>
> <property>
>   <name>hbase.bucketcache.ioengine</name>
>   <value>offheap</value>
> </property>
> <property>
>   <name>hfile.block.cache.size</name>
>   <value>0.3</value>
> </property>
> <property>
>   <name>hfile.block.bloom.cacheonwrite</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hbase.rs.cacheblocksonwrite</name>
>   <value>true</value>
> </property>
> <property>
>   <name>hfile.block.index.cacheonwrite</name>
>   <value>true</value>
> </property>
> n_splits = 200
> create 'usertable',{NAME => 'family', COMPRESSION => 'snappy', VERSIONS => 1, DATA_BLOCK_ENCODING => 'DIFF', CONFIGURATION => {'hbase.hregion.memstore.block.multiplier' => 5}},{DURABILITY => 'SKIP_WAL'},{SPLITS => (1..n_splits).map {|i| "user#{1000+i*(-1000)/n_splits}"}}
> load data
> bin/ycsb load hbase10 -P workloads/workloada -p table=usertable -p columnfamily=family -p fieldcount=10 -p fieldlength=100 -p recordcount=2 -p insertorder=hashed -p insertstart=0 -p clientbuffering=true -p durability=SKIP_WAL -threads 20 -s
> run
> bin/ycsb run hbase10 -P workloads/workloadb -p table=usertable -p
> columnfamily=family -p
fieldcount=10 -p fieldlength=100 -p operationcount=2000 -p readallfields=true -p clientbuffering=true -p requestdistribution=zipfian -threads 10 -s
> log info
> 2016-11-02 20:20:20,261 ERROR [RW.default.readRpcServer.handler=36,queue=21,port=6020] bucket.BucketCache: Failed reading block fdcc7ed6f3b2498b9ef316cc8206c233_44819759 from bucket cache
> java.io.IOException: Invalid HFile block magic: \x00\x00\x00\x00\x00\x00\x00\x00
> at org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:154)
> at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:167)
> at org.apache.hadoop.hbase.io.hfile.HFileBlock.<init>(HFileBlock.java:273)
> at org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:134)
> at org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:121)
> at org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getBlock(BucketCache.java:427)
> at org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.getBlock(CombinedBlockCache.java:85)
> at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getCachedBlock(HFileReaderV2.java:266)
> at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:403)
> at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:269)
> at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:634)
> at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:584)
> at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:247)
> at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:156)
> at org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:363)
> at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:217)
> at org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2071)
> at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5369)
> at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2546)
> at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2532)
> at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2514)
> at
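The all-zero magic at the top of this trace means the first eight bytes read back from the bucket cache matched no known HFile block magic. A rough, hypothetical illustration of the kind of check that raises this error (the real org.apache.hadoop.hbase.io.hfile.BlockType enum recognizes many more block types than the two magics sketched here):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Simplified stand-in for BlockType.parse: every HFile block begins with an
// 8-byte magic; bytes matching no known magic (e.g. all zeros read from a
// stale or misaligned cache offset) raise IOException.
public class BlockMagicCheck {
    // Illustrative subset only; HBase defines one magic per block type.
    static final byte[][] KNOWN_MAGICS = {
        "DATABLK*".getBytes(StandardCharsets.US_ASCII),  // plain data block
        "DATABLKE".getBytes(StandardCharsets.US_ASCII),  // encoded data block
    };

    static String parse(byte[] buf, int offset) throws IOException {
        byte[] magic = Arrays.copyOfRange(buf, offset, offset + 8);
        for (byte[] known : KNOWN_MAGICS) {
            if (Arrays.equals(magic, known)) {
                return new String(known, StandardCharsets.US_ASCII);
            }
        }
        throw new IOException("Invalid HFile block magic: " + Arrays.toString(magic));
    }
}
```

With a zeroed buffer, as in the reported failure, no magic matches and the IOException carries the \x00 bytes seen in the log.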
[jira] [Updated] (HBASE-16700) Allow for coprocessor whitelisting
[ https://issues.apache.org/jira/browse/HBASE-16700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Clay B. updated HBASE-16700: Attachment: HBASE-16700.001.patch Thanks Ted! Including InterfaceAudience; moving the configuration getter out of the for loop; adding a test classification; correcting the LogFactory class; removing trailing white space. Two things I noticed: correcting to use a String() instead of Path() for the success log message and removing the extraneous log message "Checking coprocessor %s". > Allow for coprocessor whitelisting > -- > > Key: HBASE-16700 > URL: https://issues.apache.org/jira/browse/HBASE-16700 > Project: HBase > Issue Type: Improvement > Components: Coprocessors >Reporter: Clay B. >Priority: Minor > Labels: security > Attachments: HBASE-16700.000.patch, HBASE-16700.001.patch > > > Today one can turn off all non-system coprocessors with > {{hbase.coprocessor.user.enabled}}; however, this disables very useful things > like Apache Phoenix's coprocessors. Some tenants of a multi-user HBase may > also need to run bespoke coprocessors. But as an operator I would not want > wanton coprocessor usage. Ideally, one could do one of two things: > * Allow coprocessors defined in {{hbase-site.xml}} -- this can only be > administratively changed in most cases > * Allow coprocessors from table descriptors but only if the coprocessor is > whitelisted -- This message was sent by Atlassian JIRA (v6.3.4#6332)
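The two options in the description can be combined: coprocessors named in hbase-site.xml stay implicitly trusted (admin-controlled), while table-descriptor coprocessors must pass a path whitelist. A hypothetical sketch of that whitelist check only; the method name and the shape of the configured prefixes are illustrative, not the patch's actual API:

```java
import java.util.List;

// Illustrative whitelist check for coprocessor jar paths supplied via table
// descriptors. Coprocessors configured in hbase-site.xml would bypass this
// check, since only an administrator can change that file in most setups.
public class CoprocessorWhitelist {
    static boolean isWhitelisted(String coprocessorJarPath, List<String> allowedPathPrefixes) {
        // A jar is acceptable only if its path starts with an
        // administratively approved location.
        for (String prefix : allowedPathPrefixes) {
            if (coprocessorJarPath.startsWith(prefix)) {
                return true;
            }
        }
        return false;
    }
}
```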
[jira] [Commented] (HBASE-14141) HBase Backup/Restore Phase 3: Filter WALs on backup to include only edits from backup tables
[ https://issues.apache.org/jira/browse/HBASE-14141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652509#comment-15652509 ] Enis Soztutar commented on HBASE-14141: --- I think it is not a good idea to permanently couple the backup-set with the WAL grouping. These are orthogonal concerns: multi-wal is mainly used for performance and for using more disks, and should not be bound by how backup sets are defined. Let's say I have a single huge table in the cluster, and a single backup set. This means that we cannot use multi-wal at all, making the design decision a non-starter. We also have to think about the failure case where a WAL is left unclosed when an RS dies. We cannot rely on a mechanism that writes data at WAL close, because it will never be reliable. Even if we build a solution that keeps track of Tables/Regions in the WAL and retroactively writes this info to the backup metadata, we have to design the system so that WALs from RS failures are handled. > HBase Backup/Restore Phase 3: Filter WALs on backup to include only edits > from backup tables > > > Key: HBASE-14141 > URL: https://issues.apache.org/jira/browse/HBASE-14141 > Project: HBase > Issue Type: New Feature >Affects Versions: 2.0.0 >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Labels: backup > Fix For: 2.0.0 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16169) Make RegionSizeCalculator scalable
[ https://issues.apache.org/jira/browse/HBASE-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652492#comment-15652492 ] Hadoop QA commented on HBASE-16169: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 17s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 17s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 10m 37s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 29s {color} | {color:green} master passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 50s {color} | {color:red} hbase-protocol-shaded in master has 24 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s {color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 32s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 18s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 18s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 18s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 10m 36s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 29s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 10 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 37m 54s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} | | {color:red}-1{color} | {color:red} hbaseprotoc {color} | {color:red} 0m 28s {color} | {color:red} Patch generated 1 new protoc errors in hbase-server. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 4s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 16s {color} | {color:red} hbase-client generated 1 new + 13 unchanged - 0 fixed = 14 total (was 13) {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 26s {color} | {color:red} hbase-server generated 2 new + 1 unchanged - 0 fixed = 3 total (was 1) {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 27s {color} | {color:green} hbase-protocol-shaded in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 13s {color} | {color:red} hbase-client in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 22s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 83m 15s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:7bda515 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838244/HBASE-16169.master.005.patch | | JIRA Issue | HBASE-16169 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile
[jira] [Updated] (HBASE-17052) compile-protobuf profile does not compile protobufs in some modules anymore
[ https://issues.apache.org/jira/browse/HBASE-17052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-17052: -- Attachment: hbase-17052_v2.patch > compile-protobuf profile does not compile protobufs in some modules anymore > --- > > Key: HBASE-17052 > URL: https://issues.apache.org/jira/browse/HBASE-17052 > Project: HBase > Issue Type: Bug >Reporter: Enis Soztutar >Assignee: Enis Soztutar > Fix For: 2.0.0 > > Attachments: hbase-17052_v1.patch, hbase-17052_v2.patch > > > Due to recent changes, we are not compiling the protobuf files in > hbase-endpoint, hbase-rsgroup, etc anymore. > {code} > [INFO] --- protobuf-maven-plugin:0.5.0:compile (compile-protoc) @ > hbase-rsgroup --- > [INFO] > /Users/enis/projects/hbase-sal/hbase-rsgroup/src/main/protobuf/,/Users/enis/projects/hbase-sal/hbase-rsgroup/../hbase-protocol/src/main/protobuf > does not exist. Review the configuration or consider disabling the plugin. > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17052) compile-protobuf profile does not compile protobufs in some modules anymore
[ https://issues.apache.org/jira/browse/HBASE-17052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-17052: -- Status: Patch Available (was: Open) > compile-protobuf profile does not compile protobufs in some modules anymore > --- > > Key: HBASE-17052 > URL: https://issues.apache.org/jira/browse/HBASE-17052 > Project: HBase > Issue Type: Bug >Reporter: Enis Soztutar >Assignee: Enis Soztutar > Fix For: 2.0.0 > > Attachments: hbase-17052_v1.patch > > > Due to recent changes, we are not compiling the protobuf files in > hbase-endpoint, hbase-rsgroup, etc anymore. > {code} > [INFO] --- protobuf-maven-plugin:0.5.0:compile (compile-protoc) @ > hbase-rsgroup --- > [INFO] > /Users/enis/projects/hbase-sal/hbase-rsgroup/src/main/protobuf/,/Users/enis/projects/hbase-sal/hbase-rsgroup/../hbase-protocol/src/main/protobuf > does not exist. Review the configuration or consider disabling the plugin. > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-17052) compile-protobuf profile does not compile protobufs in some modules anymore
[ https://issues.apache.org/jira/browse/HBASE-17052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Enis Soztutar updated HBASE-17052: -- Attachment: hbase-17052_v1.patch > compile-protobuf profile does not compile protobufs in some modules anymore > --- > > Key: HBASE-17052 > URL: https://issues.apache.org/jira/browse/HBASE-17052 > Project: HBase > Issue Type: Bug >Reporter: Enis Soztutar >Assignee: Enis Soztutar > Fix For: 2.0.0 > > Attachments: hbase-17052_v1.patch > > > Due to recent changes, we are not compiling the protobuf files in > hbase-endpoint, hbase-rsgroup, etc anymore. > {code} > [INFO] --- protobuf-maven-plugin:0.5.0:compile (compile-protoc) @ > hbase-rsgroup --- > [INFO] > /Users/enis/projects/hbase-sal/hbase-rsgroup/src/main/protobuf/,/Users/enis/projects/hbase-sal/hbase-rsgroup/../hbase-protocol/src/main/protobuf > does not exist. Review the configuration or consider disabling the plugin. > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
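For context on the log in the description: the plugin involved is the xolstice protobuf-maven-plugin, whose compile goal takes a single {{protoSourceRoot}} directory, and the comma-joined path in the error suggests two source roots were collapsed into that one setting. A hypothetical sketch of the relevant plugin stanza (not the actual HBase pom):

```xml
<!-- Illustrative only: protoSourceRoot accepts ONE directory, so a
     comma-separated list is treated as a single, nonexistent path and
     the plugin skips compilation with the "does not exist" message. -->
<plugin>
  <groupId>org.xolstice.maven.plugins</groupId>
  <artifactId>protobuf-maven-plugin</artifactId>
  <version>0.5.0</version>
  <configuration>
    <protoSourceRoot>${basedir}/src/main/protobuf/</protoSourceRoot>
  </configuration>
</plugin>
```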
[jira] [Comment Edited] (HBASE-14123) HBase Backup/Restore Phase 2
[ https://issues.apache.org/jira/browse/HBASE-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652334#comment-15652334 ] Vladimir Rodionov edited comment on HBASE-14123 at 11/9/16 11:51 PM: - v36. Rebased to master. cc: [~saint@gmail.com] was (Author: vrodionov): v36. Rebased to master. > HBase Backup/Restore Phase 2 > > > Key: HBASE-14123 > URL: https://issues.apache.org/jira/browse/HBASE-14123 > Project: HBase > Issue Type: Umbrella >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Attachments: 14123-master.v14.txt, 14123-master.v15.txt, > 14123-master.v16.txt, 14123-master.v17.txt, 14123-master.v18.txt, > 14123-master.v19.txt, 14123-master.v2.txt, 14123-master.v20.txt, > 14123-master.v21.txt, 14123-master.v24.txt, 14123-master.v25.txt, > 14123-master.v27.txt, 14123-master.v28.txt, 14123-master.v29.full.txt, > 14123-master.v3.txt, 14123-master.v30.txt, 14123-master.v31.txt, > 14123-master.v32.txt, 14123-master.v33.txt, 14123-master.v34.txt, > 14123-master.v35.txt, 14123-master.v36.txt, 14123-master.v5.txt, > 14123-master.v6.txt, 14123-master.v7.txt, 14123-master.v8.txt, > 14123-master.v9.txt, 14123-v14.txt, HBASE-14123-for-7912-v1.patch, > HBASE-14123-for-7912-v6.patch, HBASE-14123-v1.patch, HBASE-14123-v10.patch, > HBASE-14123-v11.patch, HBASE-14123-v12.patch, HBASE-14123-v13.patch, > HBASE-14123-v15.patch, HBASE-14123-v16.patch, HBASE-14123-v2.patch, > HBASE-14123-v3.patch, HBASE-14123-v4.patch, HBASE-14123-v5.patch, > HBASE-14123-v6.patch, HBASE-14123-v7.patch, HBASE-14123-v9.patch > > > Phase 2 umbrella JIRA. See HBASE-7912 for design document and description. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16993) BucketCache throw java.io.IOException: Invalid HFile block magic when DATA_BLOCK_ENCODING set to DIFF
[ https://issues.apache.org/jira/browse/HBASE-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652416#comment-15652416 ] Hadoop QA commented on HBASE-16993: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 0s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 7m 0s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 20s {color} | {color:green} master passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 49s {color} | {color:red} hbase-protocol-shaded in master has 24 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s {color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 1s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 1s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 1s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 7m 1s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 19s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 27m 58s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} | | {color:red}-1{color} | {color:red} hbaseprotoc {color} | {color:red} 0m 23s {color} | {color:red} Patch generated 1 new protoc errors in hbase-server. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 49s {color} | {color:red} hbase-server generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 25s {color} | {color:red} hbase-server generated 2 new + 1 unchanged - 0 fixed = 3 total (was 1) {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 30s {color} | {color:green} hbase-protocol-shaded in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 24s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 61m 6s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hbase-server | | | org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.retrieveFromFile(int[]) ignores result of java.io.FileInputStream.read(byte[], int, int) At BucketCache.java:int, int) At BucketCache.java:[line 1008] | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:7bda515 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838246/HBASE-16993.master.002.patch | | JIRA Issue | HBASE-16993 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile cc
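The new FindBugs warning above ({{BucketCache.retrieveFromFile}} ignoring the result of {{FileInputStream.read(byte[], int, int)}}) is about short reads: {{read}} may return fewer bytes than requested. A hedged sketch of the usual remedy, not the actual BucketCache code: check the return value and loop until the buffer is filled or EOF is hit.

```java
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

// read(byte[], off, len) may return fewer bytes than requested, so the
// result must be checked and the read retried until the buffer is full.
public class FullyRead {
    static void readFully(InputStream in, byte[] buf, int off, int len) throws IOException {
        int done = 0;
        while (done < len) {
            int n = in.read(buf, off + done, len - done);
            if (n < 0) {
                // Stream ended before the buffer was filled.
                throw new EOFException("Stream ended after " + done + " of " + len + " bytes");
            }
            done += n;
        }
    }
}
```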
[jira] [Commented] (HBASE-16700) Allow for coprocessor whitelisting
[ https://issues.apache.org/jira/browse/HBASE-16700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652405#comment-15652405 ] Hadoop QA commented on HBASE-16700: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 30s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 49s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 54s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | 
{color:green} 0m 42s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 48s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s {color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 28m 10s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 16s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 9s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 57m 13s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hbase.TestCheckTestClasses | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:7bda515 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838245/HBASE-16700.000.patch | | JIRA Issue | HBASE-16700 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 11c650d92609 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh | | git revision | master / 287358b | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | whitespace | https://builds.apache.org/job/PreCommit-HBASE-Build/4408/artifact/patchprocess/whitespace-eol.txt | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/4408/artifact/patchprocess/patch-unit-hbase-server.txt | | unit test logs | https://builds.apache.org/job/PreCommit-HBASE-Build/4408/artifact/patchprocess/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/4408/testReport/ | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/4408/console | | Powered by | Apache Yetus 0.3.0 http://yetus.apache.org | This message was automatically generated. > Allow
[jira] [Commented] (HBASE-16956) Refactor FavoredNodePlan to use regionNames as keys
[ https://issues.apache.org/jira/browse/HBASE-16956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652390#comment-15652390 ] Thiruvel Thirumoolan commented on HBASE-16956: -- [~devaraj], Not sure why it took a week to run precommit builds for this one. Let me know if you have any concerns on this one. Thanks! > Refactor FavoredNodePlan to use regionNames as keys > --- > > Key: HBASE-16956 > URL: https://issues.apache.org/jira/browse/HBASE-16956 > Project: HBase > Issue Type: Sub-task >Reporter: Thiruvel Thirumoolan >Assignee: Thiruvel Thirumoolan >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-16956.branch-1.001.patch, > HBASE-16956.master.001.patch, HBASE-16956.master.002.patch, > HBASE-16956.master.003.patch, HBASE-16956.master.004.patch, > HBASE-16956.master.005.patch, HBASE-16956.master.006.patch > > > We would like to rely on the FNPlan cache whether a region is offline or not. > Sticking to regionNames as keys makes that possible. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
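A minimal illustration of the refactor's point, with hypothetical names rather than the actual FavoredNodesPlan API: keying the plan by the region-name string means a lookup needs no live region object, so the cache can still answer for a region that is currently offline.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch (not the HBase class): the favored-nodes map is keyed
// by the region's name string, so plan lookups work regardless of whether
// the region is online.
public class FavoredNodesPlanSketch {
    private final Map<String, List<String>> favoredNodes = new HashMap<>();

    void updateFavoredNodes(String regionName, List<String> servers) {
        favoredNodes.put(regionName, servers);
    }

    List<String> getFavoredNodes(String regionName) {
        return favoredNodes.get(regionName); // null if no plan recorded
    }
}
```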
[jira] [Assigned] (HBASE-17055) Disabling table not getting enabled after clean cluster restart.
[ https://issues.apache.org/jira/browse/HBASE-17055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stephen Yuan Jiang reassigned HBASE-17055: -- Assignee: Stephen Yuan Jiang > Disabling table not getting enabled after clean cluster restart. > > > Key: HBASE-17055 > URL: https://issues.apache.org/jira/browse/HBASE-17055 > Project: HBase > Issue Type: Bug > Components: Region Assignment >Affects Versions: 1.3.0 >Reporter: Y. SREENIVASULU REDDY >Assignee: Stephen Yuan Jiang > Fix For: 1.3.0 > > > scenario: > 1. Disable the table. > 2. While the disable is still in progress, restart the whole HBase service. > 3. Then enable the table. > The above operations lead to continuous RIT. > Please find the logs below for understanding. > While the table was being disabled, the whole HBase service went down. > The following is from the master logs: > {noformat} > 2016-11-09 19:32:55,102 INFO > [RpcServer.FifoWFPBQ.default.handler=49,queue=4,port=16000] master.HMaster: > Client=seenu//host-1 disable testTable > 2016-11-09 19:32:55,257 DEBUG > [RpcServer.FifoWFPBQ.default.handler=49,queue=4,port=16000] > procedure2.ProcedureExecutor: Procedure DisableTableProcedure > (table=testTable) id=8 owner=seenu state=RUNNABLE:DISABLE_TABLE_PREPARE added > to the store. 
> 2016-11-09 19:32:55,264 DEBUG [ProcedureExecutor-5] > lock.ZKInterProcessLockBase: Acquired a lock for > /hbase/table-lock/testTable/write-master:165 > 2016-11-09 19:32:55,285 DEBUG > [RpcServer.FifoWFPBQ.default.handler=49,queue=4,port=16000] > master.MasterRpcServices: Checking to see if procedure is done procId=8 > 2016-11-09 19:32:55,386 DEBUG > [RpcServer.FifoWFPBQ.default.handler=49,queue=4,port=16000] > master.MasterRpcServices: Checking to see if procedure is done procId=8 > 2016-11-09 19:32:55,513 INFO [ProcedureExecutor-5] > zookeeper.ZKTableStateManager: Moving table testTable state from DISABLING to > DISABLING > 2016-11-09 19:32:55,587 DEBUG > [RpcServer.FifoWFPBQ.default.handler=49,queue=4,port=16000] > master.MasterRpcServices: Checking to see if procedure is done procId=8 > 2016-11-09 19:32:55,628 INFO [ProcedureExecutor-5] > procedure.DisableTableProcedure: Offlining 1 regions. > . > . > . > . > . > . > . > . > 2016-11-09 19:33:02,871 INFO [AM.ZK.Worker-pool2-t7] master.RegionStates: > Offlined 1890fa9c085dcc2ee0602f4bab069d10 from host-1,16040,1478690163056 > Wed Nov 9 19:33:02 CST 2016 Terminating master > {noformat} > here we need to observe > {color:red} Offlined 1890fa9c085dcc2ee0602f4bab069d10 from > host-1,16040,1478690163056 {color} > then hmaster went down, all regionServers also made down. > After hmaster and regionserver are restarted > executed enable Table operation on the table. > {panel:title=HMaster > Logs|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1|bgColor=#CE} > {noformat} > 2016-11-09 19:49:57,059 INFO > [RpcServer.FifoWFPBQ.default.handler=49,queue=4,port=16000] master.HMaster: > Client=seenu//host-1 enable testTable > 2016-11-09 19:49:57,325 DEBUG > [RpcServer.FifoWFPBQ.default.handler=49,queue=4,port=16000] > procedure2.ProcedureExecutor: Procedure EnableTableProcedure > (table=testTable) id=9 owner=seenu state=RUNNABLE:ENABLE_TABLE_PREPARE added > to the store. 
> 2016-11-09 19:49:57,333 DEBUG [ProcedureExecutor-2] > lock.ZKInterProcessLockBase: Acquired a lock for > /hbase/table-lock/testTable/write-master:168 > 2016-11-09 19:49:57,335 DEBUG [hconnection-0x745317ee-shared--pool3-t11] > ipc.RpcClientImpl: Use SIMPLE authentication for service ClientService, > sasl=false > 2016-11-09 19:49:57,335 DEBUG [hconnection-0x745317ee-shared--pool3-t11] > ipc.RpcClientImpl: Connecting to host-1:16040 > 2016-11-09 19:49:57,347 DEBUG > [RpcServer.FifoWFPBQ.default.handler=49,queue=4,port=16000] > master.MasterRpcServices: Checking to see if procedure is done procId=9 > 2016-11-09 19:49:57,449 DEBUG > [RpcServer.FifoWFPBQ.default.handler=49,queue=4,port=16000] > master.MasterRpcServices: Checking to see if procedure is done procId=9 > 2016-11-09 19:49:57,579 INFO [ProcedureExecutor-2] > procedure.EnableTableProcedure: Attempting to enable the table testTable > 2016-11-09 19:49:57,580 INFO [ProcedureExecutor-2] > zookeeper.ZKTableStateManager: Moving table testTable state from DISABLED to > ENABLING > 2016-11-09 19:49:57,655 DEBUG > [RpcServer.FifoWFPBQ.default.handler=49,queue=4,port=16000] > master.MasterRpcServices: Checking to see if procedure is done procId=9 > 2016-11-09 19:49:57,707 INFO [ProcedureExecutor-2] > procedure.EnableTableProcedure: Table 'testTable' has 1 regions, of which 1 > are offline. > 2016-11-09 19:49:57,707 INFO [ProcedureExecutor-2] > procedure.EnableTableProcedure: Bulk assigning 1 region(s) across 1 >
[jira] [Comment Edited] (HBASE-17055) Disabling table not getting enabled after clean cluster restart.
[ https://issues.apache.org/jira/browse/HBASE-17055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652370#comment-15652370 ] Stephen Yuan Jiang edited comment on HBASE-17055 at 11/9/16 11:29 PM: -- The title "Disabling table not getting enabled after clean cluster restart." is misleading. The table did get enabled; just one region of the table is still offline. For the EnableTable operation, the logic has not changed in any 1.x release. It makes a best effort to online regions (the logic predates the move to procedures), so it is doing the right thing (based on the code logic) if some region is not online. The table still declares itself online. This part is not a problem. [~sreenivasulureddy], my question is: {{master.AssignmentManager: Skip assigning testTable,,1478689618299.1890fa9c085dcc2ee0602f4bab069d10., it is on a dead but not processed yet server: host-1,16040,1478690163056}} - does the ServerShutdownHandler procedure ever run to move the regions of {{host-1}} to other RSs? If the SSH did run and skipped this region, we probably have a corner-case bug here. Please either attach the relevant host-1 SSH log or share the full master log. was (Author: syuanjiang): The title "Disabling table not getting enabled after clean cluster restart." is misleading. The table did get enabled; just one region of the table is still offline. For the EnableTable operation, the logic has not changed in any 1.x release. It makes a best effort to online regions (the logic predates the move to procedures), so it is doing the right thing (based on the code logic) if some region is not online. The table still declares itself online. This part is not a problem. 
[~sreenivasulureddy], my question is: {{master.AssignmentManager: Skip assigning testTable,,1478689618299.1890fa9c085dcc2ee0602f4bab069d10., it is on a dead but not processed yet server: host-1,16040,1478690163056}} - does the ServerShutdownHandler procedure ever run on {{host-1}}? If the SSH did run and skipped this region, we probably have a corner-case bug here. Please either attach the relevant host-1 SSH log or share the full master log. > Disabling table not getting enabled after clean cluster restart. > > > Key: HBASE-17055 > URL: https://issues.apache.org/jira/browse/HBASE-17055 > Project: HBase > Issue Type: Bug > Components: Region Assignment >Affects Versions: 1.3.0 >Reporter: Y. SREENIVASULU REDDY > Fix For: 1.3.0 > > > Scenario: > 1. Disable the table; while disabling the table is in progress, > 2. restart the whole HBase service. > 3. Then enable the table. > The above operations lead to a continuous RIT. > Please find the logs below for understanding. > While the table was being disabled, the whole HBase service went down. > The following are the master logs: > {noformat} > 2016-11-09 19:32:55,102 INFO > [RpcServer.FifoWFPBQ.default.handler=49,queue=4,port=16000] master.HMaster: > Client=seenu//host-1 disable testTable > 2016-11-09 19:32:55,257 DEBUG > [RpcServer.FifoWFPBQ.default.handler=49,queue=4,port=16000] > procedure2.ProcedureExecutor: Procedure DisableTableProcedure > (table=testTable) id=8 owner=seenu state=RUNNABLE:DISABLE_TABLE_PREPARE added > to the store. 
> 2016-11-09 19:32:55,264 DEBUG [ProcedureExecutor-5] > lock.ZKInterProcessLockBase: Acquired a lock for > /hbase/table-lock/testTable/write-master:165 > 2016-11-09 19:32:55,285 DEBUG > [RpcServer.FifoWFPBQ.default.handler=49,queue=4,port=16000] > master.MasterRpcServices: Checking to see if procedure is done procId=8 > 2016-11-09 19:32:55,386 DEBUG > [RpcServer.FifoWFPBQ.default.handler=49,queue=4,port=16000] > master.MasterRpcServices: Checking to see if procedure is done procId=8 > 2016-11-09 19:32:55,513 INFO [ProcedureExecutor-5] > zookeeper.ZKTableStateManager: Moving table testTable state from DISABLING to > DISABLING > 2016-11-09 19:32:55,587 DEBUG > [RpcServer.FifoWFPBQ.default.handler=49,queue=4,port=16000] > master.MasterRpcServices: Checking to see if procedure is done procId=8 > 2016-11-09 19:32:55,628 INFO [ProcedureExecutor-5] > procedure.DisableTableProcedure: Offlining 1 regions. > . > . > . > . > . > . > . > . > 2016-11-09 19:33:02,871 INFO [AM.ZK.Worker-pool2-t7] master.RegionStates: > Offlined 1890fa9c085dcc2ee0602f4bab069d10 from host-1,16040,1478690163056 > Wed Nov 9 19:33:02 CST 2016 Terminating master > {noformat} > here we need to observe > {color:red} Offlined 1890fa9c085dcc2ee0602f4bab069d10 from > host-1,16040,1478690163056 {color} > then hmaster went down, all regionServers also made down. > After hmaster and regionserver are restarted > executed enable Table operation on the table. > {panel:title=HMaster >
[jira] [Commented] (HBASE-17055) Disabling table not getting enabled after clean cluster restart.
[ https://issues.apache.org/jira/browse/HBASE-17055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652370#comment-15652370 ] Stephen Yuan Jiang commented on HBASE-17055: The title "Disabling table not getting enabled after clean cluster restart." is misleading. The table did get enabled; just one region of the table is still offline. For the EnableTable operation, the logic has not changed in any 1.x release. It makes a best effort to online regions (the logic predates the move to procedures), so it is doing the right thing (based on the code logic) if some region is not online. The table still declares itself online. This part is not a problem. [~sreenivasulureddy], my question is: {{master.AssignmentManager: Skip assigning testTable,,1478689618299.1890fa9c085dcc2ee0602f4bab069d10., it is on a dead but not processed yet server: host-1,16040,1478690163056}} - does the ServerShutdownHandler procedure ever run on {{host-1}}? If the SSH did run and skipped this region, we probably have a corner-case bug here. Please either attach the relevant host-1 SSH log or share the full master log.
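The "Skip assigning ... dead but not processed yet server" decision quoted in the comment above reduces to a membership check. The sketch below is a toy model with illustrative names, not AssignmentManager's actual fields or API:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Toy model of the skip decision in the quoted log line: a region whose last
// known server is dead, but whose server-shutdown processing has not yet run,
// is left alone so SSH can reassign it later. If SSH never runs (e.g. after a
// clean restart), the region stays offline - the suspected corner-case bug.
class AssignSkip {
    static boolean shouldSkipAssign(String lastServer, Set<String> deadNotYetProcessed) {
        return deadNotYetProcessed.contains(lastServer);
    }

    public static void main(String[] args) {
        Set<String> dead = new HashSet<>(Arrays.asList("host-1,16040,1478690163056"));
        // true -> assignment is skipped and the region remains offline
        System.out.println(shouldSkipAssign("host-1,16040,1478690163056", dead));
    }
}
```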
[jira] [Commented] (HBASE-17056) Remove checked in PB generated files
[ https://issues.apache.org/jira/browse/HBASE-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652365#comment-15652365 ] Enis Soztutar commented on HBASE-17056: --- With {{checkStaleness=true}}, protoc will not be invoked if nothing has changed, so only a fresh build will be affected. https://www.xolstice.org/protobuf-maven-plugin/compile-mojo.html#checkStaleness > Remove checked in PB generated files > - > > Key: HBASE-17056 > URL: https://issues.apache.org/jira/browse/HBASE-17056 > Project: HBase > Issue Type: Bug >Reporter: Enis Soztutar > Fix For: 2.0.0 > > > Now that we have the new PB maven plugin, there is no need to have the PB > files checked in to the repo. The reason we did that was to ease developer > env setup. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
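For reference, the staleness check is a parameter of the xolstice plugin's {{compile}} goal. A minimal configuration sketch follows; the plugin version and {{staleMillis}} value here are illustrative placeholders, not values taken from HBase's pom:

```xml
<!-- Sketch: with checkStaleness enabled, protoc is skipped when no .proto
     file is newer than its generated output. Version/staleMillis are
     illustrative, not HBase's actual pom settings. -->
<plugin>
  <groupId>org.xolstice.maven.plugins</groupId>
  <artifactId>protobuf-maven-plugin</artifactId>
  <version>0.5.0</version>
  <configuration>
    <checkStaleness>true</checkStaleness>
    <!-- slack for coarse filesystem timestamp granularity -->
    <staleMillis>10000</staleMillis>
  </configuration>
  <executions>
    <execution>
      <goals><goal>compile</goal></goals>
    </execution>
  </executions>
</plugin>
```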
[jira] [Commented] (HBASE-14123) HBase Backup/Restore Phase 2
[ https://issues.apache.org/jira/browse/HBASE-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652337#comment-15652337 ] Vladimir Rodionov commented on HBASE-14123: --- Done. v36. > HBase Backup/Restore Phase 2 > > > Key: HBASE-14123 > URL: https://issues.apache.org/jira/browse/HBASE-14123 > Project: HBase > Issue Type: Umbrella >Reporter: Vladimir Rodionov >Assignee: Vladimir Rodionov > Attachments: 14123-master.v14.txt, 14123-master.v15.txt, > 14123-master.v16.txt, 14123-master.v17.txt, 14123-master.v18.txt, > 14123-master.v19.txt, 14123-master.v2.txt, 14123-master.v20.txt, > 14123-master.v21.txt, 14123-master.v24.txt, 14123-master.v25.txt, > 14123-master.v27.txt, 14123-master.v28.txt, 14123-master.v29.full.txt, > 14123-master.v3.txt, 14123-master.v30.txt, 14123-master.v31.txt, > 14123-master.v32.txt, 14123-master.v33.txt, 14123-master.v34.txt, > 14123-master.v35.txt, 14123-master.v36.txt, 14123-master.v5.txt, > 14123-master.v6.txt, 14123-master.v7.txt, 14123-master.v8.txt, > 14123-master.v9.txt, 14123-v14.txt, HBASE-14123-for-7912-v1.patch, > HBASE-14123-for-7912-v6.patch, HBASE-14123-v1.patch, HBASE-14123-v10.patch, > HBASE-14123-v11.patch, HBASE-14123-v12.patch, HBASE-14123-v13.patch, > HBASE-14123-v15.patch, HBASE-14123-v16.patch, HBASE-14123-v2.patch, > HBASE-14123-v3.patch, HBASE-14123-v4.patch, HBASE-14123-v5.patch, > HBASE-14123-v6.patch, HBASE-14123-v7.patch, HBASE-14123-v9.patch > > > Phase 2 umbrella JIRA. See HBASE-7912 for design document and description. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-14123) HBase Backup/Restore Phase 2
[ https://issues.apache.org/jira/browse/HBASE-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Rodionov updated HBASE-14123: -- Attachment: 14123-master.v36.txt v36. Rebased to master. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16956) Refactor FavoredNodePlan to use regionNames as keys
[ https://issues.apache.org/jira/browse/HBASE-16956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652311#comment-15652311 ] Hadoop QA commented on HBASE-16956: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s {color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. 
{color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 6s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 46s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s {color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 28m 27s {color} | {color:green} Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 52s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 100m 14s {color} | {color:red} hbase-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 141m 5s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Timed out junit tests | org.apache.hadoop.hbase.client.TestMetaWithReplicas | | | org.apache.hadoop.hbase.master.TestTableLockManager | | | org.apache.hadoop.hbase.client.TestFromClientSideWithCoprocessor | | | org.apache.hadoop.hbase.client.TestLeaseRenewal | | | org.apache.hadoop.hbase.client.TestMobCloneSnapshotFromClient | \\ \\ || Subsystem || Report/Notes || | Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:7bda515 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838222/HBASE-16956.master.006.patch | | JIRA Issue | HBASE-16956 | | Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile | | uname | Linux 03f6e8d5580b 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 287358b | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HBASE-Build/4405/artifact/patchprocess/patch-unit-hbase-server.txt | | unit test logs | https://builds.apache.org/job/PreCommit-HBASE-Build/4405/artifact/patchprocess/patch-unit-hbase-server.txt | | Test 
Results |
[jira] [Commented] (HBASE-16700) Allow for coprocessor whitelisting
[ https://issues.apache.org/jira/browse/HBASE-16700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652294#comment-15652294 ] Ted Yu commented on HBASE-16700: {code} + */ +public class CoprocessorWhitelistMasterObserver extends BaseMasterObserver { {code} Please add annotation for Audience. {code} + Collection paths = + services.getConfiguration().getStringCollection( + CP_COPROCESSOR_WHITELIST_PATHS_KEY); {code} The above can be lifted outside the for loop. {code} +public class TestCoprocessorWhitelistMasterObserver extends SecureTestUtil { {code} Add test category. {code} + private static final Log LOG = LogFactory.getLog(TestAccessController.class); {code} Change class name to match actual class. > Allow for coprocessor whitelisting > -- > > Key: HBASE-16700 > URL: https://issues.apache.org/jira/browse/HBASE-16700 > Project: HBase > Issue Type: Improvement > Components: Coprocessors >Reporter: Clay B. >Priority: Minor > Labels: security > Attachments: HBASE-16700.000.patch > > > Today one can turn off all non-system coprocessors with > {{hbase.coprocessor.user.enabled}} however, this disables very useful things > like Apache Phoenix's coprocessors. Some tenants of a multi-user HBase may > also need to run bespoke coprocessors. But as an operator I would not want > wanton coprocessor usage. Ideally, one could do one of two things: > * Allow coprocessors defined in {{hbase-site.xml}} -- this can only be > administratively changed in most cases > * Allow coprocessors from table descriptors but only if the coprocessor is > whitelisted -- This message was sent by Atlassian JIRA (v6.3.4#6332)
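The "lift it outside the for loop" review point above is about a loop-invariant lookup: the whitelist comes from configuration and does not change per iteration. A generic sketch follows; the class and method names are illustrative, not the patch's actual code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

// Illustrative sketch of the review point: fetch the whitelist once before
// iterating over coprocessor paths, instead of re-reading configuration on
// every iteration.
class WhitelistCheck {
    // Hypothetical stand-in for a prefix-based coprocessor path whitelist.
    static boolean isWhitelisted(String path, Collection<String> whitelistPrefixes) {
        for (String prefix : whitelistPrefixes) {
            if (path.startsWith(prefix)) {
                return true;
            }
        }
        return false;
    }

    static List<String> filterAllowed(List<String> cpPaths, Collection<String> whitelist) {
        // Hoisted: 'whitelist' is fetched once by the caller; the loop only reads it.
        List<String> allowed = new ArrayList<>();
        for (String p : cpPaths) {
            if (isWhitelisted(p, whitelist)) {
                allowed.add(p);
            }
        }
        return allowed;
    }

    public static void main(String[] args) {
        Collection<String> whitelist = Arrays.asList("/hbase/coprocessors/");
        System.out.println(filterAllowed(
            Arrays.asList("/hbase/coprocessors/phoenix.jar", "/tmp/unknown.jar"), whitelist));
    }
}
```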
[jira] [Updated] (HBASE-16169) Make RegionSizeCalculator scalable
[ https://issues.apache.org/jira/browse/HBASE-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thiruvel Thirumoolan updated HBASE-16169: - Status: Patch Available (was: Open) > Make RegionSizeCalculator scalable > -- > > Key: HBASE-16169 > URL: https://issues.apache.org/jira/browse/HBASE-16169 > Project: HBase > Issue Type: Sub-task > Components: mapreduce, scaling >Reporter: Thiruvel Thirumoolan >Assignee: Thiruvel Thirumoolan > Fix For: 2.0.0, 1.4.0 > > Attachments: HBASE-16169.master.000.patch, > HBASE-16169.master.001.patch, HBASE-16169.master.002.patch, > HBASE-16169.master.003.patch, HBASE-16169.master.004.patch, > HBASE-16169.master.005.patch > > > RegionSizeCalculator is needed for better split generation of MR jobs. This > requires RegionLoad which can be obtained via ClusterStatus, i.e. accessing > Master. We don't want master to be in this path. > The proposal is to add an API to the RegionServer that gets RegionLoad of all > regions hosted on it or those of a table if specified. RegionSizeCalculator > can use the latter. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
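The proposal in the description — ask each RegionServer for the per-region load of a table, rather than pulling ClusterStatus from the Master — can be sketched as a fan-out-and-merge. Everything below (interface name, method signature, the MB unit) is an assumed illustration, not the actual HBase API:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for the proposed RS-side call: per-region sizes for
// one table, answered by each hosting RegionServer.
interface RegionLoadSource {
    Map<String, Long> getRegionSizesMb(String table);
}

// Sketch of a RegionSizeCalculator-style merge: one call per hosting RS,
// results combined into a single map for MR split sizing. The Master is
// never on this path.
class RegionSizeCalculatorSketch {
    static Map<String, Long> collect(String table, List<RegionLoadSource> servers) {
        Map<String, Long> sizes = new HashMap<>();
        for (RegionLoadSource rs : servers) {
            sizes.putAll(rs.getRegionSizesMb(table)); // one RPC per hosting RS
        }
        return sizes;
    }

    public static void main(String[] args) {
        RegionLoadSource rs1 = t -> java.util.Collections.singletonMap("region-a", 512L);
        RegionLoadSource rs2 = t -> java.util.Collections.singletonMap("region-b", 128L);
        System.out.println(collect("usertable", Arrays.asList(rs1, rs2)));
    }
}
```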
[jira] [Commented] (HBASE-16169) Make RegionSizeCalculator scalable
[ https://issues.apache.org/jira/browse/HBASE-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652283#comment-15652283 ] Thiruvel Thirumoolan commented on HBASE-16169: -- Thanks, submitting for precommit. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16169) Make RegionSizeCalculator scalable
[ https://issues.apache.org/jira/browse/HBASE-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652278#comment-15652278 ] Ted Yu commented on HBASE-16169: I think this should be fine. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16169) Make RegionSizeCalculator scalable
[ https://issues.apache.org/jira/browse/HBASE-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652270#comment-15652270 ] Thiruvel Thirumoolan commented on HBASE-16169: -- [~tedyu], I have changed the hbase-protocol-shaded files and have used them. Should I also change the hbase-protocol files? Since the client would only use org.apache.hadoop.hbase.RegionLoad, I guess it should be fine? Let me know if my assumptions are wrong. Thanks! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (HBASE-16169) Make RegionSizeCalculator scalable
[ https://issues.apache.org/jira/browse/HBASE-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652270#comment-15652270 ] Thiruvel Thirumoolan edited comment on HBASE-16169 at 11/9/16 10:41 PM: [~tedyu], I have changed the hbase-protocol-shaded files and have used them. Should I also change the hbase-protocol files? Since the client would only use org.apache.hadoop.hbase.RegionLoad, I guess it should be fine? Let me know if my assumptions are wrong. I uploaded the latest patch for reference. Thanks! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-16993) BucketCache throw java.io.IOException: Invalid HFile block magic when DATA_BLOCK_ENCODING set to DIFF
[ https://issues.apache.org/jira/browse/HBASE-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652268#comment-15652268 ] stack commented on HBASE-16993: --- Add missing new files. Patch does fixup on commentary, adds metadata to the cache persistence file so we can evolve it later (need to get rid of java serialization for one), and then adds test to confirm cleaned up persistence file on error/success. > BucketCache throw java.io.IOException: Invalid HFile block magic when > DATA_BLOCK_ENCODING set to DIFF > - > > Key: HBASE-16993 > URL: https://issues.apache.org/jira/browse/HBASE-16993 > Project: HBase > Issue Type: Bug > Components: BucketCache, io >Affects Versions: 1.1.3 > Environment: hbase version 1.1.3 >Reporter: liubangchen >Assignee: liubangchen > Fix For: 2.0.0 > > Attachments: HBASE-16993.000.patch, HBASE-16993.001.patch, > HBASE-16993.master.001.patch, HBASE-16993.master.002.patch > > Original Estimate: 336h > Remaining Estimate: 336h > > hbase-site.xml setting > > hbase.bucketcache.bucket.sizes > 16384,32768,40960, > 46000,49152,51200,65536,131072,524288 > > > hbase.bucketcache.size > 16384 > > > hbase.bucketcache.ioengine > offheap > > > hfile.block.cache.size > 0.3 > > > hfile.block.bloom.cacheonwrite > true > > > hbase.rs.cacheblocksonwrite > true > > > hfile.block.index.cacheonwrite > true > n_splits = 200 > create 'usertable',{NAME =>'family', COMPRESSION => 'snappy', VERSIONS => > 1,DATA_BLOCK_ENCODING => 'DIFF',CONFIGURATION => > {'hbase.hregion.memstore.block.multiplier' => 5}},{DURABILITY => > 'SKIP_WAL'},{SPLITS => (1..n_splits).map {|i| > "user#{1000+i*(-1000)/n_splits}"}} > load data > bin/ycsb load hbase10 -P workloads/workloada -p table=usertable -p > columnfamily=family -p fieldcount=10 -p fieldlength=100 -p > recordcount=2 -p insertorder=hashed -p insertstart=0 -p > clientbuffering=true -p durability=SKIP_WAL -threads 20 -s > run > bin/ycsb run hbase10 -P workloads/workloadb -p table=usertable -p > 
columnfamily=family -p fieldcount=10 -p fieldlength=100 -p > operationcount=2000 -p readallfields=true -p clientbuffering=true -p > requestdistribution=zipfian -threads 10 -s > log info > 2016-11-02 20:20:20,261 ERROR > [RW.default.readRpcServer.handler=36,queue=21,port=6020] bucket.BucketCache: > Failed reading block fdcc7ed6f3b2498b9ef316cc8206c233_44819759 from bucket > cache > java.io.IOException: Invalid HFile block magic: > \x00\x00\x00\x00\x00\x00\x00\x00 > at > org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:154) > at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:167) > at > org.apache.hadoop.hbase.io.hfile.HFileBlock.(HFileBlock.java:273) > at > org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:134) > at > org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:121) > at > org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getBlock(BucketCache.java:427) > at > org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.getBlock(CombinedBlockCache.java:85) > at > org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getCachedBlock(HFileReaderV2.java:266) > at > org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:403) > at > org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:269) > at > org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:634) > at > org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:584) > at > org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:247) > at > org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:156) > at > org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:363) > at > org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:217) > at > 
org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2071) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:5369) > at > org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2546) > at > org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2532) > at > org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2514) > at
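The comment above mentions adding metadata to the cache persistence file "so we can evolve it later". The general pattern is a magic-plus-version header that a reader validates before deserializing anything. The sketch below is a hypothetical simplification, not HBase's actual persistence format:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Hypothetical sketch (not HBase's actual format): prefixing a persistence
// file with a magic string and a version byte lets a reader reject stale or
// foreign files up front, instead of deserializing garbage - analogous to
// the "Invalid HFile block magic" check that caught the bad block above.
class PersistenceHeader {
    static final byte[] MAGIC = "BCACHE".getBytes(StandardCharsets.US_ASCII);
    static final byte VERSION = 1;

    // Header laid down at the start of the persistence file: MAGIC + version.
    static byte[] header() {
        byte[] h = Arrays.copyOf(MAGIC, MAGIC.length + 1);
        h[MAGIC.length] = VERSION;
        return h;
    }

    // A reader checks the magic prefix before trusting the rest of the file;
    // the version byte then tells it which layout to expect.
    static boolean isValid(byte[] file) {
        if (file.length < MAGIC.length + 1) {
            return false;
        }
        return Arrays.equals(Arrays.copyOfRange(file, 0, MAGIC.length), MAGIC);
    }
}
```

Versioning the header is what makes later evolution (e.g. replacing Java serialization with protobuf) possible without breaking old files.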
[jira] [Updated] (HBASE-16993) BucketCache throw java.io.IOException: Invalid HFile block magic when DATA_BLOCK_ENCODING set to DIFF
[ https://issues.apache.org/jira/browse/HBASE-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-16993: -- Attachment: HBASE-16993.master.002.patch > BucketCache throw java.io.IOException: Invalid HFile block magic when > DATA_BLOCK_ENCODING set to DIFF > - > > Key: HBASE-16993 > URL: https://issues.apache.org/jira/browse/HBASE-16993 > Project: HBase > Issue Type: Bug > Components: BucketCache, io >Affects Versions: 1.1.3 > Environment: hbase version 1.1.3 >Reporter: liubangchen >Assignee: liubangchen > Fix For: 2.0.0 > > Attachments: HBASE-16993.000.patch, HBASE-16993.001.patch, > HBASE-16993.master.001.patch, HBASE-16993.master.002.patch > > Original Estimate: 336h > Remaining Estimate: 336h > > hbase-site.xml setting > > hbase.bucketcache.bucket.sizes > 16384,32768,40960, > 46000,49152,51200,65536,131072,524288 > > > hbase.bucketcache.size > 16384 > > > hbase.bucketcache.ioengine > offheap > > > hfile.block.cache.size > 0.3 > > > hfile.block.bloom.cacheonwrite > true > > > hbase.rs.cacheblocksonwrite > true > > > hfile.block.index.cacheonwrite > true > n_splits = 200 > create 'usertable',{NAME =>'family', COMPRESSION => 'snappy', VERSIONS => > 1,DATA_BLOCK_ENCODING => 'DIFF',CONFIGURATION => > {'hbase.hregion.memstore.block.multiplier' => 5}},{DURABILITY => > 'SKIP_WAL'},{SPLITS => (1..n_splits).map {|i| > "user#{1000+i*(-1000)/n_splits}"}} > load data > bin/ycsb load hbase10 -P workloads/workloada -p table=usertable -p > columnfamily=family -p fieldcount=10 -p fieldlength=100 -p > recordcount=2 -p insertorder=hashed -p insertstart=0 -p > clientbuffering=true -p durability=SKIP_WAL -threads 20 -s > run > bin/ycsb run hbase10 -P workloads/workloadb -p table=usertable -p > columnfamily=family -p fieldcount=10 -p fieldlength=100 -p > operationcount=2000 -p readallfields=true -p clientbuffering=true -p > requestdistribution=zipfian -threads 10 -s > log info > 2016-11-02 20:20:20,261 ERROR > 
> [RW.default.readRpcServer.handler=36,queue=21,port=6020] bucket.BucketCache:
> Failed reading block fdcc7ed6f3b2498b9ef316cc8206c233_44819759 from bucket cache
> java.io.IOException: Invalid HFile block magic: \x00\x00\x00\x00\x00\x00\x00\x00
>     at org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:154)
>     at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:167)
>     at org.apache.hadoop.hbase.io.hfile.HFileBlock.<init>(HFileBlock.java:273)
>     at org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:134)
>     at org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:121)
>     at org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getBlock(BucketCache.java:427)
>     at org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.getBlock(CombinedBlockCache.java:85)
>     at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getCachedBlock(HFileReaderV2.java:266)
>     at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:403)
>     at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:269)
>     at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:634)
>     at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:584)
>     at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:247)
>     at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:156)
>     at org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:363)
>     at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:217)
>     at org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2071)
>     at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5369)
>     at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2546)
>     at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2532)
>     at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2514)
>     at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6558)
>     at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6537)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:1935)
>     at
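For context on the exception in the trace above: BlockType.parse compares the first eight bytes of a serialized block against the known block-type magics, so a run of \x00 bytes means the bytes handed back by the bucket cache were not a block header at all. A minimal sketch of that kind of check follows; the two magic strings are illustrative examples only, not the full set defined in BlockType.java.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Illustrative sketch of an HFile-style block magic check: the first
// 8 bytes of a block must match one of the known block-type magics.
// The magic list here is a sample, not the complete set in BlockType.java.
class BlockMagic {
    static final byte[][] KNOWN_MAGICS = {
        "DATABLK*".getBytes(StandardCharsets.US_ASCII), // unencoded data block
        "DATABLKE".getBytes(StandardCharsets.US_ASCII), // encoded data block (e.g. DIFF)
    };

    /** True if buf[offset .. offset+8) matches a known magic. */
    static boolean isValidMagic(byte[] buf, int offset) {
        for (byte[] magic : KNOWN_MAGICS) {
            if (Arrays.equals(Arrays.copyOfRange(buf, offset, offset + 8), magic)) {
                return true;
            }
        }
        return false;
    }
}
```

An all-zero buffer, as in the log above, fails this check, which is what surfaces as the "Invalid HFile block magic" IOException.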
[jira] [Updated] (HBASE-16700) Allow for coprocessor whitelisting
[ https://issues.apache.org/jira/browse/HBASE-16700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Clay B. updated HBASE-16700: Attachment: HBASE-16700.000.patch

> Allow for coprocessor whitelisting
>
> Key: HBASE-16700
> URL: https://issues.apache.org/jira/browse/HBASE-16700
> Project: HBase
> Issue Type: Improvement
> Components: Coprocessors
> Reporter: Clay B.
> Priority: Minor
> Labels: security
> Attachments: HBASE-16700.000.patch
>
> Today one can turn off all non-system coprocessors with
> {{hbase.coprocessor.user.enabled}}; however, this disables very useful things
> like Apache Phoenix's coprocessors. Some tenants of a multi-user HBase may
> also need to run bespoke coprocessors, but as an operator I would not want
> wanton coprocessor usage. Ideally, one could do one of two things:
> * Allow coprocessors defined in {{hbase-site.xml}} -- these can only be
>   changed administratively in most cases
> * Allow coprocessors from table descriptors, but only if the coprocessor is
>   whitelisted

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
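The second option above could look roughly like the following sketch: a table-descriptor coprocessor loads only if its jar path falls under an operator-configured whitelist of path prefixes. The class and method names here are invented for illustration and do not come from the attached patch.

```java
import java.util.List;

// Illustrative sketch of coprocessor path whitelisting; names are
// invented for this example and are not taken from the HBASE-16700 patch.
class CoprocessorWhitelist {
    /** True if the coprocessor jar path starts with any whitelisted prefix. */
    static boolean isWhitelisted(String jarPath, List<String> allowedPrefixes) {
        for (String prefix : allowedPrefixes) {
            if (jarPath.startsWith(prefix)) {
                return true;
            }
        }
        return false;
    }
}
```

With a whitelist of, say, an admin-controlled HDFS directory, Phoenix's jars under that directory would load while arbitrary tenant-supplied paths would be rejected.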
[jira] [Updated] (HBASE-16169) Make RegionSizeCalculator scalable
[ https://issues.apache.org/jira/browse/HBASE-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thiruvel Thirumoolan updated HBASE-16169: - Release Note:
Added a couple of APIs to Admin.java:
Returns region load map of all regions hosted on a region server:
Map getRegionLoad(ServerName sn) throws IOException;
Returns region load map of all regions of a table hosted on a region server:
Map getRegionLoad(ServerName sn, TableName tableName) throws IOException;
Added an API to the region server:
public GetRegionLoadResponse getRegionLoad(RpcController controller, GetRegionLoadRequest request) throws ServiceException;

was:
Added a couple of APIs to Admin.java:
Returns region load map of all regions hosted on a region server:
Map getRegionLoad(ServerName sn) throws IOException;
Returns region load map of all regions of a table hosted on a region server:
Map getRegionLoad(ServerName sn, TableName tableName) throws IOException;
Added an API to region server:

> Make RegionSizeCalculator scalable
>
> Key: HBASE-16169
> URL: https://issues.apache.org/jira/browse/HBASE-16169
> Project: HBase
> Issue Type: Sub-task
> Components: mapreduce, scaling
> Reporter: Thiruvel Thirumoolan
> Assignee: Thiruvel Thirumoolan
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16169.master.000.patch, HBASE-16169.master.001.patch,
> HBASE-16169.master.002.patch, HBASE-16169.master.003.patch,
> HBASE-16169.master.004.patch, HBASE-16169.master.005.patch
>
> RegionSizeCalculator is needed for better split generation of MR jobs. This
> requires RegionLoad, which can be obtained via ClusterStatus, i.e. by
> accessing the Master. We don't want the Master to be in this path.
> The proposal is to add an API to the RegionServer that gets the RegionLoad of
> all regions hosted on it, or those of a table if specified.
> RegionSizeCalculator can use the latter.
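A sketch of how a client like RegionSizeCalculator might use the per-server API above: fetch a region-to-size map from each region server and merge them into one view, instead of going through ClusterStatus on the Master. The map shapes below are invented for illustration; the real API returns RegionLoad objects rather than bare sizes.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative merge of per-region-server size maps into the single
// region -> size view that MR split generation needs. Data shapes are
// invented for this sketch, not taken from the HBASE-16169 patch.
class RegionSizeMerge {
    static Map<String, Long> merge(List<Map<String, Long>> perServerSizes) {
        Map<String, Long> all = new HashMap<>();
        for (Map<String, Long> serverMap : perServerSizes) {
            all.putAll(serverMap); // each region is hosted on exactly one server
        }
        return all;
    }
}
```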
[jira] [Updated] (HBASE-16700) Allow for coprocessor whitelisting
[ https://issues.apache.org/jira/browse/HBASE-16700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Clay B. updated HBASE-16700: Labels: security (was: ) Release Note: Provides ability to restrict table coprocessors based on HDFS path whitelist. (Particularly useful for allowing Phoenix coprocessors but not arbitrary user created coprocessors.) Status: Patch Available (was: Open)
[jira] [Updated] (HBASE-16169) Make RegionSizeCalculator scalable
[ https://issues.apache.org/jira/browse/HBASE-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thiruvel Thirumoolan updated HBASE-16169: - Release Note:
Added a couple of APIs to Admin.java:
Returns region load map of all regions hosted on a region server:
Map getRegionLoad(ServerName sn) throws IOException;
Returns region load map of all regions of a table hosted on a region server:
Map getRegionLoad(ServerName sn, TableName tableName) throws IOException;
Added an API to region server:

was:
Added a couple of APIs to Admin.java:
Returns region load map of all regions hosted on a region server:
Map getRegionLoad(ServerName sn) throws IOException;
Returns region load map of all regions of a table hosted on a region server:
Map getRegionLoad(ServerName sn, TableName tableName) throws IOException;
[jira] [Updated] (HBASE-16169) Make RegionSizeCalculator scalable
[ https://issues.apache.org/jira/browse/HBASE-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thiruvel Thirumoolan updated HBASE-16169: - Attachment: HBASE-16169.master.005.patch
[jira] [Updated] (HBASE-16852) TestDefaultCompactSelection failed on branch-1.3
[ https://issues.apache.org/jira/browse/HBASE-16852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Antonov updated HBASE-16852: Fix Version/s: (was: 1.3.1) 1.3.0

> TestDefaultCompactSelection failed on branch-1.3
>
> Key: HBASE-16852
> URL: https://issues.apache.org/jira/browse/HBASE-16852
> Project: HBase
> Issue Type: Bug
> Components: test
> Affects Versions: 1.3.0
> Environment: asf jenkins
> Reporter: Mikhail Antonov
> Fix For: 1.3.0
>
> Regression:
> org.apache.hadoop.hbase.regionserver.TestDefaultCompactSelection.testCompactionRatio
> Failing for the past 1 build (Since Failed#48); took 0.1 sec.
> Error Message:
> expected:<[[50, 25, 12, 12]]> but was:<[[]]>
> Stacktrace:
> org.junit.ComparisonFailure: expected:<[[50, 25, 12, 12]]> but was:<[[]]>
>     at org.junit.Assert.assertEquals(Assert.java:115)
>     at org.junit.Assert.assertEquals(Assert.java:144)
>     at org.apache.hadoop.hbase.regionserver.TestCompactionPolicy.compactEquals(TestCompactionPolicy.java:204)
>     at org.apache.hadoop.hbase.regionserver.TestCompactionPolicy.compactEquals(TestCompactionPolicy.java:185)
>     at org.apache.hadoop.hbase.regionserver.TestDefaultCompactSelection.testCompactionRatio(TestDefaultCompactSelection.java:95)
> https://builds.apache.org/view/All/job/HBase-1.3-JDK8/org.apache.hbase$hbase-server/48/testReport/junit/org.apache.hadoop.hbase.regionserver/TestDefaultCompactSelection/testCompactionRatio/
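For context on the assertion: testCompactionRatio exercises the default ratio-based selection, which, roughly, excludes an older file while it is larger than the ratio times the total size of the newer files. The following is a simplified sketch of that heuristic, not the actual RatioBasedCompactionPolicy code, with the default ratio of 1.2 assumed: with sizes [50, 25, 12, 12] nothing is excluded, matching the expected value in the failure above.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of ratio-based compaction selection: walking from
// the oldest file, drop it while its size exceeds ratio * (sum of the
// newer files). Not the real HBase policy code; illustration only.
class RatioSelect {
    static List<Long> select(List<Long> sizes, double ratio) {
        int start = 0;
        while (start < sizes.size() - 1) {
            long sumNewer = 0;
            for (int i = start + 1; i < sizes.size(); i++) {
                sumNewer += sizes.get(i);
            }
            if (sizes.get(start) > ratio * sumNewer) {
                start++; // oldest file too large relative to the rest; exclude it
            } else {
                break;
            }
        }
        return new ArrayList<>(sizes.subList(start, sizes.size()));
    }
}
```

For [50, 25, 12, 12]: 50 is not greater than 1.2 * 49, so all four files are selected.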
[jira] [Commented] (HBASE-16962) Add readPoint to preCompactScannerOpen() and preFlushScannerOpen() API
[ https://issues.apache.org/jira/browse/HBASE-16962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652254#comment-15652254 ] Hadoop QA commented on HBASE-16962: ---

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 12s | Docker mode activated. |
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| +1 | mvninstall | 3m 0s | master passed |
| +1 | compile | 0m 35s | master passed |
| +1 | checkstyle | 0m 44s | master passed |
| +1 | mvneclipse | 0m 13s | master passed |
| +1 | findbugs | 1m 41s | master passed |
| +1 | javadoc | 0m 25s | master passed |
| +1 | mvninstall | 0m 44s | the patch passed |
| +1 | compile | 0m 35s | the patch passed |
| +1 | javac | 0m 35s | the patch passed |
| +1 | checkstyle | 0m 44s | the patch passed |
| +1 | mvneclipse | 0m 13s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | hadoopcheck | 27m 13s | Patch does not cause any errors with Hadoop 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. |
| +1 | findbugs | 1m 56s | the patch passed |
| +1 | javadoc | 0m 29s | the patch passed |
| -1 | unit | 75m 57s | hbase-server in the patch failed. |
| +1 | asflicense | 0m 13s | The patch does not generate ASF License warnings. |
| | | 115m 12s | |

|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.hbase.constraint.TestConstraint |
| | org.apache.hadoop.hbase.TestNamespace |
| | org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelsWithDeletes |

|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:7bda515 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838223/HBASE-16956.master.006.patch |
| JIRA Issue | HBASE-16962 |
| Optional Tests | asflicense javac javadoc unit findbugs hadoopcheck hbaseanti checkstyle compile |
| uname | Linux d7fdda8ebac1 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh |
| git revision | master / 287358b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | https://builds.apache.org/job/PreCommit-HBASE-Build/4406/artifact/patchprocess/patch-unit-hbase-server.txt |
| unit test logs | https://builds.apache.org/job/PreCommit-HBASE-Build/4406/artifact/patchprocess/patch-unit-hbase-server.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/4406/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output |
[jira] [Commented] (HBASE-16852) TestDefaultCompactSelection failed on branch-1.3
[ https://issues.apache.org/jira/browse/HBASE-16852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652252#comment-15652252 ] Mikhail Antonov commented on HBASE-16852: - This breaks often enough to make the 1.3 build unstable; it breaks on my laptop more often than on ASF Jenkins. Bringing it to 1.3.0 as a blocker. Looking.
[jira] [Commented] (HBASE-16993) BucketCache throw java.io.IOException: Invalid HFile block magic when DATA_BLOCK_ENCODING set to DIFF
[ https://issues.apache.org/jira/browse/HBASE-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652250#comment-15652250 ] Hadoop QA commented on HBASE-16993: ---

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 18s | Docker mode activated. |
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| +1 | mvninstall | 3m 59s | master passed |
| +1 | compile | 0m 46s | master passed |
| +1 | checkstyle | 0m 53s | master passed |
| +1 | mvneclipse | 0m 15s | master passed |
| +1 | findbugs | 2m 3s | master passed |
| +1 | javadoc | 0m 31s | master passed |
| -1 | mvninstall | 0m 29s | hbase-server in the patch failed. |
| -1 | compile | 0m 28s | hbase-server in the patch failed. |
| -1 | javac | 0m 28s | hbase-server in the patch failed. |
| +1 | checkstyle | 0m 58s | the patch passed |
| +1 | mvneclipse | 0m 16s | the patch passed |
| -1 | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix. |
| -1 | hadoopcheck | 1m 17s | The patch causes 24 errors with Hadoop v2.6.1. |
| -1 | hadoopcheck | 2m 30s | The patch causes 24 errors with Hadoop v2.6.2. |
| -1 | hadoopcheck | 3m 43s | The patch causes 24 errors with Hadoop v2.6.3. |
| -1 | hadoopcheck | 4m 59s | The patch causes 24 errors with Hadoop v2.6.4. |
| -1 | hadoopcheck | 6m 11s | The patch causes 24 errors with Hadoop v2.6.5. |
| -1 | hadoopcheck | 7m 25s | The patch causes 24 errors with Hadoop v2.7.1. |
| -1 | hadoopcheck | 8m 42s | The patch causes 24 errors with Hadoop v2.7.2. |
| -1 | hadoopcheck | 9m 54s | The patch causes 24 errors with Hadoop v2.7.3. |
| -1 | hadoopcheck | 11m 5s | The patch causes 24 errors with Hadoop v3.0.0-alpha1. |
| -1 | findbugs | 0m 27s | hbase-server in the patch failed. |
| -1 | javadoc | 0m 33s | hbase-server generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) |
| -1 | unit | 0m 26s | hbase-server in the patch failed. |
| +1 | asflicense | 0m 8s | The patch does not generate ASF License warnings. |
| | | 23m 53s | |

|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:7bda515 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12838233/HBASE-16993.master.001.patch |
| JIRA
[jira] [Updated] (HBASE-16993) BucketCache throw java.io.IOException: Invalid HFile block magic when DATA_BLOCK_ENCODING set to DIFF
[ https://issues.apache.org/jira/browse/HBASE-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-16993: -- Release Note:
Make it so bucket sizes no longer have to be an exact multiple of 256 (a side effect is that we can now support caches > 256TB -- smile). This is an incompatible change, as the bucket entry format has changed, which means we cannot read a persisted cache written in the old format. On restart, if present, the old persisted cache will be removed and we continue with startup; i.e. the cache will not be populated after startup. (This behavior is 'standard' when we are unable to find or read the persisted file -- so no change here.) The persisted file works in releases 1.2.4 and 1.1.7. Previous to HBASE-16460, persisted-file operation didn't work.

was:
Make it so bucket sizes no longer have to be an exact multiple of 256 (a side effect is that we can now support caches > 256TB -- smile). This is an incompatible change, as the bucket entry format has changed, which means we cannot read a persisted cache written in the old format. On restart, if present, the old persisted cache will be removed and we continue with startup; i.e. the cache will not be populated after startup. (This behavior is 'standard' when we are unable to find or read the persisted file -- so no change here.) The persisted file works in releases 1.2.4 and 1.1.7. Previous to this
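To illustrate the constraint the release note removes: bucket sizes previously had to be exact multiples of 256, so a configured size would need rounding up to that alignment. The arithmetic below is a sketch for illustration, not code from the patch; e.g. the 46000 entry from the reporter's hbase.bucketcache.bucket.sizes would align up to 46080.

```java
// Sketch of 256-byte alignment: the old BucketCache required bucket
// sizes to be exact multiples of 256. Rounding a size up to that
// alignment looks like this. Illustrative only, not the patch's code.
class BucketAlign {
    static int roundUpTo256(int size) {
        // add 255, then clear the low 8 bits
        return (size + 255) & ~255;
    }
}
```

After this change, arbitrary sizes such as 46000 can be used directly, at the cost of the incompatible bucket entry format described above.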
[jira] [Updated] (HBASE-16993) BucketCache throw java.io.IOException: Invalid HFile block magic when DATA_BLOCK_ENCODING set to DIFF
[ https://issues.apache.org/jira/browse/HBASE-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-16993: -- Release Note:
Make it so bucket sizes no longer have to be an exact multiple of 256 (a side effect is that we can now support caches > 256TB -- smile). This is an incompatible change, as the bucket entry format has changed, which means we cannot read a persisted cache written in the old format. On restart, if present, the old persisted cache will be removed and we continue with startup; i.e. the cache will not be populated after startup. (This behavior is 'standard' when we are unable to find or read the persisted file -- so no change here.) The persisted file works in releases 1.2.4 and 1.1.7. Previous to this

was:
Make it so bucket sizes no longer have to be an exact multiple of 256 (a side effect is that we can now support caches > 256TB -- smile). This is an incompatible change, as the bucket entry format has changed, which means we cannot read a persisted cache written in the old format. On restart, if present, we remove the old persisted file and continue with startup; i.e. the cache will be unpopulated after startup.
[jira] [Updated] (HBASE-16993) BucketCache throw java.io.IOException: Invalid HFile block magic when DATA_BLOCK_ENCODING set to DIFF
[ https://issues.apache.org/jira/browse/HBASE-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-16993: -- Attachment: HBASE-16993.master.001.patch > BucketCache throw java.io.IOException: Invalid HFile block magic when > DATA_BLOCK_ENCODING set to DIFF > - > > Key: HBASE-16993 > URL: https://issues.apache.org/jira/browse/HBASE-16993 > Project: HBase > Issue Type: Bug > Components: BucketCache, io >Affects Versions: 1.1.3 > Environment: hbase version 1.1.3 >Reporter: liubangchen >Assignee: liubangchen > Fix For: 2.0.0 > > Attachments: HBASE-16993.000.patch, HBASE-16993.001.patch, > HBASE-16993.master.001.patch > > Original Estimate: 336h > Remaining Estimate: 336h > > hbase-site.xml setting > > hbase.bucketcache.bucket.sizes > 16384,32768,40960, > 46000,49152,51200,65536,131072,524288 > > > hbase.bucketcache.size > 16384 > > > hbase.bucketcache.ioengine > offheap > > > hfile.block.cache.size > 0.3 > > > hfile.block.bloom.cacheonwrite > true > > > hbase.rs.cacheblocksonwrite > true > > > hfile.block.index.cacheonwrite > true > n_splits = 200 > create 'usertable',{NAME =>'family', COMPRESSION => 'snappy', VERSIONS => > 1,DATA_BLOCK_ENCODING => 'DIFF',CONFIGURATION => > {'hbase.hregion.memstore.block.multiplier' => 5}},{DURABILITY => > 'SKIP_WAL'},{SPLITS => (1..n_splits).map {|i| > "user#{1000+i*(-1000)/n_splits}"}} > load data > bin/ycsb load hbase10 -P workloads/workloada -p table=usertable -p > columnfamily=family -p fieldcount=10 -p fieldlength=100 -p > recordcount=2 -p insertorder=hashed -p insertstart=0 -p > clientbuffering=true -p durability=SKIP_WAL -threads 20 -s > run > bin/ycsb run hbase10 -P workloads/workloadb -p table=usertable -p > columnfamily=family -p fieldcount=10 -p fieldlength=100 -p > operationcount=2000 -p readallfields=true -p clientbuffering=true -p > requestdistribution=zipfian -threads 10 -s > log info > 2016-11-02 20:20:20,261 ERROR > [RW.default.readRpcServer.handler=36,queue=21,port=6020] 
bucket.BucketCache: > Failed reading block fdcc7ed6f3b2498b9ef316cc8206c233_44819759 from bucket > cache > java.io.IOException: Invalid HFile block magic: > \x00\x00\x00\x00\x00\x00\x00\x00 > at > org.apache.hadoop.hbase.io.hfile.BlockType.parse(BlockType.java:154) > at org.apache.hadoop.hbase.io.hfile.BlockType.read(BlockType.java:167) > at > org.apache.hadoop.hbase.io.hfile.HFileBlock.<init>(HFileBlock.java:273) > at > org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:134) > at > org.apache.hadoop.hbase.io.hfile.HFileBlock$1.deserialize(HFileBlock.java:121) > at > org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getBlock(BucketCache.java:427) > at > org.apache.hadoop.hbase.io.hfile.CombinedBlockCache.getBlock(CombinedBlockCache.java:85) > at > org.apache.hadoop.hbase.io.hfile.HFileReaderV2.getCachedBlock(HFileReaderV2.java:266) > at > org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:403) > at > org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:269) > at > org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:634) > at > org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:584) > at > org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:247) > at > org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:156) > at > org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:363) > at > org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:217) > at > org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2071) > at > org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5369) > at > org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2546) > at > org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2532) 
> at > org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2514) > at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6558) > at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:6537) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:1935) > at >
[jira] [Comment Edited] (HBASE-16498) NPE when Scan's stopRow is set NULL
[ https://issues.apache.org/jira/browse/HBASE-16498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652078#comment-15652078 ] Ashu Pachauri edited comment on HBASE-16498 at 11/9/16 9:26 PM: As pointed out in HBASE-17031, this happens also when setting startRow to null, and we fail there way down in the call stack which is not an easily debuggable scenario. Adding to the javadoc does not give us anything, because almost no one sets start/stop row to null deliberately, it mostly happens during an automated process. Shouldn't we just do a null check whenever we construct a new scanner? was (Author: ashu210890): As pointed out in HBASE-17031, this happens also when setting startRow to null, and we fail there way down in the call stack which is not an easily debuggable scenario. Adding to the javadoc does not given anything, because almost no one sets start/stop row to null deliberately, it mostly happens during an automated process. Shouldn't we just do a null check whenever we construct a new scanner? > NPE when Scan's stopRow is set NULL > --- > > Key: HBASE-16498 > URL: https://issues.apache.org/jira/browse/HBASE-16498 > Project: HBase > Issue Type: Bug > Components: Client >Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.6, 1.2.3 >Reporter: Pankaj Kumar >Assignee: Pankaj Kumar >Priority: Minor > Attachments: HBASE-16498-V2.patch, HBASE-16498-V3.patch, > HBASE-16498.patch > > > During scan operation we validate whether this is the last region of table, > if not then records will be retrieved from nextscanner. If stop row is set > null then NPE will be thrown while validating stop row with region endkey. 
> {noformat} > java.lang.NullPointerException > at > org.apache.hadoop.hbase.client.ClientScanner.checkScanStopRow(ClientScanner.java:217) > at > org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:266) > at > org.apache.hadoop.hbase.client.ClientScanner.possiblyNextScanner(ClientScanner.java:237) > at > org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:537) > at > org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:363) > at > org.apache.hadoop.hbase.client.ClientSimpleScanner.next(ClientSimpleScanner.java:50) > at > org.apache.hadoop.hbase.client.AbstractClientScanner.next(AbstractClientScanner.java:70) > at > org.apache.hadoop.hbase.client.TestAdmin2.testScanWithSplitKeysAndNullStartEndRow(TestAdmin2.java:803) > {noformat} > We should return empty byte array when start/end row is set NULL. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
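The fix direction proposed above (return an empty byte array when a null start/stop row is supplied, instead of failing deep inside `Bytes.compareTo`) can be sketched as a small normalization step. This is an illustrative sketch, not the committed patch; `EMPTY_BYTE_ARRAY` stands in for HBase's `HConstants.EMPTY_START_ROW` / `EMPTY_END_ROW`, and the method names are hypothetical.

```java
// Illustrative sketch of null-row normalization at scanner-construction time.
// An empty byte array is HBase's convention for "unbounded" start/stop rows,
// so mapping null -> empty turns the NPE into the normal full-scan behavior.
public class ScanRows {
  static final byte[] EMPTY_BYTE_ARRAY = new byte[0];

  // applied once when the scan/scanner is built, before any comparisons
  static byte[] normalizeRow(byte[] row) {
    return row == null ? EMPTY_BYTE_ARRAY : row;
  }

  // a null (or empty) stop row then simply means "scan to end of table"
  static boolean isStopRowUnset(byte[] stopRow) {
    return normalizeRow(stopRow).length == 0;
  }
}
```

Doing this once in the constructor, as the comment suggests, keeps the check out of the per-row comparison path.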
[jira] [Commented] (HBASE-16498) NPE when Scan's stopRow is set NULL
[ https://issues.apache.org/jira/browse/HBASE-16498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652078#comment-15652078 ] Ashu Pachauri commented on HBASE-16498: --- As pointed out in HBASE-17031, this happens also when setting startRow to null, and we fail there way down in the call stack which is not an easily debuggable scenario. Adding to the javadoc does not give us anything, because almost no one sets start/stop row to null deliberately, it mostly happens during an automated process. Shouldn't we just do a null check whenever we construct a new scanner? > NPE when Scan's stopRow is set NULL > --- > > Key: HBASE-16498 > URL: https://issues.apache.org/jira/browse/HBASE-16498 > Project: HBase > Issue Type: Bug > Components: Client >Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.6, 1.2.3 >Reporter: Pankaj Kumar >Assignee: Pankaj Kumar >Priority: Minor > Attachments: HBASE-16498-V2.patch, HBASE-16498-V3.patch, > HBASE-16498.patch > > > During scan operation we validate whether this is the last region of table, > if not then records will be retrieved from nextscanner. If stop row is set > null then NPE will be thrown while validating stop row with region endkey. 
> {noformat} > java.lang.NullPointerException > at > org.apache.hadoop.hbase.client.ClientScanner.checkScanStopRow(ClientScanner.java:217) > at > org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:266) > at > org.apache.hadoop.hbase.client.ClientScanner.possiblyNextScanner(ClientScanner.java:237) > at > org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:537) > at > org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:363) > at > org.apache.hadoop.hbase.client.ClientSimpleScanner.next(ClientSimpleScanner.java:50) > at > org.apache.hadoop.hbase.client.AbstractClientScanner.next(AbstractClientScanner.java:70) > at > org.apache.hadoop.hbase.client.TestAdmin2.testScanWithSplitKeysAndNullStartEndRow(TestAdmin2.java:803) > {noformat} > We should return empty byte array when start/end row is set NULL. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HBASE-17031) Scanners should check for null start and end rows
[ https://issues.apache.org/jira/browse/HBASE-17031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashu Pachauri resolved HBASE-17031. --- Resolution: Duplicate > Scanners should check for null start and end rows > - > > Key: HBASE-17031 > URL: https://issues.apache.org/jira/browse/HBASE-17031 > Project: HBase > Issue Type: Bug > Components: Scanners >Reporter: Ashu Pachauri >Priority: Minor > > If a scan is passed with a null start row, it fails very deep in the call > stack. We should validate start and end rows for not null before launching > the scan. > Here is the associated jstack: > {code} > java.lang.RuntimeException: java.lang.NullPointerException > at > org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:219) > at > org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:326) > at > org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:301) > at > org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:166) > at > org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:161) > at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:798) > Caused by: java.lang.NullPointerException > at org.apache.hadoop.hbase.util.Bytes.compareTo(Bytes.java:1225) > at > org.apache.hadoop.hbase.util.Bytes$ByteArrayComparator.compare(Bytes.java:158) > at > org.apache.hadoop.hbase.util.Bytes$ByteArrayComparator.compare(Bytes.java:147) > at > org.apache.hadoop.hbase.types.CopyOnWriteArrayMap$ArrayHolder.find(CopyOnWriteArrayMap.java:892) > at > org.apache.hadoop.hbase.types.CopyOnWriteArrayMap.floorEntry(CopyOnWriteArrayMap.java:169) > at > org.apache.hadoop.hbase.client.MetaCache.getCachedLocation(MetaCache.java:79) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getCachedLocation(ConnectionManager.java:1391) > at > 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1231) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1183) > at > org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:305) > at > org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156) > at > org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60) > at > org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:211) > ... 30 more > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HBASE-17031) Scanners should check for null start and end rows
[ https://issues.apache.org/jira/browse/HBASE-17031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15652044#comment-15652044 ] Ashu Pachauri commented on HBASE-17031: --- [~ashish singhi] Thanks for pointing it out. I'll mark it as a duplicate and put my comments on HBASE-16498. > Scanners should check for null start and end rows > - > > Key: HBASE-17031 > URL: https://issues.apache.org/jira/browse/HBASE-17031 > Project: HBase > Issue Type: Bug > Components: Scanners >Reporter: Ashu Pachauri >Priority: Minor > > If a scan is passed with a null start row, it fails very deep in the call > stack. We should validate start and end rows for not null before launching > the scan. > Here is the associated jstack: > {code} > java.lang.RuntimeException: java.lang.NullPointerException > at > org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:219) > at > org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:326) > at > org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:301) > at > org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:166) > at > org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:161) > at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:798) > Caused by: java.lang.NullPointerException > at org.apache.hadoop.hbase.util.Bytes.compareTo(Bytes.java:1225) > at > org.apache.hadoop.hbase.util.Bytes$ByteArrayComparator.compare(Bytes.java:158) > at > org.apache.hadoop.hbase.util.Bytes$ByteArrayComparator.compare(Bytes.java:147) > at > org.apache.hadoop.hbase.types.CopyOnWriteArrayMap$ArrayHolder.find(CopyOnWriteArrayMap.java:892) > at > org.apache.hadoop.hbase.types.CopyOnWriteArrayMap.floorEntry(CopyOnWriteArrayMap.java:169) > at > org.apache.hadoop.hbase.client.MetaCache.getCachedLocation(MetaCache.java:79) > at > 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getCachedLocation(ConnectionManager.java:1391) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1231) > at > org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1183) > at > org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:305) > at > org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156) > at > org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60) > at > org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:211) > ... 30 more > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16524) Procedure v2 - Compute WALs cleanup on wal modification and not on every sync
[ https://issues.apache.org/jira/browse/HBASE-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matteo Bertozzi updated HBASE-16524: Description: Fix performance regression introduced by HBASE-16094. Instead of scanning all the wals every time, we can rely on the insert/update/delete events we have. and since we want to delete the wals in order we can keep track of what is "holding" that wal, and take a hit on scanning all the trackers only when we remove the first log in the queue. e.g. WAL-1 [1, 2] WAL-2 [1] -> "[2] is holding WAL-1" WAL-3 [2] -> "WAL 1 can be removed, recompute what is holding WAL-2" was:Fix performance regression introduced by HBASE-16094. > Procedure v2 - Compute WALs cleanup on wal modification and not on every sync > - > > Key: HBASE-16524 > URL: https://issues.apache.org/jira/browse/HBASE-16524 > Project: HBase > Issue Type: Sub-task > Components: proc-v2 >Affects Versions: 2.0.0 >Reporter: Appy >Assignee: Matteo Bertozzi >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-16524-v2.patch, HBASE-16524.master.001.patch, > flame1.svg > > > Fix performance regression introduced by HBASE-16094. > Instead of scanning all the wals every time, we can rely on the > insert/update/delete events we have. > and since we want to delete the wals in order we can keep track of what is > "holding" that wal, and take a hit on scanning all the trackers only when we > remove the first log in the queue. > e.g. > WAL-1 [1, 2] > WAL-2 [1] -> "[2] is holding WAL-1" > WAL-3 [2] -> "WAL 1 can be removed, recompute what is holding WAL-2" -- This message was sent by Atlassian JIRA (v6.3.4#6332)
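The holder-tracking idea in the description above (WAL-1 [1, 2]; WAL-2 [1] means "[2] is holding WAL-1"; WAL-3 [2] means "WAL-1 can be removed") can be sketched as follows. This is an illustrative model with assumed names, not the actual ProcedureStore code: each procedure "holds" the WAL containing its most recent update, and only the removability check on the head of the queue scans the trackers, rather than scanning all WALs on every sync.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch (hypothetical names) of cleanup driven by
// insert/update events instead of a full rescan on every sync.
public class WalTracker {
  private final Deque<Integer> walQueue = new ArrayDeque<>(); // oldest first
  private final Map<Long, Integer> latestWalOfProc = new HashMap<>();
  private int nextWalId = 1;

  public int rollWal() { walQueue.addLast(nextWalId); return nextWalId++; }

  // Called on insert/update/delete: the proc's latest state now lives in curWal.
  public void recordUpdate(long procId, int curWal) {
    latestWalOfProc.put(procId, curWal);
  }

  // Pop head WALs that no procedure holds; returns how many became removable.
  // Only this path inspects the trackers, and only when the head may be free.
  public int cleanableWals() {
    int removed = 0;
    while (!walQueue.isEmpty()
        && !latestWalOfProc.containsValue(walQueue.peekFirst())) {
      walQueue.pollFirst();
      removed++;
    }
    return removed;
  }
}
```

Replaying the description's example: after WAL-2 records proc 1, proc 2 still holds WAL-1, so nothing is removable; once WAL-3 records proc 2, WAL-1 is free and the holder of WAL-2 is recomputed on the next head check.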
[jira] [Updated] (HBASE-16524) Procedure v2 - Compute WALs cleanup on wal modification and not on every sync
[ https://issues.apache.org/jira/browse/HBASE-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matteo Bertozzi updated HBASE-16524: Description: Fix performance regression introduced by HBASE-16094. > Procedure v2 - Compute WALs cleanup on wal modification and not on every sync > - > > Key: HBASE-16524 > URL: https://issues.apache.org/jira/browse/HBASE-16524 > Project: HBase > Issue Type: Sub-task > Components: proc-v2 >Affects Versions: 2.0.0 >Reporter: Appy >Assignee: Matteo Bertozzi >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-16524-v2.patch, HBASE-16524.master.001.patch, > flame1.svg > > > Fix performance regression introduced by HBASE-16094. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16962) Add readPoint to preCompactScannerOpen() and preFlushScannerOpen() API
[ https://issues.apache.org/jira/browse/HBASE-16962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thiruvel Thirumoolan updated HBASE-16962: - Status: Patch Available (was: Open) Rebased after HBASE-17054 was merged and uploaded branch-1 and master patches. Also updated the reviewboard links for both branches. > Add readPoint to preCompactScannerOpen() and preFlushScannerOpen() API > -- > > Key: HBASE-16962 > URL: https://issues.apache.org/jira/browse/HBASE-16962 > Project: HBase > Issue Type: Bug >Reporter: Thiruvel Thirumoolan >Assignee: Thiruvel Thirumoolan > Fix For: 2.0.0, 1.4.0 > > Attachments: HBASE-16956.branch-1.001.patch, > HBASE-16956.master.006.patch, HBASE-16962.master.001.patch, > HBASE-16962.master.002.patch, HBASE-16962.master.003.patch, > HBASE-16962.master.004.patch, HBASE-16962.rough.patch > > > Similar to HBASE-15759, I would like to add readPoint to the > preCompactScannerOpen() API. > I have a CP where I create a StoreScanner() as part of the > preCompactScannerOpen() API. I need the readpoint which was obtained in > Compactor.compact() method to be consistent. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
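The shape of the API change requested above can be sketched as a signature-only illustration. The interface and parameter names below are hypothetical simplifications, not the committed HBase coprocessor API: the point is that the readPoint obtained in `Compactor.compact()` is forwarded into the hook, so a coprocessor that constructs its own `StoreScanner` there sees the same, consistent read point instead of acquiring a fresh one.

```java
// Hypothetical signature sketch only (types simplified to String/long/Object);
// models passing the compaction's readPoint through to the CP hook.
public interface CompactHookSketch {
  // readPoint comes from the compaction itself, not re-read inside the hook
  Object preCompactScannerOpen(String storeName, long readPoint);
}
```

A coprocessor implementation would then build its scanner from the supplied readPoint rather than querying the region for a new one mid-compaction.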
[jira] [Updated] (HBASE-16956) Refactor FavoredNodePlan to use regionNames as keys
[ https://issues.apache.org/jira/browse/HBASE-16956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thiruvel Thirumoolan updated HBASE-16956: - Attachment: HBASE-16956.master.006.patch > Refactor FavoredNodePlan to use regionNames as keys > --- > > Key: HBASE-16956 > URL: https://issues.apache.org/jira/browse/HBASE-16956 > Project: HBase > Issue Type: Sub-task >Reporter: Thiruvel Thirumoolan >Assignee: Thiruvel Thirumoolan >Priority: Minor > Fix For: 2.0.0 > > Attachments: HBASE-16956.branch-1.001.patch, > HBASE-16956.master.001.patch, HBASE-16956.master.002.patch, > HBASE-16956.master.003.patch, HBASE-16956.master.004.patch, > HBASE-16956.master.005.patch, HBASE-16956.master.006.patch > > > We would like to rely on the FNPlan cache whether a region is offline or not. > Sticking to regionNames as keys makes that possible. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16962) Add readPoint to preCompactScannerOpen() and preFlushScannerOpen() API
[ https://issues.apache.org/jira/browse/HBASE-16962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thiruvel Thirumoolan updated HBASE-16962: - Attachment: HBASE-16956.master.006.patch > Add readPoint to preCompactScannerOpen() and preFlushScannerOpen() API > -- > > Key: HBASE-16962 > URL: https://issues.apache.org/jira/browse/HBASE-16962 > Project: HBase > Issue Type: Bug >Reporter: Thiruvel Thirumoolan >Assignee: Thiruvel Thirumoolan > Fix For: 2.0.0, 1.4.0 > > Attachments: HBASE-16956.branch-1.001.patch, > HBASE-16956.master.006.patch, HBASE-16962.master.001.patch, > HBASE-16962.master.002.patch, HBASE-16962.master.003.patch, > HBASE-16962.master.004.patch, HBASE-16962.rough.patch > > > Similar to HBASE-15759, I would like to add readPoint to the > preCompactScannerOpen() API. > I have a CP where I create a StoreScanner() as part of the > preCompactScannerOpen() API. I need the readpoint which was obtained in > Compactor.compact() method to be consistent. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-16962) Add readPoint to preCompactScannerOpen() and preFlushScannerOpen() API
[ https://issues.apache.org/jira/browse/HBASE-16962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thiruvel Thirumoolan updated HBASE-16962: - Attachment: HBASE-16956.branch-1.001.patch > Add readPoint to preCompactScannerOpen() and preFlushScannerOpen() API > -- > > Key: HBASE-16962 > URL: https://issues.apache.org/jira/browse/HBASE-16962 > Project: HBase > Issue Type: Bug >Reporter: Thiruvel Thirumoolan >Assignee: Thiruvel Thirumoolan > Fix For: 2.0.0, 1.4.0 > > Attachments: HBASE-16956.branch-1.001.patch, > HBASE-16962.master.001.patch, HBASE-16962.master.002.patch, > HBASE-16962.master.003.patch, HBASE-16962.master.004.patch, > HBASE-16962.rough.patch > > > Similar to HBASE-15759, I would like to add readPoint to the > preCompactScannerOpen() API. > I have a CP where I create a StoreScanner() as part of the > preCompactScannerOpen() API. I need the readpoint which was obtained in > Compactor.compact() method to be consistent. -- This message was sent by Atlassian JIRA (v6.3.4#6332)