[jira] [Commented] (HBASE-19731) TestFromClientSide#testCheckAndDeleteWithCompareOp and testNullQualifier are flakey
[ https://issues.apache.org/jira/browse/HBASE-19731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315817#comment-16315817 ] Duo Zhang commented on HBASE-19731: ---
[~stack] I think the problem is that we assigned the same timestamp twice. I added a static tsAssigned field in HRegion
{code}
public static volatile List<Long> tsAssigned;
{code}
And in MutationBatchOperation.prepareMiniBatchOperations, I did this
{code}
if (!region.getRegionInfo().isMetaRegion() && HRegion.tsAssigned != null) {
  HRegion.tsAssigned.add(timestamp);
}
{code}
And I also modified the UT
{code}
@Test
public void test() throws IOException {
  try {
    for (int i = 0; i < 100; i++) {
      testCheckAndDeleteWithCompareOp();
      TEST_UTIL.deleteTable(TableName.valueOf(name.getMethodName()));
      HRegion.tsAssigned = null;
    }
  } catch (AssertionError e) {
    HRegion.tsAssigned.forEach(System.out::println);
    throw e;
  }
}
{code}
Note that HRegion.tsAssigned is created in testCheckAndDeleteWithCompareOp after the creation of the test table. And finally I got this output
{noformat}
1515397552529
1515397552533
1515397552535
1515397552537
1515397552539
1515397552541
1515397552543
1515397552546
1515397552547
1515397552548
1515397552549
1515397552550
1515397552551
1515397552554
1515397552555
1515397552556
1515397552556
{noformat}
You can see that the test fails immediately after we issue the same ts again. This means we are doing mutations faster in beta-1, so it is easier to run into this situation? Maybe that is good news...
> TestFromClientSide#testCheckAndDeleteWithCompareOp and testNullQualifier are
> flakey
> ---
>
> Key: HBASE-19731
> URL: https://issues.apache.org/jira/browse/HBASE-19731
> Project: HBase
> Issue Type: Sub-task
> Components: test
> Reporter: stack
> Assignee: stack
> Priority: Critical
> Fix For: 2.0.0-beta-2
>
>
> These two tests fail frequently locally; rarely does this suite pass.
> The failures are either of these two tests.
> Unfortunately, running the test standalone does not bring on the issue; need to run the whole suite.
> In both cases, we have a Delete followed by a Put and then a checkAnd*-type operation which does a Get expecting to find the just-put Put, but it fails on occasion.
> Looks to be an mvcc issue or the Put going in at the same timestamp as the Delete.
> It's hard to debug, given that any added logging seems to make it all pass again.
> Seems this too is new in beta-1. Running tests against alpha-4 seems to pass.
> Doing a compare
--
This message was sent by Atlassian JIRA (v6.4.14#64029)
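The collision described above is easy to see without an HBase cluster at all: System.currentTimeMillis() has millisecond resolution, so back-to-back mutations can draw the same timestamp, and a Put that lands at the same ts as a preceding Delete is masked by it. A standalone sketch of just the clock behavior (plain Java, nothing HBase-specific assumed):

```java
public class TimestampCollision {
    public static void main(String[] args) {
        long duplicates = 0;
        long prev = System.currentTimeMillis();
        // Draw 100_000 timestamps back to back, the way consecutive
        // mutations against the same region would.
        for (int i = 0; i < 100_000; i++) {
            long ts = System.currentTimeMillis();
            if (ts == prev) {
                duplicates++; // the same millisecond was handed out twice
            }
            prev = ts;
        }
        // With ms resolution and ns-scale loop iterations, duplicates are
        // effectively guaranteed, which is the failure mode in the comment.
        System.out.println("duplicate timestamps: " + duplicates);
    }
}
```

The faster the mutation path, the more likely two mutations fall into the same millisecond, which matches the observation that beta-1's faster write path trips this more often.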
[jira] [Updated] (HBASE-19139) Create Async Admin methods for Clear Block Cache
[ https://issues.apache.org/jira/browse/HBASE-19139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guanghao Zhang updated HBASE-19139: --- Status: Patch Available (was: Open) > Create Async Admin methods for Clear Block Cache > > > Key: HBASE-19139 > URL: https://issues.apache.org/jira/browse/HBASE-19139 > Project: HBase > Issue Type: Improvement > Components: Admin >Reporter: Zach York >Assignee: Guanghao Zhang > Attachments: HBASE-19139.master.001.patch > > > As part of the review for HBASE-18624, reviewers suggested adding the > clear_block_cache to the AsyncAdmin as well. Since the issue was very large, > we decided to split this into a follow-up JIRA. The purpose of this JIRA will > be to finish the work on the AsyncAdmin. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
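For readers following along: the async counterpart being requested would presumably mirror the blocking Admin#clearBlockCache from HBASE-18624, returning a CompletableFuture instead of blocking. A hypothetical, self-contained sketch of that shape (SketchAsyncAdmin and CacheEvictionStatsSketch are stand-in names, not the real HBase API; the actual signature is decided by the attached patch):

```java
import java.util.concurrent.CompletableFuture;

// Stand-in for HBase's eviction-stats result type (assumed name).
class CacheEvictionStatsSketch {
    final long evictedBlocks;
    CacheEvictionStatsSketch(long evictedBlocks) { this.evictedBlocks = evictedBlocks; }
}

// Hypothetical async shape: same operation as the blocking Admin call,
// but the result arrives via a CompletableFuture.
interface SketchAsyncAdmin {
    CompletableFuture<CacheEvictionStatsSketch> clearBlockCache(String tableName);
}

public class AsyncAdminDemo {
    public static void main(String[] args) {
        // A stub implementation completing immediately, to show call-site usage.
        SketchAsyncAdmin admin =
            table -> CompletableFuture.completedFuture(new CacheEvictionStatsSketch(42));
        admin.clearBlockCache("t1")
             .thenAccept(stats -> System.out.println("evicted=" + stats.evictedBlocks))
             .join();
    }
}
```

The caller composes on the future (thenAccept, thenCombine, etc.) rather than blocking a thread, which is the point of the AsyncAdmin variant.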
[jira] [Commented] (HBASE-19696) Filter returning INCLUDE_AND_NEXT_COL doesn't skip remaining versions when scan has explicit columns
[ https://issues.apache.org/jira/browse/HBASE-19696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315800#comment-16315800 ] Hudson commented on HBASE-19696: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4361 (See [https://builds.apache.org/job/HBase-Trunk_matrix/4361/]) HBASE-19696 Filter returning INCLUDE_AND_NEXT_COL doesn't skip remaining (tedyu: rev 5a66eb978c7ab865dad70ce70690c0b6ca519d2a) * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFilter.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/querymatcher/ColumnTracker.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/querymatcher/UserScanQueryMatcher.java * (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/querymatcher/ExplicitColumnTracker.java > Filter returning INCLUDE_AND_NEXT_COL doesn't skip remaining versions when > scan has explicit columns > > > Key: HBASE-19696 > URL: https://issues.apache.org/jira/browse/HBASE-19696 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0-beta-1 >Reporter: Ankit Singhal >Assignee: Ankit Singhal >Priority: Critical > Fix For: 2.0.0-beta-1 > > Attachments: HBASE-19696.patch, HBASE-19696_v1.patch, > HBASE-19696_v2.patch > > > INCLUDE_AND_NEXT_COL from the filter doesn't skip the remaining versions of the cell > if the scan has explicit columns. > This is because we use the column hint from the column tracker to prepare a cell > for seeking to the next column, but we do not update the column tracker with the next > column when the filter returns INCLUDE_AND_NEXT_COL. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
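A toy model of the bug (plain Java, not the real query matcher): if the column tracker is never told the filter answered INCLUDE_AND_NEXT_COL, the remaining versions of the column leak through; advancing the tracker restores the one-version-per-column behavior the return code intends.

```java
import java.util.ArrayList;
import java.util.List;

public class NextColToy {
    enum ReturnCode { INCLUDE, INCLUDE_AND_NEXT_COL }

    // Toy scan: cells are "col/version" strings, newest version first.
    static List<String> scan(List<String> cells, boolean advanceTracker) {
        List<String> result = new ArrayList<>();
        String doneCol = null; // column the tracker says we are done with
        for (String cell : cells) {
            String col = cell.split("/")[0];
            if (col.equals(doneCol)) {
                continue; // tracker was advanced: skip remaining versions
            }
            // The filter asks for this cell and then the next column.
            ReturnCode rc = ReturnCode.INCLUDE_AND_NEXT_COL;
            result.add(cell);
            if (rc == ReturnCode.INCLUDE_AND_NEXT_COL && advanceTracker) {
                doneCol = col; // the fix: tell the tracker we are done with col
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> cells = List.of("a/3", "a/2", "a/1", "b/3", "b/2");
        // Buggy: tracker never advanced, every version is returned.
        System.out.println(scan(cells, false)); // [a/3, a/2, a/1, b/3, b/2]
        // Fixed: one version per column, as INCLUDE_AND_NEXT_COL intends.
        System.out.println(scan(cells, true));  // [a/3, b/3]
    }
}
```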
[jira] [Commented] (HBASE-19712) Fix TestSnapshotQuotaObserverChore#testSnapshotSize
[ https://issues.apache.org/jira/browse/HBASE-19712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315801#comment-16315801 ] Hudson commented on HBASE-19712: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4361 (See [https://builds.apache.org/job/HBase-Trunk_matrix/4361/]) HBASE-19712 Fix TestSnapshotQuotaObserverChore#testSnapshotSize (chia7712: rev 7378dad5a944d6660d20e4bca07fac52333a0ab0) * (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/quotas/TestSnapshotQuotaObserverChore.java > Fix TestSnapshotQuotaObserverChore#testSnapshotSize > --- > > Key: HBASE-19712 > URL: https://issues.apache.org/jira/browse/HBASE-19712 > Project: HBase > Issue Type: Bug > Components: test >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai > Fix For: 2.0.0-beta-2 > > Attachments: HBASE-19712.v0.patch, HBASE-19712.v1.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19674) make_patch.sh version increment fails
[ https://issues.apache.org/jira/browse/HBASE-19674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315799#comment-16315799 ] Hudson commented on HBASE-19674: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4361 (See [https://builds.apache.org/job/HBase-Trunk_matrix/4361/]) HBASE-19674: Improve make_patch.sh (jan.hentschel: rev 8ae2a2150b517123cde214bb5543557b562c4c01) * (edit) dev-support/make_patch.sh
> make_patch.sh version increment fails
> -
>
> Key: HBASE-19674
> URL: https://issues.apache.org/jira/browse/HBASE-19674
> Project: HBase
> Issue Type: Improvement
> Reporter: Niels Basjes
> Assignee: Niels Basjes
> Fix For: 3.0.0
>
> Attachments: HBASE-19674.20171230-131310.patch, HBASE-19674.20171230-152443.patch, HBASE-19674.20180103-160831.patch
>
>
> I have 5 things in the {{make_patch.sh}} script where I see room for improvement:
> 1) BUG: Assume my working branch is called {{HBASE-19673}}. Now if I run {{dev-support/make_patch.sh -b origin/branch-1}}, a patch is created with the name {{~/patches/HBASE-19673.v1.branch-1.patch}}. When I run the same command again, the version is not incremented. The reason is that the script checks for {{HBASE-19673.v1.patch}}, which is without the branch name.
> 2) Messy: The first patch created does NOT include the version tag at all.
> 3) Messy: The version starts with '1', so when we reach patch '10' the patches will be ordered incorrectly in Jira (which orders by name).
> 4) New feature: I personally prefer using the timestamp as the 'version' of the patch because these are much easier to order.
> 5) Messy: If you, for example, only have the one file {{HBASE-19674.v05.patch}}, then the next file generated will be {{HBASE-19674.v01.patch}} instead of the expected {{HBASE-19674.v06.patch}}.
--
This message was sent by Atlassian JIRA (v6.4.14#64029)
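Points 1), 3) and 5) above boil down to one rule: match against the branch-qualified file name, zero-pad the version, and take max+1 rather than counting files. A sketch of that rule (Java for illustration; make_patch.sh itself is bash, and the exact naming pattern below is assumed from the report):

```java
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NextPatchVersion {
    // Next version for e.g. HBASE-19673 against branch-1:
    // keep the branch suffix in the pattern (fix 1), take max+1 rather
    // than count+1 (fix 5), and zero-pad the version (fix 3).
    static String next(List<String> existing, String issue, String branch) {
        Pattern p = Pattern.compile(
            Pattern.quote(issue) + "\\.v(\\d+)\\." + Pattern.quote(branch) + "\\.patch");
        int max = 0;
        for (String name : existing) {
            Matcher m = p.matcher(name);
            if (m.matches()) {
                max = Math.max(max, Integer.parseInt(m.group(1)));
            }
        }
        return String.format("%s.v%02d.%s.patch", issue, max + 1, branch);
    }

    public static void main(String[] args) {
        List<String> dir = List.of("HBASE-19673.v01.branch-1.patch",
                                   "HBASE-19673.v05.branch-1.patch");
        // v05 exists, so the next patch is v06, not v03 (count) or v01.
        System.out.println(next(dir, "HBASE-19673", "branch-1"));
    }
}
```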
[jira] [Commented] (HBASE-19731) TestFromClientSide#testCheckAndDeleteWithCompareOp and testNullQualifier are flakey
[ https://issues.apache.org/jira/browse/HBASE-19731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315797#comment-16315797 ] Duo Zhang commented on HBASE-19731: ---
I wrote a method to loop testCheckAndDeleteWithCompareOp and can make the test fail.
{code}
@Test
public void test() throws IOException {
  for (int i = 0; i < 100; i++) {
    testCheckAndDeleteWithCompareOp();
    TEST_UTIL.deleteTable(TableName.valueOf(name.getMethodName()));
  }
}
{code}
Let me dig more.
> TestFromClientSide#testCheckAndDeleteWithCompareOp and testNullQualifier are
> flakey
> ---
>
> Key: HBASE-19731
> URL: https://issues.apache.org/jira/browse/HBASE-19731
> Project: HBase
> Issue Type: Sub-task
> Components: test
> Reporter: stack
> Assignee: stack
> Priority: Critical
> Fix For: 2.0.0-beta-2
>
>
> These two tests fail frequently locally; rarely does this suite pass.
> The failures are either of these two tests. Unfortunately, running the test standalone does not bring on the issue; need to run the whole suite.
> In both cases, we have a Delete followed by a Put and then a checkAnd*-type operation which does a Get expecting to find the just-put Put, but it fails on occasion.
> Looks to be an mvcc issue or the Put going in at the same timestamp as the Delete.
> It's hard to debug, given that any added logging seems to make it all pass again.
> Seems this too is new in beta-1. Running tests against alpha-4 seems to pass.
> Doing a compare
--
This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19397) Design procedures for ReplicationManager to notify peer change event from master
[ https://issues.apache.org/jira/browse/HBASE-19397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-19397: -- Attachment: HBASE-19397-master-v2.patch Rebased.
> Design procedures for ReplicationManager to notify peer change event from master
> -
>
> Key: HBASE-19397
> URL: https://issues.apache.org/jira/browse/HBASE-19397
> Project: HBase
> Issue Type: New Feature
> Components: proc-v2, Replication
> Reporter: Zheng Hu
> Assignee: Zheng Hu
> Attachments: HBASE-19397-branch-2.patch, HBASE-19397-master-v1.patch, HBASE-19397-master-v1.patch, HBASE-19397-master-v2.patch, HBASE-19397-master.patch
>
>
> After we store peer states / peer queue information in an hbase table, an RS can no longer track peer config changes by adding a watcher on a znode.
> So we need to design procedures for ReplicationManager to notify peer change events. The replication rpc interfaces which may be implemented by procedures are the following:
> {code}
> 1. addReplicationPeer
> 2. removeReplicationPeer
> 3. enableReplicationPeer
> 4. disableReplicationPeer
> 5. updateReplicationPeerConfig
> {code}
> BTW, our RS states will still be stored in zookeeper, so when an RS crashes, the tracker which triggers transferring the queues of the crashed RS will still be a Zookeeper tracker. We need NOT implement that by procedures.
> As we will release 2.0 in the next weeks, and HBASE-15867 cannot be resolved before the release, I'd prefer to create a new feature branch for HBASE-15867.
--
This message was sent by Atlassian JIRA (v6.4.14#64029)
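The push model described above can be sketched in a few lines: with peer state in an hbase table instead of zookeeper, the master must persist the change and then explicitly tell every region server to refresh. A toy sketch only (the real design uses proc-v2 with per-RS sub-procedures and retry, none of which is modeled here):

```java
import java.util.List;

public class PeerChangeSketch {
    // The five peer RPCs from the comment, each backed by one procedure.
    enum PeerOperation { ADD, REMOVE, ENABLE, DISABLE, UPDATE_CONFIG }

    // Toy master side: persist the change, then notify every region server,
    // since RSes can no longer watch a znode for it. Returns the number of
    // region servers notified.
    static int dispatch(PeerOperation op, String peerId, List<String> regionServers) {
        System.out.println("persist " + op + " for peer " + peerId + " in hbase table");
        int notified = 0;
        for (String rs : regionServers) {
            // In the real design this is a sub-procedure with retry per RS.
            System.out.println("refresh peer " + peerId + " on " + rs);
            notified++;
        }
        return notified;
    }

    public static void main(String[] args) {
        int n = dispatch(PeerOperation.DISABLE, "peer1", List.of("rs1", "rs2", "rs3"));
        System.out.println("notified " + n + " region servers");
    }
}
```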
[jira] [Commented] (HBASE-19729) UserScanQueryMatcher#mergeFilterResponse should return INCLUDE_AND_SEEK_NEXT_ROW when filterResponse is INCLUDE_AND_SEEK_NEXT_ROW
[ https://issues.apache.org/jira/browse/HBASE-19729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315768#comment-16315768 ] Guanghao Zhang commented on HBASE-19729: The javadoc I added is not clear... For all matchCode (INCLUDE) and filterResponse (INCLUDE*) combinations, we need to check versions again. > UserScanQueryMatcher#mergeFilterResponse should return > INCLUDE_AND_SEEK_NEXT_ROW when filterResponse is INCLUDE_AND_SEEK_NEXT_ROW > - > > Key: HBASE-19729 > URL: https://issues.apache.org/jira/browse/HBASE-19729 > Project: HBase > Issue Type: Bug >Reporter: Zheng Hu >Assignee: Zheng Hu > Attachments: HBASE-19729.v1.patch > > > As we've discussed in HBASE-19696 > https://issues.apache.org/jira/browse/HBASE-19696?focusedCommentId=16309644=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16309644 > when (filterResponse, matchCode) = (INCLUDE_AND_SEEK_NEXT_ROW, INCLUDE) or > (INCLUDE_AND_SEEK_NEXT_ROW, INCLUDE_AND_NEXT_COL) , we should return > INCLUDE_AND_SEEK_NEXT_ROW as the merged match code. > Will upload patches for all branches. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
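The merge rule being discussed can be modeled as "take the more restrictive seek hint of the two, then re-check versions". A toy sketch of just the hint-merging half (not the real UserScanQueryMatcher; version re-checking, which can upgrade the hint further, is deliberately left out):

```java
public class MergeFilterResponseSketch {
    // Ordered from least to most aggressive seek hint, so ordinal()
    // comparison picks the stronger hint.
    enum MatchCode { INCLUDE, INCLUDE_AND_NEXT_COL, INCLUDE_AND_SEEK_NEXT_ROW }

    // Both the matcher and the filter said "include"; keep the stronger
    // of the two seek hints as the merged match code.
    static MatchCode merge(MatchCode matchCode, MatchCode filterResponse) {
        return matchCode.ordinal() >= filterResponse.ordinal() ? matchCode : filterResponse;
    }

    public static void main(String[] args) {
        // The two cases from the issue: the filter wants SEEK_NEXT_ROW while
        // the matcher only said INCLUDE or INCLUDE_AND_NEXT_COL; the merged
        // code must be INCLUDE_AND_SEEK_NEXT_ROW in both.
        System.out.println(merge(MatchCode.INCLUDE,
                                 MatchCode.INCLUDE_AND_SEEK_NEXT_ROW));
        System.out.println(merge(MatchCode.INCLUDE_AND_NEXT_COL,
                                 MatchCode.INCLUDE_AND_SEEK_NEXT_ROW));
    }
}
```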
[jira] [Commented] (HBASE-19730) Backport HBASE-14497 Reverse Scan threw StackOverflow caused by readPt checking
[ https://issues.apache.org/jira/browse/HBASE-19730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315765#comment-16315765 ] Hadoop QA commented on HBASE-19730: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 3s{color} | {color:red} HBASE-19730 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/0.6.0/precommit-patchnames for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HBASE-19730 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12905017/19730-branch-1.2.patch | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/10925/console | | Powered by | Apache Yetus 0.6.0 http://yetus.apache.org | This message was automatically generated. > Backport HBASE-14497 Reverse Scan threw StackOverflow caused by readPt > checking > --- > > Key: HBASE-19730 > URL: https://issues.apache.org/jira/browse/HBASE-19730 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu > Fix For: 1.2.7 > > Attachments: 19730-branch-1.2.patch > > > HBASE-14497 fixed StackOverflowError involving reverse scan. > This issue is to backport the fix to branch-1.2 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19729) UserScanQueryMatcher#mergeFilterResponse should return INCLUDE_AND_SEEK_NEXT_ROW when filterResponse is INCLUDE_AND_SEEK_NEXT_ROW
[ https://issues.apache.org/jira/browse/HBASE-19729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315763#comment-16315763 ] Guanghao Zhang commented on HBASE-19729: bq. (filterResponse, matchCode) = (INCLUDE_AND_SEEK_NEXT_ROW, INCLUDE) For this case, we still need to check versions again. > UserScanQueryMatcher#mergeFilterResponse should return > INCLUDE_AND_SEEK_NEXT_ROW when filterResponse is INCLUDE_AND_SEEK_NEXT_ROW > - > > Key: HBASE-19729 > URL: https://issues.apache.org/jira/browse/HBASE-19729 > Project: HBase > Issue Type: Bug >Reporter: Zheng Hu >Assignee: Zheng Hu > Attachments: HBASE-19729.v1.patch > > > As we've discussed in HBASE-19696 > https://issues.apache.org/jira/browse/HBASE-19696?focusedCommentId=16309644=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16309644 > when (filterResponse, matchCode) = (INCLUDE_AND_SEEK_NEXT_ROW, INCLUDE) or > (INCLUDE_AND_SEEK_NEXT_ROW, INCLUDE_AND_NEXT_COL) , we should return > INCLUDE_AND_SEEK_NEXT_ROW as the merged match code. > Will upload patches for all branches. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19729) UserScanQueryMatcher#mergeFilterResponse should return INCLUDE_AND_SEEK_NEXT_ROW when filterResponse is INCLUDE_AND_SEEK_NEXT_ROW
[ https://issues.apache.org/jira/browse/HBASE-19729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315749#comment-16315749 ] Zheng Hu commented on HBASE-19729: -- Yes, it's a bug. I've changed the title of this issue. > UserScanQueryMatcher#mergeFilterResponse should return > INCLUDE_AND_SEEK_NEXT_ROW when filterResponse is INCLUDE_AND_SEEK_NEXT_ROW > - > > Key: HBASE-19729 > URL: https://issues.apache.org/jira/browse/HBASE-19729 > Project: HBase > Issue Type: Bug >Reporter: Zheng Hu >Assignee: Zheng Hu > Attachments: HBASE-19729.v1.patch > > > As we've discussed in HBASE-19696 > https://issues.apache.org/jira/browse/HBASE-19696?focusedCommentId=16309644=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16309644 > when (filterResponse, matchCode) = (INCLUDE_AND_SEEK_NEXT_ROW, INCLUDE) or > (INCLUDE_AND_SEEK_NEXT_ROW, INCLUDE_AND_NEXT_COL) , we should return > INCLUDE_AND_SEEK_NEXT_ROW as the merged match code. > Will upload patches for all branches. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19729) UserScanQueryMatcher#mergeFilterResponse should return INCLUDE_AND_SEEK_NEXT_ROW when filterResponse is INCLUDE_AND_SEEK_NEXT_ROW
[ https://issues.apache.org/jira/browse/HBASE-19729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Hu updated HBASE-19729: - Summary: UserScanQueryMatcher#mergeFilterResponse should return INCLUDE_AND_SEEK_NEXT_ROW when filterResponse is INCLUDE_AND_SEEK_NEXT_ROW (was: UserScanQueryMatcher#mergeFilterResponse should return INCLUDE_AND_SEEK_NEXT_ROW when filterResponse is ) > UserScanQueryMatcher#mergeFilterResponse should return > INCLUDE_AND_SEEK_NEXT_ROW when filterResponse is INCLUDE_AND_SEEK_NEXT_ROW > - > > Key: HBASE-19729 > URL: https://issues.apache.org/jira/browse/HBASE-19729 > Project: HBase > Issue Type: Bug >Reporter: Zheng Hu >Assignee: Zheng Hu > Attachments: HBASE-19729.v1.patch > > > As we've discussed in HBASE-19696 > https://issues.apache.org/jira/browse/HBASE-19696?focusedCommentId=16309644=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16309644 > when (filterResponse, matchCode) = (INCLUDE_AND_SEEK_NEXT_ROW, INCLUDE) or > (INCLUDE_AND_SEEK_NEXT_ROW, INCLUDE_AND_NEXT_COL) , we should return > INCLUDE_AND_SEEK_NEXT_ROW as the merged match code. > Will upload patches for all branches. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19729) UserScanQueryMatcher#mergeFilterResponse should return INCLUDE_AND_SEEK_NEXT_ROW when filterResponse is
[ https://issues.apache.org/jira/browse/HBASE-19729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Hu updated HBASE-19729: - Summary: UserScanQueryMatcher#mergeFilterResponse should return INCLUDE_AND_SEEK_NEXT_ROW when filterResponse is (was: Optimize UserScanQueryMatcher#mergeFilterResponse) > UserScanQueryMatcher#mergeFilterResponse should return > INCLUDE_AND_SEEK_NEXT_ROW when filterResponse is > > > Key: HBASE-19729 > URL: https://issues.apache.org/jira/browse/HBASE-19729 > Project: HBase > Issue Type: Bug >Reporter: Zheng Hu >Assignee: Zheng Hu > Attachments: HBASE-19729.v1.patch > > > As we've discussed in HBASE-19696 > https://issues.apache.org/jira/browse/HBASE-19696?focusedCommentId=16309644=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16309644 > when (filterResponse, matchCode) = (INCLUDE_AND_SEEK_NEXT_ROW, INCLUDE) or > (INCLUDE_AND_SEEK_NEXT_ROW, INCLUDE_AND_NEXT_COL) , we should return > INCLUDE_AND_SEEK_NEXT_ROW as the merged match code. > Will upload patches for all branches. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19139) Create Async Admin methods for Clear Block Cache
[ https://issues.apache.org/jira/browse/HBASE-19139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guanghao Zhang updated HBASE-19139: --- Attachment: HBASE-19139.master.001.patch > Create Async Admin methods for Clear Block Cache > > > Key: HBASE-19139 > URL: https://issues.apache.org/jira/browse/HBASE-19139 > Project: HBase > Issue Type: Improvement > Components: Admin >Reporter: Zach York >Assignee: Guanghao Zhang > Attachments: HBASE-19139.master.001.patch > > > As part of the review for HBASE-18624, reviewers suggested adding the > clear_block_cache to the AsyncAdmin as well. Since the issue was very large, > we decided to split this into a follow-up JIRA. The purpose of this JIRA will > be to finish the work on the AsyncAdmin. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HBASE-19139) Create Async Admin methods for Clear Block Cache
[ https://issues.apache.org/jira/browse/HBASE-19139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guanghao Zhang reassigned HBASE-19139: -- Assignee: Guanghao Zhang (was: Zach York) > Create Async Admin methods for Clear Block Cache > > > Key: HBASE-19139 > URL: https://issues.apache.org/jira/browse/HBASE-19139 > Project: HBase > Issue Type: Improvement > Components: Admin >Reporter: Zach York >Assignee: Guanghao Zhang > > As part of the review for HBASE-18624, reviewers suggested adding the > clear_block_cache to the AsyncAdmin as well. Since the issue was very large, > we decided to split this into a follow-up JIRA. The purpose of this JIRA will > be to finish the work on the AsyncAdmin. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19358) Improve the stability of splitting log when do fail over
[ https://issues.apache.org/jira/browse/HBASE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315735#comment-16315735 ] Jingyun Tian commented on HBASE-19358: -- Reattached imgs. [~carp84] thx for your generous help. :)
> Improve the stability of splitting log when do fail over
> 
>
> Key: HBASE-19358
> URL: https://issues.apache.org/jira/browse/HBASE-19358
> Project: HBase
> Issue Type: Improvement
> Components: MTTR
> Affects Versions: 0.98.24
> Reporter: Jingyun Tian
> Assignee: Jingyun Tian
> Fix For: 1.4.1, 1.5.0, 2.0.0-beta-2
>
> Attachments: HBASE-18619-branch-2-v2.patch, HBASE-19358-branch-1-v2.patch, HBASE-19358-branch-1-v3.patch, HBASE-19358-branch-1.patch, HBASE-19358-branch-2-v3.patch, HBASE-19358-v1.patch, HBASE-19358-v4.patch, HBASE-19358-v5.patch, HBASE-19358-v6.patch, HBASE-19358-v7.patch, HBASE-19358-v8.patch, HBASE-19358.patch, split-1-log.png, split-logic-new.jpg, split-logic-old.jpg, split-table.png, split_test_result.png
>
>
> The way we split logs now is shown in the following figure:
> !https://issues.apache.org/jira/secure/attachment/12905027/split-logic-old.jpg!
> The problem is that the OutputSink writes the recovered edits during log splitting, which means it creates one WriterAndPath for each region and retains it until the end. If the cluster is small and the number of regions per rs is large, it will create too many HDFS streams at the same time. Then it is prone to failure, since each datanode needs to handle too many streams.
> Thus I came up with a new way to split logs:
> !https://issues.apache.org/jira/secure/attachment/12905028/split-logic-new.jpg!
> We try to cache all the recovered edits, but if they exceed the MaxHeapUsage, we pick the largest EntryBuffer and write it to a file (closing the writer after we finish). Then, after we have read all entries into memory, we start a writeAndCloseThreadPool, which starts a certain number of threads to write all buffers to files. Thus it will not create more HDFS streams than the *_hbase.regionserver.hlog.splitlog.writer.threads_* we set.
> The biggest benefit is we can control the number of streams we create during log splitting: it will not exceed *_hbase.regionserver.wal.max.splitters * hbase.regionserver.hlog.splitlog.writer.threads_*, whereas before it was *_hbase.regionserver.wal.max.splitters * the number of regions the hlog contains_*.
--
This message was sent by Atlassian JIRA (v6.4.14#64029)
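The stream bound described above falls out naturally from running the write-and-close phase on a fixed-size pool: with N writer threads, at most N writers can be open at once, regardless of how many region buffers are queued. A self-contained sketch of that property (a simulation only; the property name in the comment comes from the description, and the real splitter writes recovered.edits files rather than sleeping):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedSplitWriters {
    // Returns the maximum number of simultaneously "open" writers observed.
    static int run(int writerThreads, int regionBuffers) throws InterruptedException {
        // Pool size plays the role of hbase.regionserver.hlog.splitlog.writer.threads.
        ExecutorService pool = Executors.newFixedThreadPool(writerThreads);
        AtomicInteger open = new AtomicInteger();     // streams open right now
        AtomicInteger maxOpen = new AtomicInteger();  // high-water mark
        for (int i = 0; i < regionBuffers; i++) {
            pool.execute(() -> {
                int now = open.incrementAndGet(); // "open a recovered.edits stream"
                maxOpen.accumulateAndGet(now, Math::max);
                try { Thread.sleep(10); }         // "write the region's buffer"
                catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                open.decrementAndGet();           // "close the writer"
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return maxOpen.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // 8 region buffers but only 3 writer threads: never more than 3 streams.
        System.out.println("max concurrent streams: " + run(3, 8));
    }
}
```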
[jira] [Comment Edited] (HBASE-19358) Improve the stability of splitting log when do fail over
[ https://issues.apache.org/jira/browse/HBASE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279809#comment-16279809 ] Jingyun Tian edited comment on HBASE-19358 at 1/8/18 6:29 AM: --
[~carp84] here is my test result:
Split one 512MB HLog on a single regionserver: !https://issues.apache.org/jira/secure/attachment/12905029/split-1-log.png!
We can see that in most situations the new logic performs better than the old one. The motivation for this improvement is that when a cluster has to restart, if there are too many regions per regionserver, the restart is prone to failure and we have to split one hlog at a time to avoid errors. So I tested how much throughput the cluster can reach with different thread counts when restarting the whole cluster.
Throughput when we restart a cluster which has 18 regionservers and 18 datanodes: !https://issues.apache.org/jira/secure/attachment/12905030/split_test_result.png!
The blue series represents the throughput of the cluster with 2 regions per rs, the red series 4 regions per rs, and the orange series 8 regions per rs. This is the table if the chart is not clear: !https://issues.apache.org/jira/secure/attachment/12905026/split-table.png!
Based on this chart, I think the time cost when you restart the whole cluster is not related to the thread count. The more regions the hlog contains, the more time it costs to split.
was (Author: tianjingyun): [~carp84] here is my test result: Split one 512MB HLog on a single regionserver !https://issues.apache.org/jira/secure/attachment/12905029/split-1-log.png! we can see in most situation new logic has a better performance than the old one. The motivation I do this improvement is when a cluster has to restart, if there are too many regions per region, the restart is prone to failure and we have to split one hlog each time to avoid errors. So I test when restart the whole cluster, how many throughput it can reach with different thread count.
Throughput when we restart a cluster, which has 18 regionservers and 18 datanodes !https://issues.apache.org/jira/secure/attachment/12905030/split_test_result.png! blue series represent the throughput of the cluster has 2 regions and regions per rs, while red series has 4 regions, regions per rs and orange series has 8 regions and per rs. This is the table if the chart is not clear: !https://issues.apache.org/jira/secure/attachment/12904508/split-table.png! Depend on this chart, I think the time cost when you restart the whole cluster is not related to the thread count. More regions this Hlog contains, more time it will cost to split.
> Improve the stability of splitting log when do fail over
> 
>
> Key: HBASE-19358
> URL: https://issues.apache.org/jira/browse/HBASE-19358
> Project: HBase
> Issue Type: Improvement
> Components: MTTR
> Affects Versions: 0.98.24
> Reporter: Jingyun Tian
> Assignee: Jingyun Tian
> Fix For: 1.4.1, 1.5.0, 2.0.0-beta-2
>
> Attachments: HBASE-18619-branch-2-v2.patch, HBASE-19358-branch-1-v2.patch, HBASE-19358-branch-1-v3.patch, HBASE-19358-branch-1.patch, HBASE-19358-branch-2-v3.patch, HBASE-19358-v1.patch, HBASE-19358-v4.patch, HBASE-19358-v5.patch, HBASE-19358-v6.patch, HBASE-19358-v7.patch, HBASE-19358-v8.patch, HBASE-19358.patch, split-1-log.png, split-logic-new.jpg, split-logic-old.jpg, split-table.png, split_test_result.png
>
>
> The way we split logs now is shown in the following figure:
> !https://issues.apache.org/jira/secure/attachment/12905027/split-logic-old.jpg!
> The problem is that the OutputSink writes the recovered edits during log splitting, which means it creates one WriterAndPath for each region and retains it until the end. If the cluster is small and the number of regions per rs is large, it will create too many HDFS streams at the same time. Then it is prone to failure, since each datanode needs to handle too many streams.
> Thus I came up with a new way to split logs:
> !https://issues.apache.org/jira/secure/attachment/12905028/split-logic-new.jpg!
> We try to cache all the recovered edits, but if they exceed the MaxHeapUsage, we pick the largest EntryBuffer and write it to a file (closing the writer after we finish). Then, after we have read all entries into memory, we start a writeAndCloseThreadPool, which starts a certain number of threads to write all buffers to files. Thus it will not create more HDFS streams than the *_hbase.regionserver.hlog.splitlog.writer.threads_* we set.
> The biggest benefit is we can control the number of streams we create during log splitting: it will not exceed *_hbase.regionserver.wal.max.splitters *
[jira] [Commented] (HBASE-19726) Failed to start HMaster due to infinite retrying on meta assign
[ https://issues.apache.org/jira/browse/HBASE-19726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315729#comment-16315729 ] Duo Zhang commented on HBASE-19726: ---
OK, I think I found the problem. It is caused by the problematic zk. The update succeeded, but when we call TableStateManager.setTableState, the ConnectionImplementation cannot get the correct meta znode from zk, which leads to the exception being thrown. And then infinite retrying.
Anyway, I think there are two problems. First, do we need to write the state of hbase:meta to hbase:meta? Seems not. Second, if we do need to write the state to hbase:meta, then it could fail since this is an ipc call, and the retry does not work if we fail here.
What do you think sir? [~stack] Thanks.
> Failed to start HMaster due to infinite retrying on meta assign
> ---
>
> Key: HBASE-19726
> URL: https://issues.apache.org/jira/browse/HBASE-19726
> Project: HBase
> Issue Type: Bug
> Reporter: Duo Zhang
>
> This is what I got at first, an exception when trying to write something to meta when meta has not been onlined yet.
> {noformat}
> 2018-01-07,21:03:14,389 INFO org.apache.hadoop.hbase.master.HMaster: Running RecoverMetaProcedure to ensure proper hbase:meta deploy.
> 2018-01-07,21:03:14,637 INFO > org.apache.hadoop.hbase.master.procedure.RecoverMetaProcedure: Start pid=1, > state=RUNNABLE:RECOVER_META_SPLIT_LOGS; RecoverMetaProcedure > failedMetaServer=null, splitWal=true > 2018-01-07,21:03:14,645 INFO org.apache.hadoop.hbase.master.MasterWalManager: > Log folder > hdfs://c402tst-community/hbase/c402tst-community/WALs/c4-hadoop-tst-st27.bj,38900,1515330173896 > belongs to an existing region server > 2018-01-07,21:03:14,646 INFO org.apache.hadoop.hbase.master.MasterWalManager: > Log folder > hdfs://c402tst-community/hbase/c402tst-community/WALs/c4-hadoop-tst-st29.bj,38900,1515330177232 > belongs to an existing region server > 2018-01-07,21:03:14,648 INFO > org.apache.hadoop.hbase.master.procedure.RecoverMetaProcedure: pid=1, > state=RUNNABLE:RECOVER_META_ASSIGN_REGIONS; RecoverMetaProcedure > failedMetaServer=null, splitWal=true; Retaining meta assignment to server=null > 2018-01-07,21:03:14,653 INFO > org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Initialized > subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; > AssignProcedure table=hbase:meta, region=1588230740}] > 2018-01-07,21:03:14,660 INFO > org.apache.hadoop.hbase.master.procedure.MasterProcedureScheduler: pid=2, > ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure > table=hbase:meta, region=1588230740 hbase:meta hbase:meta,,1.1588230740 > 2018-01-07,21:03:14,663 INFO > org.apache.hadoop.hbase.master.assignment.AssignProcedure: Start pid=2, > ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure > table=hbase:meta, region=1588230740; rit=OFFLINE, location=null; > forceNewPlan=false, retain=false > 2018-01-07,21:03:14,831 INFO > org.apache.hadoop.hbase.zookeeper.MetaTableLocator: Setting hbase:meta > (replicaId=0) location in ZooKeeper as > c4-hadoop-tst-st27.bj,38900,1515330173896 > 2018-01-07,21:03:14,841 INFO > org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure: Dispatch > pid=2, ppid=1, 
state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure > table=hbase:meta, region=1588230740; rit=OPENING, > location=c4-hadoop-tst-st27.bj,38900,1515330173896 > 2018-01-07,21:03:14,992 INFO > org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher: Using > procedure batch rpc execution for > serverName=c4-hadoop-tst-st27.bj,38900,1515330173896 version=3145728 > 2018-01-07,21:03:15,593 ERROR > org.apache.hadoop.hbase.client.AsyncRequestFutureImpl: Cannot get replica 0 > location for > {"totalColumns":1,"row":"hbase:meta","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1515330195514}]},"ts":1515330195514} > 2018-01-07,21:03:15,594 WARN > org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure: > Retryable error trying to transition: pid=2, ppid=1, > state=RUNNABLE:REGION_TRANSITION_FINISH; AssignProcedure table=hbase:meta, > region=1588230740; rit=OPEN, > location=c4-hadoop-tst-st27.bj,38900,1515330173896 > org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 > action: IOException: 1 time, servers with issues: null > at > org.apache.hadoop.hbase.client.BatchErrors.makeException(BatchErrors.java:54) > at > org.apache.hadoop.hbase.client.AsyncRequestFutureImpl.getErrors(AsyncRequestFutureImpl.java:1250) > at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:457) > at org.apache.hadoop.hbase.client.HTable.put(HTable.java:570) > at >
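The first question above — whether hbase:meta's own state needs to be written into hbase:meta at all — amounts to a guard before the persist step. Below is a minimal, self-contained sketch of that idea; the class and method names are illustrative, not the real HBase API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: skip persisting hbase:meta's own state back
// into hbase:meta, since meta may not be online yet and the write is an
// ipc call that can fail outside the procedure's retry loop.
class TableStateStore {
    static final String META_TABLE = "hbase:meta";
    private final Map<String, String> persisted = new HashMap<>();

    /** Returns true if the state was persisted, false if skipped. */
    public boolean setTableState(String table, String state) {
        if (META_TABLE.equals(table)) {
            // meta's location and state are tracked in ZooKeeper and in
            // master memory; writing them into meta itself would require
            // meta to already be deployed, which is exactly what is still
            // in progress when this code runs.
            return false;
        }
        persisted.put(table, state);
        return true;
    }

    public String getTableState(String table) {
        return persisted.get(table);
    }
}
```

The point of the sketch is that the skip is decided locally and never reaches the rpc layer, so there is nothing for the procedure framework to retry.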
[jira] [Comment Edited] (HBASE-19358) Improve the stability of splitting log when do fail over
[ https://issues.apache.org/jira/browse/HBASE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279809#comment-16279809 ] Jingyun Tian edited comment on HBASE-19358 at 1/8/18 6:28 AM: -- [~carp84] here is my test result. Splitting one 512MB HLog on a single regionserver: !https://issues.apache.org/jira/secure/attachment/12905029/split-1-log.png! We can see that in most situations the new logic performs better than the old one. The motivation for this improvement is that when a cluster has to restart, if there are too many regions per regionserver, the restart is prone to failure and we have to split one hlog at a time to avoid errors. So I tested how much throughput it can reach with different thread counts when restarting the whole cluster. Throughput when we restart a cluster with 18 regionservers and 18 datanodes: !https://issues.apache.org/jira/secure/attachment/12905030/split_test_result.png! The blue series represents the throughput of the cluster with 2 regions per rs, the red series 4 regions per rs, and the orange series 8 regions per rs. This is the table if the chart is not clear: !https://issues.apache.org/jira/secure/attachment/12904508/split-table.png! Based on this chart, I think the time cost when you restart the whole cluster is not related to the thread count. The more regions the HLog contains, the more time it costs to split. was (Author: tianjingyun): [~carp84] here is my test result: Split one 512MB HLog on a single regionserver !https://issues.apache.org/jira/secure/attachment/12905029/split-1-log.png! we can see in most situation new logic has a better performance than the old one. The motivation I do this improvement is when a cluster has to restart, if there are too many regions per region, the restart is prone to failure and we have to split one hlog each time to avoid errors. So I test when restart the whole cluster, how many throughput it can reach with different thread count.
Throughput when we restart a cluster, which has 18 regionservers and 18 datanodes !https://issues.apache.org/jira/secure/attachment/12904504/split_test_result.png! blue series represent the throughput of the cluster has 2 regions and regions per rs, while red series has 4 regions, regions per rs and orange series has 8 regions and per rs. This is the table if the chart is not clear: !https://issues.apache.org/jira/secure/attachment/12904508/split-table.png! Depend on this chart, I think the time cost when you restart the whole cluster is not related to the thread count. More regions this Hlog contains, more time it will cost to split. > Improve the stability of splitting log when do fail over > > > Key: HBASE-19358 > URL: https://issues.apache.org/jira/browse/HBASE-19358 > Project: HBase > Issue Type: Improvement > Components: MTTR >Affects Versions: 0.98.24 >Reporter: Jingyun Tian >Assignee: Jingyun Tian > Fix For: 1.4.1, 1.5.0, 2.0.0-beta-2 > > Attachments: HBASE-18619-branch-2-v2.patch, > HBASE-19358-branch-1-v2.patch, HBASE-19358-branch-1-v3.patch, > HBASE-19358-branch-1.patch, HBASE-19358-branch-2-v3.patch, > HBASE-19358-v1.patch, HBASE-19358-v4.patch, HBASE-19358-v5.patch, > HBASE-19358-v6.patch, HBASE-19358-v7.patch, HBASE-19358-v8.patch, > HBASE-19358.patch, split-1-log.png, split-logic-new.jpg, split-logic-old.jpg, > split-table.png, split_test_result.png > > > The way we splitting log now is like the following figure: > !https://issues.apache.org/jira/secure/attachment/12905027/split-logic-old.jpg! > The problem is the OutputSink will write the recovered edits during splitting > log, which means it will create one WriterAndPath for each region and retain > it until the end. If the cluster is small and the number of regions per rs is > large, it will create too many HDFS streams at the same time. Then it is > prone to failure since each datanode need to handle too many streams. > Thus I come up with a new way to split log. 
> !https://issues.apache.org/jira/secure/attachment/12905028/split-logic-new.jpg! > We try to cache all the recovered edits, but if it exceeds the MaxHeapUsage, > we will pick the largest EntryBuffer and write it to a file (close the writer > after finish). Then after we read all entries into memory, we will start a > writeAndCloseThreadPool, it starts a certain number of threads to write all > buffers to files. Thus it will not create HDFS streams more than > *_hbase.regionserver.hlog.splitlog.writer.threads_* we set. > The biggest benefit is we can control the number of streams we create during > splitting log, > it will not exceeds *_hbase.regionserver.wal.max.splitters * >
[jira] [Updated] (HBASE-19358) Improve the stability of splitting log when do fail over
[ https://issues.apache.org/jira/browse/HBASE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jingyun Tian updated HBASE-19358: - Description: The way we split the log now is like the following figure: !https://issues.apache.org/jira/secure/attachment/12905027/split-logic-old.jpg! The problem is that the OutputSink writes the recovered edits during log splitting, which means it creates one WriterAndPath for each region and retains it until the end. If the cluster is small and the number of regions per rs is large, it will create too many HDFS streams at the same time. Then it is prone to failure since each datanode needs to handle too many streams. Thus I came up with a new way to split the log. !https://issues.apache.org/jira/secure/attachment/12905028/split-logic-new.jpg! We try to cache all the recovered edits, but if they exceed the MaxHeapUsage, we pick the largest EntryBuffer and write it to a file (closing the writer after it finishes). Then, after we have read all entries into memory, we start a writeAndCloseThreadPool, which starts a certain number of threads to write all buffers to files. Thus it will not create more HDFS streams than the *_hbase.regionserver.hlog.splitlog.writer.threads_* we set. The biggest benefit is that we can control the number of streams we create during log splitting: it will not exceed *_hbase.regionserver.wal.max.splitters * hbase.regionserver.hlog.splitlog.writer.threads_*, whereas before it was *_hbase.regionserver.wal.max.splitters * the number of regions the hlog contains_*. was: The way we splitting log now is like the following figure: !https://issues.apache.org/jira/secure/attachment/12905027/split-logic-old.jpg! The problem is the OutputSink will write the recovered edits during splitting log, which means it will create one WriterAndPath for each region and retain it until the end. If the cluster is small and the number of regions per rs is large, it will create too many HDFS streams at the same time.
Then it is prone to failure since each datanode need to handle too many streams. Thus I come up with a new way to split log. !https://issues.apache.org/jira/secure/attachment/12904507/split-logic-new.jpg! We try to cache all the recovered edits, but if it exceeds the MaxHeapUsage, we will pick the largest EntryBuffer and write it to a file (close the writer after finish). Then after we read all entries into memory, we will start a writeAndCloseThreadPool, it starts a certain number of threads to write all buffers to files. Thus it will not create HDFS streams more than *_hbase.regionserver.hlog.splitlog.writer.threads_* we set. The biggest benefit is we can control the number of streams we create during splitting log, it will not exceeds *_hbase.regionserver.wal.max.splitters * hbase.regionserver.hlog.splitlog.writer.threads_*, but before it is *_hbase.regionserver.wal.max.splitters * the number of region the hlog contains_*. > Improve the stability of splitting log when do fail over > > > Key: HBASE-19358 > URL: https://issues.apache.org/jira/browse/HBASE-19358 > Project: HBase > Issue Type: Improvement > Components: MTTR >Affects Versions: 0.98.24 >Reporter: Jingyun Tian >Assignee: Jingyun Tian > Fix For: 1.4.1, 1.5.0, 2.0.0-beta-2 > > Attachments: HBASE-18619-branch-2-v2.patch, > HBASE-19358-branch-1-v2.patch, HBASE-19358-branch-1-v3.patch, > HBASE-19358-branch-1.patch, HBASE-19358-branch-2-v3.patch, > HBASE-19358-v1.patch, HBASE-19358-v4.patch, HBASE-19358-v5.patch, > HBASE-19358-v6.patch, HBASE-19358-v7.patch, HBASE-19358-v8.patch, > HBASE-19358.patch, split-1-log.png, split-logic-new.jpg, split-logic-old.jpg, > split-table.png, split_test_result.png > > > The way we splitting log now is like the following figure: > !https://issues.apache.org/jira/secure/attachment/12905027/split-logic-old.jpg! 
> The problem is the OutputSink will write the recovered edits during splitting > log, which means it will create one WriterAndPath for each region and retain > it until the end. If the cluster is small and the number of regions per rs is > large, it will create too many HDFS streams at the same time. Then it is > prone to failure since each datanode need to handle too many streams. > Thus I come up with a new way to split log. > !https://issues.apache.org/jira/secure/attachment/12905028/split-logic-new.jpg! > We try to cache all the recovered edits, but if it exceeds the MaxHeapUsage, > we will pick the largest EntryBuffer and write it to a file (close the writer > after finish). Then after we read all entries into memory, we will start a > writeAndCloseThreadPool, it starts a certain number of threads to write all > buffers to files. Thus it will not create HDFS streams more than >
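The buffering scheme described above can be modeled compactly. The following is a simplified sketch under assumed names (the real code works with WAL entries and HDFS writers, not strings): edits are buffered per region, and once total heap usage exceeds the cap, the largest buffer is flushed and its writer closed, so only a bounded number of output streams is ever open at once.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified model of the bounded-buffer split strategy: buffer recovered
// edits per region; when heap usage passes the cap, flush the largest
// buffer so open-stream count stays bounded by the writer thread count.
class BoundedEditBuffer {
    private final long maxHeapUsage;
    private long heapUsage = 0;
    private final Map<String, List<String>> buffers = new HashMap<>();
    final List<String> flushedRegions = new ArrayList<>(); // flush order, for illustration

    BoundedEditBuffer(long maxHeapUsage) {
        this.maxHeapUsage = maxHeapUsage;
    }

    void append(String region, String edit) {
        buffers.computeIfAbsent(region, r -> new ArrayList<>()).add(edit);
        heapUsage += edit.length(); // stand-in for the edit's heap size
        while (heapUsage > maxHeapUsage) {
            flushLargest();
        }
    }

    private void flushLargest() {
        String largest = null;
        int largestSize = -1;
        for (Map.Entry<String, List<String>> e : buffers.entrySet()) {
            int size = e.getValue().stream().mapToInt(String::length).sum();
            if (size > largestSize) {
                largestSize = size;
                largest = e.getKey();
            }
        }
        if (largest == null) {
            return;
        }
        // In the real implementation this writes the buffer to a
        // recovered-edits file and closes the writer immediately,
        // which is what frees the HDFS stream.
        heapUsage -= largestSize;
        buffers.remove(largest);
        flushedRegions.add(largest);
    }
}
```

A pool of writer threads then drains whatever remains buffered at the end, which is where the *_hbase.regionserver.hlog.splitlog.writer.threads_* bound comes from.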
[jira] [Comment Edited] (HBASE-19358) Improve the stability of splitting log when do fail over
[ https://issues.apache.org/jira/browse/HBASE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16279809#comment-16279809 ] Jingyun Tian edited comment on HBASE-19358 at 1/8/18 6:27 AM: -- [~carp84] here is my test result. Splitting one 512MB HLog on a single regionserver: !https://issues.apache.org/jira/secure/attachment/12905029/split-1-log.png! We can see that in most situations the new logic performs better than the old one. The motivation for this improvement is that when a cluster has to restart, if there are too many regions per regionserver, the restart is prone to failure and we have to split one hlog at a time to avoid errors. So I tested how much throughput it can reach with different thread counts when restarting the whole cluster. Throughput when we restart a cluster with 18 regionservers and 18 datanodes: !https://issues.apache.org/jira/secure/attachment/12904504/split_test_result.png! The blue series represents the throughput of the cluster with 2 regions per rs, the red series 4 regions per rs, and the orange series 8 regions per rs. This is the table if the chart is not clear: !https://issues.apache.org/jira/secure/attachment/12904508/split-table.png! Based on this chart, I think the time cost when you restart the whole cluster is not related to the thread count. The more regions the HLog contains, the more time it costs to split. was (Author: tianjingyun): [~carp84] here is my test result: Split one 512MB HLog on a single regionserver !https://issues.apache.org/jira/secure/attachment/12904505/split-1-log.png! we can see in most situation new logic has a better performance than the old one. The motivation I do this improvement is when a cluster has to restart, if there are too many regions per region, the restart is prone to failure and we have to split one hlog each time to avoid errors. So I test when restart the whole cluster, how many throughput it can reach with different thread count.
Throughput when we restart a cluster, which has 18 regionservers and 18 datanodes !https://issues.apache.org/jira/secure/attachment/12904504/split_test_result.png! blue series represent the throughput of the cluster has 2 regions and regions per rs, while red series has 4 regions, regions per rs and orange series has 8 regions and per rs. This is the table if the chart is not clear: !https://issues.apache.org/jira/secure/attachment/12904508/split-table.png! Depend on this chart, I think the time cost when you restart the whole cluster is not related to the thread count. More regions this Hlog contains, more time it will cost to split. > Improve the stability of splitting log when do fail over > > > Key: HBASE-19358 > URL: https://issues.apache.org/jira/browse/HBASE-19358 > Project: HBase > Issue Type: Improvement > Components: MTTR >Affects Versions: 0.98.24 >Reporter: Jingyun Tian >Assignee: Jingyun Tian > Fix For: 1.4.1, 1.5.0, 2.0.0-beta-2 > > Attachments: HBASE-18619-branch-2-v2.patch, > HBASE-19358-branch-1-v2.patch, HBASE-19358-branch-1-v3.patch, > HBASE-19358-branch-1.patch, HBASE-19358-branch-2-v3.patch, > HBASE-19358-v1.patch, HBASE-19358-v4.patch, HBASE-19358-v5.patch, > HBASE-19358-v6.patch, HBASE-19358-v7.patch, HBASE-19358-v8.patch, > HBASE-19358.patch, split-1-log.png, split-logic-new.jpg, split-logic-old.jpg, > split-table.png, split_test_result.png > > > The way we splitting log now is like the following figure: > !https://issues.apache.org/jira/secure/attachment/12905027/split-logic-old.jpg! > The problem is the OutputSink will write the recovered edits during splitting > log, which means it will create one WriterAndPath for each region and retain > it until the end. If the cluster is small and the number of regions per rs is > large, it will create too many HDFS streams at the same time. Then it is > prone to failure since each datanode need to handle too many streams. > Thus I come up with a new way to split log. 
> !https://issues.apache.org/jira/secure/attachment/12905028/split-logic-new.jpg! > We try to cache all the recovered edits, but if it exceeds the MaxHeapUsage, > we will pick the largest EntryBuffer and write it to a file (close the writer > after finish). Then after we read all entries into memory, we will start a > writeAndCloseThreadPool, it starts a certain number of threads to write all > buffers to files. Thus it will not create HDFS streams more than > *_hbase.regionserver.hlog.splitlog.writer.threads_* we set. > The biggest benefit is we can control the number of streams we create during > splitting log, > it will not exceeds *_hbase.regionserver.wal.max.splitters * >
[jira] [Updated] (HBASE-19358) Improve the stability of splitting log when do fail over
[ https://issues.apache.org/jira/browse/HBASE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jingyun Tian updated HBASE-19358: - Description: The way we split the log now is like the following figure: !https://issues.apache.org/jira/secure/attachment/12905027/split-logic-old.jpg! The problem is that the OutputSink writes the recovered edits during log splitting, which means it creates one WriterAndPath for each region and retains it until the end. If the cluster is small and the number of regions per rs is large, it will create too many HDFS streams at the same time. Then it is prone to failure since each datanode needs to handle too many streams. Thus I came up with a new way to split the log. !https://issues.apache.org/jira/secure/attachment/12904507/split-logic-new.jpg! We try to cache all the recovered edits, but if they exceed the MaxHeapUsage, we pick the largest EntryBuffer and write it to a file (closing the writer after it finishes). Then, after we have read all entries into memory, we start a writeAndCloseThreadPool, which starts a certain number of threads to write all buffers to files. Thus it will not create more HDFS streams than the *_hbase.regionserver.hlog.splitlog.writer.threads_* we set. The biggest benefit is that we can control the number of streams we create during log splitting: it will not exceed *_hbase.regionserver.wal.max.splitters * hbase.regionserver.hlog.splitlog.writer.threads_*, whereas before it was *_hbase.regionserver.wal.max.splitters * the number of regions the hlog contains_*. was: The way we splitting log now is like the following figure: !https://issues.apache.org/jira/secure/attachment/12904506/split-logic-old.jpg! The problem is the OutputSink will write the recovered edits during splitting log, which means it will create one WriterAndPath for each region and retain it until the end. If the cluster is small and the number of regions per rs is large, it will create too many HDFS streams at the same time.
Then it is prone to failure since each datanode need to handle too many streams. Thus I come up with a new way to split log. !https://issues.apache.org/jira/secure/attachment/12904507/split-logic-new.jpg! We try to cache all the recovered edits, but if it exceeds the MaxHeapUsage, we will pick the largest EntryBuffer and write it to a file (close the writer after finish). Then after we read all entries into memory, we will start a writeAndCloseThreadPool, it starts a certain number of threads to write all buffers to files. Thus it will not create HDFS streams more than *_hbase.regionserver.hlog.splitlog.writer.threads_* we set. The biggest benefit is we can control the number of streams we create during splitting log, it will not exceeds *_hbase.regionserver.wal.max.splitters * hbase.regionserver.hlog.splitlog.writer.threads_*, but before it is *_hbase.regionserver.wal.max.splitters * the number of region the hlog contains_*. > Improve the stability of splitting log when do fail over > > > Key: HBASE-19358 > URL: https://issues.apache.org/jira/browse/HBASE-19358 > Project: HBase > Issue Type: Improvement > Components: MTTR >Affects Versions: 0.98.24 >Reporter: Jingyun Tian >Assignee: Jingyun Tian > Fix For: 1.4.1, 1.5.0, 2.0.0-beta-2 > > Attachments: HBASE-18619-branch-2-v2.patch, > HBASE-19358-branch-1-v2.patch, HBASE-19358-branch-1-v3.patch, > HBASE-19358-branch-1.patch, HBASE-19358-branch-2-v3.patch, > HBASE-19358-v1.patch, HBASE-19358-v4.patch, HBASE-19358-v5.patch, > HBASE-19358-v6.patch, HBASE-19358-v7.patch, HBASE-19358-v8.patch, > HBASE-19358.patch, split-1-log.png, split-logic-new.jpg, split-logic-old.jpg, > split-table.png, split_test_result.png > > > The way we splitting log now is like the following figure: > !https://issues.apache.org/jira/secure/attachment/12905027/split-logic-old.jpg! 
> The problem is the OutputSink will write the recovered edits during splitting > log, which means it will create one WriterAndPath for each region and retain > it until the end. If the cluster is small and the number of regions per rs is > large, it will create too many HDFS streams at the same time. Then it is > prone to failure since each datanode need to handle too many streams. > Thus I come up with a new way to split log. > !https://issues.apache.org/jira/secure/attachment/12904507/split-logic-new.jpg! > We try to cache all the recovered edits, but if it exceeds the MaxHeapUsage, > we will pick the largest EntryBuffer and write it to a file (close the writer > after finish). Then after we read all entries into memory, we will start a > writeAndCloseThreadPool, it starts a certain number of threads to write all > buffers to files. Thus it will not create HDFS streams more than >
[jira] [Updated] (HBASE-19358) Improve the stability of splitting log when do fail over
[ https://issues.apache.org/jira/browse/HBASE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jingyun Tian updated HBASE-19358: - Attachment: split_test_result.png split-1-log.png split-logic-new.jpg split-logic-old.jpg split-table.png > Improve the stability of splitting log when do fail over > > > Key: HBASE-19358 > URL: https://issues.apache.org/jira/browse/HBASE-19358 > Project: HBase > Issue Type: Improvement > Components: MTTR >Affects Versions: 0.98.24 >Reporter: Jingyun Tian >Assignee: Jingyun Tian > Fix For: 1.4.1, 1.5.0, 2.0.0-beta-2 > > Attachments: HBASE-18619-branch-2-v2.patch, > HBASE-19358-branch-1-v2.patch, HBASE-19358-branch-1-v3.patch, > HBASE-19358-branch-1.patch, HBASE-19358-branch-2-v3.patch, > HBASE-19358-v1.patch, HBASE-19358-v4.patch, HBASE-19358-v5.patch, > HBASE-19358-v6.patch, HBASE-19358-v7.patch, HBASE-19358-v8.patch, > HBASE-19358.patch, split-1-log.png, split-logic-new.jpg, split-logic-old.jpg, > split-table.png, split_test_result.png > > > The way we splitting log now is like the following figure: > !https://issues.apache.org/jira/secure/attachment/12904506/split-logic-old.jpg! > The problem is the OutputSink will write the recovered edits during splitting > log, which means it will create one WriterAndPath for each region and retain > it until the end. If the cluster is small and the number of regions per rs is > large, it will create too many HDFS streams at the same time. Then it is > prone to failure since each datanode need to handle too many streams. > Thus I come up with a new way to split log. > !https://issues.apache.org/jira/secure/attachment/12904507/split-logic-new.jpg! > We try to cache all the recovered edits, but if it exceeds the MaxHeapUsage, > we will pick the largest EntryBuffer and write it to a file (close the writer > after finish). 
Then after we read all entries into memory, we will start a > writeAndCloseThreadPool, it starts a certain number of threads to write all > buffers to files. Thus it will not create HDFS streams more than > *_hbase.regionserver.hlog.splitlog.writer.threads_* we set. > The biggest benefit is we can control the number of streams we create during > splitting log, > it will not exceeds *_hbase.regionserver.wal.max.splitters * > hbase.regionserver.hlog.splitlog.writer.threads_*, but before it is > *_hbase.regionserver.wal.max.splitters * the number of region the hlog > contains_*. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19729) Optimize UserScanQueryMatcher#mergeFilterResponse
[ https://issues.apache.org/jira/browse/HBASE-19729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315725#comment-16315725 ] Guanghao Zhang commented on HBASE-19729: This is a bug fix and not an optimization? > Optimize UserScanQueryMatcher#mergeFilterResponse > -- > > Key: HBASE-19729 > URL: https://issues.apache.org/jira/browse/HBASE-19729 > Project: HBase > Issue Type: Bug >Reporter: Zheng Hu >Assignee: Zheng Hu > Attachments: HBASE-19729.v1.patch > > > As we've discussed in HBASE-19696 > https://issues.apache.org/jira/browse/HBASE-19696?focusedCommentId=16309644=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16309644 > when (filterResponse, matchCode) = (INCLUDE_AND_SEEK_NEXT_ROW, INCLUDE) or > (INCLUDE_AND_SEEK_NEXT_ROW, INCLUDE_AND_NEXT_COL), we should return > INCLUDE_AND_SEEK_NEXT_ROW as the merged match code. > Will upload patches for all branches. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
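The merge rule quoted above can be written down compactly. Below is a sketch using a reduced enum — the real ScanQueryMatcher match codes and filter return codes have more members, and the real mergeFilterResponse covers more combinations than shown here — illustrating why this is a correctness fix: when the filter answers INCLUDE_AND_SEEK_NEXT_ROW, the seek-next-row hint must survive the merge, otherwise the scanner keeps walking cells it could have skipped.

```java
// Reduced sketch of the merge rule from HBASE-19696/HBASE-19729; enum
// members and the non-seek fallback branch are simplifications, not the
// actual HBase types.
class MergeFilterResponse {
    enum Code { INCLUDE, INCLUDE_AND_NEXT_COL, INCLUDE_AND_SEEK_NEXT_ROW }

    static Code merge(Code filterResponse, Code matchCode) {
        if (filterResponse == Code.INCLUDE_AND_SEEK_NEXT_ROW) {
            // Preserve the strongest seek hint: whether the matcher said
            // INCLUDE or INCLUDE_AND_NEXT_COL, the merged code must still
            // seek to the next row.
            return Code.INCLUDE_AND_SEEK_NEXT_ROW;
        }
        if (filterResponse == Code.INCLUDE_AND_NEXT_COL && matchCode == Code.INCLUDE) {
            return Code.INCLUDE_AND_NEXT_COL;
        }
        return matchCode;
    }
}
```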
[jira] [Commented] (HBASE-19728) Add lock to filesCompacting in all place.
[ https://issues.apache.org/jira/browse/HBASE-19728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315718#comment-16315718 ] Anoop Sam John commented on HBASE-19728: needsCompaction() is also accessing and passing the var filesCompacting. Should we take a clone of filesCompacting and pass that, to be safe? I did not check how it is used in the places it is passed to. > Add lock to filesCompacting in all place. > - > > Key: HBASE-19728 > URL: https://issues.apache.org/jira/browse/HBASE-19728 > Project: HBase > Issue Type: Bug >Reporter: binlijin >Assignee: binlijin > Attachments: HBASE-19728.master.001.patch > > > We find regionserver abort with the following exception: > 2017-05-09 17:40:06,369 FATAL > [regionserver/hadoop0349.et2.tbsite.net/11.251.152.199:16020-shortCompactions-1493026663275] > regionserver.HRegionServer: ABORTING region server > hadoop0349.et2.tbsite.net,16020,1493026637177: > Thread[regionserver/hadoop0349.et2.tbsite.net/11.251.152.199:16020-shortCompactions-1493026663275,5,main] > throw uncaught exception > java.lang.ArrayIndexOutOfBoundsException > at java.lang.System.arraycopy(Native Method) > at java.util.ArrayList.batchRemove(ArrayList.java:726) > at java.util.ArrayList.removeAll(ArrayList.java:690) > at > org.apache.hadoop.hbase.regionserver.HStore.finishCompactionRequest(HStore.java:1666) > at > org.apache.hadoop.hbase.regionserver.HStore.cancelRequestedCompaction(HStore.java:1656) > at > org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:504) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1147) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622) > at java.lang.Thread.run(Thread.java:834) > 2017-05-08 21:15:31,979 FATAL > [regionserver/hadoop1191.et2.tbsite.net/11.251.159.40:16020-longCompactions-1494249331978] > 
regionserver.HRegionServer: ABORTING region server > hadoop1191.et2.tbsite.net,16020,1493196567798: > Thread[regionserver/hadoop1191.et2.tbsite.net/11.251.159.40:16020-longCompactions-1494249331978,5,main] > throw uncaught exception > java.lang.IllegalArgumentException > at > com.google.common.base.Preconditions.checkArgument(Preconditions.java:76) > at > org.apache.hadoop.hbase.regionserver.compactions.RatioBasedCompactionPolicy.getCurrentEligibleFiles(RatioBasedCompactionPolicy.java:64) > at > org.apache.hadoop.hbase.regionserver.compactions.RatioBasedCompactionPolicy.preSelectCompactionForCoprocessor(RatioBasedCompactionPolicy.java:72) > at > org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.preSelect(DefaultStoreEngine.java:117) > at > org.apache.hadoop.hbase.regionserver.HStore.requestCompaction(HStore.java:1542) > at > org.apache.hadoop.hbase.regionserver.CompactSplitThread.selectCompaction(CompactSplitThread.java:362) > at > org.apache.hadoop.hbase.regionserver.CompactSplitThread.access$200(CompactSplitThread.java:58) > at > org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:491) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1147) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622) > at java.lang.Thread.run(Thread.java:834) > HStore#finishCompactionRequest does not take any lock on HStore#lock, so > HStore.replaceStoreFiles needs to synchronize on filesCompacting. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
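The locking discipline discussed in this issue — every read or mutation of filesCompacting synchronized on the list itself, with callers such as needsCompaction() receiving a snapshot copy rather than the live list — looks roughly like the following. This is a sketch with simplified types, not the actual HStore code:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the proposed discipline: the unsynchronized removeAll in
// finishCompactionRequest racing with concurrent mutation is what produced
// the ArrayIndexOutOfBoundsException in the report above; synchronizing
// every access on the list removes the race, and handing out snapshots
// keeps callers from iterating the live list.
class CompactingFiles {
    private final List<String> filesCompacting = new ArrayList<>();

    void addFiles(List<String> files) {
        synchronized (filesCompacting) {
            filesCompacting.addAll(files);
        }
    }

    void finishCompactionRequest(List<String> files) {
        synchronized (filesCompacting) {
            filesCompacting.removeAll(files);
        }
    }

    /** Snapshot for read-only callers, so they never see a mid-mutation list. */
    List<String> snapshot() {
        synchronized (filesCompacting) {
            return new ArrayList<>(filesCompacting);
        }
    }
}
```

Whether needsCompaction() should take such a clone, as suggested above, trades a small allocation for never exposing the live list outside the lock.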
[jira] [Updated] (HBASE-19729) Optimize UserScanQueryMatcher#mergeFilterResponse
[ https://issues.apache.org/jira/browse/HBASE-19729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Hu updated HBASE-19729: - Attachment: HBASE-19729.v1.patch > Optimize UserScanQueryMatcher#mergeFilterResponse > -- > > Key: HBASE-19729 > URL: https://issues.apache.org/jira/browse/HBASE-19729 > Project: HBase > Issue Type: Bug >Reporter: Zheng Hu >Assignee: Zheng Hu > Attachments: HBASE-19729.v1.patch > > > As we've discussed in HBASE-19696 > https://issues.apache.org/jira/browse/HBASE-19696?focusedCommentId=16309644=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16309644 > when (filterResponse, matchCode) = (INCLUDE_AND_SEEK_NEXT_ROW, INCLUDE) or > (INCLUDE_AND_SEEK_NEXT_ROW, INCLUDE_AND_NEXT_COL) , we should return > INCLUDE_AND_SEEK_NEXT_ROW as the merged match code. > Will upload patches for all branches. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (HBASE-19731) TestFromClientSide#testCheckAndDeleteWithCompareOp and testNullQualifier are flakey
stack created HBASE-19731: - Summary: TestFromClientSide#testCheckAndDeleteWithCompareOp and testNullQualifier are flakey Key: HBASE-19731 URL: https://issues.apache.org/jira/browse/HBASE-19731 Project: HBase Issue Type: Sub-task Components: test Reporter: stack Assignee: stack Priority: Critical Fix For: 2.0.0-beta-2 These two tests fail frequently locally; rarely does this suite pass. The failures are either of these two tests. Unfortunately, running the test standalone does not bring on the issue; you need to run the whole suite. In both cases, we have a Delete followed by a Put and then a checkAnd* -type operation which does a Get expecting to find the just-put Put, but it fails on occasion. Looks to be an mvcc issue or a Put going in at the same timestamp as the Delete. It's hard to debug given any added logging seems to make it all pass again. Seems this too is new in beta-1. Running tests against alpha-4 seems to pass. Doing a compare -- This message was sent by Atlassian JIRA (v6.4.14#64029)
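If the root cause is indeed a Put landing at the same timestamp as the preceding Delete (in HBase, a Delete at timestamp T masks a Put at the same T), one standard remedy — shown here as an assumed sketch, not the actual HBase fix — is a monotonic timestamp source that never issues the same value twice, even when the wall clock has not advanced between two mutations:

```java
// Hypothetical sketch: a clock that never hands out the same timestamp
// twice, so a Delete and a following Put can never collide on ts.
class MonotonicClock {
    private long last = -1;

    synchronized long next() {
        long now = System.currentTimeMillis();
        // If the wall clock has not advanced (or went backwards), bump
        // past the previously issued timestamp so callers never collide.
        last = Math.max(now, last + 1);
        return last;
    }
}
```

Two back-to-back mutations on a fast machine — exactly the beta-1 scenario described above, where faster mutation paths make a same-millisecond collision more likely — would then always receive distinct, strictly increasing timestamps.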
[jira] [Commented] (HBASE-19725) Build fails, unable to read hbase/checkstyle-suppressions.xml "invalid distance too far back"
[ https://issues.apache.org/jira/browse/HBASE-19725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315698#comment-16315698 ] stack commented on HBASE-19725: --- Thanks [~elserj]. On the face of it, seems checkstyle is generating a bad jar (1.8/later zlibs are more strict apparently; the exception is 'ZipException: invalid distance too far back'). Could play w/ alternate delivery of the checkstyle-suppressions.xml from a file rather than a jar location if we can't figure out the jar packaging issue. {code}javadoc:aggregate package{code} is gobbledegook (don't forget the bit where we then ask to suppress javadoc on the same command-line and we throw in a failsafe verify too) so not too worried about that one. The other, install+site+assembly, is our 'standard' RC generation command-line that we've been using for 'years' so that would be good to fix (see doc on how to generate an rc). I can use the workaround of not doing checkstyle on bundling of the tarball for now, just in the previous step where we populate the repo; means we get the benefit of checkstyle but skip it when assembling. Currently trying alpha4. It didn't seem to have this issue. Might give us a clue here (Nothing wrong w/ Jan work agree... I love it). > Build fails, unable to read hbase/checkstyle-suppressions.xml "invalid > distance too far back" > - > > Key: HBASE-19725 > URL: https://issues.apache.org/jira/browse/HBASE-19725 > Project: HBase > Issue Type: Sub-task >Reporter: stack >Assignee: stack >Priority: Blocker > Fix For: 2.0.0-beta-1 > > > Build is failing on me (Trying to cut beta-1 RC on branch-2). It is the first > time we go to use the jars made by hbase-checkstyle in the hbase-error-prone > module under 'build support' module when running the 'site' target. It is > trying to make the checkstyle report.
> I see that we find the right jar to read: > [DEBUG] The resource 'hbase/checkstyle-suppressions.xml' was found as > jar:file:/home/stack/rc/hbase-2.0.0-beta-1.20180107T061305Z/repository/org/apache/hbase/hbase-checkstyle/2.0.0-beta-1/hbase-checkstyle-2.0.0-beta-1.jar!/hbase/checkstyle-suppressions.xml. > But then it thinks the jar corrupt 'ZipException: invalid distance too far > back'. > Here is mvn output: > 12667058 [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-checkstyle-plugin:2.17:check (checkstyle) on > project hbase-error-prone: Failed during checkstyle execution: > Unable to process suppressions file location: > hbase/checkstyle-suppressions.xml: Cannot create file-based resource:invalid > distance too far back -> [Help 1] > 12667059 org.apache.maven.lifecycle.LifecycleExecutionException: Failed to > execute goal org.apache.maven.plugins:maven-checkstyle-plugin:2.17:check > (checkstyle) on project hbase-error-prone: Failed during checkstyle > execution > I'm running this command: > mvn -X install -DskipTests site assembly:single -Papache-release -Prelease > -Dmaven.repo.local=//home/stack/rc/hbase-2.0.0-beta-1.20180107T061305Z/repository > Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; > 2015-11-10T08:41:47-08:00) > Java version: 1.8.0_151, vendor: Oracle Corporation -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19685) Fix TestFSErrorsExposed#testFullSystemBubblesFSErrors
[ https://issues.apache.org/jira/browse/HBASE-19685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315666#comment-16315666 ] Chia-Ping Tsai commented on HBASE-19685: Ping for reviews. Trying to cleanup all flakies for 2.0 > Fix TestFSErrorsExposed#testFullSystemBubblesFSErrors > - > > Key: HBASE-19685 > URL: https://issues.apache.org/jira/browse/HBASE-19685 > Project: HBase > Issue Type: Bug > Components: test >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai > Fix For: 2.0.0-beta-2 > > Attachments: HBASE-19685.v0.patch > > > {code} > java.lang.AssertionError > at > org.apache.hadoop.hbase.regionserver.TestFSErrorsExposed.testFullSystemBubblesFSErrors(TestFSErrorsExposed.java:221) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19730) Backport HBASE-14497 Reverse Scan threw StackOverflow caused by readPt checking
[ https://issues.apache.org/jira/browse/HBASE-19730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-19730: --- Attachment: 19730-branch-1.2.patch > Backport HBASE-14497 Reverse Scan threw StackOverflow caused by readPt > checking > --- > > Key: HBASE-19730 > URL: https://issues.apache.org/jira/browse/HBASE-19730 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu >Assignee: Ted Yu > Fix For: 1.2.7 > > Attachments: 19730-branch-1.2.patch > > > HBASE-14497 fixed StackOverflowError involving reverse scan. > This issue is to backport the fix to branch-1.2 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19730) Backport HBASE-14497 Reverse Scan threw StackOverflow caused by readPt checking
[ https://issues.apache.org/jira/browse/HBASE-19730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-19730: --- Status: Patch Available (was: Open)
[jira] [Assigned] (HBASE-19730) Backport HBASE-14497 Reverse Scan threw StackOverflow caused by readPt checking
[ https://issues.apache.org/jira/browse/HBASE-19730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu reassigned HBASE-19730: -- Assignee: Ted Yu
[jira] [Created] (HBASE-19730) Backport HBASE-14497 Reverse Scan threw StackOverflow caused by readPt checking
Ted Yu created HBASE-19730: -- Summary: Backport HBASE-14497 Reverse Scan threw StackOverflow caused by readPt checking Key: HBASE-19730 URL: https://issues.apache.org/jira/browse/HBASE-19730 Project: HBase Issue Type: Bug Reporter: Ted Yu Fix For: 1.2.7 HBASE-14497 fixed StackOverflowError involving reverse scan. This issue is to backport the fix to branch-1.2 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Comment Edited] (HBASE-19728) Add lock to filesCompacting in all place.
[ https://issues.apache.org/jira/browse/HBASE-19728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315622#comment-16315622 ] Yu Li edited comment on HBASE-19728 at 1/8/18 3:28 AM: --- +1 was (Author: carp84): +1 Actually there're more places where {{filesCompacting}} is accessed w/o any {{lock.readLock()/writeLock()}} protection such as lines at bottom of the {{compactRecentForTestingAssumingDefaultPolicy}} method. > Add lock to filesCompacting in all place. > - > > Key: HBASE-19728 > URL: https://issues.apache.org/jira/browse/HBASE-19728 > Project: HBase > Issue Type: Bug >Reporter: binlijin >Assignee: binlijin > Attachments: HBASE-19728.master.001.patch > > > We find regionserver abort with the following exception: > 2017-05-09 17:40:06,369 FATAL > [regionserver/hadoop0349.et2.tbsite.net/11.251.152.199:16020-shortCompactions-1493026663275] > regionserver.HRegionServer: ABORTING region server > hadoop0349.et2.tbsite.net,16020,1493026637177: > Thread[regionserver/hadoop0349.et2.tbsite.net/11.251.152.199:16020-shortCompactions-1493026663275,5,main] > throw uncaught exception > java.lang.ArrayIndexOutOfBoundsException > at java.lang.System.arraycopy(Native Method) > at java.util.ArrayList.batchRemove(ArrayList.java:726) > at java.util.ArrayList.removeAll(ArrayList.java:690) > at > org.apache.hadoop.hbase.regionserver.HStore.finishCompactionRequest(HStore.java:1666) > at > org.apache.hadoop.hbase.regionserver.HStore.cancelRequestedCompaction(HStore.java:1656) > at > org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:504) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1147) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622) > at java.lang.Thread.run(Thread.java:834) > 2017-05-08 21:15:31,979 FATAL > [regionserver/hadoop1191.et2.tbsite.net/11.251.159.40:16020-longCompactions-1494249331978] > regionserver.HRegionServer: ABORTING region server > hadoop1191.et2.tbsite.net,16020,1493196567798: > Thread[regionserver/hadoop1191.et2.tbsite.net/11.251.159.40:16020-longCompactions-1494249331978,5,main] > throw uncaught exception > java.lang.IllegalArgumentException > at > com.google.common.base.Preconditions.checkArgument(Preconditions.java:76) > at > org.apache.hadoop.hbase.regionserver.compactions.RatioBasedCompactionPolicy.getCurrentEligibleFiles(RatioBasedCompactionPolicy.java:64) > at > org.apache.hadoop.hbase.regionserver.compactions.RatioBasedCompactionPolicy.preSelectCompactionForCoprocessor(RatioBasedCompactionPolicy.java:72) > at > org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.preSelect(DefaultStoreEngine.java:117) > at > org.apache.hadoop.hbase.regionserver.HStore.requestCompaction(HStore.java:1542) > at > org.apache.hadoop.hbase.regionserver.CompactSplitThread.selectCompaction(CompactSplitThread.java:362) > at > org.apache.hadoop.hbase.regionserver.CompactSplitThread.access$200(CompactSplitThread.java:58) > at > org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:491) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1147) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622) > at java.lang.Thread.run(Thread.java:834) > HStore#finishCompactionRequest does not acquire HStore#lock, so > HStore.replaceStoreFiles needs to synchronize on filesCompacting.
[jira] [Comment Edited] (HBASE-19728) Add lock to filesCompacting in all place.
[ https://issues.apache.org/jira/browse/HBASE-19728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315622#comment-16315622 ] Yu Li edited comment on HBASE-19728 at 1/8/18 3:27 AM: --- +1 Actually there're more places where {{filesCompacting}} is accessed w/o any {{lock.readLock()/writeLock()}} protection such as lines at bottom of the {{compactRecentForTestingAssumingDefaultPolicy}} method. was (Author: carp84): +1 Actually there're more places where {{filesCompacting}} is accessed w/o any lock protection such as lines at bottom of the {{compactRecentForTestingAssumingDefaultPolicy}} method.
[jira] [Commented] (HBASE-19728) Add lock to filesCompacting in all place.
[ https://issues.apache.org/jira/browse/HBASE-19728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315622#comment-16315622 ] Yu Li commented on HBASE-19728: --- +1 Actually there're more places where {{filesCompacting}} is accessed w/o any lock protection such as lines at bottom of the {{compactRecentForTestingAssumingDefaultPolicy}} method.
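The race under review can be reproduced outside HBase: a plain {{ArrayList}} mutated by two threads without a common monitor can throw exactly the {{ArrayIndexOutOfBoundsException}} from {{removeAll}} seen in the report. Below is a minimal, hypothetical sketch of the proposed fix; the class and method names are illustrative stand-ins for {{HStore#filesCompacting}}, not HBase's actual code.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Simplified model of the filesCompacting handling: the field is a plain
// ArrayList shared by compaction threads, so every read and mutation must
// hold the same monitor (here, the list itself).
public class FilesCompactingSketch {
    // Stand-in for HStore#filesCompacting (name is illustrative only).
    private final List<String> filesCompacting = new ArrayList<>();

    // Analogous to adding the selected files when a compaction is requested.
    void startCompaction(List<String> files) {
        synchronized (filesCompacting) {
            filesCompacting.addAll(files);
        }
    }

    // Analogous to finishing/cancelling a compaction request: without the
    // synchronized block, ArrayList#removeAll can race with a concurrent
    // addAll and throw ArrayIndexOutOfBoundsException, as in the stack
    // trace quoted above.
    void finishCompaction(List<String> files) {
        synchronized (filesCompacting) {
            filesCompacting.removeAll(files);
        }
    }

    int compactingCount() {
        synchronized (filesCompacting) {
            return filesCompacting.size();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        FilesCompactingSketch store = new FilesCompactingSketch();
        List<String> batch = Arrays.asList("hfile-1", "hfile-2");
        Thread requester = new Thread(() -> store.startCompaction(batch));
        Thread finisher = new Thread(() -> store.finishCompaction(batch));
        requester.start(); requester.join();
        finisher.start(); finisher.join();
        System.out.println(store.compactingCount()); // prints 0
    }
}
```

The point of the patch is simply that all such call sites, including the test-only ones Yu Li mentions, use the same monitor; any site that touches the list outside it reintroduces the race.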
[jira] [Commented] (HBASE-19696) Filter returning INCLUDE_AND_NEXT_COL doesn't skip remaining versions when scan has explicit columns
[ https://issues.apache.org/jira/browse/HBASE-19696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315617#comment-16315617 ] Zheng Hu commented on HBASE-19696: -- Filed issue HBASE-19729 for the mergeFilterResponse optimization. > Filter returning INCLUDE_AND_NEXT_COL doesn't skip remaining versions when > scan has explicit columns > > > Key: HBASE-19696 > URL: https://issues.apache.org/jira/browse/HBASE-19696 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0-beta-1 >Reporter: Ankit Singhal >Assignee: Ankit Singhal >Priority: Critical > Fix For: 2.0.0-beta-1 > > Attachments: HBASE-19696.patch, HBASE-19696_v1.patch, > HBASE-19696_v2.patch > > > INCLUDE_AND_NEXT_COL from filter doesn't skip remaining versions of the cell > if the scan has explicit columns. > This is because we use a column hint from a column tracker to prepare a cell > for seeking to next column but we are not updating column tracker with next > column when filter returns INCLUDE_AND_NEXT_COL. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (HBASE-19729) Optimize UserScanQueryMatcher#mergeFilterResponse
Zheng Hu created HBASE-19729: Summary: Optimize UserScanQueryMatcher#mergeFilterResponse Key: HBASE-19729 URL: https://issues.apache.org/jira/browse/HBASE-19729 Project: HBase Issue Type: Bug Reporter: Zheng Hu Assignee: Zheng Hu As we've discussed in HBASE-19696 https://issues.apache.org/jira/browse/HBASE-19696?focusedCommentId=16309644=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16309644 when (filterResponse, matchCode) = (INCLUDE_AND_SEEK_NEXT_ROW, INCLUDE) or (INCLUDE_AND_SEEK_NEXT_ROW, INCLUDE_AND_NEXT_COL) , we should return INCLUDE_AND_SEEK_NEXT_ROW as the merged match code. Will upload patches for all branches. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
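The merge rule proposed above (the coarser seek hint should win) can be sketched as a small stand-alone function. This is an illustration only: the enum mirrors a subset of the match codes named in the issue, not HBase's actual {{UserScanQueryMatcher#mergeFilterResponse}}, which handles many more codes.

```java
// Hypothetical sketch of merging a filter's response with the matcher's
// match code: INCLUDE_AND_SEEK_NEXT_ROW dominates INCLUDE_AND_NEXT_COL,
// which dominates plain INCLUDE, so the merged code is the "furthest seek".
public class MergeFilterResponseSketch {
    enum MatchCode { INCLUDE, INCLUDE_AND_NEXT_COL, INCLUDE_AND_SEEK_NEXT_ROW }

    // Returns the merged match code: whichever of the two seeks further.
    static MatchCode merge(MatchCode filterResponse, MatchCode matchCode) {
        if (filterResponse == MatchCode.INCLUDE_AND_SEEK_NEXT_ROW
            || matchCode == MatchCode.INCLUDE_AND_SEEK_NEXT_ROW) {
            return MatchCode.INCLUDE_AND_SEEK_NEXT_ROW;
        }
        if (filterResponse == MatchCode.INCLUDE_AND_NEXT_COL
            || matchCode == MatchCode.INCLUDE_AND_NEXT_COL) {
            return MatchCode.INCLUDE_AND_NEXT_COL;
        }
        return MatchCode.INCLUDE;
    }

    public static void main(String[] args) {
        // The two cases from the issue: filter says seek-next-row, matcher
        // says INCLUDE or INCLUDE_AND_NEXT_COL; both merge to seek-next-row.
        System.out.println(merge(MatchCode.INCLUDE_AND_SEEK_NEXT_ROW,
            MatchCode.INCLUDE)); // prints INCLUDE_AND_SEEK_NEXT_ROW
    }
}
```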
[jira] [Commented] (HBASE-19728) Add lock to filesCompacting in all place.
[ https://issues.apache.org/jira/browse/HBASE-19728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315614#comment-16315614 ] Ted Yu commented on HBASE-19728: lgtm. It would be nice if a test can be added.
[jira] [Resolved] (HBASE-19358) Improve the stability of splitting log when do fail over
[ https://issues.apache.org/jira/browse/HBASE-19358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yu Li resolved HBASE-19358. --- Resolution: Fixed Hadoop Flags: Reviewed Release Note: After HBASE-19358 we introduced a new property hbase.split.writer.creation.bounded to limit the opening writers for each WALSplitter. If set to true, we won't open any writer for recovered.edits until the entries accumulated in memory reaching hbase.regionserver.hlog.splitlog.buffersize (which defaults at 128M) and will write and close the file in one go instead of keeping the writer open. It's false by default and we recommend to set it to true if your cluster has a high region load (like more than 300 regions per RS), especially when you observed obvious NN/HDFS slow down during hbase (single RS or cluster) failover. Thanks for the note boss [~andrew.purt...@gmail.com] and will notice next time (the branch-2-v2 patch was quite close for commit but I was interrupted by something else thus left it over, my bad...) Pushed into branch-2 and added some release note. Please check the release note and feel free to amend it if necessary [~tianjingyun] [~Apache9], thanks. Closing issue, thanks all for review. 
> Improve the stability of splitting log when do fail over > > > Key: HBASE-19358 > URL: https://issues.apache.org/jira/browse/HBASE-19358 > Project: HBase > Issue Type: Improvement > Components: MTTR >Affects Versions: 0.98.24 >Reporter: Jingyun Tian >Assignee: Jingyun Tian > Fix For: 1.4.1, 1.5.0, 2.0.0-beta-2 > > Attachments: HBASE-18619-branch-2-v2.patch, > HBASE-19358-branch-1-v2.patch, HBASE-19358-branch-1-v3.patch, > HBASE-19358-branch-1.patch, HBASE-19358-branch-2-v3.patch, > HBASE-19358-v1.patch, HBASE-19358-v4.patch, HBASE-19358-v5.patch, > HBASE-19358-v6.patch, HBASE-19358-v7.patch, HBASE-19358-v8.patch, > HBASE-19358.patch > > > The way we splitting log now is like the following figure: > !https://issues.apache.org/jira/secure/attachment/12904506/split-logic-old.jpg! > The problem is the OutputSink will write the recovered edits during splitting > log, which means it will create one WriterAndPath for each region and retain > it until the end. If the cluster is small and the number of regions per rs is > large, it will create too many HDFS streams at the same time. Then it is > prone to failure since each datanode need to handle too many streams. > Thus I come up with a new way to split log. > !https://issues.apache.org/jira/secure/attachment/12904507/split-logic-new.jpg! > We try to cache all the recovered edits, but if it exceeds the MaxHeapUsage, > we will pick the largest EntryBuffer and write it to a file (close the writer > after finish). Then after we read all entries into memory, we will start a > writeAndCloseThreadPool, it starts a certain number of threads to write all > buffers to files. Thus it will not create HDFS streams more than > *_hbase.regionserver.hlog.splitlog.writer.threads_* we set. 
> The biggest benefit is we can control the number of streams we create during > splitting log, > it will not exceeds *_hbase.regionserver.wal.max.splitters * > hbase.regionserver.hlog.splitlog.writer.threads_*, but before it is > *_hbase.regionserver.wal.max.splitters * the number of region the hlog > contains_*. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
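The buffering idea in the description, which accumulates recovered edits per region in memory and, when a heap budget is exceeded, writes out the largest buffer and closes its file in one go, can be sketched as below. This is a single-threaded, hypothetical simplification (names and the byte accounting are illustrative), not HBase's actual WALSplitter/OutputSink, which flushes through a writeAndCloseThreadPool.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of bounded writer creation during log splitting:
// buffer edits per region; once the total buffered size exceeds the budget
// (standing in for hbase.regionserver.hlog.splitlog.buffersize), write the
// largest buffer and close its file immediately, so streams are short-lived
// instead of one long-lived stream per region.
public class BoundedSplitSketch {
    private final long maxHeapBytes;
    private long bufferedBytes = 0;
    private final Map<String, List<String>> buffers = new HashMap<>();
    final List<String> flushedRegions = new ArrayList<>(); // files written and closed

    BoundedSplitSketch(long maxHeapBytes) { this.maxHeapBytes = maxHeapBytes; }

    void append(String region, String edit) {
        buffers.computeIfAbsent(region, r -> new ArrayList<>()).add(edit);
        bufferedBytes += edit.length();
        // Over budget: pick off the largest buffers until we fit again.
        while (bufferedBytes > maxHeapBytes && !buffers.isEmpty()) {
            flushLargest();
        }
    }

    // "Write" the largest buffer in one go and drop it; a real implementation
    // would open, write, and close the recovered.edits file inside this call.
    private void flushLargest() {
        String region = buffers.entrySet().stream()
            .max(Comparator.comparingInt(
                (Map.Entry<String, List<String>> e) -> e.getValue().size()))
            .get().getKey();
        List<String> edits = buffers.remove(region);
        bufferedBytes -= edits.stream().mapToLong(String::length).sum();
        flushedRegions.add(region + ":" + edits.size()); // placeholder for an HDFS write
    }

    public static void main(String[] args) {
        BoundedSplitSketch splitter = new BoundedSplitSketch(10);
        splitter.append("region-a", "e1");
        splitter.append("region-a", "e2");
        splitter.append("region-b", "edit-three-is-long"); // trips the budget
        System.out.println(splitter.flushedRegions); // prints [region-a:2, region-b:1]
    }
}
```

The design choice the release note describes is visible here: writer lifetime is bounded by a single flush call rather than by the whole split, so concurrent open streams no longer scale with the number of regions in the WAL.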
[jira] [Commented] (HBASE-19726) Failed to start HMaster due to infinite retrying on meta assign
[ https://issues.apache.org/jira/browse/HBASE-19726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315602#comment-16315602 ] Duo Zhang commented on HBASE-19726: --- This seems also because of the problematic zk, but I tend to keep it open since the code is still a bit confusing to me. > Failed to start HMaster due to infinite retrying on meta assign > --- > > Key: HBASE-19726 > URL: https://issues.apache.org/jira/browse/HBASE-19726 > Project: HBase > Issue Type: Bug >Reporter: Duo Zhang > > This is what I got at first, an exception when trying to write something to > meta when meta has not been onlined yet. > {noformat} > 2018-01-07,21:03:14,389 INFO org.apache.hadoop.hbase.master.HMaster: Running > RecoverMetaProcedure to ensure proper hbase:meta deploy. > 2018-01-07,21:03:14,637 INFO > org.apache.hadoop.hbase.master.procedure.RecoverMetaProcedure: Start pid=1, > state=RUNNABLE:RECOVER_META_SPLIT_LOGS; RecoverMetaProcedure > failedMetaServer=null, splitWal=true > 2018-01-07,21:03:14,645 INFO org.apache.hadoop.hbase.master.MasterWalManager: > Log folder > hdfs://c402tst-community/hbase/c402tst-community/WALs/c4-hadoop-tst-st27.bj,38900,1515330173896 > belongs to an existing region server > 2018-01-07,21:03:14,646 INFO org.apache.hadoop.hbase.master.MasterWalManager: > Log folder > hdfs://c402tst-community/hbase/c402tst-community/WALs/c4-hadoop-tst-st29.bj,38900,1515330177232 > belongs to an existing region server > 2018-01-07,21:03:14,648 INFO > org.apache.hadoop.hbase.master.procedure.RecoverMetaProcedure: pid=1, > state=RUNNABLE:RECOVER_META_ASSIGN_REGIONS; RecoverMetaProcedure > failedMetaServer=null, splitWal=true; Retaining meta assignment to server=null > 2018-01-07,21:03:14,653 INFO > org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Initialized > subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; > AssignProcedure table=hbase:meta, region=1588230740}] > 2018-01-07,21:03:14,660 INFO > 
org.apache.hadoop.hbase.master.procedure.MasterProcedureScheduler: pid=2, > ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure > table=hbase:meta, region=1588230740 hbase:meta hbase:meta,,1.1588230740 > 2018-01-07,21:03:14,663 INFO > org.apache.hadoop.hbase.master.assignment.AssignProcedure: Start pid=2, > ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure > table=hbase:meta, region=1588230740; rit=OFFLINE, location=null; > forceNewPlan=false, retain=false > 2018-01-07,21:03:14,831 INFO > org.apache.hadoop.hbase.zookeeper.MetaTableLocator: Setting hbase:meta > (replicaId=0) location in ZooKeeper as > c4-hadoop-tst-st27.bj,38900,1515330173896 > 2018-01-07,21:03:14,841 INFO > org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure: Dispatch > pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure > table=hbase:meta, region=1588230740; rit=OPENING, > location=c4-hadoop-tst-st27.bj,38900,1515330173896 > 2018-01-07,21:03:14,992 INFO > org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher: Using > procedure batch rpc execution for > serverName=c4-hadoop-tst-st27.bj,38900,1515330173896 version=3145728 > 2018-01-07,21:03:15,593 ERROR > org.apache.hadoop.hbase.client.AsyncRequestFutureImpl: Cannot get replica 0 > location for > {"totalColumns":1,"row":"hbase:meta","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1515330195514}]},"ts":1515330195514} > 2018-01-07,21:03:15,594 WARN > org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure: > Retryable error trying to transition: pid=2, ppid=1, > state=RUNNABLE:REGION_TRANSITION_FINISH; AssignProcedure table=hbase:meta, > region=1588230740; rit=OPEN, > location=c4-hadoop-tst-st27.bj,38900,1515330173896 > org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 > action: IOException: 1 time, servers with issues: null > at > org.apache.hadoop.hbase.client.BatchErrors.makeException(BatchErrors.java:54) > at > 
org.apache.hadoop.hbase.client.AsyncRequestFutureImpl.getErrors(AsyncRequestFutureImpl.java:1250) > at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:457) > at org.apache.hadoop.hbase.client.HTable.put(HTable.java:570) > at > org.apache.hadoop.hbase.MetaTableAccessor.put(MetaTableAccessor.java:1450) > at > org.apache.hadoop.hbase.MetaTableAccessor.putToMetaTable(MetaTableAccessor.java:1439) > at > org.apache.hadoop.hbase.MetaTableAccessor.updateTableState(MetaTableAccessor.java:1785) > at > org.apache.hadoop.hbase.MetaTableAccessor.updateTableState(MetaTableAccessor.java:1151) > at > org.apache.hadoop.hbase.master.TableStateManager.udpateMetaState(TableStateManager.java:183) > at >
[jira] [Resolved] (HBASE-19727) RS keeps calling reportForDuty to backup HMaster and can not online properly
[ https://issues.apache.org/jira/browse/HBASE-19727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-19727. --- Resolution: Invalid Probably because I've been using a problematic zookeeper. Closing as Invalid for now. > RS keeps calling reportForDuty to backup HMaster and can not online properly > > > Key: HBASE-19727 > URL: https://issues.apache.org/jira/browse/HBASE-19727 > Project: HBase > Issue Type: Bug >Reporter: Duo Zhang > > I have started a cluster with 5 RSes, but only two RSes came online. > I've checked one of the logs, it keeps trying to report to the backup HMaster. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HBASE-19728) Add lock to filesCompacting in all place.
[ https://issues.apache.org/jira/browse/HBASE-19728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] binlijin reassigned HBASE-19728: Assignee: binlijin
[jira] [Updated] (HBASE-19728) Add lock to filesCompacting in all place.
[ https://issues.apache.org/jira/browse/HBASE-19728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] binlijin updated HBASE-19728: - Description: We observed regionserver aborts with the following exceptions: 2017-05-09 17:40:06,369 FATAL [regionserver/hadoop0349.et2.tbsite.net/11.251.152.199:16020-shortCompactions-1493026663275] regionserver.HRegionServer: ABORTING region server hadoop0349.et2.tbsite.net,16020,1493026637177: Thread[regionserver/hadoop0349.et2.tbsite.net/11.251.152.199:16020-shortCompactions-1493026663275,5,main] throw uncaught exception java.lang.ArrayIndexOutOfBoundsException at java.lang.System.arraycopy(Native Method) at java.util.ArrayList.batchRemove(ArrayList.java:726) at java.util.ArrayList.removeAll(ArrayList.java:690) at org.apache.hadoop.hbase.regionserver.HStore.finishCompactionRequest(HStore.java:1666) at org.apache.hadoop.hbase.regionserver.HStore.cancelRequestedCompaction(HStore.java:1656) at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:504) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1147) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622) at java.lang.Thread.run(Thread.java:834) 2017-05-08 21:15:31,979 FATAL [regionserver/hadoop1191.et2.tbsite.net/11.251.159.40:16020-longCompactions-1494249331978] regionserver.HRegionServer: ABORTING region server hadoop1191.et2.tbsite.net,16020,1493196567798: Thread[regionserver/hadoop1191.et2.tbsite.net/11.251.159.40:16020-longCompactions-1494249331978,5,main] throw uncaught exception java.lang.IllegalArgumentException at com.google.common.base.Preconditions.checkArgument(Preconditions.java:76) at org.apache.hadoop.hbase.regionserver.compactions.RatioBasedCompactionPolicy.getCurrentEligibleFiles(RatioBasedCompactionPolicy.java:64) at 
org.apache.hadoop.hbase.regionserver.compactions.RatioBasedCompactionPolicy.preSelectCompactionForCoprocessor(RatioBasedCompactionPolicy.java:72) at org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.preSelect(DefaultStoreEngine.java:117) at org.apache.hadoop.hbase.regionserver.HStore.requestCompaction(HStore.java:1542) at org.apache.hadoop.hbase.regionserver.CompactSplitThread.selectCompaction(CompactSplitThread.java:362) at org.apache.hadoop.hbase.regionserver.CompactSplitThread.access$200(CompactSplitThread.java:58) at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:491) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1147) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622) at java.lang.Thread.run(Thread.java:834) HStore#finishCompactionRequest does not acquire the HStore#lock, so HStore#replaceStoreFiles needs to synchronize on filesCompacting. 
[jira] [Updated] (HBASE-19728) Add lock to filesCompacting in all place.
[ https://issues.apache.org/jira/browse/HBASE-19728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] binlijin updated HBASE-19728: - Attachment: HBASE-19728.master.001.patch > Add lock to filesCompacting in all place. > - > > Key: HBASE-19728 > URL: https://issues.apache.org/jira/browse/HBASE-19728 > Project: HBase > Issue Type: Bug >Reporter: binlijin > Attachments: HBASE-19728.master.001.patch > > > We observed regionserver aborts with the following exceptions: > 2017-05-09 17:40:06,369 FATAL > [regionserver/hadoop0349.et2.tbsite.net/11.251.152.199:16020-shortCompactions-1493026663275] > regionserver.HRegionServer: ABORTING region server > hadoop0349.et2.tbsite.net,16020,1493026637177: > Thread[regionserver/hadoop0349.et2.tbsite.net/11.251.152.199:16020-shortCompactions-1493026663275,5,main] > throw uncaught exception > java.lang.ArrayIndexOutOfBoundsException > at java.lang.System.arraycopy(Native Method) > at java.util.ArrayList.batchRemove(ArrayList.java:726) > at java.util.ArrayList.removeAll(ArrayList.java:690) > at > org.apache.hadoop.hbase.regionserver.HStore.finishCompactionRequest(HStore.java:1666) > at > org.apache.hadoop.hbase.regionserver.HStore.cancelRequestedCompaction(HStore.java:1656) > at > org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:504) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1147) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622) > at java.lang.Thread.run(Thread.java:834) > 2017-05-08 21:15:31,979 FATAL > [regionserver/hadoop1191.et2.tbsite.net/11.251.159.40:16020-longCompactions-1494249331978] > regionserver.HRegionServer: ABORTING region server > hadoop1191.et2.tbsite.net,16020,1493196567798: > Thread[regionserver/hadoop1191.et2.tbsite.net/11.251.159.40:16020-longCompactions-1494249331978,5,main] > throw uncaught exception > 
java.lang.IllegalArgumentException > at > com.google.common.base.Preconditions.checkArgument(Preconditions.java:76) > at > org.apache.hadoop.hbase.regionserver.compactions.RatioBasedCompactionPolicy.getCurrentEligibleFiles(RatioBasedCompactionPolicy.java:64) > at > org.apache.hadoop.hbase.regionserver.compactions.RatioBasedCompactionPolicy.preSelectCompactionForCoprocessor(RatioBasedCompactionPolicy.java:72) > at > org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.preSelect(DefaultStoreEngine.java:117) > at > org.apache.hadoop.hbase.regionserver.HStore.requestCompaction(HStore.java:1542) > at > org.apache.hadoop.hbase.regionserver.CompactSplitThread.selectCompaction(CompactSplitThread.java:362) > at > org.apache.hadoop.hbase.regionserver.CompactSplitThread.access$200(CompactSplitThread.java:58) > at > org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:491) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1147) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622) > at java.lang.Thread.run(Thread.java:834) -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (HBASE-19728) Add lock to filesCompacting in all place.
binlijin created HBASE-19728: Summary: Add lock to filesCompacting in all place. Key: HBASE-19728 URL: https://issues.apache.org/jira/browse/HBASE-19728 Project: HBase Issue Type: Bug Reporter: binlijin We observed regionserver aborts with the following exceptions: 2017-05-09 17:40:06,369 FATAL [regionserver/hadoop0349.et2.tbsite.net/11.251.152.199:16020-shortCompactions-1493026663275] regionserver.HRegionServer: ABORTING region server hadoop0349.et2.tbsite.net,16020,1493026637177: Thread[regionserver/hadoop0349.et2.tbsite.net/11.251.152.199:16020-shortCompactions-1493026663275,5,main] throw uncaught exception java.lang.ArrayIndexOutOfBoundsException at java.lang.System.arraycopy(Native Method) at java.util.ArrayList.batchRemove(ArrayList.java:726) at java.util.ArrayList.removeAll(ArrayList.java:690) at org.apache.hadoop.hbase.regionserver.HStore.finishCompactionRequest(HStore.java:1666) at org.apache.hadoop.hbase.regionserver.HStore.cancelRequestedCompaction(HStore.java:1656) at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:504) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1147) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622) at java.lang.Thread.run(Thread.java:834) 2017-05-08 21:15:31,979 FATAL [regionserver/hadoop1191.et2.tbsite.net/11.251.159.40:16020-longCompactions-1494249331978] regionserver.HRegionServer: ABORTING region server hadoop1191.et2.tbsite.net,16020,1493196567798: Thread[regionserver/hadoop1191.et2.tbsite.net/11.251.159.40:16020-longCompactions-1494249331978,5,main] throw uncaught exception java.lang.IllegalArgumentException at com.google.common.base.Preconditions.checkArgument(Preconditions.java:76) at org.apache.hadoop.hbase.regionserver.compactions.RatioBasedCompactionPolicy.getCurrentEligibleFiles(RatioBasedCompactionPolicy.java:64) at 
org.apache.hadoop.hbase.regionserver.compactions.RatioBasedCompactionPolicy.preSelectCompactionForCoprocessor(RatioBasedCompactionPolicy.java:72)     at org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.preSelect(DefaultStoreEngine.java:117)     at org.apache.hadoop.hbase.regionserver.HStore.requestCompaction(HStore.java:1542)     at org.apache.hadoop.hbase.regionserver.CompactSplitThread.selectCompaction(CompactSplitThread.java:362)     at org.apache.hadoop.hbase.regionserver.CompactSplitThread.access$200(CompactSplitThread.java:58)     at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:491)     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1147)     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)     at java.lang.Thread.run(Thread.java:834) -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19712) Fix TestSnapshotQuotaObserverChore#testSnapshotSize
[ https://issues.apache.org/jira/browse/HBASE-19712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated HBASE-19712: --- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Thanks for the reviews. [~elserj] > Fix TestSnapshotQuotaObserverChore#testSnapshotSize > --- > > Key: HBASE-19712 > URL: https://issues.apache.org/jira/browse/HBASE-19712 > Project: HBase > Issue Type: Bug > Components: test >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai > Fix For: 2.0.0-beta-2 > > Attachments: HBASE-19712.v0.patch, HBASE-19712.v1.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19717) IntegrationTestDDLMasterFailover is using outdated values for DataBlockEncoding
[ https://issues.apache.org/jira/browse/HBASE-19717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Josh Elser updated HBASE-19717: --- Reporter: Romil Choksi (was: Sergey Soldatov) > IntegrationTestDDLMasterFailover is using outdated values for > DataBlockEncoding > --- > > Key: HBASE-19717 > URL: https://issues.apache.org/jira/browse/HBASE-19717 > Project: HBase > Issue Type: Bug > Components: integration tests >Affects Versions: 2.0.0-beta-1 >Reporter: Romil Choksi >Assignee: Sergey Soldatov > Fix For: 2.0.0-beta-2 > > Attachments: HBASE-19717-branch-2.patch > > > We have removed PREFIX_TREE data block encoding, but > IntegrationTestDDLMasterFailover is still using it. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19717) IntegrationTestDDLMasterFailover is using outdated values for DataBlockEncoding
[ https://issues.apache.org/jira/browse/HBASE-19717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315505#comment-16315505 ] Josh Elser commented on HBASE-19717: Updated reporter to be our Romil (came from $dayjob testing). > IntegrationTestDDLMasterFailover is using outdated values for > DataBlockEncoding > --- > > Key: HBASE-19717 > URL: https://issues.apache.org/jira/browse/HBASE-19717 > Project: HBase > Issue Type: Bug > Components: integration tests >Affects Versions: 2.0.0-beta-1 >Reporter: Romil Choksi >Assignee: Sergey Soldatov > Fix For: 2.0.0-beta-2 > > Attachments: HBASE-19717-branch-2.patch > > > We have removed PREFIX_TREE data block encoding, but > IntegrationTestDDLMasterFailover is still using it. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19714) `status 'detailed'` invokes nonexistent "getRegionsInTransition" method on ClusterStatus
[ https://issues.apache.org/jira/browse/HBASE-19714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315503#comment-16315503 ] Josh Elser commented on HBASE-19714: Thanks all! > `status 'detailed'` invokes nonexistent "getRegionsInTransition" method on > ClusterStatus > > > Key: HBASE-19714 > URL: https://issues.apache.org/jira/browse/HBASE-19714 > Project: HBase > Issue Type: Bug > Components: shell >Reporter: Josh Elser >Assignee: Josh Elser >Priority: Critical > Fix For: 2.0.0-beta-1 > > Attachments: HBASE-19714.001.branch-2.patch, > HBASE-19714.002.branch-2.patch > > > {noformat} > hbase(main):003:0> status 'detailed' > version 2.0.0-beta-1 > ERROR: undefined method `getRegionsInTransition' for > # > Did you mean? get_region_states_in_transition >getRegionStatesInTransition > Show cluster status. Can be 'summary', 'simple', 'detailed', or > 'replication'. The > default is 'summary'. Examples: > hbase> status > hbase> status 'simple' > hbase> status 'summary' > hbase> status 'detailed' > hbase> status 'replication' > hbase> status 'replication', 'source' > hbase> status 'replication', 'sink' > Took 0.1814 seconds > {noformat} > Looks like the method is now {{getRegionStatesInTransition}} instead of > {{getRegionsInTransition}}. > FYI [~stack]. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19712) Fix TestSnapshotQuotaObserverChore#testSnapshotSize
[ https://issues.apache.org/jira/browse/HBASE-19712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315502#comment-16315502 ] Josh Elser commented on HBASE-19712: LGTM Thanks for taking the time to fix this! > Fix TestSnapshotQuotaObserverChore#testSnapshotSize > --- > > Key: HBASE-19712 > URL: https://issues.apache.org/jira/browse/HBASE-19712 > Project: HBase > Issue Type: Bug > Components: test >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai > Fix For: 2.0.0-beta-2 > > Attachments: HBASE-19712.v0.patch, HBASE-19712.v1.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19725) Build fails, unable to read hbase/checkstyle-suppressions.xml "invalid distance too far back"
[ https://issues.apache.org/jira/browse/HBASE-19725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315494#comment-16315494 ] Josh Elser commented on HBASE-19725: I spent a bit of time digging into this issue (at least, the one that Ted filed) over Xmas break. Sadly, to no avail. I'm not entirely convinced this isn't a Maven issue itself. I'm guessing that {{mvn javadoc:aggregate package}} and {{mvn install assembly:single -Papache-release}} are both triggering some similar codepath. I didn't see anything obviously wrong with the changes that Jan had made. > Build fails, unable to read hbase/checkstyle-suppressions.xml "invalid > distance too far back" > - > > Key: HBASE-19725 > URL: https://issues.apache.org/jira/browse/HBASE-19725 > Project: HBase > Issue Type: Sub-task >Reporter: stack >Assignee: stack >Priority: Blocker > Fix For: 2.0.0-beta-1 > > > Build is failing on me (Trying to cut beta-1 RC on branch-2). It is the first > time we go to use the jars made by hbase-checkstyle in the hbase-error-prone > module under 'build support' module when running the 'site' target. It is > trying to make the checkstyle report. > I see that we find the right jar to read: > [DEBUG] The resource 'hbase/checkstyle-suppressions.xml' was found as > jar:file:/home/stack/rc/hbase-2.0.0-beta-1.20180107T061305Z/repository/org/apache/hbase/hbase-checkstyle/2.0.0-beta-1/hbase-checkstyle-2.0.0-beta-1.jar!/hbase/checkstyle-suppressions.xml. > But then it thinks the jar is corrupt 'ZipException: invalid distance too far > back'. 
> Here is mvn output: > 12667058 [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-checkstyle-plugin:2.17:check (checkstyle) on > project hbase-error-prone: Failed during checkstyle execution: > Unable to process suppressions file location: > hbase/checkstyle-suppressions.xml: Cannot create file-based resource:invalid > distance too far back -> [Help 1] > 12667059 org.apache.maven.lifecycle.LifecycleExecutionException: Failed to > execute goal org.apache.maven.plugins:maven-checkstyle-plugin:2.17:check > (checkstyle) on project hbase-error-prone: Failed during checkstyle > execution > I'm running this command: > mvn -X install -DskipTests site assembly:single -Papache-release -Prelease > -Dmaven.repo.local=//home/stack/rc/hbase-2.0.0-beta-1.20180107T061305Z/repository > Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; > 2015-11-10T08:41:47-08:00) > Java version: 1.8.0_151, vendor: Oracle Corporation -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-7003) Move remaining examples into hbase-examples
[ https://issues.apache.org/jira/browse/HBASE-7003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315379#comment-16315379 ] Hadoop QA commented on HBASE-7003: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 53s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 9s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 24s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 6m 0s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 8s{color} | {color:green} The patch hbase-checkstyle passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 4s{color} | {color:green} hbase-server: The patch generated 0 new + 0 unchanged - 27 fixed = 0 total (was 27) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 14s{color} | {color:red} hbase-examples: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 34s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 19m 58s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 8s{color} | {color:green} hbase-checkstyle in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green}100m 21s{color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 33s{color} | {color:green} hbase-examples in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 50s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}144m 39s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 | | JIRA Issue | HBASE-7003 | | JIRA Patch URL |
[jira] [Commented] (HBASE-19725) Build fails, unable to read hbase/checkstyle-suppressions.xml "invalid distance too far back"
[ https://issues.apache.org/jira/browse/HBASE-19725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315378#comment-16315378 ] stack commented on HBASE-19725: --- Ted's maven goal salad is no good to me; it is missing 'site'. For beta1, I can skip the checkstyle report for now while I figure out what's up here. > Build fails, unable to read hbase/checkstyle-suppressions.xml "invalid > distance too far back" > - > > Key: HBASE-19725 > URL: https://issues.apache.org/jira/browse/HBASE-19725 > Project: HBase > Issue Type: Sub-task >Reporter: stack >Assignee: stack >Priority: Blocker > Fix For: 2.0.0-beta-1 > > > Build is failing on me (Trying to cut beta-1 RC on branch-2). It is the first > time we go to use the jars made by hbase-checkstyle in the hbase-error-prone > module under 'build support' module when running the 'site' target. It is > trying to make the checkstyle report. > I see that we find the right jar to read: > [DEBUG] The resource 'hbase/checkstyle-suppressions.xml' was found as > jar:file:/home/stack/rc/hbase-2.0.0-beta-1.20180107T061305Z/repository/org/apache/hbase/hbase-checkstyle/2.0.0-beta-1/hbase-checkstyle-2.0.0-beta-1.jar!/hbase/checkstyle-suppressions.xml. > But then it thinks the jar is corrupt 'ZipException: invalid distance too far > back'. 
> Here is mvn output: > 12667058 [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-checkstyle-plugin:2.17:check (checkstyle) on > project hbase-error-prone: Failed during checkstyle execution: > Unable to process suppressions file location: > hbase/checkstyle-suppressions.xml: Cannot create file-based resource:invalid > distance too far back -> [Help 1] > 12667059 org.apache.maven.lifecycle.LifecycleExecutionException: Failed to > execute goal org.apache.maven.plugins:maven-checkstyle-plugin:2.17:check > (checkstyle) on project hbase-error-prone: Failed during checkstyle > execution > I'm running this command: > mvn -X install -DskipTests site assembly:single -Papache-release -Prelease > -Dmaven.repo.local=//home/stack/rc/hbase-2.0.0-beta-1.20180107T061305Z/repository > Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; > 2015-11-10T08:41:47-08:00) > Java version: 1.8.0_151, vendor: Oracle Corporation -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19690) Maven assembly goal fails in master branch with 'invalid distance too far back'
[ https://issues.apache.org/jira/browse/HBASE-19690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315375#comment-16315375 ] stack commented on HBASE-19690: --- You don't have a site target so no good to me. > Maven assembly goal fails in master branch with 'invalid distance too far > back' > --- > > Key: HBASE-19690 > URL: https://issues.apache.org/jira/browse/HBASE-19690 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu > > I use the following command in master branch : > {code} > mvn clean verify install javadoc:aggregate package assembly:single > -DskipTests=true -Dmaven.javadoc.skip=true -Dhadoop.profile=3.0 > {code} > It fails with the following error: > {code} > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-checkstyle-plugin:2.17:check (checkstyle) on > project hbase-error-prone: Failed during checkstyle execution: Unable to > process suppressions file location: hbase/checkstyle-suppressions.xml: Cannot > create file-based resource:invalid distance too far back -> [Help 1] > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19690) Maven assembly goal fails in master branch with 'invalid distance too far back'
[ https://issues.apache.org/jira/browse/HBASE-19690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315363#comment-16315363 ] stack commented on HBASE-19690: --- Why verify? Why aggregate javadoc when the pom does that? And then why skip javadoc? Where did this assortment of a command come from? Did it work from scratch, or was it a continuation? Does it work with the default profile? I can try it, but this is unorthodox, building takes time, and perhaps you have answers? > Maven assembly goal fails in master branch with 'invalid distance too far > back' > --- > > Key: HBASE-19690 > URL: https://issues.apache.org/jira/browse/HBASE-19690 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu > > I use the following command in master branch : > {code} > mvn clean verify install javadoc:aggregate package assembly:single > -DskipTests=true -Dmaven.javadoc.skip=true -Dhadoop.profile=3.0 > {code} > It fails with the following error: > {code} > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-checkstyle-plugin:2.17:check (checkstyle) on > project hbase-error-prone: Failed during checkstyle execution: Unable to > process suppressions file location: hbase/checkstyle-suppressions.xml: Cannot > create file-based resource:invalid distance too far back -> [Help 1] > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Comment Edited] (HBASE-19690) Maven assembly goal fails in master branch with 'invalid distance too far back'
[ https://issues.apache.org/jira/browse/HBASE-19690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16309102#comment-16309102 ] Ted Yu edited comment on HBASE-19690 at 1/7/18 4:03 PM: The command passed after dropping the package goal (on master branch). Here was the command: {code} mvn -X verify install javadoc:aggregate assembly:single -DskipTests=true -Dmaven.javadoc.skip=true -Dhadoop.profile=3.0 {code} I was trying to build a tarball for testing. was (Author: yuzhih...@gmail.com): The command passed by dropping package goal. > Maven assembly goal fails in master branch with 'invalid distance too far > back' > --- > > Key: HBASE-19690 > URL: https://issues.apache.org/jira/browse/HBASE-19690 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu > > I use the following command in master branch : > {code} > mvn clean verify install javadoc:aggregate package assembly:single > -DskipTests=true -Dmaven.javadoc.skip=true -Dhadoop.profile=3.0 > {code} > It fails with the following error: > {code} > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-checkstyle-plugin:2.17:check (checkstyle) on > project hbase-error-prone: Failed during checkstyle execution: Unable to > process suppressions file location: hbase/checkstyle-suppressions.xml: Cannot > create file-based resource:invalid distance too far back -> [Help 1] > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19696) Filter returning INCLUDE_AND_NEXT_COL doesn't skip remaining versions when scan has explicit columns
[ https://issues.apache.org/jira/browse/HBASE-19696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-19696: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: (was: 2.0.0-beta-2) 2.0.0-beta-1 Status: Resolved (was: Patch Available) Thanks for the patch, Ankit. Thanks all for the reviews. > Filter returning INCLUDE_AND_NEXT_COL doesn't skip remaining versions when > scan has explicit columns > > > Key: HBASE-19696 > URL: https://issues.apache.org/jira/browse/HBASE-19696 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0-beta-1 >Reporter: Ankit Singhal >Assignee: Ankit Singhal >Priority: Critical > Fix For: 2.0.0-beta-1 > > Attachments: HBASE-19696.patch, HBASE-19696_v1.patch, > HBASE-19696_v2.patch > > > INCLUDE_AND_NEXT_COL from filter doesn't skip remaining versions of the cell > if the scan has explicit columns. > This is because we use a column hint from a column tracker to prepare a cell > for seeking to next column but we are not updating column tracker with next > column when filter returns INCLUDE_AND_NEXT_COL. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
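A rough, self-contained illustration of the bug and fix (the names below are hypothetical toys, not the actual ScanQueryMatcher/ColumnTracker code): when the filter answers INCLUDE_AND_NEXT_COL, the matcher must also mark the current column as done in its column tracker; otherwise the remaining versions of that column still pass through on an explicit-columns scan.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy matcher: cells are "column/version" strings in scan order (newest
// version first). Once a column is marked done in the tracker, its
// remaining (older) versions are skipped instead of being returned.
class ToyMatcher {
  private final Set<String> doneColumns = new HashSet<>();

  // includeAndNextCol models the filter returning INCLUDE_AND_NEXT_COL.
  List<String> scan(List<String> cells, boolean includeAndNextCol) {
    List<String> out = new ArrayList<>();
    for (String cell : cells) {
      String column = cell.split("/")[0];
      if (doneColumns.contains(column)) {
        continue; // skip remaining versions of a finished column
      }
      out.add(cell);
      if (includeAndNextCol) {
        // The missing step in the bug: update the tracker so later
        // versions of this column are skipped, not just seek past
        // the current cell using the column hint.
        doneColumns.add(column);
      }
    }
    return out;
  }

  public static void main(String[] args) {
    // Three versions of c1; only the newest should come back.
    System.out.println(new ToyMatcher()
        .scan(Arrays.asList("c1/3", "c1/2", "c1/1", "c2/1"), true));
  }
}
```

Without the tracker update (`doneColumns.add`), all three versions of c1 would be emitted, which is the behavior the patch corrects.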
[jira] [Commented] (HBASE-19424) Metrics servlet doesn't work on head of branch-1
[ https://issues.apache.org/jira/browse/HBASE-19424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315342#comment-16315342 ] Toshihiro Suzuki commented on HBASE-19424: -- [~appy] Yes. I applied the patch to my env and then the NullPointerException didn't occur. > Metrics servlet doesn't work on head of branch-1 > > > Key: HBASE-19424 > URL: https://issues.apache.org/jira/browse/HBASE-19424 > Project: HBase > Issue Type: Bug >Affects Versions: 1.4.0 >Reporter: Andrew Purtell >Assignee: Toshihiro Suzuki >Priority: Minor > Fix For: 1.4.1, 1.5.0 > > Attachments: HBASE-19424.branch-1.patch > > > In branch-1 at least we put up a servlet on "/metrics" that is Hadoop's > MetricsServlet. However HBase users are expected to pick up metrics via > "/jmx". We don't mention "/metrics" or link to it on the UI. If you attempt > to access "/metrics" with head of branch-1 it errors out due to a NPE > {noformat} > 2017-12-04 16:06:37,403 ERROR [1874557409@qtp-1910896157-3] mortbay.log: > /metrics > java.lang.NullPointerException > at > org.apache.hadoop.http.HttpServer2.isInstrumentationAccessAllowed(HttpServer2.java:1049) > at > org.apache.hadoop.metrics.MetricsServlet.doGet(MetricsServlet.java:109) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
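The NPE above comes from the servlet dereferencing server state that was never wired up on this code path. A minimal sketch of the defensive pattern (hypothetical names, not the Hadoop HttpServer2 code): check the state and answer with an error status rather than letting the NullPointerException escape to the servlet container.

```java
// Toy sketch: a servlet-like handler that guards against uninitialized
// server configuration instead of throwing a NullPointerException.
class ToyMetricsHandler {
  private final Object conf; // may be null if the server never set it

  ToyMetricsHandler(Object conf) {
    this.conf = conf;
  }

  String doGet() {
    if (conf == null) {
      // Fail gracefully, mirroring an HTTP 503, rather than a stack trace.
      return "503 metrics subsystem not initialized";
    }
    return "200 ok";
  }

  public static void main(String[] args) {
    System.out.println(new ToyMetricsHandler(null).doGet());
    System.out.println(new ToyMetricsHandler(new Object()).doGet());
  }
}
```

The alternative taken in practice for branch-1 was simply not exposing the unsupported "/metrics" endpoint, since "/jmx" is the documented way to read metrics.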
[jira] [Commented] (HBASE-19196) Release hbase-2.0.0-beta-1; the "Finish-line" release
[ https://issues.apache.org/jira/browse/HBASE-19196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315337#comment-16315337 ] stack commented on HBASE-19196: --- Having trouble making new RC. Build fails: HBASE-19725. Up on dev list, the flakies are back to burn us. Looking at these... > Release hbase-2.0.0-beta-1; the "Finish-line" release > - > > Key: HBASE-19196 > URL: https://issues.apache.org/jira/browse/HBASE-19196 > Project: HBase > Issue Type: Bug >Reporter: stack >Priority: Blocker > Fix For: 2.0.0-beta-1 > > > APIs done, but external facing and Coprocessors. Done w/ features. Bug fixes > only from here on out. There'll be a beta-2 but that is about rolling upgrade > and bug fixes only. Then our first 2.0.0 Release Candidate. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19725) Build fails, unable to read hbase/checkstyle-suppressions.xml "invalid distance too far back"
[ https://issues.apache.org/jira/browse/HBASE-19725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315335#comment-16315335 ] stack commented on HBASE-19725: --- Looking at Misty's fancy script at dev-support/jenkins-scripts/generate-hbase-website.sh, she does install in one step, then site in the following one. Building the tgz with assembly, we need install + site on the same line along w/ assembly. Passing -Dcheckstyle.skip=true I can get the tgz to build (it's still running). It means we don't have a checkstyle report, but we'll still get the checkstyle benefit because the tgz is done after a clean install that checks all is good. > Build fails, unable to read hbase/checkstyle-suppressions.xml "invalid > distance too far back" > - > > Key: HBASE-19725 > URL: https://issues.apache.org/jira/browse/HBASE-19725 > Project: HBase > Issue Type: Sub-task >Reporter: stack >Assignee: stack >Priority: Blocker > Fix For: 2.0.0-beta-1 > > > Build is failing on me (Trying to cut beta-1 RC on branch-2). It is first > time we go to use the jars made by hbase-checkstyle in the hbase-error-prone > module under 'build support' module when running the 'site' target. It is > trying to make the checkstyle report. > I see that we find the right jar to read: > [DEBUG] The resource 'hbase/checkstyle-suppressions.xml' was found as > jar:file:/home/stack/rc/hbase-2.0.0-beta-1.20180107T061305Z/repository/org/apache/hbase/hbase-checkstyle/2.0.0-beta-1/hbase-checkstyle-2.0.0-beta-1.jar!/hbase/checkstyle-suppressions.xml. > But then it thinks the jar corrupt 'ZipException: invalid distance too far > back'. 
> Here is mvn output: > 12667058 [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-checkstyle-plugin:2.17:check (checkstyle) on > project hbase-error-prone: Failed during checkstyle execution: > Unable to process suppressions file location: > hbase/checkstyle-suppressions.xml: Cannot create file-based resource:invalid > distance too far back -> [Help 1] > 12667059 org.apache.maven.lifecycle.LifecycleExecutionException: Failed to > execute goal org.apache.maven.plugins:maven-checkstyle-plugin:2.17:check > (checkstyle) on project hbase-error-prone: Failed during checkstyle > execution > I'm running this command: > mvn -X install -DskipTests site assembly:single -Papache-release -Prelease > -Dmaven.repo.local=//home/stack/rc/hbase-2.0.0-beta-1.20180107T061305Z/repository > Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; > 2015-11-10T08:41:47-08:00) > Java version: 1.8.0_151, vendor: Oracle Corporation -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19696) Filter returning INCLUDE_AND_NEXT_COL doesn't skip remaining versions when scan has explicit columns
[ https://issues.apache.org/jira/browse/HBASE-19696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315333#comment-16315333 ] Anoop Sam John commented on HBASE-19696: +1 > Filter returning INCLUDE_AND_NEXT_COL doesn't skip remaining versions when > scan has explicit columns > > > Key: HBASE-19696 > URL: https://issues.apache.org/jira/browse/HBASE-19696 > Project: HBase > Issue Type: Bug >Affects Versions: 2.0.0-beta-1 >Reporter: Ankit Singhal >Assignee: Ankit Singhal >Priority: Critical > Fix For: 2.0.0-beta-2 > > Attachments: HBASE-19696.patch, HBASE-19696_v1.patch, > HBASE-19696_v2.patch > > > INCLUDE_AND_NEXT_COL from filter doesn't skip remaining versions of the cell > if the scan has explicit columns. > This is because we use a column hint from a column tracker to prepare a cell > for seeking to next column but we are not updating column tracker with next > column when filter returns INCLUDE_AND_NEXT_COL. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19541) Remove unnecessary semicolons in hbase-server
[ https://issues.apache.org/jira/browse/HBASE-19541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315327#comment-16315327 ] Hadoop QA commented on HBASE-19541: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 38 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 57s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 19s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 6m 4s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 16s{color} | {color:green} hbase-server: The patch generated 0 new + 854 unchanged - 12 fixed = 854 total (was 866) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 36s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 20m 8s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 96m 40s{color} | {color:green} hbase-server in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}137m 0s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 | | JIRA Issue | HBASE-19541 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12904977/HBASE-19541.master.003.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 5fa0dd22aa8d 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh | | git revision | master / 8a5b1538c8 | | maven | version: Apache Maven 3.5.2 (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) | | Default Java | 1.8.0_151 | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/10923/testReport/ | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/10923/console | | Powered by | Apache Yetus 0.6.0 http://yetus.apache.org | This message was automatically generated. > Remove unnecessary semicolons in hbase-server > - > > Key: HBASE-19541 >
[jira] [Commented] (HBASE-19713) Enable TestInterfaceAudienceAnnotations
[ https://issues.apache.org/jira/browse/HBASE-19713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315322#comment-16315322 ] Hadoop QA commented on HBASE-19713: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 12 new or modified test files. {color} | || || || || {color:brown} branch-2 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 33s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 13s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 43s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 8m 45s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 24s{color} | {color:green} branch-2 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} The patch hbase-common passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} The patch hbase-hadoop-compat passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} hbase-client: The patch generated 0 new + 66 unchanged - 3 fixed = 66 total (was 69) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} The patch hbase-zookeeper passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} The patch hbase-replication passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} hbase-http: The patch generated 0 new + 27 unchanged - 4 fixed = 27 total (was 31) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 18s{color} | {color:green} hbase-server: The patch generated 0 new + 912 unchanged - 19 fixed = 912 total (was 931) {color} | | 
{color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} hbase-mapreduce: The patch generated 0 new + 141 unchanged - 1 fixed = 141 total (was 142) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s{color} | {color:green} The patch hbase-thrift passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} hbase-endpoint: The patch generated 0 new + 15 unchanged - 2 fixed = 15 total (was 17) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s{color} | {color:green} The patch hbase-it passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} The patch hbase-rest passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} The patch hbase-examples passed checkstyle {color} | |
[jira] [Commented] (HBASE-7003) Move remaining examples into hbase-examples
[ https://issues.apache.org/jira/browse/HBASE-7003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315316#comment-16315316 ] Jan Hentschel commented on HBASE-7003: -- -002: Addressed Checkstyle and whitespace issues found in build. > Move remaining examples into hbase-examples > --- > > Key: HBASE-7003 > URL: https://issues.apache.org/jira/browse/HBASE-7003 > Project: HBase > Issue Type: Improvement >Affects Versions: 0.95.2 >Reporter: Sergey Shelukhin >Assignee: Jan Hentschel > Labels: beginner > Attachments: HBASE-7003.master.001.patch, HBASE-7003.master.002.patch > > > There's still thrift2 directory under non-built examples; there are also some > examples noted in the original jira. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-7003) Move remaining examples into hbase-examples
[ https://issues.apache.org/jira/browse/HBASE-7003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Hentschel updated HBASE-7003: - Attachment: HBASE-7003.master.002.patch > Move remaining examples into hbase-examples > --- > > Key: HBASE-7003 > URL: https://issues.apache.org/jira/browse/HBASE-7003 > Project: HBase > Issue Type: Improvement >Affects Versions: 0.95.2 >Reporter: Sergey Shelukhin >Assignee: Jan Hentschel > Labels: beginner > Attachments: HBASE-7003.master.001.patch, HBASE-7003.master.002.patch > > > There's still thrift2 directory under non-built examples; there are also some > examples noted in the original jira. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19720) Rename WALKey#getTabnename to WALKey#getTableName
[ https://issues.apache.org/jira/browse/HBASE-19720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315296#comment-16315296 ] Hadoop QA commented on HBASE-19720: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 7 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 49s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 25s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 6m 12s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 8s{color} | {color:green} hbase-server: The patch generated 0 new + 115 unchanged - 2 fixed = 115 total (was 117) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} The patch hbase-mapreduce passed checkstyle {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 48s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 20m 14s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green}114m 59s{color} | {color:green} hbase-server in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 38s{color} | {color:green} hbase-mapreduce in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 37s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}169m 36s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 | | JIRA Issue | HBASE-19720 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12904973/HBASE-19720.v0.patch | | Optional Tests | asflicense javac javadoc unit findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux d9eecc368970 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 15:49:21 UTC 2017 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh | | git revision | master / 228d7a5a46 | | maven | version: Apache
[jira] [Updated] (HBASE-19674) make_patch.sh version increment fails
[ https://issues.apache.org/jira/browse/HBASE-19674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Hentschel updated HBASE-19674: -- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) +1. Committed to master. [~nielsbasjes] Thanks for the patch. > make_patch.sh version increment fails > - > > Key: HBASE-19674 > URL: https://issues.apache.org/jira/browse/HBASE-19674 > Project: HBase > Issue Type: Improvement >Reporter: Niels Basjes >Assignee: Niels Basjes > Fix For: 3.0.0 > > Attachments: HBASE-19674.20171230-131310.patch, > HBASE-19674.20171230-152443.patch, HBASE-19674.20180103-160831.patch > > > I have 5 things in the {{make_patch.sh}} script where I see room for > improvement: > 1) BUG: > Assume my working branch is called {{HBASE-19673}} > Now if I run > {{dev-support/make_patch.sh -b origin/branch-1}} > a patch is created with the name > {{~/patches/HBASE-19673.v1.branch-1.patch}} > When I run the same command again the version is not incremented. > The reason is that the script checks for {{HBASE-19673.v1.patch}} which is > without the branch name. > 2) Messy: The first patch created does NOT include the version tag at all. > 3) Messy: The version starts with '1' so when we reach patch '10' they will > be ordered incorrectly in Jira (which is based on name) > 4) New feature: I personally prefer using the timestamp as the 'version' of > the patch because these are much easier to order. > 5) Messy: If you for example only have one file {{HBASE-19674.v05.patch}} > then the next file generated will be {{HBASE-19674.v01.patch}} instead of the > expected {{HBASE-19674.v06.patch}} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
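Point 3 in the list above is easy to demonstrate: JIRA orders attachments by name, i.e. lexicographically, so an unpadded "v10" sorts before "v2". A small standalone sketch (the file names are examples, not actual attachments):

```java
import java.util.Arrays;

// Illustrates why unpadded version numbers order incorrectly when
// sorted by name, and why zero-padding (or a timestamp) fixes it.
public class PatchNameOrdering {
    public static void main(String[] args) {
        String[] unpadded = {"HBASE-19674.v2.patch", "HBASE-19674.v10.patch"};
        Arrays.sort(unpadded);
        // Lexicographically "v10" < "v2" (the '1' compares before '2'),
        // so v10 sorts first -- the wrong order for a reader.
        System.out.println(Arrays.toString(unpadded));

        String[] padded = {"HBASE-19674.v02.patch", "HBASE-19674.v10.patch"};
        Arrays.sort(padded);
        // Zero-padded versions sort in the intended numeric order.
        System.out.println(Arrays.toString(padded));
    }
}
```

A timestamp such as the `20171230-131310` infix already used by the attached patches gives the same property, since fixed-width timestamps also sort lexicographically in chronological order.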
[jira] [Commented] (HBASE-19725) Build fails, unable to read hbase/checkstyle-suppressions.xml "invalid distance too far back"
[ https://issues.apache.org/jira/browse/HBASE-19725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315284#comment-16315284 ] stack commented on HBASE-19725: --- Excluding checkstyle report in top-level pom doesn't work. > Build fails, unable to read hbase/checkstyle-suppressions.xml "invalid > distance too far back" > - > > Key: HBASE-19725 > URL: https://issues.apache.org/jira/browse/HBASE-19725 > Project: HBase > Issue Type: Sub-task >Reporter: stack >Assignee: stack >Priority: Blocker > Fix For: 2.0.0-beta-1 > > > Build is failing on me (Trying to cut beta-1 RC on branch-2). It is first > time we go to use the jars made by hbase-checkstyle in the hbase-error-prone > module under 'build support' module when running the 'site' target. It is > trying to make the checkstyle report. > I see that we find the right jar to read: > [DEBUG] The resource 'hbase/checkstyle-suppressions.xml' was found as > jar:file:/home/stack/rc/hbase-2.0.0-beta-1.20180107T061305Z/repository/org/apache/hbase/hbase-checkstyle/2.0.0-beta-1/hbase-checkstyle-2.0.0-beta-1.jar!/hbase/checkstyle-suppressions.xml. > But then it thinks the jar corrupt 'ZipException: invalid distance too far > back'. 
> Here is mvn output: > 12667058 [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-checkstyle-plugin:2.17:check (checkstyle) on > project hbase-error-prone: Failed during checkstyle execution: > Unable to process suppressions file location: > hbase/checkstyle-suppressions.xml: Cannot create file-based resource:invalid > distance too far back -> [Help 1] > 12667059 org.apache.maven.lifecycle.LifecycleExecutionException: Failed to > execute goal org.apache.maven.plugins:maven-checkstyle-plugin:2.17:check > (checkstyle) on project hbase-error-prone: Failed during checkstyle > execution > I'm running this command: > mvn -X install -DskipTests site assembly:single -Papache-release -Prelease > -Dmaven.repo.local=//home/stack/rc/hbase-2.0.0-beta-1.20180107T061305Z/repository > Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; > 2015-11-10T08:41:47-08:00) > Java version: 1.8.0_151, vendor: Oracle Corporation -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19723) hbase-thrift declares slf4j-api twice
[ https://issues.apache.org/jira/browse/HBASE-19723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315280#comment-16315280 ] Hudson commented on HBASE-19723: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4360 (See [https://builds.apache.org/job/HBase-Trunk_matrix/4360/]) HBASE-19723 Removed duplicated dependency slf4j-api from hbase-thrift (jan.hentschel: rev 8a5b1538c820cc04782087fb233f668aa4c1c13a) * (edit) hbase-thrift/pom.xml > hbase-thrift declares slf4j-api twice > - > > Key: HBASE-19723 > URL: https://issues.apache.org/jira/browse/HBASE-19723 > Project: HBase > Issue Type: Improvement > Components: Thrift >Affects Versions: 3.0.0 >Reporter: Jan Hentschel >Assignee: Jan Hentschel >Priority: Trivial > Fix For: 3.0.0 > > Attachments: HBASE-19723.master.001.patch > > > Currently *hbase-thrift* declares the dependency {{slf4j-api}} twice > resulting in the following warning > {code} > [WARNING] Some problems were encountered while building the effective model > for org.apache.hbase:hbase-thrift:jar:3.0.0-SNAPSHOT > [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must > be unique: org.slf4j:slf4j-api:jar -> duplicate declaration of version (?) @ > org.apache.hbase:hbase-thrift:[unknown-version], > /Users/jan/Documents/Projects/github/hbase/hbase-thrift/pom.xml, line 250, > column 17 > [WARNING] > [WARNING] It is highly recommended to fix these problems because they > threaten the stability of your build. > [WARNING] > [WARNING] For this reason, future Maven versions might no longer support > building such malformed projects. > {code} > One should be removed. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19471) Fix remaining Checkstyle errors in hbase-thrift
[ https://issues.apache.org/jira/browse/HBASE-19471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315279#comment-16315279 ] Hudson commented on HBASE-19471: FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4360 (See [https://builds.apache.org/job/HBase-Trunk_matrix/4360/]) HBASE-19471 Fixed remaining Checkstyle errors in hbase-thrift (jan.hentschel: rev 830179600df8b4f254709aaf4cbf6afc9a548268) * (edit) hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/TBoundedThreadPoolServer.java * (edit) hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift2/TestThriftHBaseServiceHandlerWithLabels.java * (edit) hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestCallQueue.java * (edit) hbase-thrift/pom.xml * (edit) hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftUtilities.java * (edit) hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/IncrementCoalescer.java * (edit) hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift2/TestThriftHBaseServiceHandler.java * (edit) hbase-checkstyle/src/main/resources/hbase/checkstyle-suppressions.xml * (edit) hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServerCmdLine.java * (edit) hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftHttpServer.java * (edit) hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/HbaseHandlerMetricsProxy.java * (edit) hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftUtilities.java * (edit) hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServerRunner.java * (edit) hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift/TestThriftServer.java * (edit) hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftServer.java * (edit) hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftServer.java * (edit) hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/ThriftMetrics.java > Fix remaining Checkstyle errors in hbase-thrift > --- > > Key: 
HBASE-19471 > URL: https://issues.apache.org/jira/browse/HBASE-19471 > Project: HBase > Issue Type: Sub-task > Components: Thrift >Reporter: Jan Hentschel >Assignee: Jan Hentschel >Priority: Minor > Fix For: 3.0.0 > > Attachments: HBASE-19471.master.001.patch, > HBASE-19471.master.002.patch > > > Some Checkstyle errors are left in the *hbase-thrift* module and should be > fixed. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19690) Maven assembly goal fails in master branch with 'invalid distance too far back'
[ https://issues.apache.org/jira/browse/HBASE-19690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315267#comment-16315267 ] stack commented on HBASE-19690: --- What 'passed' here? Resume with -rf :hbase-error-prone or build from scratch? See HBASE-19725 > Maven assembly goal fails in master branch with 'invalid distance too far > back' > --- > > Key: HBASE-19690 > URL: https://issues.apache.org/jira/browse/HBASE-19690 > Project: HBase > Issue Type: Bug >Reporter: Ted Yu > > I use the following command in master branch : > {code} > mvn clean verify install javadoc:aggregate package assembly:single > -DskipTests=true -Dmaven.javadoc.skip=true -Dhadoop.profile=3.0 > {code} > It fails with the following error: > {code} > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-checkstyle-plugin:2.17:check (checkstyle) on > project hbase-error-prone: Failed during checkstyle execution: Unable to > process suppressions file location: hbase/checkstyle-suppressions.xml: Cannot > create file-based resource:invalid distance too far back -> [Help 1] > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19727) RS keeps calling reportForDuty to backup HMaster and can not online properly
[ https://issues.apache.org/jira/browse/HBASE-19727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315264#comment-16315264 ] Duo Zhang commented on HBASE-19727: --- Restart and then everything is OK. I guess there may be a master failover during the initialization of the RS and it makes the RS confused? > RS keeps calling reportForDuty to backup HMaster and can not online properly > > > Key: HBASE-19727 > URL: https://issues.apache.org/jira/browse/HBASE-19727 > Project: HBase > Issue Type: Bug >Reporter: Duo Zhang > > I have started a cluster with 5 RSes, but afterwards only two RSes were online. > I've checked one of the logs, it keeps trying to report to the backup HMaster. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (HBASE-19727) RS keeps calling reportForDuty to backup HMaster and can not start properly
Duo Zhang created HBASE-19727: - Summary: RS keeps calling reportForDuty to backup HMaster and can not start properly Key: HBASE-19727 URL: https://issues.apache.org/jira/browse/HBASE-19727 Project: HBase Issue Type: Bug Reporter: Duo Zhang I have started a cluster with 5 RSes, but afterwards only two RSes were online. I've checked one of the logs, it keeps trying to report to the backup HMaster. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19727) RS keeps calling reportForDuty to backup HMaster and can not online properly
[ https://issues.apache.org/jira/browse/HBASE-19727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-19727: -- Summary: RS keeps calling reportForDuty to backup HMaster and can not online properly (was: RS keeps calling reportForDuty to backup HMaster and can not start properly) -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19725) Build fails, unable to read hbase/checkstyle-suppressions.xml "invalid distance too far back"
[ https://issues.apache.org/jira/browse/HBASE-19725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315262#comment-16315262 ] stack commented on HBASE-19725: --- [~Jan Hentschel] Thanks for chiming in. The error is the same, but Ted was running an arbitrary set of build commands; 'site' needs 'install', not 'package'. He then claimed it worked, but doesn't say what he did. The above install + site is how we build RCs. See http://hbase.apache.org/book.html#maven.release. It worked when I did the alpha RC4, so the breakage is recent. I'll keep digging. > Build fails, unable to read hbase/checkstyle-suppressions.xml "invalid > distance too far back" > - > > Key: HBASE-19725 > URL: https://issues.apache.org/jira/browse/HBASE-19725 > Project: HBase > Issue Type: Sub-task >Reporter: stack >Assignee: stack >Priority: Blocker > Fix For: 2.0.0-beta-1 > > > Build is failing on me (Trying to cut beta-1 RC on branch-2). It is the first > time we use the jars made by hbase-checkstyle in the hbase-error-prone > module under the 'build support' module when running the 'site' target. It is > trying to make the checkstyle report. > I see that we find the right jar to read: > [DEBUG] The resource 'hbase/checkstyle-suppressions.xml' was found as > jar:file:/home/stack/rc/hbase-2.0.0-beta-1.20180107T061305Z/repository/org/apache/hbase/hbase-checkstyle/2.0.0-beta-1/hbase-checkstyle-2.0.0-beta-1.jar!/hbase/checkstyle-suppressions.xml. > But then it thinks the jar is corrupt: 'ZipException: invalid distance too far > back'. 
> Here is mvn output: > 12667058 [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-checkstyle-plugin:2.17:check (checkstyle) on > project hbase-error-prone: Failed during checkstyle execution: > Unable to process suppressions file location: > hbase/checkstyle-suppressions.xml: Cannot create file-based resource:invalid > distance too far back -> [Help 1] > 12667059 org.apache.maven.lifecycle.LifecycleExecutionException: Failed to > execute goal org.apache.maven.plugins:maven-checkstyle-plugin:2.17:check > (checkstyle) on project hbase-error-prone: Failed during checkstyle > execution > I'm running this command: > mvn -X install -DskipTests site assembly:single -Papache-release -Prelease > -Dmaven.repo.local=//home/stack/rc/hbase-2.0.0-beta-1.20180107T061305Z/repository > Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; > 2015-11-10T08:41:47-08:00) > Java version: 1.8.0_151, vendor: Oracle Corporation -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (HBASE-19726) Failed to start HMaster due to infinite retrying on meta assign
Duo Zhang created HBASE-19726: - Summary: Failed to start HMaster due to infinite retrying on meta assign Key: HBASE-19726 URL: https://issues.apache.org/jira/browse/HBASE-19726 Project: HBase Issue Type: Bug Reporter: Duo Zhang This is what I got at first, an exception when trying to write something to meta when meta has not been onlined yet. {noformat} 2018-01-07,21:03:14,389 INFO org.apache.hadoop.hbase.master.HMaster: Running RecoverMetaProcedure to ensure proper hbase:meta deploy. 2018-01-07,21:03:14,637 INFO org.apache.hadoop.hbase.master.procedure.RecoverMetaProcedure: Start pid=1, state=RUNNABLE:RECOVER_META_SPLIT_LOGS; RecoverMetaProcedure failedMetaServer=null, splitWal=true 2018-01-07,21:03:14,645 INFO org.apache.hadoop.hbase.master.MasterWalManager: Log folder hdfs://c402tst-community/hbase/c402tst-community/WALs/c4-hadoop-tst-st27.bj,38900,1515330173896 belongs to an existing region server 2018-01-07,21:03:14,646 INFO org.apache.hadoop.hbase.master.MasterWalManager: Log folder hdfs://c402tst-community/hbase/c402tst-community/WALs/c4-hadoop-tst-st29.bj,38900,1515330177232 belongs to an existing region server 2018-01-07,21:03:14,648 INFO org.apache.hadoop.hbase.master.procedure.RecoverMetaProcedure: pid=1, state=RUNNABLE:RECOVER_META_ASSIGN_REGIONS; RecoverMetaProcedure failedMetaServer=null, splitWal=true; Retaining meta assignment to server=null 2018-01-07,21:03:14,653 INFO org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:meta, region=1588230740}] 2018-01-07,21:03:14,660 INFO org.apache.hadoop.hbase.master.procedure.MasterProcedureScheduler: pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=hbase:meta, region=1588230740 hbase:meta hbase:meta,,1.1588230740 2018-01-07,21:03:14,663 INFO org.apache.hadoop.hbase.master.assignment.AssignProcedure: Start pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_QUEUE; 
AssignProcedure table=hbase:meta, region=1588230740; rit=OFFLINE, location=null; forceNewPlan=false, retain=false 2018-01-07,21:03:14,831 INFO org.apache.hadoop.hbase.zookeeper.MetaTableLocator: Setting hbase:meta (replicaId=0) location in ZooKeeper as c4-hadoop-tst-st27.bj,38900,1515330173896 2018-01-07,21:03:14,841 INFO org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure: Dispatch pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=hbase:meta, region=1588230740; rit=OPENING, location=c4-hadoop-tst-st27.bj,38900,1515330173896 2018-01-07,21:03:14,992 INFO org.apache.hadoop.hbase.master.procedure.RSProcedureDispatcher: Using procedure batch rpc execution for serverName=c4-hadoop-tst-st27.bj,38900,1515330173896 version=3145728 2018-01-07,21:03:15,593 ERROR org.apache.hadoop.hbase.client.AsyncRequestFutureImpl: Cannot get replica 0 location for {"totalColumns":1,"row":"hbase:meta","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":1515330195514}]},"ts":1515330195514} 2018-01-07,21:03:15,594 WARN org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure: Retryable error trying to transition: pid=2, ppid=1, state=RUNNABLE:REGION_TRANSITION_FINISH; AssignProcedure table=hbase:meta, region=1588230740; rit=OPEN, location=c4-hadoop-tst-st27.bj,38900,1515330173896 org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues: null at org.apache.hadoop.hbase.client.BatchErrors.makeException(BatchErrors.java:54) at org.apache.hadoop.hbase.client.AsyncRequestFutureImpl.getErrors(AsyncRequestFutureImpl.java:1250) at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:457) at org.apache.hadoop.hbase.client.HTable.put(HTable.java:570) at org.apache.hadoop.hbase.MetaTableAccessor.put(MetaTableAccessor.java:1450) at org.apache.hadoop.hbase.MetaTableAccessor.putToMetaTable(MetaTableAccessor.java:1439) at 
org.apache.hadoop.hbase.MetaTableAccessor.updateTableState(MetaTableAccessor.java:1785) at org.apache.hadoop.hbase.MetaTableAccessor.updateTableState(MetaTableAccessor.java:1151) at org.apache.hadoop.hbase.master.TableStateManager.udpateMetaState(TableStateManager.java:183) at org.apache.hadoop.hbase.master.TableStateManager.setTableState(TableStateManager.java:69) at org.apache.hadoop.hbase.master.assignment.AssignmentManager.markRegionAsOpened(AssignmentManager.java:1515) at org.apache.hadoop.hbase.master.assignment.AssignProcedure.finishTransition(AssignProcedure.java:271) at org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:320) at
[jira] [Commented] (HBASE-19725) Build fails, unable to read hbase/checkstyle-suppressions.xml "invalid distance too far back"
[ https://issues.apache.org/jira/browse/HBASE-19725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315249#comment-16315249 ] Jan Hentschel commented on HBASE-19725: --- Maybe it's the same problem as in HBASE-19690 where [~yuzhih...@gmail.com] had a similar issue. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19725) Build fails, unable to read hbase/checkstyle-suppressions.xml "invalid distance too far back"
[ https://issues.apache.org/jira/browse/HBASE-19725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315246#comment-16315246 ] stack commented on HBASE-19725: --- [INFO] Reactor Summary: [INFO] [INFO] Apache HBase .. SUCCESS [9:28.676s] [INFO] Apache HBase - Checkstyle . SUCCESS [0.616s] [INFO] Apache HBase - Build Support .. SUCCESS [0.087s] [INFO] Apache HBase - Error Prone Rules .. FAILURE [0.113s] [INFO] [INFO] BUILD FAILURE [INFO] [INFO] Total time: 9:31.670s [INFO] Finished at: Sun Jan 07 04:50:20 PST 2018 [INFO] Final Memory: 467M/7762M [INFO] [ERROR] Failed to execute goal org.apache.maven.plugins:maven-checkstyle-plugin:2.17:check (checkstyle) on project hbase-error-prone: Failed during checkstyle execution: Unable to find configuration file at location: hbase/checkstyle.xml: Could not find resource 'hbase/checkstyle.xml'. -> [Help 1] Trying to skip checkstyle in the reporting section and excluding site from error-prone. Takes forever to run though. Need to skip javadoc as well. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19724) Fix Checkstyle errors in hbase-hadoop2-compat
[ https://issues.apache.org/jira/browse/HBASE-19724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315243#comment-16315243 ] Hadoop QA commented on HBASE-19724: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 8s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 0s{color} | {color:green} Patch does not have any anti-patterns. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 14 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 29s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 53s{color} | {color:green} branch has no errors when building our shaded downstream artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} hbase-hadoop2-compat: The patch generated 0 new + 0 unchanged - 173 fixed = 0 total (was 173) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 4m 33s{color} | {color:green} patch has no errors when building our shaded downstream artifacts. {color} | | {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 21m 8s{color} | {color:green} Patch does not cause any errors with Hadoop 2.6.5 2.7.4 or 3.0.0. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s{color} | {color:green} hbase-hadoop2-compat generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 31s{color} | {color:green} hbase-hadoop2-compat in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 10s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 37m 17s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 | | JIRA Issue | HBASE-19724 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12904976/HBASE-19724.master.001.patch | | Optional Tests | asflicense javac javadoc unit shadedjars hadoopcheck xml compile findbugs hbaseanti checkstyle | | uname | Linux dad356782081 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh | | git revision | master / 830179600d | | maven | version: Apache Maven 3.5.2 (138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) | | Default Java | 1.8.0_151 | | Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/10922/testReport/ | | modules | C: hbase-hadoop2-compat U: hbase-hadoop2-compat | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/10922/console | | Powered by
[jira] [Commented] (HBASE-19725) Build fails, unable to read hbase/checkstyle-suppressions.xml "invalid distance too far back"
[ https://issues.apache.org/jira/browse/HBASE-19725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315238#comment-16315238 ] stack commented on HBASE-19725: --- [~Jan Hentschel] You might have some input, sir? I'm trying to figure this one out at the moment. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (HBASE-19725) Build fails, unable to read hbase/checkstyle-suppressions.xml "invalid distance too far back"
stack created HBASE-19725: - Summary: Build fails, unable to read hbase/checkstyle-suppressions.xml "invalid distance too far back" Key: HBASE-19725 URL: https://issues.apache.org/jira/browse/HBASE-19725 Project: HBase Issue Type: Sub-task Reporter: stack Assignee: stack Priority: Blocker Build is failing on me (Trying to cut beta-1 RC on branch-2). It is the first time we use the jars made by hbase-checkstyle in the hbase-error-prone module under the 'build support' module when running the 'site' target. It is trying to make the checkstyle report. I see that we find the right jar to read: [DEBUG] The resource 'hbase/checkstyle-suppressions.xml' was found as jar:file:/home/stack/rc/hbase-2.0.0-beta-1.20180107T061305Z/repository/org/apache/hbase/hbase-checkstyle/2.0.0-beta-1/hbase-checkstyle-2.0.0-beta-1.jar!/hbase/checkstyle-suppressions.xml. But then it thinks the jar is corrupt: 'ZipException: invalid distance too far back'. Here is mvn output: 12667058 [ERROR] Failed to execute goal org.apache.maven.plugins:maven-checkstyle-plugin:2.17:check (checkstyle) on project hbase-error-prone: Failed during checkstyle execution: Unable to process suppressions file location: hbase/checkstyle-suppressions.xml: Cannot create file-based resource:invalid distance too far back -> [Help 1] 12667059 org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-checkstyle-plugin:2.17:check (checkstyle) on project hbase-error-prone: Failed during checkstyle execution I'm running this command: mvn -X install -DskipTests site assembly:single -Papache-release -Prelease -Dmaven.repo.local=//home/stack/rc/hbase-2.0.0-beta-1.20180107T061305Z/repository Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T08:41:47-08:00) Java version: 1.8.0_151, vendor: Oracle Corporation -- This message was sent by Atlassian JIRA (v6.4.14#64029)
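An 'invalid distance too far back' ZipException is zlib's way of saying the DEFLATE stream inside the archive is damaged, so a quick sanity check on the suspect jar in the local repository is to read every entry to completion (the same thing `unzip -t` does). The sketch below is illustrative only and not part of the HBase build; the class and method names are mine, and it fabricates a tiny stand-in jar plus a truncated copy rather than touching a real repository:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import java.util.zip.ZipOutputStream;

public class JarIntegrityCheck {

  /** Reads every entry to EOF; a corrupt archive fails to open or to inflate. */
  static boolean isReadable(Path jar) {
    try (ZipFile zf = new ZipFile(jar.toFile())) {
      byte[] buf = new byte[8192];
      Enumeration<? extends ZipEntry> entries = zf.entries();
      while (entries.hasMoreElements()) {
        try (InputStream in = zf.getInputStream(entries.nextElement())) {
          while (in.read(buf) != -1) {
            // drain the entry; damaged compressed data throws here
          }
        }
      }
      return true;
    } catch (IOException e) {
      return false;
    }
  }

  /** Fabricates a minimal jar-like zip standing in for the checkstyle jar. */
  static Path writeSampleJar() throws IOException {
    Path jar = Files.createTempFile("suppressions", ".jar");
    try (ZipOutputStream out = new ZipOutputStream(Files.newOutputStream(jar))) {
      out.putNextEntry(new ZipEntry("hbase/checkstyle-suppressions.xml"));
      out.write("<suppressions/>".getBytes());
      out.closeEntry();
    }
    return jar;
  }

  /** Simulates on-disk corruption by chopping off the end-of-central-directory record. */
  static Path truncatedCopy(Path jar) throws IOException {
    byte[] bytes = Files.readAllBytes(jar);
    Path bad = Files.createTempFile("corrupt", ".jar");
    Files.write(bad, Arrays.copyOf(bytes, bytes.length - 22));
    return bad;
  }

  public static void main(String[] args) throws IOException {
    Path good = writeSampleJar();
    System.out.println("intact jar readable:    " + isReadable(good));
    System.out.println("truncated jar readable: " + isReadable(truncatedCopy(good)));
  }
}
```

If a jar under `-Dmaven.repo.local` fails a check like this, deleting it and letting Maven re-resolve it is usually enough to rule corruption in or out.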
[jira] [Updated] (HBASE-19541) Remove unnecessary semicolons in hbase-server
[ https://issues.apache.org/jira/browse/HBASE-19541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Hentschel updated HBASE-19541: -- Attachment: HBASE-19541.master.003.patch > Remove unnecessary semicolons in hbase-server > - > > Key: HBASE-19541 > URL: https://issues.apache.org/jira/browse/HBASE-19541 > Project: HBase > Issue Type: Sub-task >Reporter: Jan Hentschel >Assignee: Jan Hentschel >Priority: Minor > Attachments: HBASE-19541.master.001.patch, > HBASE-19541.master.002.patch, HBASE-19541.master.003.patch > > > Currently *hbase-server* has some places with unnecessary semicolons, which > should be removed. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19723) hbase-thrift declares slf4j-api twice
[ https://issues.apache.org/jira/browse/HBASE-19723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Hentschel updated HBASE-19723: -- Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) > hbase-thrift declares slf4j-api twice > - > > Key: HBASE-19723 > URL: https://issues.apache.org/jira/browse/HBASE-19723 > Project: HBase > Issue Type: Improvement > Components: Thrift >Affects Versions: 3.0.0 >Reporter: Jan Hentschel >Assignee: Jan Hentschel >Priority: Trivial > Fix For: 3.0.0 > > Attachments: HBASE-19723.master.001.patch > > > Currently *hbase-thrift* declares the dependency {{slf4j-api}} twice > resulting in the following warning > {code} > [WARNING] Some problems were encountered while building the effective model > for org.apache.hbase:hbase-thrift:jar:3.0.0-SNAPSHOT > [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must > be unique: org.slf4j:slf4j-api:jar -> duplicate declaration of version (?) @ > org.apache.hbase:hbase-thrift:[unknown-version], > /Users/jan/Documents/Projects/github/hbase/hbase-thrift/pom.xml, line 250, > column 17 > [WARNING] > [WARNING] It is highly recommended to fix these problems because they > threaten the stability of your build. > [WARNING] > [WARNING] For this reason, future Maven versions might no longer support > building such malformed projects. > {code} > One should be removed. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
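The fix for this class of warning is mechanical: keep one {{slf4j-api}} declaration and delete the other. As a purely illustrative pom.xml fragment (the surrounding context is a placeholder, not copied from the actual hbase-thrift pom), the duplicate looks like this and either copy can go:

```xml
<dependencies>
  <dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
  </dependency>
  <!-- Duplicate declaration: Maven warns because each
       groupId:artifactId:type:classifier must be unique
       within <dependencies>. Remove this second copy. -->
  <dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
  </dependency>
</dependencies>
```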
[jira] [Updated] (HBASE-19471) Fix remaining Checkstyle errors in hbase-thrift
[ https://issues.apache.org/jira/browse/HBASE-19471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Hentschel updated HBASE-19471: -- Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) > Fix remaining Checkstyle errors in hbase-thrift > --- > > Key: HBASE-19471 > URL: https://issues.apache.org/jira/browse/HBASE-19471 > Project: HBase > Issue Type: Sub-task > Components: Thrift >Reporter: Jan Hentschel >Assignee: Jan Hentschel >Priority: Minor > Fix For: 3.0.0 > > Attachments: HBASE-19471.master.001.patch, > HBASE-19471.master.002.patch > > > Some Checkstyle errors are left in the *hbase-thrift* module and should be > fixed. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19724) Fix Checkstyle errors in hbase-hadoop2-compat
[ https://issues.apache.org/jira/browse/HBASE-19724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Hentschel updated HBASE-19724: -- Attachment: HBASE-19724.master.001.patch > Fix Checkstyle errors in hbase-hadoop2-compat > - > > Key: HBASE-19724 > URL: https://issues.apache.org/jira/browse/HBASE-19724 > Project: HBase > Issue Type: Sub-task >Reporter: Jan Hentschel >Assignee: Jan Hentschel >Priority: Minor > Attachments: HBASE-19724.master.001.patch > > > Fix the remaining Checkstyle errors in the *hbase-hadoop2-compat* module and > enable Checkstyle to fail on violations. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19724) Fix Checkstyle errors in hbase-hadoop2-compat
[ https://issues.apache.org/jira/browse/HBASE-19724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Hentschel updated HBASE-19724: -- Status: Patch Available (was: In Progress) -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Work started] (HBASE-19724) Fix Checkstyle errors in hbase-hadoop2-compat
[ https://issues.apache.org/jira/browse/HBASE-19724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-19724 started by Jan Hentschel. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (HBASE-19724) Fix Checkstyle errors in hbase-hadoop2-compat
Jan Hentschel created HBASE-19724: - Summary: Fix Checkstyle errors in hbase-hadoop2-compat Key: HBASE-19724 URL: https://issues.apache.org/jira/browse/HBASE-19724 Project: HBase Issue Type: Sub-task Reporter: Jan Hentschel Assignee: Jan Hentschel Priority: Minor Fix the remaining Checkstyle errors in the *hbase-hadoop2-compat* module and enable Checkstyle to fail on violations. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19720) Rename WALKey#getTabnename to WALKey#getTableName
[ https://issues.apache.org/jira/browse/HBASE-19720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated HBASE-19720: --- Attachment: HBASE-19720.v0.patch > Rename WALKey#getTabnename to WALKey#getTableName > - > > Key: HBASE-19720 > URL: https://issues.apache.org/jira/browse/HBASE-19720 > Project: HBase > Issue Type: Task >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai > Fix For: 2.0.0 > > Attachments: HBASE-19720.v0.patch > > > WALKey is denoted as LP so its naming should obey the common rule in our > codebase. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19720) Rename WALKey#getTabnename to WALKey#getTableName
[ https://issues.apache.org/jira/browse/HBASE-19720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated HBASE-19720: --- Status: Patch Available (was: Open) -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19713) Enable TestInterfaceAudienceAnnotations
[ https://issues.apache.org/jira/browse/HBASE-19713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated HBASE-19713: --- Attachment: HBASE-19713.branch-2.v1.patch v1 fixes doc errors and checkstyle warnings. > Enable TestInterfaceAudienceAnnotations > --- > > Key: HBASE-19713 > URL: https://issues.apache.org/jira/browse/HBASE-19713 > Project: HBase > Issue Type: Task >Reporter: Chia-Ping Tsai >Assignee: Chia-Ping Tsai >Priority: Critical > Fix For: 2.0.0 > > Attachments: HBASE-19713.branch-2.v0.patch, > HBASE-19713.branch-2.v1.patch, HBASE-19713.v0.patch > > > Make sure TestInterfaceAudienceAnnotations passes before the 2.0 release. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HBASE-19471) Fix remaining Checkstyle errors in hbase-thrift
[ https://issues.apache.org/jira/browse/HBASE-19471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16315102#comment-16315102 ] Chia-Ping Tsai commented on HBASE-19471: LGTM -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HBASE-19715) Fix timing out test TestMultiRespectsLimits
[ https://issues.apache.org/jira/browse/HBASE-19715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chia-Ping Tsai updated HBASE-19715: --- Attachment: HBASE-19715.test.patch The statistics are shown below. The patch adds a loop to #testMultiLimits to gather more statistics. || ||master||patch|| |elapsed time|81s|75s| |total memory allocation|16.33GB|9.19GB| |char array|7.29GB|1.08GB| > Fix timing out test TestMultiRespectsLimits > --- > > Key: HBASE-19715 > URL: https://issues.apache.org/jira/browse/HBASE-19715 > Project: HBase > Issue Type: Bug >Reporter: Appy >Assignee: Appy > Attachments: HBASE-19715.test.patch, failued.txt, passed.txt, > screenshot-1.png, screenshot-2.png, screenshot-3.png, screenshot-4.png > > > !screenshot-1.png|width=800px! > Attached logs for both cases, when it passes and fails. > Link (temporary) to logs: > passed: > http://104.198.223.121:8080/job/HBase-Flaky-Tests/33449/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestMultiRespectsLimits-output.txt/*view*/ > failed: > http://104.198.223.121:8080/job/HBase-Flaky-Tests/33455/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestMultiRespectsLimits-output.txt/*view*/ > Correlating across more runs, whenever the test passes, it does so within > 10-30sec of the 3min deadline for medium tests. > So I think we can make it pass by just increasing the timeout. > But I'm a bit skeptical after seeing all those long GC pauses (10sec +) in > the log. Test code doesn't seem to be doing anything that intensive. Are we > mismanaging the memory somewhere? -- This message was sent by Atlassian JIRA (v6.4.14#64029)